Translation lookaside buffer

A translation lookaside buffer (TLB) is a cache that memory management hardware uses to improve virtual address translation speed.[1] The majority of desktop, laptop, and server processors include one or more TLBs in the memory management hardware, and a TLB is nearly always present in any processor that uses paged or segmented virtual memory.

The TLB is sometimes implemented as content-addressable memory (CAM). The CAM search key is the virtual address, and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match quickly and the retrieved physical address can be used to access memory. This is called a TLB hit. If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up the page table in a process called a page walk. The page walk is time-consuming compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual-address-to-physical-address mapping is entered into the TLB. The PowerPC 604, for example, has a two-way set-associative TLB for data loads and stores.[2]
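
The hit/miss sequence can be summarized in a short C model. This is a simplified software sketch, not any processor's design: a real CAM compares all entries in parallel rather than in a loop, and the stub page walker and trivial replacement policy here are assumptions for illustration.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12        /* 4 KiB pages */
    #define TLB_ENTRIES 64

    struct tlb_entry {
        bool     valid;
        uint64_t vpn;             /* virtual page number   */
        uint64_t pfn;             /* physical frame number */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Stub page walker: identity-maps every page. A real walk reads the
     * in-memory page table through several dependent memory accesses. */
    static uint64_t page_table_walk(uint64_t vpn) { return vpn; }

    static uint64_t translate(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

        /* TLB lookup: a CAM compares all entries at once; this loop
         * merely models that parallel search in software. */
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn)
                return (tlb[i].pfn << PAGE_SHIFT) | offset;   /* TLB hit */

        /* TLB miss: walk the page table (slow), then cache the mapping. */
        uint64_t pfn = page_table_walk(vpn);
        tlb[vpn % TLB_ENTRIES] =
            (struct tlb_entry){ .valid = true, .vpn = vpn, .pfn = pfn };
        return (pfn << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        printf("%#llx\n", (unsigned long long)translate(0x12345678)); /* miss */
        printf("%#llx\n", (unsigned long long)translate(0x12345678)); /* hit  */
        return 0;
    }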

Overview

A translation lookaside buffer (TLB) has a fixed number of slots containing page table entries and segment table entries; page table entries map virtual addresses to physical addresses and intermediate table addresses, while segment table entries map virtual addresses to segment addresses, intermediate table addresses and page table addresses.[3] Virtual memory is the memory space as seen from a process; this space is often split into pages of a fixed size (in paged memory), or less commonly into segments of variable sizes (in segmented memory). The page table, generally stored in main memory, keeps track of where the virtual pages are stored in the physical memory. The TLB is a cache of the page table, representing only a subset of the page table contents.[4]

A TLB may reside between the CPU and the CPU cache, between the CPU cache and main memory, or between the levels of a multi-level cache. The placement determines whether the cache uses physical or virtual addressing. If the cache is virtually addressed, requests are sent directly from the CPU to the cache, and the TLB is accessed only on a cache miss. If the cache is physically addressed, the CPU does a TLB lookup on every memory operation, and the resulting physical address is sent to the cache.

In a Harvard architecture or hybrid thereof, a separate virtual address space or memory access hardware may exist for instructions and data. This can lead to distinct TLBs for each access type, an Instruction Translation Lookaside Buffer (ITLB) and a Data Translation Lookaside Buffer (DTLB). Various benefits have been demonstrated with separate data and instruction TLBs.[5]

A common optimization for physically addressed caches is to perform the TLB lookup in parallel with the cache access. The low-order bits of any virtual address (e.g., in a virtual memory system having 4 KB pages, the lower 12 bits of the virtual address) represent the offset of the desired address within the page, and thus they do not change in the virtual-to-physical translation. During a cache access, two steps are performed: an index is used to find an entry in the cache's data store, and then the tags for the cache line found are compared. If the cache is structured in such a way that it can be indexed using only the bits that do not change in translation, the cache can perform its "index" operation while the TLB translates the upper bits of the address. Then, the translated address from the TLB is passed to the cache. The cache performs a tag comparison to determine if this access was a hit or miss. It is possible to perform the TLB lookup in parallel with the cache access even if the cache must be indexed using some bits that may change upon address translation; see the address translation section in the cache article for more details about virtual addressing as it pertains to caches and TLBs.
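
As a concrete illustration with 4 KB pages, the following C program splits a virtual address into its page offset and virtual page number; the cache geometry (64-byte lines, 64 sets) is an assumption chosen so that the set index falls entirely within the untranslated offset bits:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4 KiB pages: bits 0..11 are the page offset  */
    #define LINE_SHIFT  6   /* assumed 64-byte cache lines                  */
    #define INDEX_BITS  6   /* assumed 64 sets, so bits 6..11 index the set */

    int main(void)
    {
        uint64_t vaddr  = 0x7f3a12345678;
        uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1); /* unchanged by translation */
        uint64_t vpn    = vaddr >> PAGE_SHIFT;              /* what the TLB translates  */
        uint64_t index  = (vaddr >> LINE_SHIFT) & ((1u << INDEX_BITS) - 1);

        /* Since LINE_SHIFT + INDEX_BITS <= PAGE_SHIFT, the set index lies
         * entirely inside the page offset: the cache can select its set from
         * the untranslated bits while the TLB translates the VPN in parallel. */
        printf("vpn=%#llx offset=%#llx set=%llu\n",
               (unsigned long long)vpn, (unsigned long long)offset,
               (unsigned long long)index);
        return 0;
    }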

Performance implications

The CPU has to access main memory for an instruction cache miss, data cache miss, or TLB miss. The third case (the simplest one) is where the desired information itself actually is in a cache, but the information for the virtual-to-physical translation is not in the TLB. All three cases are slow, as they require accessing a slower level of the memory hierarchy, so a well-functioning TLB is important. Indeed, a TLB miss can be more expensive than an instruction or data cache miss, because it requires not just a load from main memory, but a page walk, which involves several memory loads.

If the page working set does not fit into the TLB, then TLB thrashing occurs, where frequent TLB misses occur, with each newly cached page displacing one that will soon be used again, degrading performance in exactly the same way as thrashing of the instruction or data cache does. TLB thrashing can occur even if instruction cache or data cache thrashing is not occurring, because these are cached in units of different size. Instructions and data are cached in small blocks (cache lines), not entire pages, but address lookup is done at the page level. Thus, even if the code and data working sets fit into the caches, if the working sets are fragmented across many pages, the virtual-address working set may not fit into the TLB, causing TLB thrashing. Appropriate sizing of the TLB thus requires considering not only the size of the corresponding instruction and data caches, but also how these are fragmented across multiple pages.
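
The amount of memory a TLB can map at once (sometimes called its reach) is the number of entries multiplied by the page size; a working set scattered across more pages than this guarantees misses even when the data itself fits in cache. A short illustration with assumed figures:

    #include <stdio.h>

    int main(void)
    {
        /* "TLB reach" = entries x page size: how much memory the TLB can map
         * at once. The figures are illustrative, not from a real CPU. */
        unsigned entries = 64;

        printf("reach with 4 KiB pages: %u KiB\n", entries * 4096u / 1024); /* 256 KiB */
        printf("reach with 2 MiB pages: %u MiB\n", entries * 2u);           /* 128 MiB */
        return 0;
    }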

Multiple TLBs

Similar to caches, TLBs may have multiple levels. CPUs can be (and nowadays usually are) built with multiple TLBs, for example a small "L1" TLB (potentially fully associative) that is extremely fast, and a larger "L2" TLB that is somewhat slower. When ITLB and DTLB are used, a CPU can have three (ITLB1, DTLB1, TLB2) or four TLBs.

For instance, Intel's Nehalem microarchitecture has a four-way set-associative L1 DTLB with 64 entries for 4 KiB pages and 32 entries for 2/4 MiB pages, an L1 ITLB with 128 four-way associative entries for 4 KiB pages and 14 fully associative entries for 2/4 MiB pages (both parts of the ITLB divided statically between two threads),[6] and a unified 512-entry, four-way associative L2 TLB for 4 KiB pages.[7][8]

Some TLBs may have separate sections for small pages and huge pages.

TLB miss handling

Two schemes for handling TLB misses are commonly found in modern architectures:

With hardware TLB management, the CPU itself walks the page tables to see whether there is a valid page table entry for the specified virtual address. If one exists, it is brought into the TLB and the TLB access is retried: this time the access will hit, and the program can proceed normally. If the CPU finds no valid entry for the virtual address in the page tables, it raises a page fault exception, which the operating system must handle. With hardware management, the page table format is fixed by the architecture, but the organization of the TLB itself is invisible to software and can change between processors without breaking compatibility.[9]

With software-managed TLBs, a TLB miss generates a TLB miss exception, and operating system code is responsible for walking the page tables and performing the translation in software; a simplified sketch of such a handler is shown below. The operating system then loads the translation into the TLB and restarts the program from the instruction that caused the miss. As with hardware management, if the OS finds no valid translation in the page tables, a page fault has occurred and the OS must handle it. The MIPS architecture specifies a software-managed TLB;[10] the SPARC V9 architecture allows an implementation to have no MMU, an MMU with a software-managed TLB, or an MMU with a hardware-managed TLB;[11] and the UltraSPARC Architecture 2005 specifies a software-managed TLB.[12]
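
The following C sketch shows the general shape of a software TLB miss handler. It is a minimal model, not any operating system's actual code: the single-level page table, tlb_write_random, and the fault hooks are hypothetical stand-ins for architecture-specific mechanisms (MIPS, for example, gives the OS dedicated TLB-write instructions).

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PTE_VALID  0x1u

    /* Single-level page table for brevity; real systems use multiple levels. */
    static uint64_t page_table[256];

    /* Hypothetical stand-ins for ISA-specific mechanisms. */
    static void tlb_write_random(uint64_t vpn, uint64_t pte)
    {
        printf("TLB refill: vpn=%#llx pte=%#llx\n",
               (unsigned long long)vpn, (unsigned long long)pte);
    }

    static void raise_page_fault(uint64_t vaddr)
    {
        printf("page fault at %#llx\n", (unsigned long long)vaddr);
    }

    /* Conceptually invoked by the CPU on a TLB miss exception. */
    static void tlb_miss_handler(uint64_t faulting_vaddr)
    {
        uint64_t vpn = faulting_vaddr >> PAGE_SHIFT;
        uint64_t pte = page_table[vpn];   /* the "page walk", done in software */

        if (pte & PTE_VALID)
            tlb_write_random(vpn, pte);   /* refill; the faulting access retries */
        else
            raise_page_fault(faulting_vaddr); /* no valid mapping: OS handles it */
    }

    int main(void)
    {
        page_table[0x12] = (0x99u << PAGE_SHIFT) | PTE_VALID; /* one valid mapping */
        tlb_miss_handler(0x12345);   /* vpn 0x12: refill succeeds  */
        tlb_miss_handler(0x66345);   /* vpn 0x66: unmapped, faults */
        return 0;
    }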

The Itanium architecture provides the option of using either software- or hardware-managed TLBs.[13]

The Alpha architecture's TLB is managed in PALcode, rather than in the operating system. As the PALcode for a processor can be processor-specific and operating-system-specific, this allows different versions of PALcode to implement different page table formats for different operating systems, without requiring that the TLB format, and the instructions to control the TLB, be specified by the architecture.[14]

Typical TLB

These are typical performance levels of a TLB:[15]

  - Size: 12 bits – 4,096 entries
  - Hit time: 0.5 – 1 clock cycle
  - Miss penalty: 10 – 100 clock cycles
  - Miss rate: 0.01 – 1%

If a TLB hit takes 1 clock cycle, a miss takes 30 clock cycles, and the miss rate is 1%, the effective memory cycle rate is an average of 1 × 0.99 + (1 + 30) × 0.01 = 1.30 (1.30 clock cycles per memory access).
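
In general, the effective cycle count is hit_time × (1 − miss_rate) + (hit_time + miss_penalty) × miss_rate. The following minimal C program checks the figures above:

    #include <stdio.h>

    int main(void)
    {
        double hit_time  = 1.0;    /* cycles for a TLB hit       */
        double penalty   = 30.0;   /* extra cycles on a TLB miss */
        double miss_rate = 0.01;

        /* effective = hit*(1 - p) + (hit + penalty)*p */
        double effective = hit_time * (1.0 - miss_rate)
                         + (hit_time + penalty) * miss_rate;
        printf("effective cycles per access = %.2f\n", effective); /* 1.30 */
        return 0;
    }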

Address space switch

On an address space switch, as occurs on a process switch but not on a thread switch, some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to flush the TLB completely. This means that after a switch the TLB is empty, so any memory reference will be a miss, and it will be some time before things are running back at full speed. Newer CPUs use more effective strategies that mark which process each entry belongs to. This means that if a second process runs for only a short time and jumps back to a first process, the TLB may still have valid entries, saving the time to reload them.[16]

For example, in the Alpha 21264, each TLB entry is tagged with an "address space number" (ASN), and only TLB entries with an ASN matching the current task are considered valid. Another example is the Intel Pentium Pro, in which the page global enable (PGE) flag in the register CR4 and the global (G) flag of a page-directory or page-table entry can be used to prevent frequently used pages from being automatically invalidated in the TLBs on a task switch or a load of register CR3.
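
The effect of such tagging can be shown with a small C sketch; the field widths and the match rule below are illustrative, not the Alpha 21264's actual entry format:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct tlb_entry {
        bool     valid;
        bool     global;  /* like the x86 G flag: matches in every address space */
        uint8_t  asn;     /* address space number of the owning task */
        uint64_t vpn;
        uint64_t pfn;
    };

    /* An entry matches only if it belongs to the current address space (or is
     * global). A context switch just changes the current ASN; stale entries
     * from other tasks stop matching, so no flush is needed. */
    static bool tlb_match(const struct tlb_entry *e,
                          uint64_t vpn, uint8_t current_asn)
    {
        return e->valid && e->vpn == vpn &&
               (e->global || e->asn == current_asn);
    }

    int main(void)
    {
        struct tlb_entry e = { .valid = true, .asn = 7, .vpn = 0x12, .pfn = 0x99 };
        printf("%d\n", tlb_match(&e, 0x12, 7)); /* 1: same address space */
        printf("%d\n", tlb_match(&e, 0x12, 8)); /* 0: different ASN      */
        return 0;
    }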

While selective flushing of the TLB is an option in software-managed TLBs, the only option in some hardware TLBs (for example, the TLB in the Intel 80386) is the complete flushing of the TLB on an address space switch. Other hardware TLBs (for example, the TLB in the Intel 80486 and later x86 processors, and the TLB in ARM processors) allow the flushing of individual entries from the TLB indexed by virtual address.

Virtualization and x86 TLB

With the advent of virtualization for server consolidation, a lot of effort has gone into making the x86 architecture easier to virtualize and to ensure better performance of virtual machines on x86 hardware.[17][18] In a long list of such changes to the x86 architecture, the TLB is the latest.

Normally, entries in the x86 TLBs are not associated with a particular address space; they implicitly refer to the current address space. Hence, every time there is a change in address space, such as a context switch, the entire TLB has to be flushed. Maintaining a tag that associates each TLB entry with an address space in software and comparing this tag during TLB lookup and TLB flush is very expensive, especially since the x86 TLB is designed to operate with very low latency and completely in hardware. In 2008, both Intel (Nehalem)[19] and AMD (SVM)[20] introduced tags as part of the TLB entry and dedicated hardware that checks the tag during lookup. Even though these tags are not yet fully exploited, it is envisioned that in the future they will identify the address space to which every TLB entry belongs. Then a context switch will not result in a TLB flush, but only in changing the tag of the current address space to that of the new task.

References

  1. Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C. (2014), Operating Systems: Three Easy Pieces [Chapter: Faster Translations (TLBs)] (PDF), Arpaci-Dusseau Books
  2. S. Peter Song; Marvin Denman; Joe Chang (1994). "The PowerPC 604 RISC Microprocessor" (PDF). IEEE Micro.
  3. "Operating Systems: Paging" (PPT). dcs.ed.ac.uk. Retrieved 2013-12-11.
  4. Frank Uyeda (2009). "Lecture 7: Memory Management" (PDF). CSE 120: Principles of Operating Systems. UC San Diego. Retrieved 2013-12-04.
  5. Chen, J. Bradley; Borg, Anita; Jouppi, Norman P. (1992). "A Simulation Based Study of TLB Performance". SIGARCH Computer Architecture News (20): 114–123. doi:10.1145/146628.139708.
  6. "Inside Nehalem: Intel's Future Processor and System". Real World Technologies.
  7. "Intel Core i7 (Nehalem): Architecture By AMD?". Tom's Hardware. Retrieved 2010-11-24.
  8. "Inside Nehalem: Intel's Future Processor and System". Real World Technologies. Retrieved 2010-11-24.
  9. J. Smith and R. Nair. Virtual Machines: Versatile Platforms for Systems and Processes (The Morgan Kaufmann Series in Computer Architecture and Design). Morgan Kaufmann Publishers Inc., 2005.
  10. Welsh, Matt. "MIPS r2000/r3000 Architecture". Retrieved 16 November 2008. If no matching TLB entry is found, a TLB miss exception occurs
  11. SPARC International, Inc. The SPARC Architecture Manual, Version 9 (PDF). PTR Prentice Hall.
  12. Sun Microsystems. UltraSPARC Architecture 2005 (PDF). Draft D0.9.2, 19 Jun 2008. Sun Microsystems.
  13. Virtual Memory in the IA-64 Kernel > Translation Lookaside Buffer
  14. Compaq Computer Corporation. Alpha Architecture Handbook (PDF). Version 4. Compaq Computer Corporation.
  15. David A. Patterson; John L. Hennessy (2009). Computer Organization And Design. Hardware/Software interface. 4th edition. Burlington, MA 01803, USA: Morgan Kaufmann Publishers. p. 503. ISBN 978-0-12-374493-7.
  16. Ulrich Drepper (9 October 2014). "Memory part 3: Virtual Memory". LWN.net.
  17. D. Abramson, J. Jackson, S. Muthrasanallur, G. Neiger, G. Regnier, R. Sankaran, I. Schoinas, R. Uhlig, B. Vembu, and J. Wiegert. Intel Virtualization Technology for Directed I/O. Intel Technology Journal, 10(03):179–192.
  18. Advanced Micro Devices. AMD Secure Virtual Machine Architecture Reference Manual. Advanced Micro Devices, 2008.
  19. G. Neiger, A. Santoni, F. Leung, D. Rodgers, and R. Uhlig. Intel Virtualization Technology: Hardware Support for Efficient Processor Virtualization. Intel Technology Journal, 10(3).
  20. Advanced Micro Devices. AMD Secure Virtual Machine Architecture Reference Manual. Advanced Micro Devices, 2008.
