                Cache and TLB Flushing
                     Under Linux

            David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface, describes
its intended purpose, and the side effect that is expected after
the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension: just extend the definition so that
the side effect for a particular interface occurs on all processors
in the system.  Don't let this scare you into thinking SMP cache/tlb
flushing must be inefficient; this is in fact an area where many
optimizations are possible.  For example, if it can be proven that a
user address space has never executed on a cpu (see mm->cpu_vm_mask),
one need not perform a flush for this address space on that cpu.

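As a sketch of that optimization (assuming a 2.4-era kernel, where
cpu_vm_mask is a plain bitmask in struct mm_struct; the per-cpu and
IPI helpers are made-up names), an SMP port's flush_tlb_mm might
begin like this:

        /* Sketch only: interrupt just the cpus 'mm' has run on. */
        void flush_tlb_mm(struct mm_struct *mm)
        {
                unsigned long cpus = mm->cpu_vm_mask;

                local_flush_tlb_mm(mm);            /* this cpu */
                cpus &= ~(1UL << smp_processor_id());
                if (cpus)
                        tlb_flush_cross_call(cpus, mm); /* hypothetical IPI */
        }
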
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork, and exec.

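        As a sketch (assuming a cpu with per-address-space context
        numbers; the helper names are made up), a port can often
        satisfy this interface without touching the TLB at all, by
        retiring the address space's current context:

        /* Sketch only: invalidate every TLB entry for 'mm' by giving
         * the address space a fresh context number. */
        void flush_tlb_mm(struct mm_struct *mm)
        {
                if (mm == current->active_mm)
                        get_new_mmu_context(mm);   /* hypothetical */
                else
                        mm->context = NO_CONTEXT;  /* assign lazily later */
        }
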
3) void flush_tlb_range(struct mm_struct *mm,
                        unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'mm' in the range 'start'
        to 'end' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm' for virtual
        addresses in the range 'start' to 'end'.

        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified.

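        One common approach (a sketch; the per-entry primitive and
        the cutoff value below are made up) is to flush page by page
        for small ranges and fall back to a full address-space flush
        when that would be cheaper:

        void flush_tlb_range(struct mm_struct *mm,
                             unsigned long start, unsigned long end)
        {
                if ((end - start) >> PAGE_SHIFT > 64) { /* arbitrary cutoff */
                        flush_tlb_mm(mm);
                } else {
                        unsigned long addr;

                        for (addr = start; addr < end; addr += PAGE_SIZE)
                                flush_tlb_one(mm->context, addr); /* hypothetical */
                }
        }
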
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any previous
        page table modification for address space 'vma->vm_mm' for
        user virtual address 'page' will be visible to the cpu.  That
        is, after running, there will be no entries in the TLB for
        'vma->vm_mm' for virtual address 'page'.

        This is used primarily during fault processing.

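        The VM_EXEC test mentioned above might be used like this (a
        sketch for a split I/D TLB; the flush primitives are made up):

        void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
        {
                struct mm_struct *mm = vma->vm_mm;

                flush_dtlb_entry(mm->context, page);        /* hypothetical */
                if (vma->vm_flags & VM_EXEC)
                        flush_itlb_entry(mm->context, page); /* hypothetical */
        }
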
5) void flush_tlb_pgtables(struct mm_struct *mm,
                           unsigned long start, unsigned long end)

        The software page tables for address space 'mm' for virtual
        addresses in the range 'start' to 'end' are being torn down.

        Some platforms cache the lowest level of the software page
        tables in a linear virtually mapped array, to make TLB miss
        processing more efficient.  On such platforms, since the TLB
        is caching the software page table structure, it needs to be
        flushed when parts of the software page table tree are
        unlinked/freed.

        Sparc64 is one example of a platform which does this.

        Usually, when munmap()'ing an area of user virtual address
        space, the kernel leaves the page table parts around and just
        marks the individual pte's as invalid.  However, if very large
        portions of the address space are unmapped, the kernel frees up
        those portions of the software page tables to prevent potential
        excessive kernel memory usage caused by erratic mmap/munmap
        sequences.  It is at these times that flush_tlb_pgtables will
        be invoked.

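        Ports which do not cache the page table structure in such a
        virtually mapped array have nothing to do here; a sketch of
        the usual no-op definition:

        /* No linearly mapped page-table cache on this port: nothing
         * to flush when page tables are torn down. */
        #define flush_tlb_pgtables(mm, start, end)      do { } while (0)
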
6) void update_mmu_cache(struct vm_area_struct *vma,
                         unsigned long address, pte_t pte)

        At the end of every page fault, this routine is invoked to
        tell the architecture specific code that a translation
        described by "pte" now exists at virtual address "address"
        for address space "vma->vm_mm", in the software page tables.

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations.
        The sparc64 port currently does this.

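        As a sketch of that pre-loading idea (the loader primitive is
        made up), a software-TLB port might do:

        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte)
        {
                /* Install the new translation now so the access that
                 * faulted does not immediately take a TLB miss. */
                if (vma->vm_mm == current->active_mm)
                        load_tlb_entry(vma->vm_mm->context,
                                       address, pte);  /* hypothetical */
        }
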
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(mm, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(mm, start, end);

        3) flush_cache_page(vma, page);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, page);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
is one cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) void flush_cache_all(void)

        The most severe flush of all.  After this interface runs,
        the entire cpu cache is flushed.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

2) void flush_cache_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork, exit, and exec.

3) void flush_cache_range(struct mm_struct *mm,
                          unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        addresses from the cache.  After running, there will be no
        entries in the cache for 'mm' for virtual addresses in the
        range 'start' to 'end'.

        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized regions from the cache, instead of having the kernel
        call flush_cache_page (see below) for each entry which may be
        modified.

4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)

        This time we need to remove a PAGE_SIZE sized range
        from the cache.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction cache' in
        "Harvard" type cache layouts).

        After running, there will be no entries in the cache for
        'vma->vm_mm' for virtual address 'page'.

        This is used primarily during fault processing.

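As noted above, a port whose caches are physically indexed and
physically tagged (IA32, for example) has nothing to do in any of
these four routines; a sketch of the usual no-op definitions:

        /* Physically indexed, physically tagged caches: no flushing
         * is needed when translations change. */
        #define flush_cache_all()                       do { } while (0)
        #define flush_cache_mm(mm)                      do { } while (0)
        #define flush_cache_range(mm, start, end)       do { } while (0)
        #define flush_cache_page(vma, page)             do { } while (0)
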
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define SHMLBA in
asm/shmparam.h properly; it should essentially be the size of your
virtually addressed D-cache (or if the size is variable, the largest
possible size).  This setting will force the SYSv IPC layer to only
allow user processes to mmap shared memory at addresses which are a
multiple of this value.

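For example (a sketch, with a made-up cache size), a port with a
16KB virtually indexed D-cache and 4KB pages would use:

        /* asm/shmparam.h: shared mappings must be aligned to the
         * virtually indexed D-cache size so that every mapping of a
         * page has the same cache "color". */
        #define SHMLBA  0x4000  /* 16KB D-cache */
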
NOTE: This does not fix shared mmaps; check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have two methods to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

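Concretely (an illustrative sketch, not an interface), two virtual
mappings of the same physical page can conflict in a virtually
indexed D-cache only when they differ in the index bits above the
page offset, i.e. when they have different "colors":

        /* With SHMLBA set to the D-cache size as described above,
         * two mappings of a page alias harmlessly iff their colors
         * match. */
        #define CACHE_COLOR(vaddr) \
                (((vaddr) & (SHMLBA - 1)) >> PAGE_SHIFT)
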
First, I describe the old method to deal with this problem.  I am
describing it for documentation purposes, but it is deprecated; the
latter method I describe next should be used by all new ports and all
existing ports should move over to the new mechanism as well.

  flush_page_to_ram(struct page *page)

        The physical page 'page' is about to be placed into the
        user address space of a process.  If it is possible for
        stores done recently by the kernel into this physical
        page to not be visible to an arbitrary mapping in userspace,
        you must flush this page from the D-cache.

        If the D-cache is writeback in nature, the dirty data (if
        any) for this physical page must be written back to main
        memory before the cache lines are invalidated.

Admittedly, the author did not think very much when designing this
interface.  It does not give the architecture enough information about
what exactly is going on, and there is no context to base a judgment
on about whether an alias is possible at all.  The new interfaces to
deal with D-cache aliasing are meant to address this by telling the
architecture specific code exactly what is going on at the proper
points in time.

Here is the new interface:

  void copy_user_page(void *to, void *from, unsigned long address)
  void clear_user_page(void *to, unsigned long address)

        These two routines store data in user anonymous or COW
        pages.  They allow a port to efficiently avoid D-cache alias
        issues between userspace and the kernel.

        For example, a port may temporarily map 'from' and 'to' to
        kernel virtual addresses during the copy.  The virtual address
        for these two pages is chosen in such a way that the kernel
        load/store instructions happen to virtual addresses which are
        of the same "color" as the user mapping of the page.  Sparc64,
        for example, uses this technique.

        The "address" parameter tells the virtual address where the
        user will ultimately have this page mapped.

        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.

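        A sketch of the trivial definitions for a port with no
        D-cache aliasing, doing exactly what the paragraph above
        describes:

        static inline void copy_user_page(void *to, void *from,
                                          unsigned long address)
        {
                memcpy(to, from, PAGE_SIZE);    /* no alias concerns */
        }

        static inline void clear_user_page(void *to, unsigned long address)
        {
                memset(to, 0, PAGE_SIZE);       /* no alias concerns */
        }
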
  void flush_dcache_page(struct page *page)

        Any time the kernel writes to a page cache page, _OR_
        the kernel is about to read from a page cache page and
        user space shared/writable mappings of this page potentially
        exist, this routine is called.

        NOTE: This routine need only be called for page cache pages
              which can potentially ever be mapped into the address
              space of a user process.  So for example, VFS layer code
              handling vfs symlinks in the page cache need not call
              this interface at all.

        The phrase "kernel writes to a page cache page" means,
        specifically, that the kernel executes store instructions
        that dirty data in that page at the page->virtual mapping
        of that page.  It is important to flush here to handle
        D-cache aliasing, to make sure these kernel stores are
        visible to user space mappings of that page.

        The corollary case is just as important: if there are users
        which have shared+writable mappings of this file, we must make
        sure that kernel reads of these pages will see the most recent
        stores done by the user.

        If D-cache aliasing is not an issue, this routine may
        simply be defined as a nop on that architecture.

  310 
  311         There is a bit set aside in page->flags (PG_arch_1) as
  312         "architecture private".  The kernel guarantees that,
  313         for pagecache pages, it will clear this bit when such
  314         a page first enters the pagecache.
  315 
  316         This allows these interfaces to be implemented much more
  317         efficiently.  It allows one to "defer" (perhaps indefinitely)
  318         the actual flush if there are currently no user processes
  319         mapping this page.  See sparc64's flush_dcache_page and
  320         update_mmu_cache implementations for an example of how to go
  321         about doing this.
  322 
  323         The idea is, first at flush_dcache_page() time, if
  324         page->mapping->i_mmap{,_shared} are empty lists, just mark the
  325         architecture private page flag bit.  Later, in
  326         update_mmu_cache(), a check is made of this flag bit, and if
  327         set the flush is done and the flag bit is cleared.
  328 
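        A sketch of that deferred-flush idea (modeled loosely on the
        sparc64 approach; the low-level flush helper is made up, and
        i_mmap/i_mmap_shared are treated as the plain pointers they
        were in 2.4-era kernels):

        void flush_dcache_page(struct page *page)
        {
                if (page->mapping &&
                    !page->mapping->i_mmap &&
                    !page->mapping->i_mmap_shared) {
                        /* No user mappings yet: defer the flush. */
                        set_bit(PG_arch_1, &page->flags);
                        return;
                }
                __flush_dcache_page(page_address(page)); /* hypothetical */
        }

        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte)
        {
                struct page *page = pte_page(pte);

                /* Perform the flush we deferred above, if any. */
                if (VALID_PAGE(page) &&
                    test_and_clear_bit(PG_arch_1, &page->flags))
                        __flush_dcache_page(page_address(page));
        }
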
        IMPORTANT NOTE: It is often important, if you defer the flush,
                        that the actual flush occurs on the same CPU
                        that did the stores into the page making it
                        dirty.  Again, see sparc64 for examples of how
                        to deal with this.

  void flush_icache_range(unsigned long start, unsigned long end)

        When the kernel stores into addresses that it will execute
        out of (e.g. when loading modules), this function is called.

        If the icache does not snoop stores then this routine will need
        to flush it.

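        A typical caller-side sketch, flushing after the kernel has
        written instructions it is about to run (the destination
        buffer here is illustrative):

        memcpy(code_dst, code_src, len);        /* store instructions */
        flush_icache_range((unsigned long)code_dst,
                           (unsigned long)code_dst + len);
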
  void flush_icache_user_range(struct vm_area_struct *vma,
                        struct page *page, unsigned long addr, int len)

        This is called when the kernel stores into addresses that are
        part of the address space of a user process (which may be some
        other process than the current process).  The addr argument
        gives the virtual address in that process's address space,
        page is the page which is being modified, and len indicates
        how many bytes have been modified.  The modified region must
        not cross a page boundary.  Currently this is only called from
        kernel/ptrace.c.

  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

        This is called when a page-cache page is about to be mapped
        into a user process' address space.  It offers an opportunity
        for a port to ensure d-cache/i-cache coherency if necessary.
