Cache and TLB Flushing Under Linux
This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  Don't be scared into thinking SMP
cache/tlb flushing must be so inefficient; this is in fact an area
where many optimizations are possible.
The "TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
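To make the staleness problem concrete, here is a minimal toy model in C, not kernel code: all names (translate, tlb_flush_all, the array sizes) are illustrative assumptions.  A "TLB" slot caches a translation from the software page table; once the page table changes, the cached entry keeps serving the old value until it is flushed.

```c
#include <stddef.h>

#define TLB_ENTRIES 8

struct tlb_entry { unsigned long va; unsigned long pa; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];
static unsigned long page_table[16];  /* toy software page table: va -> pa */

/* Look up a translation, filling the TLB from the page table on a miss. */
static unsigned long translate(unsigned long va)
{
    size_t slot = va % TLB_ENTRIES;
    if (!tlb[slot].valid || tlb[slot].va != va) {
        tlb[slot].va = va;
        tlb[slot].pa = page_table[va];
        tlb[slot].valid = 1;
    }
    return tlb[slot].pa;  /* may be stale if page_table changed meanwhile */
}

static void tlb_flush_all(void)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++)
        tlb[i].valid = 0;
}
```

After `page_table[va]` is rewritten, `translate(va)` keeps returning the old physical address until `tlb_flush_all()` runs; that is exactly the hazard the flushing interfaces below exist to close.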
After running, this interface must make sure that any previous page table
modifications for the address space 'vma->vm_mm' in the range
'start' to 'end-1' will be visible to the cpu.  That is, after
running, there will be no entries in the TLB for 'mm' for
virtual addresses in the range 'start' to 'end-1'.
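The range semantics above, including the exclusive 'end', can be sketched as a toy model (illustrative names, not the kernel's flush_tlb_range): every cached translation whose virtual address falls in [start, end-1] must be gone afterwards.

```c
#include <stddef.h>

#define TLB_ENTRIES 8
struct tlb_entry { unsigned long va; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

static void tlb_insert(unsigned long va)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++)
        if (!tlb[i].valid) { tlb[i].va = va; tlb[i].valid = 1; return; }
}

static int tlb_has(unsigned long va)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].va == va)
            return 1;
    return 0;
}

/* Invalidate every cached translation in 'start' to 'end-1';
 * 'end' itself is excluded, matching the documented semantics. */
static void toy_flush_tlb_range(unsigned long start, unsigned long end)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].va >= start && tlb[i].va < end)
            tlb[i].valid = 0;
}
```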
The address space is available via vma->vm_mm.  Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction TLB' in
split-tlb type setups).
After running, this interface must make sure that any previous
page table modification for address space 'vma->vm_mm' for
user virtual address 'addr' will be visible to the cpu.  That
is, after running, there will be no entries in the TLB for
'vma->vm_mm' for virtual address 'addr'.
At the end of every page fault, this routine is invoked to tell
the architecture specific code that a translation now exists
in the software page tables for address space "vma->vm_mm"
at virtual address "address".  A port may use this information
in any way it so chooses.  For example, it could use this event
to pre-load TLB translations for software managed TLB
configurations.
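A toy sketch of that pre-loading idea (illustrative names, not the real update_mmu_cache signature): when the fault handler establishes a translation, it hands the event to the arch hook, which seeds the software-managed TLB so the retried access does not immediately miss again.

```c
#include <stddef.h>

#define TLB_ENTRIES 8
struct tlb_entry { unsigned long va, pa; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];
static unsigned long page_table[16];  /* toy software page table */

static int tlb_hit(unsigned long va)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].va == va)
            return 1;
    return 0;
}

/* The arch hook: pre-load the brand-new translation into the TLB. */
static void toy_update_mmu_cache(unsigned long va)
{
    size_t slot = va % TLB_ENTRIES;
    tlb[slot].va = va;
    tlb[slot].pa = page_table[va];
    tlb[slot].valid = 1;
}

static void handle_page_fault(unsigned long va, unsigned long pa)
{
    page_table[va] = pa;        /* establish the software translation */
    toy_update_mmu_cache(va);   /* tell the arch code it now exists */
}
```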
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence takes the form of a cache flush for the old mapping,
the page table change itself, and then a TLB flush.
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.
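The ordering constraint can be modeled in a few lines of toy C (illustrative names; the "strict" behavior stands in for cpus like HyperSparc): flushing a virtual address out of such a cache only works while the translation is still live, so flushing the TLB/page table first would leave a dirty line that can no longer be written back.

```c
static unsigned long page_table[16];  /* 0 = no translation, toy model */
static int cache_dirty[16];           /* one dirty D-cache line per va */

/* Returns 0 on success, -1 if the flush would fault:
 * a "strict" cache needs a live translation to write the line back. */
static int strict_cache_flush(unsigned long va)
{
    if (page_table[va] == 0)
        return -1;
    cache_dirty[va] = 0;  /* line written back to page_table[va] */
    return 0;
}

/* The page table change plus TLB flush, lumped together. */
static void pte_clear_and_flush_tlb(unsigned long va)
{
    page_table[va] = 0;
}
```

Doing the cache flush first succeeds; attempting it after the translation is gone fails, which is why the cache level flush always comes first.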
The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.
These interfaces (flush_cache_mm, and flush_cache_dup_mm for the
fork case) flush an entire user address space from
the caches.  That is, after running, there will be no cache
lines associated with 'mm'.
Here we are flushing a specific range of virtual
addresses from the cache.  After running, there will be no
entries in the cache for 'vma->vm_mm' for virtual addresses in
the range 'start' to 'end-1'.
The interface is provided in hopes that the port can find a
suitably efficient method for removing multiple page
sized regions from the cache, instead of having the kernel
call flush_cache_page (see below) for each entry which may be
modified.
This time we need to remove a PAGE_SIZE sized range
from the cache.  The 'vma' is the backing structure used by
Linux to keep track of mmap'd regions for a process; the
address space is available via vma->vm_mm.  Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction cache' in
"Harvard" type cache layouts).
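The VM_EXEC test mentioned above is a plain flag check. A minimal sketch, assuming a toy vma structure and an illustrative flag value (the kernel's actual VM_EXEC constant lives in linux/mm.h):

```c
#include <stdbool.h>

#define VM_EXEC 0x4UL  /* illustrative flag bit, not the kernel header */

struct toy_vma { unsigned long vm_flags; };

/* On a "Harvard" layout, only executable regions can have lines
 * sitting in the instruction cache, so only they need an I-flush. */
static bool needs_icache_flush(const struct toy_vma *vma)
{
    return (vma->vm_flags & VM_EXEC) != 0;
}
```

A port with split I/D caches can use this test to skip the (often expensive) instruction cache flush for plain data mappings.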
The 'pfn' indicates the physical page frame that 'addr'
translates to; it is this mapping which should be removed from
the cache.  After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'addr' which translates
to 'pfn'.
After running, there will be no entries in the cache for
the kernel virtual address range PKMAP_ADDR(0) to
PKMAP_ADDR(LAST_PKMAP).
Here in these two interfaces we are flushing a specific range
of (kernel) virtual addresses from the cache.  After running,
there will be no entries in the cache for the kernel address
space for virtual addresses in the range 'start' to 'end-1'.
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).
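The rule above reduces to simple arithmetic, sketched here as a toy model (the function names and example geometries are illustrative): one "way" of the cache, size divided by associativity, is the virtually indexed footprint; aliasing is possible when it exceeds PAGE_SIZE, and mappings placed a multiple of SHMLBA apart then land on the same cache lines ("color").

```c
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* One way of the cache: the virtually indexed footprint. */
static unsigned long way_size(unsigned long cache_size, unsigned long assoc)
{
    return cache_size / assoc;
}

static bool dcache_can_alias(unsigned long cache_size, unsigned long assoc)
{
    return way_size(cache_size, assoc) > PAGE_SIZE;
}

/* Two virtual addresses index the same cache lines iff they agree
 * modulo the way size; SHMLBA alignment guarantees that for shared
 * mappings of the same object. */
static bool same_cache_color(unsigned long va1, unsigned long va2,
                             unsigned long shmlba)
{
    return (va1 % shmlba) == (va2 % shmlba);
}
```

For example, a 16K direct-mapped D-cache with 4K pages can alias, while the same 16K cache at 4-way associativity (4K per way) cannot.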
Next, you have to solve the D-cache aliasing issue for all
other cases.  Keep in mind that for a given page mapped into
some user address space there is always at least one more
mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist, since the kernel
already maps this page at its virtual address.

The copy_user_page() and clear_user_page() routines store data in
user anonymous or COW pages.  They allow a port to efficiently
avoid D-cache alias issues between userspace and the kernel.

If D-cache aliasing is not an issue, these two routines may
simply call memcpy/memset directly and do nothing more.
This routine must be called when:

  a) the kernel did write to a page that is in the page cache
     and/or in high memory
  b) the kernel is about to read from a page cache page and user space
     shared/writable mappings of this page potentially exist.

This routine need only be called for page cache pages
which can potentially ever be mapped into the address
space of a user process.  So, for example, VFS layer code
handling vfs symlinks in the page cache need not call
this interface at all.

The phrase "kernel writes to a page cache page" means, specifically,
that the kernel executes store instructions that dirty data in that
page at the kernel virtual mapping of that page.  It is important to
flush here to handle D-cache aliasing, to make sure these kernel stores
are visible to user space mappings of that page.
If D-cache aliasing is not an issue, this routine may simply be defined
as a nop on that architecture.

There is a bit set aside in folio->flags (PG_arch_1) as "architecture
private".  The kernel guarantees that, for pagecache pages, it will
clear this bit when such a page first enters the pagecache.  This
allows these flushes to be implemented more efficiently, deferring
the actual flush until the page is really mapped into user space.
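A toy model of that deferred-flush trick (the flag value, structure, and helper names are all illustrative, not the kernel's): repeated kernel writes just set the "needs flushing" bit, and the single unavoidable flush happens only when the page is mapped into user space.

```c
#include <stdbool.h>

#define PG_ARCH_1 0x1UL  /* toy stand-in for the architecture-private bit */

struct toy_folio { unsigned long flags; };

static int flush_count;  /* how many real D-cache flushes we performed */

/* Defer: instead of flushing on every store, just mark the folio. */
static void kernel_wrote_to(struct toy_folio *f)
{
    f->flags |= PG_ARCH_1;
}

/* Pay the cost once, only when user space can actually see the page. */
static void map_into_user_space(struct toy_folio *f)
{
    if (f->flags & PG_ARCH_1) {
        flush_count++;
        f->flags &= ~PG_ARCH_1;
    }
}
```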
Any necessary cache flushing or other coherency operations
that need to occur should happen here.  If the processor's
instruction cache does not snoop cpu stores, it is very
likely that you will need to flush the instruction cache
for copy_to_user_page().
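A toy sketch of what "does not snoop cpu stores" means (illustrative model, not real hardware): once the I-cache has fetched a line, later stores to the backing memory are invisible to execution until the I-cache is explicitly flushed.

```c
#include <string.h>

static unsigned char memory[16];  /* backing "physical" memory */
static unsigned char icache[16];  /* copy the cpu actually executes from */
static int icache_valid;

/* What the cpu executes: filled from memory only on a cold I-cache. */
static unsigned char fetch_insn(int i)
{
    if (!icache_valid) {
        memcpy(icache, memory, sizeof(icache));
        icache_valid = 1;
    }
    return icache[i];
}

/* A kernel store to instruction memory; the I-cache does NOT snoop it. */
static void kernel_store_insn(int i, unsigned char byte)
{
    memory[i] = byte;
}

static void toy_flush_icache(void)
{
    icache_valid = 0;
}
```

This is exactly the module-load / ptrace scenario: after writing code bytes, the instruction cache must be flushed before the new code is executed.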
For incoherent architectures, it should flush
the cache of the page at vmaddr.
flush_kernel_vmap_range() flushes the kernel cache for a given
virtual address range in the vmap area.  This is to make sure
that any data the kernel modified in the vmap range is made
visible to the physical page.
invalidate_kernel_vmap_range() invalidates
the cache for a given virtual address range in the vmap area,
which prevents the processor from making the cache stale by
speculatively reading data while I/O was occurring to the
physical pages.