| /Documentation/devicetree/bindings/cache/ |
| D | freescale-l2cache.txt | Freescale L2 Cache Controller … L2 cache is present in Freescale's QorIQ and QorIQ Qonverge platforms. The cache bindings explained below are Devicetree Specification compliant … "fsl,b4420-l2-cache-controller" "fsl,b4860-l2-cache-controller" "fsl,bsc9131-l2-cache-controller" "fsl,bsc9132-l2-cache-controller" "fsl,c293-l2-cache-controller" "fsl,mpc8536-l2-cache-controller" "fsl,mpc8540-l2-cache-controller" [all …]
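A minimal sketch of how one of these compatibles could be used in a board devicetree; the unit address, reg window, cache-size and interrupt specifier are illustrative assumptions rather than values taken from the binding:

    L2: l2-cache-controller@20000 {
            compatible = "fsl,mpc8536-l2-cache-controller";
            reg = <0x20000 0x1000>;     /* controller registers, illustrative */
            cache-size = <0x80000>;     /* 512 KiB, illustrative */
            interrupts = <16 2>;        /* error interrupt, illustrative */
    };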
|
| D | socionext,uniphier-system-cache.yaml | $id: http://devicetree.org/schemas/cache/socionext,uniphier-system-cache.yaml# … title: UniPhier outer cache controller … UniPhier ARM 32-bit SoCs are integrated with a full-custom outer cache controller system. All of them have a level 2 cache controller, and some have a level 3 cache controller as well. … const: socionext,uniphier-system-cache … Interrupts can be used to notify the completion of cache operations. … cache-unified: true … cache-size: true … cache-sets: true [all …]
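To show where the properties above fit, here is a hypothetical level 2 node; the register range, interrupt specifiers and geometry values are invented for illustration, and the schema itself defines what is actually required:

    l2: cache-controller@500c0000 {
            compatible = "socionext,uniphier-system-cache";
            reg = <0x500c0000 0x2000>;              /* illustrative */
            interrupts = <0 174 4>, <0 175 4>;      /* operation-completion interrupts, illustrative */
            cache-unified;
            cache-level = <2>;
            cache-size = <0x80000>;                 /* illustrative */
            cache-sets = <256>;                     /* illustrative */
    };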
|
| D | starfive,jh8100-starlink-cache.yaml | $id: http://devicetree.org/schemas/cache/starfive,jh8100-starlink-cache.yaml# … title: StarFive StarLink Cache Controller … StarFive's StarLink Cache Controller manages the L3 cache shared between clusters of CPU cores. The cache driver enables RISC-V non-standard cache … - $ref: /schemas/cache-controller.yaml# … # We need a select here so we don't match all nodes with 'cache' … - starfive,jh8100-starlink-cache … - const: starfive,jh8100-starlink-cache - const: cache … - cache-block-size [all …]
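A conforming node might look roughly like the sketch below; the unit address, reg cells, block size and set count are assumptions for illustration only:

    cache-controller@15000000 {
            compatible = "starfive,jh8100-starlink-cache", "cache";
            reg = <0x15000000 0x278>;   /* illustrative; cell count depends on the parent bus */
            cache-block-size = <64>;    /* illustrative */
            cache-level = <3>;          /* shared L3, per the description above */
            cache-sets = <2048>;        /* illustrative */
            cache-unified;
    };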
|
| D | andestech,ax45mp-cache.yaml | $id: http://devicetree.org/schemas/cache/andestech,ax45mp-cache.yaml# … title: Andestech AX45MP L2 Cache Controller … A level-2 cache (L2C) is used to improve the system performance by providing a large amount of cache line entries and reasonable access delays. The L2C … - andestech,ax45mp-cache … - const: andestech,ax45mp-cache - const: cache … cache-line-size: … cache-level: … cache-sets: [all …]
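A rough sketch of an AX45MP L2C node using the properties named in this schema; the register window, interrupt and cache geometry are made-up illustrative values:

    cache-controller@2010000 {
            compatible = "andestech,ax45mp-cache", "cache";
            reg = <0x2010000 0x1000>;   /* illustrative */
            interrupts = <4>;           /* illustrative */
            cache-line-size = <64>;     /* illustrative */
            cache-level = <2>;
            cache-sets = <1024>;        /* illustrative */
            cache-unified;
    };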
|
| D | sifive,ccache0.yaml | $id: http://devicetree.org/schemas/cache/sifive,ccache0.yaml# … title: SiFive Composable Cache Controller … The SiFive Composable Cache Controller is used to provide access to fast copies of memory for masters in a Core Complex. The Composable Cache Controller also … - const: cache … - const: cache … - const: cache … cache-block-size: … cache-level: … cache-sets: [all …]
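As a sketch, a node under this schema pairs an SoC-specific compatible with the generic "cache" string; the SoC compatible, addresses and geometry below are assumptions chosen only to illustrate the shape of the node:

    cache-controller@2010000 {
            compatible = "sifive,fu540-c000-ccache", "cache";   /* SoC-specific string assumed */
            reg = <0x2010000 0x1000>;   /* illustrative */
            interrupts = <1>, <2>, <3>; /* illustrative */
            cache-block-size = <64>;    /* illustrative */
            cache-level = <2>;
            cache-sets = <1024>;        /* illustrative */
            cache-unified;
    };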
|
| D | l2c2x0.yaml | $id: http://devicetree.org/schemas/cache/l2c2x0.yaml# … title: ARM L2 Cache Controller … PL220/PL310 and variants) based level 2 cache controller. All these various implementations of the L2 cache controller have compatible programming models (Note 1). Some of the properties that are just prefixed "cache-*" are … cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These … - $ref: /schemas/cache-controller.yaml# … - arm,pl310-cache - arm,l220-cache - arm,l210-cache [all …]
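The classic outer cache node for these controllers looks roughly like this; the base address and interrupt are board specific and purely illustrative:

    L2: cache-controller@fff12000 {
            compatible = "arm,pl310-cache";
            reg = <0xfff12000 0x1000>;  /* illustrative base address */
            interrupts = <0 44 4>;      /* combined L2C interrupt, illustrative */
            cache-unified;
            cache-level = <2>;
    };

The generic cache-unified and cache-level properties come from cache-controller.yaml, which this schema references.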
|
| D | marvell,feroceon-cache.txt | * Marvell Feroceon Cache … - compatible : Should be either "marvell,feroceon-cache" or "marvell,kirkwood-cache". … - reg : Address of the L2 cache control register. Mandatory for "marvell,kirkwood-cache", not used by "marvell,feroceon-cache" … l2: l2-cache@20128 { compatible = "marvell,kirkwood-cache";
|
| D | marvell,tauros2-cache.txt | * Marvell Tauros2 Cache … - compatible : Should be "marvell,tauros2-cache". - marvell,tauros2-cache-features : Specify the features supported for the tauros2 cache. … arch/arm/include/asm/hardware/cache-tauros2.h … L2: l2-cache { compatible = "marvell,tauros2-cache"; marvell,tauros2-cache-features = <0x3>;
|
| /Documentation/filesystems/caching/ |
| D | backend-api.rst | Cache Backend API … The FS-Cache system provides an API by which actual caches can be supplied to FS-Cache for it to then serve out to network filesystems and other interested … #include <linux/fscache-cache.h>. … Interaction with the API is handled on three levels: cache, volume and data … Cache cookie: struct fscache_cache … Cookies are used to provide some filesystem data to the cache, manage state and pin the cache during access in addition to acting as reference points for the … The cache backend and the network filesystem can both ask for cache cookies - … Cache Cookies [all …]
|
| D | cachefiles.rst | Cache on Already Mounted Filesystem … (*) Starting the cache. … (*) Cache culling. … (*) Cache structure. … CacheFiles is a caching backend that's meant to use as a cache a directory on … CacheFiles uses a userspace daemon to do some of the cache management - such as … The filesystem and data integrity of the cache are only as good as those of the … and while it is open, a cache is at least partially in existence. The daemon opens this and sends commands down it to control the cache. … CacheFiles is currently limited to a single cache. [all …]
|
| /Documentation/devicetree/bindings/cpufreq/ |
| D | cpufreq-qcom-hw.yaml | next-level-cache = <&L2_0>; … L2_0: l2-cache { compatible = "cache"; cache-unified; cache-level = <2>; next-level-cache = <&L3_0>; L3_0: l3-cache { compatible = "cache"; cache-unified; cache-level = <3>; [all …]
|
| /Documentation/admin-guide/device-mapper/ |
| D | cache.rst | Cache … dm-cache is a device mapper target written by Joe Thornber, Heinz … may be out of date or kept in sync with the copy on the cache device … 2. A cache device - the small, fast one. … 3. A small metadata device - records which blocks are in the cache, … This information could be put on the cache device, but having it … be used by a single cache device. … is configurable when you first create the cache. Typically we've been … getting hit a lot, yet the whole block will be promoted to the cache. So large block sizes are bad because they waste cache space. And small [all …]
|
| D | writecache.rst | doesn't cache reads because reads are supposed to be cached in page cache … 1. type of the cache device - "p" or "s" … 3. the cache device … offset from the start of cache device in 512-byte sectors … arguments or by a message), the cache will not promote … a block is stored in the cache for too long, it will be … only metadata is promoted to the cache. This option … 6. the number of read blocks that hit the cache … 10. the number of write blocks that bypass the cache 11. the number of write blocks that are allocated in the cache [all …]
|
| D | cache-policies.rst | Overview of supplied cache replacement policies … DM table that is using the cache target. Doing so will cause all of the mq policy's hints to be dropped. Also, performance of the cache may … The mq policy used a lot of memory; 88 bytes per cache block on a 64 … has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of … cache block). … All this means smq uses ~25 bytes per cache block. Still a lot of … The mq policy maintained a hit count for each cache block. For a different block to get promoted to the cache its hit count has to exceed the lowest currently in the cache. This meant it could take a [all …]
|
| /Documentation/driver-api/md/ |
| D | raid5-cache.rst | RAID 4/5/6 cache … Raid 4/5/6 could include an extra disk for data cache besides normal RAID disks. The role of RAID disks isn't changed with the cache disk. The cache disk caches data to the RAID disks. The cache can be in write-through (supported … 3.4) has a new option '--write-journal' to create array with cache. Please refer to mdadm manual for details. By default (RAID array starts), the cache is … In both modes, all writes to the array will hit cache disk first. This means the cache disk must be fast and sustainable. … The write-through cache will cache all data on cache disk first. After the data is safe on the cache disk, the data will be flushed onto RAID disks. The [all …]
|
| /Documentation/ABI/testing/ |
| D | sysfs-block-bcache | A write to this file causes the backing device or cache to be unregistered. If a backing device had dirty data in the cache, … What: /sys/block/<disk>/bcache/cache … For a backing device that has cache, a symlink to the bcache/ dir of that cache. … For backing devices: integer number of full cache hits, counted per bio. A partial cache hit counts as a miss. … For backing devices: integer number of cache misses. … For backing devices: cache hits as a percentage. … skip the cache. Read and written as bytes in human readable [all …]
|
| D | sysfs-kernel-slab | internal state of the SLUB allocator for each cache. Certain files may be modified to change the behavior of the cache (and any cache it aliases, if any). … What: /sys/kernel/slab/<cache>/aliases … have merged into this cache. … What: /sys/kernel/slab/<cache>/align … The align file is read-only and specifies the cache's object … What: /sys/kernel/slab/<cache>/alloc_calls … locations from which allocations for this cache were performed. … enabled for that cache (see Documentation/mm/slub.rst). [all …]
|
| /Documentation/filesystems/nfs/ |
| D | rpc-cache.rst | RPC Cache … - general cache lookup with correct locking … - allowing an EXPIRED time on cache items, and removing … - making requests to user-space to fill in cache entries - allowing user-space to directly set entries in the cache … cache entries, and replaying those requests when the cache entry … Creating a Cache … - A cache needs a datum to store. This is in the form of a … Each cache element is reference counted and contains expiry and update times for use in cache management. [all …]
|
| /Documentation/admin-guide/ |
| D | bcache.rst | A block layer cache (bcache) … nice if you could use them as cache... Hence bcache. … Writeback caching can use most of the cache for buffering writes - writing … thus entirely bypass the cache. … from disk or invalidating cache entries. For unrecoverable errors (meta data … in the cache it first disables writeback caching and waits for all dirty data … You'll need bcache util from the bcache-tools repository. Both the cache device … you format your backing devices and cache device at the same time, you won't … device, it'll be running in passthrough mode until you attach it to a cache. … slow devices as bcache backing devices without a cache, and you can choose to add [all …]
|
| /Documentation/driver-api/firmware/ |
| D | firmware_cache.rst | Firmware cache … infrastructure implements a firmware cache for device drivers for most API … The firmware cache makes using certain firmware API calls safe during a device driver's suspend and resume callback. Users of these API calls needn't cache … The firmware cache works by requesting firmware prior to suspend and … Some implementation details about the firmware cache setup: … * The firmware cache is set up by adding a devres entry for each device that … * If an asynchronous call is used the firmware cache is only set up for a … * If the firmware cache is determined to be needed as per the above two criteria, the firmware cache is set up by adding a devres entry for the [all …]
|
| /Documentation/core-api/ |
| D | cachetlb.rst | Cache and TLB Flushing Under Linux … This document describes the cache/tlb flushing interfaces called … thinking SMP cache/tlb flushing must be so inefficient, this is in … "TLB" is abstracted under Linux as something the cpu uses to cache … possible for stale translations to exist in this "TLB" cache. … Next, we have the cache flushing interfaces. In general, when Linux … The cache level flush will always be first, because this allows … when that virtual address is flushed from the cache. The HyperSparc … The cache flushing routines below need only deal with cache flushing … the caches. That is, after running, there will be no cache [all …]
|
| /Documentation/devicetree/bindings/powerpc/fsl/ |
| D | pamu.txt | - fsl,primary-cache-geometry … cache. The first is the number of cache lines, and the … - fsl,secondary-cache-geometry … cache. The first is the number of cache lines, and the … best LIODN values to minimize PAMU cache thrashing. … fsl,primary-cache-geometry = <32 1>; fsl,secondary-cache-geometry = <128 2>; … fsl,primary-cache-geometry = <32 1>; fsl,secondary-cache-geometry = <128 2>; … fsl,primary-cache-geometry = <32 1>; [all …]
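For context, the sketch below shows where these geometry properties typically sit, inside per-PAMU sub-nodes of the IOMMU node; the compatible strings, register ranges and cell meanings (cache lines first, then presumably ways) are assumptions for illustration:

    iommu@20000 {
            compatible = "fsl,pamu-v1.0", "fsl,pamu";   /* compatibles assumed for illustration */
            reg = <0x20000 0x5000>;                     /* illustrative */
            #address-cells = <1>;
            #size-cells = <1>;
            ranges = <0 0x20000 0x5000>;

            pamu0: pamu@0 {
                    reg = <0 0x1000>;
                    fsl,primary-cache-geometry = <32 1>;    /* 32 lines, 1 way (assumed meaning) */
                    fsl,secondary-cache-geometry = <128 2>; /* 128 lines, 2 ways (assumed meaning) */
            };
    };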
|
| /Documentation/admin-guide/mm/ |
| D | numaperf.rst | as CPU cache coherence, but may have different performance. For example, … NUMA Cache … higher performing memory to transparently cache access to progressively … hierarchy. Each increasing cache level provides higher performing … cache provided by the system. … This numbering is different than CPU caches where the cache level (ex: … performing. In contrast, the memory cache level is centric to the last level memory, so the higher numbered cache level corresponds to memory … near memory cache if it is present. If it is not present, the system … cache level, or it reaches far memory. [all …]
|
| /Documentation/filesystems/ |
| D | fuse-io.rst | + writeback-cache … In direct-io mode the page cache is completely bypassed for reads and writes. … In cached mode reads may be satisfied from the page cache, and data may be read-ahead by the kernel to fill the cache. The cache is always kept consistent … writeback-cache mode may be selected by the FUSE_WRITEBACK_CACHE flag in the … In writeback-cache mode (enabled by the FUSE_WRITEBACK_CACHE flag) writes go to the cache only, which means that the write(2) syscall can often complete very [all …]
|
| /Documentation/kernel-hacking/ |
| D | false-sharing.rst | False sharing is related to the cache mechanism that maintains the data coherence of one cache line stored in multiple CPUs' caches; then … Member 'refcount'(A) and 'name'(B) _share_ one cache line like below:: … (diagram: the cache line holding A and B is replicated in Cache 0 and Cache 1) … reload the whole cache line over and over due to the 'sharing', even … mm_struct struct, whose cache line layout change triggered a … members could be purposely put in the same cache line to make them cache hot and save cacheline/TLB, like a lock and the data protected … purposely put in one cache line. … * global data being put together in one cache line. Some kernel [all …]
|