/Documentation/devicetree/bindings/powerpc/fsl/ |
D | l2cache.txt |
       1  Freescale L2 Cache Controller
       3  L2 cache is present in Freescale's QorIQ and QorIQ Qonverge platforms.
       4  The cache bindings explained below are Devicetree Specification compliant
       9      "fsl,8540-l2-cache-controller"
      10      "fsl,8541-l2-cache-controller"
      11      "fsl,8544-l2-cache-controller"
      12      "fsl,8548-l2-cache-controller"
      13      "fsl,8555-l2-cache-controller"
      14      "fsl,8568-l2-cache-controller"
      15      "fsl,b4420-l2-cache-controller"
          [all …]
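  A minimal sketch of a node using one of the compatibles above; the unit
  address, reg window, and size values are assumptions for illustration, not
  taken from the binding text:

      L2: l2-cache-controller@20000 {
              compatible = "fsl,8548-l2-cache-controller";
              reg = <0x20000 0x1000>;         /* offset/size assumed */
              cache-line-size = <32>;         /* assumed */
              cache-size = <0x80000>;         /* assumed: 512 KiB */
      };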
|
D | cache_sram.txt |
       1  * Freescale PQ3 and QorIQ based Cache SRAM
       4  option of configuring a part of (or full) cache memory
       5  as SRAM. This cache SRAM representation in the device
      10  - compatible                 : should be "fsl,p2020-cache-sram"
      11  - fsl,cache-sram-ctlr-handle : points to the L2 controller
      12  - reg                        : offset and length of the cache-sram.
      16  cache-sram@fff00000 {
      17          fsl,cache-sram-ctlr-handle = <&L2>;
      19          compatible = "fsl,p2020-cache-sram";
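  Putting the listed properties together, a self-contained cache-sram node
  might read as follows; the reg cells are assumptions for the sketch, not
  the binding's elided text:

      cache-sram@fff00000 {
              compatible = "fsl,p2020-cache-sram";
              fsl,cache-sram-ctlr-handle = <&L2>;
              reg = <0xfff00000 0x10000>;     /* offset/length assumed */
      };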
|
D | pamu.txt |
      57  - fsl,primary-cache-geometry
      60        cache. The first is the number of cache lines, and the
      63  - fsl,secondary-cache-geometry
      66        cache. The first is the number of cache lines, and the
      81    best LIODN values to minimize PAMU cache thrashing.
     107          fsl,primary-cache-geometry = <32 1>;
     108          fsl,secondary-cache-geometry = <128 2>;
     113          fsl,primary-cache-geometry = <32 1>;
     114          fsl,secondary-cache-geometry = <128 2>;
     119          fsl,primary-cache-geometry = <32 1>;
          [all …]
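  Each geometry property takes two cells: the number of cache lines, then the
  number of ways. A hedged sketch of a PAMU sub-node carrying the values shown
  above (the node name, unit address, and any other required properties are
  assumptions):

      pamu0: pamu@0 {
              /* 32 primary lines x 1 way; 128 secondary lines x 2 ways */
              fsl,primary-cache-geometry = <32 1>;
              fsl,secondary-cache-geometry = <128 2>;
      };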
|
/Documentation/devicetree/bindings/arm/socionext/ |
D | socionext,uniphier-system-cache.yaml |
       4  $id: http://devicetree.org/schemas/arm/socionext/socionext,uniphier-system-cache.yaml#
       7  title: UniPhier outer cache controller
      10    UniPhier ARM 32-bit SoCs are integrated with a full-custom outer cache
      11    controller system. All of them have a level 2 cache controller, and some
      12    have a level 3 cache controller as well.
      19      const: socionext,uniphier-system-cache
      30        Interrupts can be used to notify the completion of cache operations.
      36    cache-unified: true
      38    cache-size: true
      40    cache-sets: true
          [all …]
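  A minimal node consistent with the schema fragments above; the reg window,
  interrupts, and cache geometry values are assumptions for the sketch:

      l2: cache-controller@500c0000 {
              compatible = "socionext,uniphier-system-cache";
              reg = <0x500c0000 0x2000>;          /* assumed */
              interrupts = <0 174 4>, <0 175 4>;  /* assumed */
              cache-unified;
              cache-size = <0x80000>;             /* assumed */
              cache-sets = <256>;                 /* assumed */
              cache-level = <2>;
      };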
|
/Documentation/devicetree/bindings/riscv/ |
D | sifive-l2-cache.yaml |
       5  $id: http://devicetree.org/schemas/riscv/sifive-l2-cache.yaml#
       8  title: SiFive L2 Cache Controller
      16    The SiFive Level 2 Cache Controller is used to provide access to fast copies
      17    of memory for masters in a Core Complex. The Level 2 Cache Controller also
      22    - $ref: /schemas/cache-controller.yaml#
      38        - const: cache
      40    cache-block-size:
      43    cache-level:
      46    cache-sets:
      49    cache-size:
          [all …]
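  An example node using the properties listed above; the SoC-specific
  compatible string and all values are assumptions for the sketch (modelled
  loosely on the SiFive FU540):

      cache-controller@2010000 {
              compatible = "sifive,fu540-c000-ccache", "cache";  /* assumed */
              reg = <0x2010000 0x1000>;       /* assumed */
              cache-block-size = <64>;        /* assumed */
              cache-level = <2>;
              cache-sets = <1024>;            /* assumed */
              cache-size = <2097152>;         /* assumed: 2 MiB */
              cache-unified;
      };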
|
/Documentation/filesystems/caching/ |
D | backend-api.rst |
       4  FS-Cache Cache backend API
       7  The FS-Cache system provides an API by which actual caches can be supplied to
       8  FS-Cache for it to then serve out to network filesystems and other interested
      11  This API is declared in <linux/fscache-cache.h>.
      14  Initialising and Registering a Cache
      17  To start off, a cache definition must be initialised and registered for each
      18  cache the backend wants to make available. For instance, CacheFS does this in
      21  The cache definition (struct fscache_cache) should be initialised by calling::
      23          void fscache_init_cache(struct fscache_cache *cache,
      30   * "cache" is a pointer to the cache definition;
          [all …]
|
D | cachefiles.rst |
       4  CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM
      15   (*) Starting the cache.
      19   (*) Cache culling.
      21   (*) Cache structure.
      36  CacheFiles is a caching backend that's meant to use as a cache a directory on
      39  CacheFiles uses a userspace daemon to do some of the cache management - such as
      43  The filesystem and data integrity of the cache are only as good as those of the
      50  and while it is open, a cache is at least partially in existence. The daemon
      51  opens this and sends commands down it to control the cache.
      53  CacheFiles is currently limited to a single cache.
          [all …]
|
D | netfs-api.rst |
       4  FS-Cache Network Filesystem API
       7  There's an API by which a network filesystem can make use of the FS-Cache
      12  FS-Cache to make finding objects faster and to make retiring of groups of
      30   (5) Cache tag lookup
      43  (18) FS-Cache specific page flags.
      49  FS-Cache needs a description of the network filesystem. This is specified
      68      entire in-cache hierarchy for this netfs will be scrapped and begun
      95  their index hierarchies in quite the same way, FS-Cache tries to impose as few
     106      cache. Any such objects created within an index will be created in the
     107      first cache only. The cache in which an index is created can be
          [all …]
|
/Documentation/devicetree/bindings/cpufreq/ |
D | cpufreq-qcom-hw.txt |
      58                  next-level-cache = <&L2_0>;
      60                  L2_0: l2-cache {
      61                          compatible = "cache";
      62                          next-level-cache = <&L3_0>;
      63                          L3_0: l3-cache {
      64                                  compatible = "cache";
      74                  next-level-cache = <&L2_100>;
      76                  L2_100: l2-cache {
      77                          compatible = "cache";
      78                          next-level-cache = <&L3_0>;
          [all …]
|
/Documentation/devicetree/bindings/nds32/ |
D | atl2c.txt |
       1  * Andestech L2 cache Controller
       3  The level-2 cache controller plays an important role in reducing memory latency
       5  Level-2 cache controller in general enhances overall system performance
      10  representation of an Andestech L2 cache controller.
      17  - reg : Physical base address and size of cache controller's memory mapped
      18  - cache-unified : Specifies the cache is a unified cache.
      19  - cache-level : Should be set to 2 for a level 2 cache.
      23  cache-controller@e0500000 {
      26          cache-unified;
      27          cache-level = <2>;
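  A self-contained node satisfying the requirements above might read as
  follows; the compatible string and reg size are assumptions for the sketch,
  not the binding's elided text:

      cache-controller@e0500000 {
              compatible = "andestech,atl2c";  /* assumed from the binding name */
              reg = <0xe0500000 0x1000>;       /* size assumed */
              cache-unified;
              cache-level = <2>;
      };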
|
/Documentation/admin-guide/device-mapper/ |
D | cache.rst |
       2  Cache
       8  dm-cache is a device mapper target written by Joe Thornber, Heinz
      40     may be out of date or kept in sync with the copy on the cache device
      54  2. A cache device - the small, fast one.
      56  3. A small metadata device - records which blocks are in the cache,
      58     This information could be put on the cache device, but having it
      61     be used by a single cache device.
      67  is configurable when you first create the cache. Typically we've been
      73  getting hit a lot, yet the whole block will be promoted to the cache.
      74  So large block sizes are bad because they waste cache space. And small
          [all …]
|
D | writecache.rst |
       6  doesn't cache reads because reads are supposed to be cached in page cache
      14  1. type of the cache device - "p" or "s"
      19  3. the cache device
      26          offset from the start of cache device in 512-byte sectors
      65          flush the cache device. The message returns successfully
      66          if the cache device was flushed without an error
      68          flush the cache device on next suspend. Use this message
      69          when you are going to remove the cache device. The proper
      70          sequence for removing the cache device is:
      79  6. the cache device is now inactive and it can be deleted
|
D | cache-policies.rst |
      26  Overview of supplied cache replacement policies
      55  DM table that is using the cache target. Doing so will cause all of the
      56  mq policy's hints to be dropped. Also, performance of the cache may
      63  The mq policy used a lot of memory; 88 bytes per cache block on a 64
      68  has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
      70  cache block).
      72  All this means smq uses ~25bytes per cache block. Still a lot of
      91  The mq policy maintained a hit count for each cache block. For a
      92  different block to get promoted to the cache its hit count has to
      93  exceed the lowest currently in the cache. This meant it could take a
          [all …]
|
/Documentation/driver-api/md/ |
D | raid5-cache.rst |
       2  RAID 4/5/6 cache
       5  Raid 4/5/6 could include an extra disk for data cache besides normal RAID
       6  disks. The role of RAID disks isn't changed with the cache disk. The cache disk
       7  caches data to the RAID disks. The cache can be in write-through (supported
       9  3.4) has a new option '--write-journal' to create array with cache. Please
      10  refer to mdadm manual for details. By default (RAID array starts), the cache is
      19  In both modes, all writes to the array will hit cache disk first. This means
      20  the cache disk must be fast and sustainable.
      34  The write-through cache will cache all data on cache disk first. After the data
      35  is safe on the cache disk, the data will be flushed onto RAID disks. The
          [all …]
|
/Documentation/ABI/testing/ |
D | sysfs-block-bcache |
       5          A write to this file causes the backing device or cache to be
       6          unregistered. If a backing device had dirty data in the cache,
      17  What:           /sys/block/<disk>/bcache/cache
      21          For a backing device that has cache, a symlink to
      22          the bcache/ dir of that cache.
      28          For backing devices: integer number of full cache hits,
      29          counted per bio. A partial cache hit counts as a miss.
      35          For backing devices: integer number of cache misses.
      41          For backing devices: cache hits as a percentage.
      48          skip the cache. Read and written as bytes in human readable
          [all …]
|
D | sysfs-kernel-slab |
       8          internal state of the SLUB allocator for each cache. Certain
       9          files may be modified to change the behavior of the cache (and
      10          any cache it aliases, if any).
      13  What:           /sys/kernel/slab/cache/aliases
      20          have merged into this cache.
      22  What:           /sys/kernel/slab/cache/align
      28          The align file is read-only and specifies the cache's object
      31  What:           /sys/kernel/slab/cache/alloc_calls
      38          locations from which allocations for this cache were performed.
      40          enabled for that cache (see Documentation/vm/slub.rst).
          [all …]
|
/Documentation/devicetree/bindings/arm/ |
D | l2c2x0.yaml |
       7  title: ARM L2 Cache Controller
      14    PL220/PL310 and variants) based level 2 cache controller. All these various
      15    implementations of the L2 cache controller have compatible programming
      16    models (Note 1). Some of the properties that are just prefixed "cache-*" are
      22    cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
      28    - $ref: /schemas/cache-controller.yaml#
      34        - arm,pl310-cache
      35        - arm,l220-cache
      36        - arm,l210-cache
      37        # DEPRECATED by "brcm,bcm11351-a2-pl310-cache"
          [all …]
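  A minimal PL310 node using the first compatible listed above; the base
  address and size are assumptions for the sketch:

      L2: cache-controller@fff12000 {
              compatible = "arm,pl310-cache";
              reg = <0xfff12000 0x1000>;      /* assumed */
              cache-unified;
              cache-level = <2>;
      };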
|
/Documentation/filesystems/nfs/ |
D | rpc-cache.rst |
       2  RPC Cache
      31    - general cache lookup with correct locking
      33    - allowing an EXPIRED time on cache items, and removing
      35    - making requests to user-space to fill in cache entries
      36    - allowing user-space to directly set entries in the cache
      38      cache entries, and replaying those requests when the cache entry
      42  Creating a Cache
      45  - A cache needs a datum to store. This is in the form of a
      49    Each cache element is reference counted and contains
      50    expiry and update times for use in cache management.
          [all …]
|
/Documentation/admin-guide/ |
D | bcache.rst |
       2  A block layer cache (bcache)
       6  nice if you could use them as cache... Hence bcache.
      29  Writeback caching can use most of the cache for buffering writes - writing
      38  thus entirely bypass the cache.
      41  from disk or invalidating cache entries. For unrecoverable errors (meta data
      43  in the cache it first disables writeback caching and waits for all dirty data
      47  You'll need bcache util from the bcache-tools repository. Both the cache device
      54  you format your backing devices and cache device at the same time, you won't
      71  device, it'll be running in passthrough mode until you attach it to a cache.
      73  slow devices as bcache backing devices without a cache, and you can choose to add
          [all …]
|
/Documentation/devicetree/bindings/arm/mrvl/ |
D | feroceon.txt |
       1  * Marvell Feroceon Cache
       4  - compatible : Should be either "marvell,feroceon-cache" or
       5    "marvell,kirkwood-cache".
       8  - reg        : Address of the L2 cache control register. Mandatory for
       9    "marvell,kirkwood-cache", not used by "marvell,feroceon-cache"
      13  l2: l2-cache@20128 {
      14          compatible = "marvell,kirkwood-cache";
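  With the mandatory reg property added, a complete kirkwood-cache node might
  look like this; the register size cell is an assumption:

      l2: l2-cache@20128 {
              compatible = "marvell,kirkwood-cache";
              reg = <0x20128 0x4>;            /* control register, size assumed */
      };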
|
D | tauros2.txt |
       1  * Marvell Tauros2 Cache
       4  - compatible : Should be "marvell,tauros2-cache".
       5  - marvell,tauros2-cache-features : Specify the features supported for the
       6    tauros2 cache.
      11    arch/arm/include/asm/hardware/cache-tauros2.h
      14  L2: l2-cache {
      15          compatible = "marvell,tauros2-cache";
      16          marvell,tauros2-cache-features = <0x3>;
|
/Documentation/driver-api/firmware/ |
D | firmware_cache.rst |
       2  Firmware cache
      11  infrastructure implements a firmware cache for device drivers for most API
      14  The firmware cache makes using certain firmware API calls safe during a device
      15  driver's suspend and resume callback. Users of these API calls needn't cache
      18  The firmware cache works by requesting for firmware prior to suspend and
      24  Some implementation details about the firmware cache setup:
      26  * The firmware cache is setup by adding a devres entry for each device that
      29  * If an asynchronous call is used the firmware cache is only set up for a
      35  * If the firmware cache is determined to be needed as per the above two
      36    criteria the firmware cache is setup by adding a devres entry for the
          [all …]
|
/Documentation/core-api/ |
D | cachetlb.rst |
       2  Cache and TLB Flushing Under Linux
       7  This document describes the cache/tlb flushing interfaces called
      17  thinking SMP cache/tlb flushing must be so inefficient, this is in
      24  "TLB" is abstracted under Linux as something the cpu uses to cache
      27  possible for stale translations to exist in this "TLB" cache.
     104  Next, we have the cache flushing interfaces. In general, when Linux
     120  The cache level flush will always be first, because this allows
     123  when that virtual address is flushed from the cache. The HyperSparc
     126  The cache flushing routines below need only deal with cache flushing
     140  the caches. That is, after running, there will be no cache
          [all …]
|
/Documentation/filesystems/ |
D | fuse-io.rst |
      12  + writeback-cache
      17  In direct-io mode the page cache is completely bypassed for reads and writes.
      20  In cached mode reads may be satisfied from the page cache, and data may be
      21  read-ahead by the kernel to fill the cache. The cache is always kept consistent
      26  writeback-cache mode may be selected by the FUSE_WRITEBACK_CACHE flag in the
      34  In writeback-cache mode (enabled by the FUSE_WRITEBACK_CACHE flag) writes go to
      35  the cache only, which means that the write(2) syscall can often complete very
|
/Documentation/x86/ |
D | resctrl_ui.rst |
      22  CAT (Cache Allocation Technology)       "cat_l3", "cat_l2"
      24  CQM (Cache QoS Monitoring)              "cqm_llc", "cqm_occup_llc"
      36          Enable code/data prioritization in L3 cache allocations.
      38          Enable code/data prioritization in L2 cache allocations.
      46  monitoring, only control, or both monitoring and control. Cache
      47  pseudo-locking is a unique way of using cache control to "pin" or
      48  "lock" data in the cache. Details can be found in
      49  "Cache Pseudo-Locking".
      67  Cache resource(L3/L2) subdirectory contains the following files
      84  setting up exclusive cache partitions. Note that
          [all …]
|