Searched +full:cache +full:- (Results 1 – 25 of 301) sorted by relevance
| /Documentation/devicetree/bindings/powerpc/fsl/ |
| D | l2cache.txt |
    Freescale L2 Cache Controller

    L2 cache is present in Freescale's QorIQ and QorIQ Qonverge platforms.
    The cache bindings explained below are Devicetree Specification compliant

    - compatible : Should include one of the following:
          "fsl,8540-l2-cache-controller"
          "fsl,8541-l2-cache-controller"
          "fsl,8544-l2-cache-controller"
          "fsl,8548-l2-cache-controller"
          "fsl,8555-l2-cache-controller"
          "fsl,8568-l2-cache-controller"
    [all …]
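    For orientation, a node following this binding might look like the sketch
    below; the unit address, reg window, cache geometry, and interrupt
    specifier are illustrative assumptions, not values taken from the excerpt.

        l2-cache-controller@20000 {
            compatible = "fsl,8548-l2-cache-controller";
            reg = <0x20000 0x1000>;   /* assumed offset and size */
            cache-line-size = <32>;   /* assumed 32-byte lines */
            cache-size = <0x80000>;   /* assumed 512 KiB of L2 */
            interrupts = <16 2>;      /* placeholder interrupt specifier */
        };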
|
| D | cache_sram.txt |
    * Freescale PQ3 and QorIQ based Cache SRAM

    option of configuring part of (or all of) the cache memory
    as SRAM. This cache SRAM representation in the device
    tree should be done as follows:

    - compatible : should be "fsl,p2020-cache-sram"
    - fsl,cache-sram-ctlr-handle : points to the L2 controller
    - reg : offset and length of the cache-sram.

    cache-sram@fff00000 {
        fsl,cache-sram-ctlr-handle = <&L2>;
        compatible = "fsl,p2020-cache-sram";
|
| D | pamu.txt |
    The PAMU is an I/O MMU that provides device-to-memory access control and

    - compatible : <string>
          First entry is a version-specific string, such as
          "fsl,pamu-v1.0". The second is "fsl,pamu".
    - ranges : <prop-encoded-array>
    - interrupts : <prop-encoded-array>
    - #address-cells: <u32>
    - #size-cells : <u32>
    - reg : <prop-encoded-array>
    - fsl,portid-mapping : <u32>
    [all …]
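    A hedged sketch of what a node with these properties could look like;
    every address and value below is a placeholder assumption, not data from
    the search excerpt.

        pamu0: pamu@20000 {
            compatible = "fsl,pamu-v1.0", "fsl,pamu";
            reg = <0x20000 0x5000>;            /* assumed */
            ranges = <0 0x20000 0x5000>;       /* assumed */
            #address-cells = <1>;
            #size-cells = <1>;
            interrupts = <24 2>;               /* placeholder */
            fsl,portid-mapping = <0xf80000>;   /* placeholder */
        };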
|
| /Documentation/devicetree/bindings/arm/socionext/ |
| D | cache-uniphier.txt |
    UniPhier outer cache controller

    UniPhier SoCs are integrated with a full-custom outer cache controller system.
    All of them have a level 2 cache controller, and some have a level 3 cache

    - compatible: should be "socionext,uniphier-system-cache"
    - reg: offsets and lengths of the register sets for the device. It should
    - cache-unified: specifies the cache is a unified cache.
    - cache-size: specifies the size in bytes of the cache
    - cache-sets: specifies the number of associativity sets of the cache
    - cache-line-size: specifies the line size in bytes
    - cache-level: specifies the level in the cache hierarchy. The value should
    [all …]
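    As a sketch only, a level 2 instance of this binding might read as below;
    the register window and the size/sets/line-size values are assumptions
    chosen for illustration.

        l2: cache-controller@500c0000 {
            compatible = "socionext,uniphier-system-cache";
            reg = <0x500c0000 0x2000>;   /* assumed register window */
            cache-unified;
            cache-size = <0x140000>;     /* assumed 1.25 MiB */
            cache-sets = <512>;          /* assumed */
            cache-line-size = <128>;     /* assumed */
            cache-level = <2>;
        };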
|
| /Documentation/devicetree/bindings/riscv/ |
| D | sifive-l2-cache.txt |
    SiFive L2 Cache Controller
    --------------------------
    The SiFive Level 2 Cache Controller is used to provide access to fast copies
    of memory for masters in a Core Complex. The Level 2 Cache Controller also
    acts as a directory-based coherency manager.

    Required Properties:
    --------------------
    - compatible: Should be "sifive,fu540-c000-ccache" and "cache"
    - cache-block-size: Specifies the block size in bytes of the cache.
    - cache-level: Should be set to 2 for a level 2 cache
    - cache-sets: Specifies the number of associativity sets of the cache.
    [all …]
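    A minimal sketch of such a node, assuming an FU540-style base address and
    a 2 MiB geometry; the numbers are illustrative assumptions, not values
    from the excerpt.

        cache-controller@2010000 {
            compatible = "sifive,fu540-c000-ccache", "cache";
            reg = <0x0 0x2010000 0x0 0x1000>;   /* assumed base and size */
            cache-block-size = <64>;            /* assumed 64-byte blocks */
            cache-level = <2>;
            cache-sets = <1024>;                /* assumed */
            cache-size = <2097152>;             /* assumed 2 MiB */
        };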
|
| /Documentation/filesystems/caching/ |
| D | backend-api.txt |
    FS-CACHE CACHE BACKEND API

    The FS-Cache system provides an API by which actual caches can be supplied to
    FS-Cache for it to then serve out to network filesystems and other interested

    This API is declared in <linux/fscache-cache.h>.

    INITIALISING AND REGISTERING A CACHE

    To start off, a cache definition must be initialised and registered for each
    cache the backend wants to make available. For instance, CacheFS does this in

    The cache definition (struct fscache_cache) should be initialised by calling:

        void fscache_init_cache(struct fscache_cache *cache,

    (*) "cache" is a pointer to the cache definition;
    [all …]
|
| D | cachefiles.txt |
    CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM

    (*) Starting the cache.
    (*) Cache culling.
    (*) Cache structure.

    CacheFiles is a caching backend that's meant to use as a cache a directory on
    CacheFiles uses a userspace daemon to do some of the cache management - such as
    The filesystem and data integrity of the cache are only as good as those of the
    CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
    and while it is open, a cache is at least partially in existence. The daemon
    opens this and sends commands down it to control the cache.
    [all …]
|
| D | netfs-api.txt |
    FS-CACHE NETWORK FILESYSTEM API

    There's an API by which a network filesystem can make use of the FS-Cache
    FS-Cache to make finding objects faster and to make retiring of groups of

    (3) Barring the top-level index (one entry per cached netfs), the index
    (5) Cache tag lookup
    (18) FS-Cache specific page flags.

    FS-Cache needs a description of the network filesystem. This is specified
    entire in-cache hierarchy for this netfs will be scrapped and begun
    a particular key - for instance to mirror the removal of an AFS volume.
    their index hierarchies in quite the same way, FS-Cache tries to impose as few
    [all …]
|
| D | fscache.txt |
    This facility is a general purpose cache for network filesystems, though it

    FS-Cache mediates between cache backends (such as CacheFS) and network

    [ASCII diagram: network filesystems such as NFS feed through the FS-Cache
    layer into cache backends such as CacheFS on /dev/hda5]
    [all …]
|
| /Documentation/devicetree/bindings/cpufreq/ |
| D | cpufreq-qcom-hw.txt |
    - compatible
          Definition: must be "qcom,cpufreq-hw".
    - clocks
    - clock-names
    - reg
          Value type: <prop-encoded-array>
    - reg-names
          "freq-domain0", "freq-domain1".
    - #freq-domain-cells:

    * Property qcom,freq-domain
    [all …]
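    A hedged sketch of the node shape these properties imply, with two
    frequency domains; the register addresses and clock phandles are
    assumptions for illustration.

        cpufreq_hw: cpufreq@17d43000 {
            compatible = "qcom,cpufreq-hw";
            reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>;  /* assumed */
            reg-names = "freq-domain0", "freq-domain1";
            clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;   /* assumed providers */
            clock-names = "xo", "alternate";
            #freq-domain-cells = <1>;
        };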
|
| /Documentation/devicetree/bindings/nds32/ |
| D | atl2c.txt |
    * Andestech L2 cache Controller

    The level-2 cache controller plays an important role in reducing memory latency
    Level-2 cache controller in general enhances overall system performance
    representation of an Andestech L2 cache controller.

    - compatible:
    - reg : Physical base address and size of cache controller's memory mapped
    - cache-unified : Specifies the cache is a unified cache.
    - cache-level : Should be set to 2 for a level 2 cache.

    cache-controller@e0500000 {
        cache-unified;
    [all …]
|
| /Documentation/admin-guide/device-mapper/ |
| D | cache.rst |
    Cache
    =====

    dm-cache is a device mapper target written by Joe Thornber, Heinz
    This device-mapper solution allows us to insert this caching at
    a thin-provisioning pool. Caching solutions that are integrated more
    The target reuses the metadata library used in the thin-provisioning
    The decision as to what data to migrate and when is left to a plug-in
    may be out of date or kept in sync with the copy on the cache device

    Sub-devices
    -----------

    1. An origin device - the big, slow one.
    [all …]
|
| D | writecache.rst |
    doesn't cache reads because reads are supposed to be cached in page cache

    1. type of the cache device - "p" or "s"
       - p - persistent memory
       - s - SSD
    3. the cache device

    offset from the start of cache device in 512-byte sectors

    applicable only to persistent memory - use the FUA flag
    applicable only to persistent memory - don't use the FUA
    - some underlying devices perform better with fua, some

    1. error indicator - 0 if there was no error, otherwise error number
    [all …]
|
| D | cache-policies.rst |
    Overview of supplied cache replacement policies

    multiqueue (mq)
    ---------------

    stochastic multiqueue (smq)
    ---------------------------

    The stochastic multi-queue (smq) policy addresses some of the problems
    DM table that is using the cache target. Doing so will cause all of the
    mq policy's hints to be dropped. Also, performance of the cache may
    The mq policy used a lot of memory; 88 bytes per cache block on a 64
    has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
    cache block).
    All this means smq uses ~25 bytes per cache block. Still a lot of
    [all …]
|
| /Documentation/driver-api/md/ |
| D | raid5-cache.rst |
    RAID 4/5/6 cache

    RAID 4/5/6 could include an extra disk for data cache besides the normal RAID
    disks. The role of RAID disks isn't changed with the cache disk. The cache disk
    caches data to the RAID disks. The cache can be in write-through (supported
    since 4.4) or write-back mode (supported since 4.10). mdadm (supported since
    3.4) has a new option '--write-journal' to create an array with a cache. Please
    refer to the mdadm manual for details. By default (when the RAID array starts),
    the cache is in write-through mode. A user can switch it to write-back mode by::

        echo "write-back" > /sys/block/md0/md/journal_mode

    And switch it back to write-through mode by::
    [all …]
|
| /Documentation/devicetree/bindings/arm/ |
| D | l2c2x0.yaml |
    # SPDX-License-Identifier: GPL-2.0
    ---
    $schema: http://devicetree.org/meta-schemas/core.yaml#

    title: ARM L2 Cache Controller

    maintainers:
      - Rob Herring <robh@kernel.org>

    PL220/PL310 and variants) based level 2 cache controller. All these various
    implementations of the L2 cache controller have compatible programming
    models (Note 1). Some of the properties that are just prefixed "cache-*" are
    cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These

    - $ref: /schemas/cache-controller.yaml#
    [all …]
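    A minimal node for this schema, as a sketch; "arm,pl310-cache" is taken as
    one of the compatible strings such bindings cover, and the base address is
    an assumption.

        l2: cache-controller@fff12000 {
            compatible = "arm,pl310-cache";
            reg = <0xfff12000 0x1000>;   /* assumed base address */
            cache-unified;
            cache-level = <2>;
        };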
|
| /Documentation/devicetree/bindings/arm/mrvl/ |
| D | feroceon.txt |
    * Marvell Feroceon Cache

    - compatible : Should be either "marvell,feroceon-cache" or
      "marvell,kirkwood-cache".
    - reg : Address of the L2 cache control register. Mandatory for
      "marvell,kirkwood-cache", not used by "marvell,feroceon-cache"

    l2: l2-cache@20128 {
        compatible = "marvell,kirkwood-cache";
|
| D | tauros2.txt |
    * Marvell Tauros2 Cache

    - compatible : Should be "marvell,tauros2-cache".
    - marvell,tauros2-cache-features : Specify the features supported for the
      tauros2 cache.

    arch/arm/include/asm/hardware/cache-tauros2.h

    L2: l2-cache {
        compatible = "marvell,tauros2-cache";
        marvell,tauros2-cache-features = <0x3>;
|
| /Documentation/admin-guide/ |
| D | bcache.rst |
    A block layer cache (bcache)

    nice if you could use them as cache... Hence bcache.

    - http://bcache.evilpiepirate.org
    - http://evilpiepirate.org/git/linux-bcache.git
    - http://evilpiepirate.org/git/bcache-tools.git

    It's designed around the performance characteristics of SSDs - it only allocates
    great lengths to protect your data - it reliably handles unclean shutdown. (It
    Writeback caching can use most of the cache for buffering writes - writing
    average is above the cutoff it will skip all IO from that task - instead of
    thus entirely bypass the cache.
    [all …]
|
| /Documentation/filesystems/nfs/ |
| D | rpc-cache.txt |
    - mapping from IP address to client name
    - mapping from client name and filesystem to export options
    - mapping from UID to list of GIDs, to work around NFS's limitation
    - mappings between local UID/GID and remote UID/GID for sites that
    - mapping from network identity to public key for crypto authentication.

    - general cache lookup with correct locking
    - supporting 'NEGATIVE' as well as positive entries
    - allowing an EXPIRED time on cache items, and removing
      items after they expire and are no longer in use
    - making requests to user-space to fill in cache entries
    [all …]
|
| /Documentation/driver-api/firmware/ |
| D | firmware_cache.rst |
    Firmware cache

    re-initialize devices. During resume there may be a period of time during which
    infrastructure implements a firmware cache for device drivers for most API
    The firmware cache makes using certain firmware API calls safe during a device
    driver's suspend and resume callback. Users of these API calls needn't cache
    The firmware cache works by requesting firmware prior to suspend and

    Some implementation details about the firmware cache setup:

    * The firmware cache is set up by adding a devres entry for each device that
    * If an asynchronous call is used, the firmware cache is only set up for a
    * If the firmware cache is determined to be needed as per the above two
    [all …]
|
| /Documentation/ABI/testing/ |
| D | sysfs-kernel-slab |
    Christoph Lameter <cl@linux-foundation.org>
    internal state of the SLUB allocator for each cache. Certain
    files may be modified to change the behavior of the cache (and
    any cache it aliases, if any).

    What: /sys/kernel/slab/cache/aliases
    Christoph Lameter <cl@linux-foundation.org>
    The aliases file is read-only and specifies how many caches
    have merged into this cache.

    What: /sys/kernel/slab/cache/align
    Christoph Lameter <cl@linux-foundation.org>
    [all …]
|
| /Documentation/core-api/ |
| D | cachetlb.rst |
    Cache and TLB Flushing Under Linux

    This document describes the cache/tlb flushing interfaces called
    thinking SMP cache/tlb flushing must be so inefficient, this is in
    "TLB" is abstracted under Linux as something the cpu uses to cache
    virtual-->physical address translations obtained from the software
    possible for stale translations to exist in this "TLB" cache.
    modifications for the address space 'vma->vm_mm' in the range
    'start' to 'end-1' will be visible to the cpu. That is, after
    virtual addresses in the range 'start' to 'end-1'.
    address space is available via vma->vm_mm. Also, one may
    [all …]
|
| /Documentation/x86/ |
| D | resctrl_ui.rst |
    .. SPDX-License-Identifier: GPL-2.0

    :Authors: - Fenghua Yu <fenghua.yu@intel.com>
              - Tony Luck <tony.luck@intel.com>
              - Vikas Shivappa <vikas.shivappa@intel.com>

    CAT (Cache Allocation Technology)    "cat_l3", "cat_l2"
    CQM (Cache QoS Monitoring)           "cqm_llc", "cqm_occup_llc"

    # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl

    Enable code/data prioritization in L3 cache allocations.
    Enable code/data prioritization in L2 cache allocations.
    monitoring, only control, or both monitoring and control. Cache
    [all …]
|
| /Documentation/filesystems/ |
| D | fuse-io.txt |
    - direct-io
    - cached
      + write-through
      + writeback-cache

    The direct-io mode can be selected with the FOPEN_DIRECT_IO flag in the

    In direct-io mode the page cache is completely bypassed for reads and writes.
    No read-ahead takes place. Shared mmap is disabled.

    In cached mode reads may be satisfied from the page cache, and data may be
    read-ahead by the kernel to fill the cache. The cache is always kept consistent

    write-through mode is the default and is supported on all kernels. The
    [all …]
|