Searched +full:cache +full:- (Results 1 – 25 of 381) sorted by relevance
| /Documentation/devicetree/bindings/cache/ |
| D | freescale-l2cache.txt |
    1  Freescale L2 Cache Controller
    3  L2 cache is present in Freescale's QorIQ and QorIQ Qonverge platforms.
    4  The cache bindings explained below are Devicetree Specification compliant
    8  - compatible : Should include one of the following:
    9  "fsl,b4420-l2-cache-controller"
    10  "fsl,b4860-l2-cache-controller"
    11  "fsl,bsc9131-l2-cache-controller"
    12  "fsl,bsc9132-l2-cache-controller"
    13  "fsl,c293-l2-cache-controller"
    14  "fsl,mpc8536-l2-cache-controller"
    [all …]
|
| D | socionext,uniphier-system-cache.yaml |
    1  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
    3  ---
    4  $id: http://devicetree.org/schemas/cache/socionext,uniphier-system-cache.yaml#
    5  $schema: http://devicetree.org/meta-schemas/core.yaml#
    7  title: UniPhier outer cache controller
    10  UniPhier ARM 32-bit SoCs are integrated with a full-custom outer cache
    11  controller system. All of them have a level 2 cache controller, and some
    12  have a level 3 cache controller as well.
    15  - Masahiro Yamada <yamada.masahiro@socionext.com>
    19  const: socionext,uniphier-system-cache
    [all …]
|
| D | starfive,jh8100-starlink-cache.yaml |
    1  # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
    3  ---
    4  $id: http://devicetree.org/schemas/cache/starfive,jh8100-starlink-cache.yaml#
    5  $schema: http://devicetree.org/meta-schemas/core.yaml#
    7  title: StarFive StarLink Cache Controller
    10  - Joshua Yeong <joshua.yeong@starfivetech.com>
    13  StarFive's StarLink Cache Controller manages the L3 cache shared between
    14  clusters of CPU cores. The cache driver enables RISC-V non-standard cache
    15  management as an alternative to instructions in the RISC-V Zicbom extension.
    18  - $ref: /schemas/cache-controller.yaml#
    [all …]
|
| D | andestech,ax45mp-cache.yaml |
    1  # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
    4  ---
    5  $id: http://devicetree.org/schemas/cache/andestech,ax45mp-cache.yaml#
    6  $schema: http://devicetree.org/meta-schemas/core.yaml#
    8  title: Andestech AX45MP L2 Cache Controller
    11  - Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
    14  A level-2 cache (L2C) is used to improve the system performance by providing
    15  a large amount of cache line entries and reasonable access delays. The L2C
    16  is shared between cores, and a non-inclusive non-exclusive policy is used.
    23  - andestech,ax45mp-cache
    [all …]
|
| D | sifive,ccache0.yaml |
    1  # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
    4  ---
    5  $id: http://devicetree.org/schemas/cache/sifive,ccache0.yaml#
    6  $schema: http://devicetree.org/meta-schemas/core.yaml#
    8  title: SiFive Composable Cache Controller
    11  - Paul Walmsley <paul.walmsley@sifive.com>
    14  The SiFive Composable Cache Controller is used to provide access to fast copies
    15  of memory for masters in a Core Complex. The Composable Cache Controller also
    16  acts as directory-based coherency manager.
    24  - sifive,ccache0
    [all …]
|
| D | l2c2x0.yaml |
    1  # SPDX-License-Identifier: GPL-2.0
    3  ---
    4  $id: http://devicetree.org/schemas/cache/l2c2x0.yaml#
    5  $schema: http://devicetree.org/meta-schemas/core.yaml#
    7  title: ARM L2 Cache Controller
    10  - Rob Herring <robh@kernel.org>
    14  PL220/PL310 and variants) based level 2 cache controller. All these various
    15  implementations of the L2 cache controller have compatible programming
    16  models (Note 1). Some of the properties that are just prefixed "cache-*" are
    22  cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
    [all …]
|
| D | marvell,feroceon-cache.txt |
    1  * Marvell Feroceon Cache
    4  - compatible : Should be either "marvell,feroceon-cache" or
    5  "marvell,kirkwood-cache".
    8  - reg : Address of the L2 cache control register. Mandatory for
    9  "marvell,kirkwood-cache", not used by "marvell,feroceon-cache"
    13  l2: l2-cache@20128 {
    14  compatible = "marvell,kirkwood-cache";
|
| D | marvell,tauros2-cache.txt |
    1  * Marvell Tauros2 Cache
    4  - compatible : Should be "marvell,tauros2-cache".
    5  - marvell,tauros2-cache-features : Specify the features supported for the
    6  tauros2 cache.
    11  arch/arm/include/asm/hardware/cache-tauros2.h
    14  L2: l2-cache {
    15  compatible = "marvell,tauros2-cache";
    16  marvell,tauros2-cache-features = <0x3>;
|
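The marvell,tauros2-cache entry above amounts to a compatible string plus one feature-mask property. For orientation, here is a minimal kernel-C sketch of a consumer reading that property; this is illustrative only, not the actual arch/arm cache-tauros2 code, and error handling is trimmed::

    #include <linux/of.h>
    #include <linux/printk.h>
    #include <linux/errno.h>

    /* Sketch: locate the Tauros2 node and read its feature mask. */
    static int tauros2_features_example(void)
    {
        struct device_node *np;
        u32 features = 0;
        int ret;

        np = of_find_compatible_node(NULL, NULL, "marvell,tauros2-cache");
        if (!np)
            return -ENODEV;

        /* Property name taken from the binding text above. */
        ret = of_property_read_u32(np, "marvell,tauros2-cache-features",
                                   &features);
        if (!ret)
            pr_info("tauros2: features mask 0x%x\n", features);

        of_node_put(np);
        return ret;
    }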
| /Documentation/filesystems/caching/ |
| D | backend-api.rst |
    1  .. SPDX-License-Identifier: GPL-2.0
    4  Cache Backend API
    7  The FS-Cache system provides an API by which actual caches can be supplied to
    8  FS-Cache for it to then serve out to network filesystems and other interested
    11  #include <linux/fscache-cache.h>.
    17  Interaction with the API is handled on three levels: cache, volume and data
    23  Cache cookie struct fscache_cache
    28  Cookies are used to provide some filesystem data to the cache, manage state and
    29  pin the cache during access in addition to acting as reference points for the
    34  The cache backend and the network filesystem can both ask for cache cookies -
    [all …]
|
| D | cachefiles.rst |
    1  .. SPDX-License-Identifier: GPL-2.0
    4  Cache on Already Mounted Filesystem
    15  (*) Starting the cache.
    19  (*) Cache culling.
    21  (*) Cache structure.
    31  (*) On-demand Read.
    37  CacheFiles is a caching backend that's meant to use as a cache a directory on
    40  CacheFiles uses a userspace daemon to do some of the cache management - such as
    44  The filesystem and data integrity of the cache are only as good as those of the
    49  CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
    [all …]
|
| /Documentation/devicetree/bindings/cpufreq/ |
| D | cpufreq-qcom-hw.yaml |
    1  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
    3  ---
    4  $id: http://devicetree.org/schemas/cpufreq/cpufreq-qcom-hw.yaml#
    5  $schema: http://devicetree.org/meta-schemas/core.yaml#
    10  - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
    21  - description: v1 of CPUFREQ HW
    23  - enum:
    24  - qcom,qcm2290-cpufreq-hw
    25  - qcom,sc7180-cpufreq-hw
    26  - qcom,sdm670-cpufreq-hw
    [all …]
|
| /Documentation/admin-guide/device-mapper/ |
| D | cache.rst |
    2  Cache title
    8  dm-cache is a device mapper target written by Joe Thornber, Heinz
    15  This device-mapper solution allows us to insert this caching at
    17  a thin-provisioning pool. Caching solutions that are integrated more
    20  The target reuses the metadata library used in the thin-provisioning
    23  The decision as to what data to migrate and when is left to a plug-in
    40  may be out of date or kept in sync with the copy on the cache device
    46  Sub-devices
    47  -----------
    52  1. An origin device - the big, slow one.
    [all …]
|
| D | writecache.rst |
    6  doesn't cache reads because reads are supposed to be cached in page cache
    14  1. type of the cache device - "p" or "s"
    15  - p - persistent memory
    16  - s - SSD
    18  3. the cache device
    25  offset from the start of cache device in 512-byte sectors
    45  applicable only to persistent memory - use the FUA flag
    49  applicable only to persistent memory - don't use the FUA
    53  - some underlying devices perform better with fua, some
    57  arguments or by a message), the cache will not promote
    [all …]
|
| D | cache-policies.rst |
    26  Overview of supplied cache replacement policies
    30  ---------------
    43  ---------------------------
    47  The stochastic multi-queue (smq) policy addresses some of the problems
    55  DM table that is using the cache target. Doing so will cause all of the
    56  mq policy's hints to be dropped. Also, performance of the cache may
    63  The mq policy used a lot of memory; 88 bytes per cache block on a 64
    68  has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
    70  cache block).
    72  All this means smq uses ~25bytes per cache block. Still a lot of
    [all …]
|
| /Documentation/driver-api/md/ |
| D | raid5-cache.rst |
    2  RAID 4/5/6 cache
    5  Raid 4/5/6 could include an extra disk for data cache besides normal RAID
    6  disks. The role of RAID disks isn't changed with the cache disk. The cache disk
    7  caches data to the RAID disks. The cache can be in write-through (supported
    8  since 4.4) or write-back mode (supported since 4.10). mdadm (supported since
    9  3.4) has a new option '--write-journal' to create array with cache. Please
    10  refer to mdadm manual for details. By default (RAID array starts), the cache is
    11  in write-through mode. A user can switch it to write-back mode by::
    13  echo "write-back" > /sys/block/md0/md/journal_mode
    15  And switch it back to write-through mode by::
    [all …]
|
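The journal_mode switch quoted from raid5-cache.rst is an ordinary sysfs write; the echo shown there could equally be done from C. A sketch, assuming the array is md0 as in the document and the caller has root privileges::

    #include <stdio.h>

    int main(void)
    {
        /* Path and values are the ones named in raid5-cache.rst. */
        FILE *f = fopen("/sys/block/md0/md/journal_mode", "w");

        if (!f) {
            perror("journal_mode");
            return 1;
        }
        fputs("write-back\n", f);   /* "write-through" switches it back */
        return fclose(f) != 0;
    }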
| /Documentation/filesystems/nfs/ |
| D | rpc-cache.rst |
    2  RPC Cache
    21  - mapping from IP address to client name
    22  - mapping from client name and filesystem to export options
    23  - mapping from UID to list of GIDs, to work around NFS's limitation
    25  - mappings between local UID/GID and remote UID/GID for sites that
    27  - mapping from network identify to public key for crypto authentication.
    31  - general cache lookup with correct locking
    32  - supporting 'NEGATIVE' as well as positive entries
    33  - allowing an EXPIRED time on cache items, and removing
    34  items after they expire, and are no longer in-use.
    [all …]
|
| /Documentation/admin-guide/ |
| D | bcache.rst |
    2  A block layer cache (bcache)
    6  nice if you could use them as cache... Hence bcache.
    11  This is the git repository of bcache-tools:
    12  https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/
    17  It's designed around the performance characteristics of SSDs - it only allocates
    25  great lengths to protect your data - it reliably handles unclean shutdown. (It
    29  Writeback caching can use most of the cache for buffering writes - writing
    36  average is above the cutoff it will skip all IO from that task - instead of
    38  thus entirely bypass the cache.
    41  from disk or invalidating cache entries. For unrecoverable errors (meta data
    [all …]
|
| /Documentation/driver-api/firmware/ |
| D | firmware_cache.rst |
    2  Firmware cache
    6  re-initialize devices. During resume there may be a period of time during which
    11  infrastructure implements a firmware cache for device drivers for most API
    14  The firmware cache makes using certain firmware API calls safe during a device
    15  driver's suspend and resume callback. Users of these API calls needn't cache
    18  The firmware cache works by requesting for firmware prior to suspend and
    24  Some implementation details about the firmware cache setup:
    26  * The firmware cache is setup by adding a devres entry for each device that
    29  * If an asynchronous call is used the firmware cache is only set up for a
    35  * If the firmware cache is determined to be needed as per the above two
    [all …]
|
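Drivers do not interact with the firmware cache directly; they keep using the normal request API, and the cache transparently satisfies a repeat request made from the resume path. A hedged sketch of that usage, where the firmware image name is made up for illustration::

    #include <linux/firmware.h>
    #include <linux/device.h>

    /*
     * Sketch: load an image the usual way.  Because the call goes through
     * request_firmware(), the firmware cache described above can serve the
     * same name from memory if the driver re-requests it during resume.
     */
    static int example_load_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int ret;

        ret = request_firmware(&fw, "example-fw.bin", dev);
        if (ret)
            return ret;

        /* ... program fw->data (fw->size bytes) into the hardware ... */

        release_firmware(fw);
        return 0;
    }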
| /Documentation/ABI/testing/ |
| D | sysfs-kernel-slab |
    5  Christoph Lameter <cl@linux-foundation.org>
    8  internal state of the SLUB allocator for each cache. Certain
    9  files may be modified to change the behavior of the cache (and
    10  any cache it aliases, if any).
    13  What: /sys/kernel/slab/<cache>/aliases
    17  Christoph Lameter <cl@linux-foundation.org>
    19  The aliases file is read-only and specifies how many caches
    20  have merged into this cache.
    22  What: /sys/kernel/slab/<cache>/align
    26  Christoph Lameter <cl@linux-foundation.org>
    [all …]
|
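The SLUB attributes listed above are plain text files under /sys/kernel/slab/. A small userspace sketch that prints a couple of them for one cache; "kmalloc-64" is only an example directory name standing in for the <cache> placeholder::

    #include <stdio.h>

    int main(void)
    {
        const char *attrs[] = { "aliases", "align" };
        char path[128], line[64];

        for (int i = 0; i < 2; i++) {
            snprintf(path, sizeof(path),
                     "/sys/kernel/slab/kmalloc-64/%s", attrs[i]);
            FILE *f = fopen(path, "r");

            if (f && fgets(line, sizeof(line), f))
                printf("%s: %s", attrs[i], line);
            if (f)
                fclose(f);
        }
        return 0;
    }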
| D | sysfs-class-bdi |
    14  non-block filesystems which provide their own BDI, such as NFS
    17  MAJOR:MINOR-fuseblk
    23  The default backing dev, used for non-block device backed
    30  Size of the read-ahead window in kilobytes
    32  (read-write)
    38  total write-back cache that relates to its current average
    42  percentage of the write-back cache to a particular device.
    45  (read-write)
    52  total write-back cache that relates to its current average
    56  of the write-back cache to a particular device. The value is
    [all …]
|
| /Documentation/core-api/ |
| D | cachetlb.rst |
    2  Cache and TLB Flushing Under Linux
    7  This document describes the cache/tlb flushing interfaces called
    17  thinking SMP cache/tlb flushing must be so inefficient, this is in
    24  "TLB" is abstracted under Linux as something the cpu uses to cache
    25  virtual-->physical address translations obtained from the software
    27  possible for stale translations to exist in this "TLB" cache.
    59  modifications for the address space 'vma->vm_mm' in the range
    60  'start' to 'end-1' will be visible to the cpu. That is, after
    62  virtual addresses in the range 'start' to 'end-1'.
    78  address space is available via vma->vm_mm. Also, one may
    [all …]
|
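The flush_tlb_range() lines quoted above are part of a larger contract: cachetlb.rst has callers flush the cache for a range before the page tables change and flush the TLB afterwards. A rough kernel-C sketch of that ordering, as a reading of the documented contract rather than code from any particular architecture::

    #include <linux/mm.h>
    #include <asm/cacheflush.h>
    #include <asm/tlbflush.h>

    static void example_change_user_range(struct vm_area_struct *vma,
                                          unsigned long start,
                                          unsigned long end)
    {
        /* 1) Write back/invalidate cached data for the old translations. */
        flush_cache_range(vma, start, end);

        /* 2) ... modify the page tables covering [start, end) here ... */

        /* 3) Drop stale translations CPUs may still hold for vma->vm_mm. */
        flush_tlb_range(vma, start, end);
    }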
| /Documentation/devicetree/bindings/powerpc/fsl/ |
| D | pamu.txt |
    5  The PAMU is an I/O MMU that provides device-to-memory access control and
    10  - compatible : <string>
    11  First entry is a version-specific string, such as
    12  "fsl,pamu-v1.0". The second is "fsl,pamu".
    13  - ranges : <prop-encoded-array>
    20  - interrupts : <prop-encoded-array>
    25  - #address-cells: <u32>
    27  - #size-cells : <u32>
    31  - reg : <prop-encoded-array>
    35  - fsl,portid-mapping : <u32>
    [all …]
|
| /Documentation/admin-guide/mm/ |
| D | numaperf.rst |
    10  as CPU cache coherence, but may have different performance. For example,
    21  +------------------+ +------------------+
    22  | Compute Node 0 +-----+ Compute Node 1 |
    24  +--------+---------+ +--------+---------+
    26  +--------+---------+ +--------+---------+
    28  +------------------+ +--------+---------+
    36  performance when accessing a given memory target. Each initiator-target
    48  # symlinks -v /sys/devices/system/node/nodeX/access0/targets/
    49  relative: /sys/devices/system/node/nodeX/access0/targets/nodeY -> ../../nodeY
    51  # symlinks -v /sys/devices/system/node/nodeY/access0/initiators/
    [all …]
|
| /Documentation/filesystems/ |
| D | fuse-io.rst |
    1  .. SPDX-License-Identifier: GPL-2.0
    9  - direct-io
    10  - cached
    11  + write-through
    12  + writeback-cache
    14  The direct-io mode can be selected with the FOPEN_DIRECT_IO flag in the
    17  In direct-io mode the page cache is completely bypassed for reads and writes.
    18  No read-ahead takes place. Shared mmap is disabled by default. To allow shared
    21  In cached mode reads may be satisfied from the page cache, and data may be
    22  read-ahead by the kernel to fill the cache. The cache is always kept consistent
    [all …]
|
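In a libfuse-based filesystem the choice between the modes listed above is made per open: returning from the open handler with fi->direct_io set asks the kernel for FOPEN_DIRECT_IO. A minimal sketch against the libfuse 3 high-level API, where the path check is an arbitrary example::

    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <string.h>

    static int example_open(const char *path, struct fuse_file_info *fi)
    {
        /* Bypass the page cache for this one file; others stay cached. */
        if (strcmp(path, "/no-page-cache") == 0)
            fi->direct_io = 1;
        return 0;
    }

    static const struct fuse_operations example_ops = {
        .open = example_open,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &example_ops, NULL);
    }

Building it needs the usual ``pkg-config fuse3 --cflags --libs`` flags; nothing else about the mount changes.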
| /Documentation/kernel-hacking/ |
| D | false-sharing.rst |
    1  .. SPDX-License-Identifier: GPL-2.0
    9  False sharing is related with cache mechanism of maintaining the data
    10  coherence of one cache line stored in multiple CPU's caches; then
    20  Member 'refcount'(A) and 'name'(B) _share_ one cache line like below::
    22  +-----------+ +-----------+
    24  +-----------+ +-----------+
    28  +----------------------+ +----------------------+
    29  | A B | Cache 0 | A B | Cache 1
    30  +----------------------+ +----------------------+
    32  ---------------------------+------------------+-----------------------------
|
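The refcount/name layout in the false-sharing.rst excerpt is easy to reproduce in plain C; here is a userspace illustration of the problem and the usual fix of padding the write-hot member out to its own line, with 64-byte cache lines assumed::

    #include <stdalign.h>

    /* Both members land in one 64-byte line: every write to 'refcount'
     * invalidates the line that readers of 'name' keep pulling back in. */
    struct shared_bad {
        long refcount;      /* written frequently */
        char name[16];      /* read frequently */
    };

    /* The usual cure: give each member its own cache line. */
    struct shared_better {
        alignas(64) long refcount;
        alignas(64) char name[16];
    };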