Searched +full:cache +full:- +full:block (Results 1 – 25 of 147) sorted by relevance
| /Documentation/admin-guide/device-mapper/ |
| D | cache.rst | 2: Cache title · 8: dm-cache is a device mapper target written by Joe Thornber, Heinz · 11: It aims to improve performance of a block device (eg, a spindle) by · 15: This device-mapper solution allows us to insert this caching at · 17: a thin-provisioning pool. Caching solutions that are integrated more · 20: The target reuses the metadata library used in the thin-provisioning · 23: The decision as to what data to migrate and when is left to a plug-in · 32: Movement of the primary copy of a logical block from one · 39: The origin device always contains a copy of the logical block, which · 40: may be out of date or kept in sync with the copy on the cache device [all …]
|
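The dm-cache entry above describes a policy-driven cache layered between a fast and a slow block device. As a rough sketch of how such a stack is assembled with dmsetup — the device paths are hypothetical, and the 'writeback' feature and 'smq' policy are just one plausible choice:

    # Sketch: SSD partitions for metadata (/dev/sdb1) and cache data (/dev/sdb2),
    # layered over a spindle (/dev/sdc); 512-sector (256 KiB) cache blocks.
    dmsetup create cached-disk --table \
      "0 $(blockdev --getsz /dev/sdc) cache /dev/sdb1 /dev/sdb2 /dev/sdc 512 1 writeback smq 0"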
| D | writecache.rst | 6: doesn't cache reads because reads are supposed to be cached in page cache · 14: 1. type of the cache device - "p" or "s" · 15: - p - persistent memory · 16: - s - SSD · 18: 3. the cache device · 19: 4. block size (4096 is recommended; the maximum block size is the page · 25: offset from the start of cache device in 512-byte sectors · 45: applicable only to persistent memory - use the FUA flag · 49: applicable only to persistent memory - don't use the FUA · 53: - some underlying devices perform better with fua, some [all …]
|
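Following the constructor fields excerpted above (cache type, then origin device, cache device, and block size, followed by a count of optional arguments), a minimal SSD-backed writecache might be created like this; the device names are hypothetical:

    # Sketch: 's' selects an SSD (not persistent-memory) cache; /dev/sdb caches
    # writes to origin /dev/sdc with 4096-byte blocks and no optional arguments.
    dmsetup create wc --table \
      "0 $(blockdev --getsz /dev/sdc) writecache s /dev/sdc /dev/sdb 4096 0"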
| D | cache-policies.rst | 21: doesn't update states (eg, hit counts) for a block more than once · 26: Overview of supplied cache replacement policies · 30: --------------- · 43: --------------------------- · 47: The stochastic multi-queue (smq) policy addresses some of the problems · 55: DM table that is using the cache target. Doing so will cause all of the · 56: mq policy's hints to be dropped. Also, performance of the cache may · 63: The mq policy used a lot of memory; 88 bytes per cache block on a 64 · 67: pointers. It avoids storing an explicit hit count for each block. It · 68: has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of [all …]
|
| D | era.rst | 2: dm-era · 8: dm-era is a target that behaves similarly to the linear target. In · 11: maintains the current era as a monotonically increasing 32-bit · 15: partially invalidating the contents of a cache to restore cache · 21: era <metadata dev> <origin dev> <block size> · 26: block size block size of origin data device, granularity that is · 36: ---------- · 43: ------------------ · 48: ------------------ · 55: <metadata block size> <#used metadata blocks>/<#total metadata blocks> [all …]
|
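Using the 'era <metadata dev> <origin dev> <block size>' constructor quoted in the excerpt, a sketch with hypothetical devices — era records which 8192-sector blocks of the origin were written during each era, which is what lets a cache above it be partially invalidated after a rollback:

    dmsetup create era-dev --table \
      "0 $(blockdev --getsz /dev/sdc) era /dev/sdb1 /dev/sdc 8192"
    dmsetup status era-dev   # reports <metadata block size> <#used>/<#total metadata blocks> and the current era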
| D | vdo.rst | 1: .. SPDX-License-Identifier: GPL-2.0-only · 3: dm-vdo · 6: The dm-vdo (virtual data optimizer) device mapper target provides · 7: block-level deduplication, compression, and thin provisioning. As a device · 20: https://github.com/dm-vdo/vdo/ · 25: enter or come up in read-only mode. Because read-only mode is indicative of · 26: data-loss, a positive action must be taken to bring vdo out of read-only · 28: prepare a read-only vdo to exit read-only mode. After running this tool, · 34: inspect a vdo target's on-disk metadata. Fortunately, these tools are · 35: rarely needed except by dm-vdo developers. [all …]
|
| D | dm-clone.rst | 1: .. SPDX-License-Identifier: GPL-2.0-only · 4: dm-clone · 10: dm-clone is a device mapper target which produces a one-to-one copy of an · 11: existing, read-only source device into a writable destination device: It · 12: presents a virtual block device which makes all data appear immediately, and · 15: The main use case of dm-clone is to clone a potentially remote, high-latency, · 16: read-only, archival-type block device into a writable, fast, primary-type device · 17: for fast, low-latency I/O. The cloned device is visible/mountable immediately · 21: For example, one could restore an application backup from a read-only copy, · 26: When the cloning completes, the dm-clone table can be removed altogether and be [all …]
|
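As a sketch of the stacking the dm-clone excerpt describes, assuming the constructor takes 'clone <metadata dev> <destination dev> <source dev> <region size>' and hypothetical devices (a slow read-only source hydrated in the background into a fast local destination):

    # The clone is mountable immediately; /dev/sdc is copied to
    # /dev/nvme0n1p2 in 8-sector (4 KiB) regions behind the scenes.
    dmsetup create cloned --table \
      "0 $(blockdev --getsz /dev/sdc) clone /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/sdc 8"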
| D | vdo-design.rst | 1: .. SPDX-License-Identifier: GPL-2.0-only · 4: Design of dm-vdo · 7: The dm-vdo (virtual data optimizer) target provides inline deduplication, · 8: compression, zero-block elimination, and thin provisioning. A dm-vdo target · 12: production environments ever since. It was made open-source in 2017 after · 14: dm-vdo. For usage, see vdo.rst in the same directory as this file. · 16: Because deduplication rates fall drastically as the block size increases, a · 17: vdo target has a maximum block size of 4K. However, it can achieve · 18: deduplication rates of 254:1, i.e. up to 254 copies of a given 4K block can · 25: The design of dm-vdo is based on the idea that deduplication is a two-part [all …]
|
| D | dm-init.rst | 5: It is possible to configure a device-mapper device to act as the root device for · 11: The second is to create one or more device-mappers using the module parameter · 12: "dm-mod.create=" through the kernel boot command line argument. · 15: semi-colons, where: · 17: - a comma is used to separate fields like name, uuid, flags and table · 19: - a semi-colon is used to separate devices. · 23: …dm-mod.create=<name>,<uuid>,<minor>,<flags>,<table>[,<table>+][;<name>,<uuid>,<minor>,<flags>,<tab… · 28: <uuid> ::= xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | "" · 35: `--concise` argument. · 45: `cache` constrained, userspace should verify cache device [all …]
|
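A concrete instance of the concise format excerpted above — this one, adapted from dm-init.rst, concatenates two linear targets into a single root device; the 98:16 and 98:32 major:minor pairs are illustrative:

    dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0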
| /Documentation/admin-guide/ |
| D | bcache.rst | 2: A block layer cache (bcache) · 6: nice if you could use them as cache... Hence bcache. · 11: This is the git repository of bcache-tools: · 12: https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/ · 17: It's designed around the performance characteristics of SSDs - it only allocates · 18: in erase block sized buckets, and it uses a hybrid btree/log to track cached · 20: designed to avoid random writes at all costs; it fills up an erase block · 25: great lengths to protect your data - it reliably handles unclean shutdown. (It · 29: Writeback caching can use most of the cache for buffering writes - writing · 36: average is above the cutoff it will skip all IO from that task - instead of [all …]
|
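With the bcache-tools linked in the excerpt, a typical setup looks roughly like this (hypothetical devices; formatting the backing and cache devices in one invocation attaches them automatically):

    make-bcache -B /dev/sdc -C /dev/sdb    # backing device + cache device
    # The composite device shows up as /dev/bcache0; switch it from the
    # default write-through to writeback caching:
    echo writeback > /sys/block/bcache0/bcache/cache_mode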
| D | ext4.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 9: (64 bit) in keeping with increasing disk capacities and state-of-the-art · 12: Mailing list: linux-ext4@vger.kernel.org · 23: - The latest version of e2fsprogs can be found at: · 35: - Create a new filesystem using the ext4 filesystem type: · 37: # mke2fs -t ext4 /dev/hda1 · 41: # tune2fs -O extents /dev/hda1 · 46: # tune2fs -I 256 /dev/hda1 · 48: - Mounting: · 50: # mount -t ext4 /dev/hda1 /wherever [all …]
|
| /Documentation/devicetree/bindings/cache/ |
| D | starfive,jh8100-starlink-cache.yaml | 1: # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) · 3: --- · 4: $id: http://devicetree.org/schemas/cache/starfive,jh8100-starlink-cache.yaml# · 5: $schema: http://devicetree.org/meta-schemas/core.yaml# · 7: title: StarFive StarLink Cache Controller · 10: - Joshua Yeong <joshua.yeong@starfivetech.com> · 13: StarFive's StarLink Cache Controller manages the L3 cache shared between · 14: clusters of CPU cores. The cache driver enables RISC-V non-standard cache · 15: management as an alternative to instructions in the RISC-V Zicbom extension. · 18: - $ref: /schemas/cache-controller.yaml# [all …]
|
| D | sifive,ccache0.yaml | 1: # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) · 4: --- · 5: $id: http://devicetree.org/schemas/cache/sifive,ccache0.yaml# · 6: $schema: http://devicetree.org/meta-schemas/core.yaml# · 8: title: SiFive Composable Cache Controller · 11: - Paul Walmsley <paul.walmsley@sifive.com> · 14: The SiFive Composable Cache Controller is used to provide access to fast copies · 15: of memory for masters in a Core Complex. The Composable Cache Controller also · 16: acts as a directory-based coherency manager. · 24: - sifive,ccache0 [all …]
|
| D | baikal,bt1-l2-ctl.yaml | 1: # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) · 4: --- · 5: $id: http://devicetree.org/schemas/cache/baikal,bt1-l2-ctl.yaml# · 6: $schema: http://devicetree.org/meta-schemas/core.yaml# · 8: title: Baikal-T1 L2-cache Control Block · 11: - Serge Semin <fancer.lancer@gmail.com> · 14: By means of the System Controller, the Baikal-T1 SoC exposes a few settings to · 15: tune up the MIPS P5600 CM2 L2 cache performance. In particular, it's possible · 16: to change the Tag, Data and Way-select RAM access latencies. The Baikal-T1 · 17: L2-cache controller block is responsible for the tuning. Its DT node is [all …]
|
| D | l2c2x0.yaml | 1: # SPDX-License-Identifier: GPL-2.0 · 3: --- · 4: $id: http://devicetree.org/schemas/cache/l2c2x0.yaml# · 5: $schema: http://devicetree.org/meta-schemas/core.yaml# · 7: title: ARM L2 Cache Controller · 10: - Rob Herring <robh@kernel.org> · 14: PL220/PL310 and variants) based level 2 cache controller. All these various · 15: implementations of the L2 cache controller have compatible programming · 16: models (Note 1). Some of the properties that are just prefixed "cache-*" are · 22: cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These [all …]
|
| /Documentation/devicetree/bindings/riscv/ |
| D | cpus.yaml | 1: # SPDX-License-Identifier: (GPL-2.0 OR MIT) · 3: --- · 5: $schema: http://devicetree.org/meta-schemas/core.yaml# · 7: title: RISC-V CPUs · 10: - Paul Walmsley <paul.walmsley@sifive.com> · 11: - Palmer Dabbelt <palmer@sifive.com> · 12: - Conor Dooley <conor@kernel.org> · 15: This document uses some terminology common to the RISC-V community · 19: mandated by the RISC-V ISA: a PC and some registers. This · 27: - $ref: /schemas/cpu.yaml# [all …]
|
| /Documentation/block/ |
| D | writeback_cache_control.rst | 2: Explicit volatile write back cache control · 6: ------------ · 10: operating system before data actually has hit the non-volatile storage. This · 12: system needs to force data out to the non-volatile storage when it performs · 15: The Linux block layer provides two simple mechanisms that let filesystems · 17: a forced cache flush, and the Force Unit Access (FUA) flag for requests. · 20: Explicit cache flushes · 21: ---------------------- · 24: the filesystem and will make sure the volatile cache of the storage device · 26: guarantees that previously completed write requests are on non-volatile [all …]
|
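The flush and FUA mechanisms described above are issued by filesystems, but the block layer's view of a device's volatile cache can be inspected and overridden from userspace; a sketch against a hypothetical /dev/sda:

    cat /sys/block/sda/queue/write_cache      # "write back" or "write through"
    # Claiming "write through" makes the kernel stop issuing flushes; it does
    # not reprogram the device, so only do this for genuinely non-volatile
    # (e.g. battery-backed) caches.
    echo "write through" > /sys/block/sda/queue/write_cache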
| D | blk-mq.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 4: Multi-Queue Block IO Queueing Mechanism (blk-mq) · 7: The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage · 9: through queueing and submitting IO requests to block devices simultaneously, · 16: ---------- · 19: development of the kernel. The Block IO subsystem aimed to achieve the best · 26: However, with the development of Solid State Drives and Non-Volatile Memories · 30: in those devices' design, the multi-queue mechanism was introduced. · 32: The former design had a single queue to store block IO requests with a single · 33: lock. That did not scale well in SMP systems due to dirty data in cache and the [all …]
|
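The per-device hardware queues that blk-mq sets up can be observed through sysfs; a sketch for a hypothetical NVMe device:

    ls /sys/block/nvme0n1/mq/              # one directory per hardware context (hctx)
    cat /sys/block/nvme0n1/mq/0/cpu_list   # CPUs whose software queues feed hctx 0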
| /Documentation/driver-api/md/ |
| D | raid5-cache.rst | 2: RAID 4/5/6 cache · 5: RAID 4/5/6 could include an extra disk for data cache besides normal RAID · 6: disks. The role of the RAID disks isn't changed with the cache disk. The cache disk · 7: caches data to the RAID disks. The cache can be in write-through (supported · 8: since 4.4) or write-back mode (supported since 4.10). mdadm (supported since · 9: 3.4) has a new option '--write-journal' to create an array with a cache. Please · 10: refer to the mdadm manual for details. By default (RAID array starts), the cache is · 11: in write-through mode. A user can switch it to write-back mode by:: · 13: echo "write-back" > /sys/block/md0/md/journal_mode · 15: And switch it back to write-through mode by:: [all …]
|
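Tying the excerpt together: creating a RAID5 array with a journal (cache) device via the '--write-journal' option it mentions, then toggling the cache mode through the sysfs file it quotes; the devices are hypothetical:

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd \
          --write-journal /dev/sde
    cat /sys/block/md0/md/journal_mode                    # write-through by default
    echo "write-back" > /sys/block/md0/md/journal_mode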
| /Documentation/ABI/testing/ |
| D | sysfs-class-bdi | 13: Device number for block devices, or value of st_dev on · 14: non-block filesystems which provide their own BDI, such as NFS · 17: MAJOR:MINOR-fuseblk · 23: The default backing dev, used for non-block device backed · 30: Size of the read-ahead window in kilobytes · 32: (read-write) · 38: total write-back cache that relates to its current average · 42: percentage of the write-back cache to a particular device. · 45: (read-write) · 52: total write-back cache that relates to its current average [all …]
|
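A sketch of poking the per-BDI knobs this ABI file documents, using the 8:0 device number as a placeholder:

    cat /sys/class/bdi/8:0/read_ahead_kb    # read-ahead window in kilobytes
    echo 50 > /sys/class/bdi/8:0/max_ratio  # cap this device at 50% of the total write-back cache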
| /Documentation/filesystems/ |
| D | squashfs.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 7: Squashfs is a compressed read-only filesystem for Linux. · 11: minimise data overhead. Block sizes greater than 4K are supported up to a · 12: maximum of 1 Mbyte (default block size 128K). · 14: Squashfs is intended for general read-only filesystem use, for archival · 16: block device/memory systems (e.g. embedded systems) where low overhead is · 19: Mailing list: squashfs-devel@lists.sourceforge.net · 23: ---------------------- · 35: Max block size 1 MiB 4 KiB · 39: Tail-end packing (fragments) yes no [all …]
|
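Building and mounting an image with a non-default block size, per the 4K–1 MiB range in the excerpt; the paths are hypothetical and newer mksquashfs versions accept the '1M' suffix:

    mksquashfs /srv/rootfs rootfs.squashfs -b 1M   # 1 MiB blocks (default 128K)
    mount -t squashfs -o loop rootfs.squashfs /mnt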
| /Documentation/networking/ |
| D | page_pool.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 7: .. kernel-doc:: include/net/page_pool/helpers.h · 13: .. code-block:: none · 15: +------------------+ · 17: +------------------+ · 23: +--------------------------------------------+ · 25: +--------------------------------------------+ · 31: +-----------------------+ +------------------------+ · 32: | alloc (and map) pages | | get page from cache | · 33: +-----------------------+ +------------------------+ [all …]
|
| /Documentation/driver-api/mmc/ |
| D | mmc-async-req.rst | 8: How significant is the cache maintenance overhead? · 10: It depends. Fast eMMC and multiple cache levels with speculative cache · 11: pre-fetch make the cache overhead relatively significant. If the DMA · 15: The intention of non-blocking (asynchronous) MMC requests is to minimize the · 19: dma_unmap_sg are processing. Using non-blocking MMC requests makes it · 23: MMC block driver · 26: The mmc_blk_issue_rw_rq() in the MMC block driver is made non-blocking. · 32: performance gain is 5% for large writes and 10% on large reads on an L2 cache · 40: https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req · 48: truly non-blocking. If there is an ongoing async request it waits [all …]
|
| /Documentation/filesystems/caching/ |
| D | backend-api.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 4: Cache Backend API · 7: The FS-Cache system provides an API by which actual caches can be supplied to · 8: FS-Cache for it to then serve out to network filesystems and other interested · 11: #include <linux/fscache-cache.h>. · 17: Interaction with the API is handled on three levels: cache, volume and data · 23: Cache cookie struct fscache_cache · 28: Cookies are used to provide some filesystem data to the cache, manage state and · 29: pin the cache during access in addition to acting as reference points for the · 34: The cache backend and the network filesystem can both ask for cache cookies - [all …]
|
| D | netfs-api.rst | 1: .. SPDX-License-Identifier: GPL-2.0 · 10: (1) A cache is logically organised into volumes and data storage objects · 18: (4) Cookies have coherency data that allows a cache to determine if the · 55: maximum size of a filename component (allowing the cache backend one char for · 62: their parent volume. The cache backend is responsible for rendering the binary · 71: This causes fscache to send the cache backend off to look up/create resources · 83: extra pins into the cache to stop cache withdrawal from tearing down the · 87: The filesystem is expected to use netfslib to access the cache, but that's not · 109: what's stored in the cache. · 111: The caller may also specify the name of the cache to use. If specified, [all …]
|
| /Documentation/filesystems/nfs/ |
| D | rpc-cache.rst | 2: RPC Cache · 21: - mapping from IP address to client name · 22: - mapping from client name and filesystem to export options · 23: - mapping from UID to list of GIDs, to work around NFS's limitation · 25: - mappings between local UID/GID and remote UID/GID for sites that · 27: - mapping from network identity to public key for crypto authentication. · 31: - general cache lookup with correct locking · 32: - supporting 'NEGATIVE' as well as positive entries · 33: - allowing an EXPIRED time on cache items, and removing · 34: items after they expire, and are no longer in use. [all …]
|