Searched +full:i +full:- +full:cache +full:- +full:block +full:- +full:size (Results 1 – 25 of 89) sorted by relevance
| /Documentation/devicetree/bindings/riscv/ |
| D | cpus.yaml | 1 # SPDX-License-Identifier: (GPL-2.0 OR MIT) 3 --- 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 7 title: RISC-V CPUs 10 - Paul Walmsley <paul.walmsley@sifive.com> 11 - Palmer Dabbelt <palmer@sifive.com> 12 - Conor Dooley <conor@kernel.org> 15 This document uses some terminology common to the RISC-V community 19 mandated by the RISC-V ISA: a PC and some registers. This 27 - $ref: /schemas/cpu.yaml# [all …]
|
| D | extensions.yaml | 1 # SPDX-License-Identifier: (GPL-2.0 OR MIT) 3 --- 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 7 title: RISC-V ISA extensions 10 - Paul Walmsley <paul.walmsley@sifive.com> 11 - Palmer Dabbelt <palmer@sifive.com> 12 - Conor Dooley <conor@kernel.org> 15 RISC-V has a large number of extensions, some of which are "standard" 16 extensions, meaning they are ratified by RISC-V International, and others 24 ratified states, with the exception of the I, Zicntr & Zihpm extensions. [all …]
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo.rst | 1 .. SPDX-License-Identifier: GPL-2.0-only 3 dm-vdo 6 The dm-vdo (virtual data optimizer) device mapper target provides 7 block-level deduplication, compression, and thin provisioning. As a device 20 https://github.com/dm-vdo/vdo/ 25 enter or come up in read-only mode. Because read-only mode is indicative of 26 data-loss, a positive action must be taken to bring vdo out of read-only 28 prepare a read-only vdo to exit read-only mode. After running this tool, 34 inspect a vdo target's on-disk metadata. Fortunately, these tools are 35 rarely needed except by dm-vdo developers. [all …]
|
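The vdo.rst entry above describes recovering a vdo target that has dropped into read-only mode. One quick check is the target's operating mode in its status line; a hedged sketch, assuming a target named vdo0 (per vdo.rst, the status output includes an operating mode such as normal, recovering, or read-only):

.. code-block:: sh

   # Inspect the vdo target's status line, which reports its operating mode.
   dmsetup status vdo0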
| D | writecache.rst | 6 doesn't cache reads because reads are supposed to be cached in page cache 14 1. type of the cache device - "p" or "s" 15 - p - persistent memory 16 - s - SSD 18 3. the cache device 19 4. block size (4096 is recommended; the maximum block size is the page 20 size) 25 offset from the start of cache device in 512-byte sectors 45 applicable only to persistent memory - use the FUA flag 49 applicable only to persistent memory - don't use the FUA [all …]
|
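The writecache constructor parameters quoted above map directly onto a dmsetup table line: type, origin device, cache device, block size, then the count of optional arguments. A minimal sketch with placeholder device names (p = persistent memory, s = SSD):

.. code-block:: sh

   # <start> <length> writecache <type> <origin> <cache> <block size> <#opt args>
   dmsetup create wc --table \
     "0 $(blockdev --getsz /dev/vg/origin) writecache p /dev/vg/origin /dev/pmem0 4096 0"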
| D | cache.rst | 2 Cache 8 dm-cache is a device mapper target written by Joe Thornber, Heinz 11 It aims to improve performance of a block device (eg, a spindle) by 15 This device-mapper solution allows us to insert this caching at 17 a thin-provisioning pool. Caching solutions that are integrated more 20 The target reuses the metadata library used in the thin-provisioning 23 The decision as to what data to migrate and when is left to a plug-in 32 Movement of the primary copy of a logical block from one 39 The origin device always contains a copy of the logical block, which 40 may be out of date or kept in sync with the copy on the cache device [all …]
|
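For reference, a dm-cache table line has the form `cache <metadata dev> <cache dev> <origin dev> <block size> <#feature args> [features] <policy> <#policy args>`. A hedged sketch with placeholder devices, a 512-sector (256 KiB) cache block, and the default smq policy:

.. code-block:: sh

   # Cache the slow origin device behind a fast device plus a small metadata device.
   dmsetup create cached --table \
     "0 $(blockdev --getsz /dev/vg/slow) cache /dev/vg/meta /dev/vg/fast /dev/vg/slow 512 1 writethrough smq 0"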
| D | dm-clone.rst | 1 .. SPDX-License-Identifier: GPL-2.0-only 4 dm-clone 10 dm-clone is a device mapper target which produces a one-to-one copy of an 11 existing, read-only source device into a writable destination device: It 12 presents a virtual block device which makes all data appear immediately, and 15 The main use case of dm-clone is to clone a potentially remote, high-latency, 16 read-only, archival-type block device into a writable, fast, primary-type device 17 for fast, low-latency I/O. The cloned device is visible/mountable immediately 19 background, in parallel with user I/O. 21 For example, one could restore an application backup from a read-only copy, [all …]
|
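A dm-clone target is assembled the same way; its table line is `clone <metadata dev> <destination dev> <source dev> <region size>` plus optional feature arguments. A sketch with placeholder devices and an 8-sector (4 KiB) region size:

.. code-block:: sh

   # Present the destination as fully populated while hydration runs in the background.
   dmsetup create cloned --table \
     "0 $(blockdev --getsz /dev/vg/source) clone /dev/vg/meta /dev/vg/dest /dev/vg/source 8 0"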
| D | vdo-design.rst | 1 .. SPDX-License-Identifier: GPL-2.0-only 4 Design of dm-vdo 7 The dm-vdo (virtual data optimizer) target provides inline deduplication, 8 compression, zero-block elimination, and thin provisioning. A dm-vdo target 9 can be backed by up to 256TB of storage, and can present a logical size of 12 production environments ever since. It was made open-source in 2017 after 14 dm-vdo. For usage, see vdo.rst in the same directory as this file. 16 Because deduplication rates fall drastically as the block size increases, a 17 vdo target has a maximum block size of 4K. However, it can achieve 18 deduplication rates of 254:1, i.e. up to 254 copies of a given 4K block can [all …]
|
| D | verity.rst | 2 dm-verity 5 Device-Mapper's "verity" target provides transparent integrity checking of 6 block devices using a cryptographic digest provided by the kernel crypto API. 7 This target is read-only. 21 This is the type of the on-disk hash format. 25 the rest of the block is padded with zeroes. 40 dm-verity device. 43 The block size on a data device in bytes. 44 Each block corresponds to one digest on the hash device. 47 The size of a hash block in bytes. [all …]
|
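In practice dm-verity devices are usually set up with veritysetup from the cryptsetup project rather than with raw dmsetup tables. A sketch; device names are placeholders and <root hash> is the value printed by the format step:

.. code-block:: sh

   # Write the hash tree to /dev/sdb2 and print the root hash.
   veritysetup format /dev/sdb1 /dev/sdb2

   # Activate the read-only, integrity-checked device as /dev/mapper/vroot.
   veritysetup open /dev/sdb1 vroot /dev/sdb2 <root hash>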
| /Documentation/admin-guide/ |
| D | bcache.rst | 2 A block layer cache (bcache) 6 nice if you could use them as cache... Hence bcache. 11 This is the git repository of bcache-tools: 12 https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/ 17 It's designed around the performance characteristics of SSDs - it only allocates 18 in erase block sized buckets, and it uses a hybrid btree/log to track cached 19 extents (which can be anywhere from a single sector to the bucket size). It's 20 designed to avoid random writes at all costs; it fills up an erase block 25 great lengths to protect your data - it reliably handles unclean shutdown. (It 29 Writeback caching can use most of the cache for buffering writes - writing [all …]
|
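Setting up bcache follows the pattern described in bcache.rst: format a backing device and a cache device with make-bcache from bcache-tools, then register them. A sketch with placeholder devices (formatting both in one command attaches them automatically):

.. code-block:: sh

   make-bcache -B /dev/sdb -C /dev/sdc
   # If udev has not already registered the devices, do it by hand:
   echo /dev/sdb > /sys/fs/bcache/register
   echo /dev/sdc > /sys/fs/bcache/register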
| D | ext4.rst | 1 .. SPDX-License-Identifier: GPL-2.0 9 (64 bit) in keeping with increasing disk capacities and state-of-the-art 12 Mailing list: linux-ext4@vger.kernel.org 23 - The latest version of e2fsprogs can be found at: 35 - Create a new filesystem using the ext4 filesystem type: 37 # mke2fs -t ext4 /dev/hda1 41 # tune2fs -O extents /dev/hda1 46 # tune2fs -I 256 /dev/hda1 48 - Mounting: 50 # mount -t ext4 /dev/hda1 /wherever [all …]
|
| /Documentation/filesystems/ |
| D | squashfs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 7 Squashfs is a compressed read-only filesystem for Linux. 11 minimise data overhead. Block sizes greater than 4K are supported up to a 12 maximum of 1Mbytes (default block size 128K). 14 Squashfs is intended for general read-only filesystem use, for archival 15 use (i.e. in cases where a .tar.gz file may be used), and in constrained 16 block device/memory systems (e.g. embedded systems) where low overhead is 19 Mailing list: squashfs-devel@lists.sourceforge.net 23 ---------------------- 30 Max filesystem size 2^64 256 MiB [all …]
|
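The block-size limits quoted above (4K up to a maximum of 1M, 128K default) are chosen at image-build time via mksquashfs. A sketch, assuming a zstd-capable mksquashfs build:

.. code-block:: sh

   # Build an image with a 1 MiB block size, then loop-mount it read-only.
   mksquashfs rootdir/ image.sqsh -b 1M -comp zstd
   mount -o loop -t squashfs image.sqsh /mnt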
| D | erofs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 4 EROFS - Enhanced Read-Only File System 10 EROFS filesystem stands for Enhanced Read-Only File System. It aims to form a 11 generic read-only filesystem solution for various read-only use cases instead 17 random-access friendly high-performance filesystem to get rid of unneeded I/O 18 amplification and memory-resident overhead compared to similar approaches. 22 - read-only storage media or 24 - part of a fully trusted read-only solution, which means it needs to be 25 immutable and bit-for-bit identical to the official golden image for 28 - hope to minimize extra storage space with guaranteed end-to-end performance [all …]
|
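EROFS images are likewise built offline, with mkfs.erofs from erofs-utils; a hedged sketch with placeholder names:

.. code-block:: sh

   # Build an LZ4HC-compressed image from a directory and mount it read-only.
   mkfs.erofs -zlz4hc image.erofs rootdir/
   mount -o loop -t erofs image.erofs /mnt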
| D | zonefs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 4 ZoneFS - Zone filesystem for Zoned block devices 10 zonefs is a very simple file system exposing each zone of a zoned block device 11 as a file. Unlike a regular POSIX-compliant file system with native zoned block 13 constraint of zoned block devices to the user. Files representing sequential 17 As such, zonefs is in essence closer to a raw block device access interface 18 than to a full-featured POSIX file system. The goal of zonefs is to simplify 19 the implementation of zoned block device support in applications by replacing 20 raw block device file accesses with a richer file API, avoiding relying on 21 direct block device file ioctls which may be more obscure to developers. One [all …]
|
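Using zonefs amounts to formatting a zoned block device with mkzonefs (from zonefs-tools) and mounting it; zones then appear as files under the conventional and sequential subdirectories. A sketch with a placeholder device:

.. code-block:: sh

   mkzonefs /dev/sdX
   mount -t zonefs /dev/sdX /mnt
   ls /mnt/cnv /mnt/seq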
| D | f2fs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 4 WHAT IS Flash-Friendly File System (F2FS)? 7 NAND flash memory-based storage devices, such as SSD, eMMC, and SD cards, have 13 F2FS is a file system exploiting NAND flash memory-based storage devices, which 14 is based on Log-structured File System (LFS). The design has been focused on 18 Since a NAND flash memory-based storage device shows different characteristic 20 F2FS and its tools support various parameters not only for configuring on-disk 26 - git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git 30 - linux-f2fs-devel@lists.sourceforge.net 34 - https://bugzilla.kernel.org/enter_bug.cgi?product=File%20System&component=f2fs [all …]
|
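A minimal F2FS setup with the f2fs-tools referenced above (the device name and label are placeholders):

.. code-block:: sh

   mkfs.f2fs -l mydisk /dev/sdb1
   mount -t f2fs /dev/sdb1 /mnt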
| D | locking.rst | 5 The text below describes the locking rules for VFS-related methods. 6 It is (believed to be) up-to-date. *Please*, if you change anything in 7 prototypes or locking protocols - update this file. And update the relevant 10 Don't turn it into log - maintainers of out-of-the-tree code are supposed to 37 ops rename_lock ->d_lock may block rcu-walk 39 d_revalidate: no no yes (ref-walk) maybe 50 d_manage: no no yes (ref-walk) maybe 91 all may block 108 permission: no (may not block if called in rcu-walk mode) 123 Additionally, ->rmdir(), ->unlink() and ->rename() have ->i_rwsem [all …]
|
| D | ramfs-rootfs-initramfs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 12 -------------- 15 mechanisms (the page cache and dentry cache) as a dynamically resizable 16 RAM-based filesystem. 19 backing store (usually the block device the filesystem is mounted on) are kept 24 memory. A similar mechanism (the dentry cache) greatly speeds up access to 28 dentries and page cache as usual, but there's nowhere to write them to. 34 you're mounting the disk cache as a filesystem. Because of this, ramfs is not 39 ------------------ 41 The older "ram disk" mechanism created a synthetic block device out of [all …]
|
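Because ramfs is just the page and dentry caches mounted as a filesystem, it needs no backing device and imposes no size limit; tmpfs is the variant that adds one. A sketch:

.. code-block:: sh

   mount -t ramfs ramfs /mnt/ram               # unbounded; can consume all RAM
   mount -t tmpfs -o size=256m tmpfs /mnt/tmp  # bounded, swappable variant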
| D | vfs.rst | 1 .. SPDX-License-Identifier: GPL-2.0 9 - Copyright (C) 1999 Richard Gooch 10 - Copyright (C) 2005 Pekka Enberg 26 Directory Entry Cache (dcache) 27 ------------------------------ 31 to search through the directory entry cache (also known as the dentry 32 cache or dcache). This provides a very fast look-up mechanism to 36 The dentry cache is meant to be a view into your entire filespace. As 38 bits of the cache are missing. In order to resolve your pathname into a 44 ---------------- [all …]
|
| /Documentation/filesystems/caching/ |
| D | netfs-api.rst | 1 .. SPDX-License-Identifier: GPL-2.0 10 (1) A cache is logically organised into volumes and data storage objects 18 (4) Cookies have coherency data that allows a cache to determine if the 21 (5) I/O is done asynchronously where possible. 34 (6) Data I/O API 55 maximum size of a filename component (allowing the cache backend one char for 62 their parent volume. The cache backend is responsible for rendering the binary 71 This causes fscache to send the cache backend off to look up/create resources 83 extra pins into the cache to stop cache withdrawal from tearing down the 87 The filesystem is expected to use netfslib to access the cache, but that's not [all …]
|
| D | backend-api.rst | 1 .. SPDX-License-Identifier: GPL-2.0 4 Cache Backend API 7 The FS-Cache system provides an API by which actual caches can be supplied to 8 FS-Cache for it to then serve out to network filesystems and other interested 11 #include <linux/fscache-cache.h>. 17 Interaction with the API is handled on three levels: cache, volume and data 23 Cache cookie struct fscache_cache 28 Cookies are used to provide some filesystem data to the cache, manage state and 29 pin the cache during access in addition to acting as reference points for the 34 The cache backend and the network filesystem can both ask for cache cookies - [all …]
|
| D | cachefiles.rst | 1 .. SPDX-License-Identifier: GPL-2.0 4 Cache on Already Mounted Filesystem 15 (*) Starting the cache. 19 (*) Cache culling. 21 (*) Cache structure. 31 (*) On-demand Read. 37 CacheFiles is a caching backend that's meant to use as a cache a directory on 40 CacheFiles uses a userspace daemon to do some of the cache management - such as 44 The filesystem and data integrity of the cache are only as good as those of the 49 CacheFiles creates a misc character device - "/dev/cachefiles" - that is used [all …]
|
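The cache-management commands in cachefiles.rst (dir, tag, the brun/bcull/bstop culling limits) are normally written to /dev/cachefiles by the cachefilesd userspace daemon from its configuration file. A hedged sketch, assuming the conventional /etc/cachefilesd.conf path and example values:

.. code-block:: sh

   cat /etc/cachefilesd.conf
   # dir /var/cache/fscache
   # tag mycache
   # brun 10%
   # bcull 7%
   # bstop 3%
   cachefilesd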
| /Documentation/devicetree/bindings/cache/ |
| D | l2c2x0.yaml | 1 # SPDX-License-Identifier: GPL-2.0 3 --- 4 $id: http://devicetree.org/schemas/cache/l2c2x0.yaml# 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 7 title: ARM L2 Cache Controller 10 - Rob Herring <robh@kernel.org> 14 PL220/PL310 and variants) based level 2 cache controller. All these various 15 implementations of the L2 cache controller have compatible programming 16 models (Note 1). Some of the properties that are just prefixed "cache-*" are 22 cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These [all …]
|
| /Documentation/ABI/stable/ |
| D | sysfs-block | 1 What: /sys/block/<disk>/alignment_offset 5 Storage devices may report a physical block size that is 6 bigger than the logical block size (for instance a drive 7 with 4KB physical sectors exposing 512-byte logical 13 What: /sys/block/<disk>/discard_alignment 19 the exported logical block size. The discard_alignment 24 What: /sys/block/<disk>/atomic_write_max_bytes 29 size reported by the device. This parameter is relevant 35 power-of-two and atomic_write_unit_max_bytes may also be 37 This parameter - along with atomic_write_unit_min_bytes [all …]
|
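These attributes are plain sysfs files, so checking a disk's reported geometry is just a matter of reading them (sda is a placeholder; the logical and physical block sizes live under queue/):

.. code-block:: sh

   cat /sys/block/sda/alignment_offset
   cat /sys/block/sda/discard_alignment
   cat /sys/block/sda/queue/logical_block_size
   cat /sys/block/sda/queue/physical_block_size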
| /Documentation/block/ |
| D | data-integrity.rst | 18 support for appending integrity metadata to an I/O. The integrity 22 for some protection schemes also that the I/O is written to the right 28 between adjacent nodes in the I/O path. The interesting thing about 30 is well defined and every node in the I/O path can verify the 31 integrity of the I/O and reject it if corruption is detected. This 54 scatter-gather lists. 58 host memory without changes to the page cache. 60 Also, the 16-bit CRC checksum mandated by both the SCSI and SATA specs 64 lighter-weight checksum to be used when interfacing with the operating 66 The IP checksum received from the OS is converted to the 16-bit CRC [all …]
|
| /Documentation/translations/it_IT/process/ |
| D | coding-style.rst | 1 .. include:: ../disclaimer-ita.rst 3 :Original: :ref:`Documentation/process/coding-style.rst <codingstyle>` 21 Anyway, here are the points: 24 --------------- 29 pi to 3. 33 screen for 20 hours straight, you will find it much easier to see the levels of 47 align the ``switch`` keyword and its 49 ``case`` labels in the same column. An example: 51 .. code-block:: c 73 .. code-block:: c [all …]
|
| /Documentation/filesystems/iomap/ |
| D | operations.rst | 1 .. SPDX-License-Identifier: GPL-2.0 20 Buffered I/O 23 Buffered I/O is the default file I/O path in Linux. 26 Dirty cache will be written back to disk at some point that can be 30 filesystems have to implement themselves under the legacy I/O model. 34 Under the legacy I/O model, this was managed very inefficiently with 35 linked lists of buffer heads instead of the per-folio bitmaps that iomap 38 be used, which makes buffered I/O much more efficient, and the pagecache 42 ----------------------------------- 61 -------------------------- [all …]
|