Searched full:"d-cache-block-size" (Results 1 – 25 of 51), sorted by relevance
| /Documentation/devicetree/bindings/riscv/ |
| D | cpus.yaml | # SPDX-License-Identifier: (GPL-2.0 OR MIT) … --- … $schema: http://devicetree.org/meta-schemas/core.yaml# … title: RISC-V CPUs … - Paul Walmsley <paul.walmsley@sifive.com> - Palmer Dabbelt <palmer@sifive.com> - Conor Dooley <conor@kernel.org> … This document uses some terminology common to the RISC-V community … mandated by the RISC-V ISA: a PC and some registers. This … - $ref: /schemas/cpu.yaml# [all …]
|
| D | extensions.yaml | # SPDX-License-Identifier: (GPL-2.0 OR MIT) … --- … $schema: http://devicetree.org/meta-schemas/core.yaml# … title: RISC-V ISA extensions … - Paul Walmsley <paul.walmsley@sifive.com> - Palmer Dabbelt <palmer@sifive.com> - Conor Dooley <conor@kernel.org> … RISC-V has a large number of extensions, some of which are "standard" extensions, meaning they are ratified by RISC-V International, and others … Identifies the specific RISC-V instruction set architecture [all …]
|
| /Documentation/admin-guide/ |
| D | bcache.rst | A block layer cache (bcache) … nice if you could use them as cache... Hence bcache. … This is the git repository of bcache-tools: https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/ … It's designed around the performance characteristics of SSDs - it only allocates in erase block sized buckets, and it uses a hybrid btree/log to track cached extents (which can be anywhere from a single sector to the bucket size). It's designed to avoid random writes at all costs; it fills up an erase block … great lengths to protect your data - it reliably handles unclean shutdown. (It … Writeback caching can use most of the cache for buffering writes - writing [all …]
|
| D | md.rst | … md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn … md=d<md device no.>,dev0,dev1,...,devn … -1 linear mode … ``chunk size factor`` … (raid-0 and raid-1 only) … Set the chunk size as 4k << n. … that all auto-detected arrays are assembled as partitionable. [all …]
|
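The md.rst snippet above defines the chunk size as 4k << n, where n is the `<chunk size factor>` field of the `md=` boot parameter. A quick sanity check of that arithmetic, with an illustrative factor of 2:

```shell
# chunk size = 4k << n, where n is the <chunk size factor> from the
# md= boot line quoted in the snippet. n=2 is an example value only.
n=2
echo "$((4 << n))K"
```

So a factor of 2 yields 16K chunks; factor 0 gives the 4K minimum.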
| D | ramoops.rst | … Ramoops uses a predefined memory area to store the dump. The start and size … * ``mem_size`` for the size. The memory size will be rounded down to a … which enables full cache on it. This can improve the performance. … the kernel to use only the first 128 MB of memory, and place ECC-protected … ``Documentation/devicetree/bindings/reserved-memory/ramoops.yaml``. … reserved-memory { #address-cells = <2>; [all …]
|
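The ramoops.rst entry above refers to the `mem_address` and `mem_size` module parameters and to a setup that limits the kernel to the first 128 MB of memory. A hedged sketch of a matching kernel command line; the address and size values are illustrative:

```shell
# Kernel boot parameters (config fragment, not a command).
# mem_address/mem_size/ecc are the real ramoops module parameters;
# the specific address and size here are example values.
mem=128M ramoops.mem_address=0x8000000 ramoops.mem_size=0x100000 ramoops.ecc=1
```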
| /Documentation/filesystems/ |
| D | dax.rst | The page cache is usually used to buffer reads and writes to files. … For block devices that are memory-like, the page cache pages would be … If you have a block device which supports `DAX`, you can make a filesystem on it as usual. The `DAX` code currently only supports files with a block size equal to your kernel's `PAGE_SIZE`, so you may need to specify a block size when creating the filesystem. … When mounting the filesystem, use the ``-o dax`` option on the command line or [all …]
|
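The dax.rst snippet says DAX currently requires the filesystem block size to equal the kernel's `PAGE_SIZE`, and that the filesystem is mounted with `-o dax`. A sketch assuming a 4 KiB `PAGE_SIZE` and a hypothetical `/dev/pmem0` device:

```shell
# /etc/fstab entry (illustrative config fragment): mount ext4 on a
# hypothetical /dev/pmem0 with DAX enabled. The filesystem must have
# been created with a block size equal to PAGE_SIZE, e.g.:
#   mkfs.ext4 -b 4096 /dev/pmem0
/dev/pmem0  /mnt/dax  ext4  dax  0  0
```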
| D | affs.rst | .. SPDX-License-Identifier: GPL-2.0 … in file names are case-insensitive, as they ought to be. … DOS\4 The original filesystem with directory cache. The directory cache speeds up directory accesses on floppies considerably, … DOS\5 The Fast File System with directory cache. Supported read only. … All of the above filesystems allow block sizes from 512 to 32K bytes. Supported block sizes are: 512, 1024, 2048 and 4096 bytes. Larger blocks … root=block Sets the block number of the root block. This should never … Sets the blocksize to blksize. Valid block sizes are 512, [all …]
|
| D | f2fs.rst | .. SPDX-License-Identifier: GPL-2.0 … WHAT IS Flash-Friendly File System (F2FS)? … NAND flash memory-based storage devices, such as SSD, eMMC, and SD cards, have … F2FS is a file system exploiting NAND flash memory-based storage devices, which is based on Log-structured File System (LFS). The design has been focused on … Since a NAND flash memory-based storage device shows different characteristic … F2FS and its tools support various parameters not only for configuring on-disk … - git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git … - linux-f2fs-devel@lists.sourceforge.net … - https://bugzilla.kernel.org/enter_bug.cgi?product=File%20System&component=f2fs [all …]
|
| D | ramfs-rootfs-initramfs.rst | .. SPDX-License-Identifier: GPL-2.0 … mechanisms (the page cache and dentry cache) as a dynamically resizable RAM-based filesystem. … backing store (usually the block device the filesystem is mounted on) are kept … memory. A similar mechanism (the dentry cache) greatly speeds up access to … dentries and page cache as usual, but there's nowhere to write them to. … you're mounting the disk cache as a filesystem. Because of this, ramfs is not … The older "ram disk" mechanism created a synthetic block device out of [all …]
|
| D | zonefs.rst | .. SPDX-License-Identifier: GPL-2.0 … ZoneFS - Zone filesystem for Zoned block devices … zonefs is a very simple file system exposing each zone of a zoned block device as a file. Unlike a regular POSIX-compliant file system with native zoned block … constraint of zoned block devices to the user. Files representing sequential … As such, zonefs is in essence closer to a raw block device access interface than to a full-featured POSIX file system. The goal of zonefs is to simplify the implementation of zoned block device support in applications by replacing raw block device file accesses with a richer file API, avoiding relying on direct block device file ioctls which may be more obscure to developers. One [all …]
|
| D | ceph.rst | .. SPDX-License-Identifier: GPL-2.0 … * N-way replication of data across storage nodes … on symmetric access by all clients to shared block devices, Ceph … re-replicated in a distributed fashion by the storage nodes themselves … in-memory cache above the file namespace that is extremely scalable, … and can tolerate arbitrary (well, non-Byzantine) node failures. The … loaded into its cache with a single I/O operation. The contents of … * They can not exceed 240 characters in size. This is because the MDS makes … `_<SNAPSHOT-NAME>_<INODE-NUMBER>`. Since filenames in general can't have more than 255 characters, and `<node-id>` takes 13 characters, the long [all …]
|
| D | ext2.rst | .. SPDX-License-Identifier: GPL-2.0 … set using tune2fs(8). Kernel-determined defaults are indicated by (*). … dax Use direct access (no page cache). See … errors=remount-ro Remount the filesystem read-only on an error. … nouid32 Use 16-bit UIDs and GIDs. … oldalloc Enable the old block allocator. Orlov should have better performance, we'd like to get some … orlov (*) Use the Orlov block allocator. … a fixed size, of 1024, 2048 or 4096 bytes (8192 bytes on Alpha systems), [all …]
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo-design.rst | .. SPDX-License-Identifier: GPL-2.0-only … Design of dm-vdo … The dm-vdo (virtual data optimizer) target provides inline deduplication, compression, zero-block elimination, and thin provisioning. A dm-vdo target can be backed by up to 256TB of storage, and can present a logical size of … production environments ever since. It was made open-source in 2017 after … dm-vdo. For usage, see vdo.rst in the same directory as this file. … Because deduplication rates fall drastically as the block size increases, a vdo target has a maximum block size of 4K. However, it can achieve deduplication rates of 254:1, i.e. up to 254 copies of a given 4K block can [all …]
|
| D | persistent-data.rst | The more-sophisticated device-mapper targets require complex metadata … - Mikulas Patocka's multisnap implementation - Heinz Mauelshagen's thin provisioning target - Another btree-based caching target posted to dm-devel - Another multi-snapshot target based on a design of Daniel Phillips … we'd like to reduce the number. … The persistent-data library is an attempt to provide a re-usable framework for people who want to store metadata in device-mapper targets. It's currently used by the thin-provisioning target and an … under drivers/md/persistent-data. [all …]
|
| D | dm-ima.rst | dm-ima … (including the attestation service) interact with it - both during the setup and during rest of the system run-time. They share sensitive data … may want to verify the current run-time state of the relevant kernel subsystems before fully trusting the system with business-critical … various important functionalities to the block devices using various … impact the security profile of the block device, and in-turn, of the … key size determines the strength of encryption for a given block device. … Therefore, verifying the current state of various block devices as well … fully trusting the system with business-critical data/workload. [all …]
|
| /Documentation/filesystems/caching/ |
| D | cachefiles.rst | .. SPDX-License-Identifier: GPL-2.0 … Cache on Already Mounted Filesystem … (*) Starting the cache. … (*) Cache culling. … (*) Cache structure. … (*) On-demand Read. … CacheFiles is a caching backend that's meant to use as a cache a directory on … CacheFiles uses a userspace daemon to do some of the cache management - such as … The filesystem and data integrity of the cache are only as good as those of the … CacheFiles creates a misc character device - "/dev/cachefiles" - that is used [all …]
|
| /Documentation/admin-guide/blockdev/ |
| D | ramdisk.rst | Using the RAM disk block device with Linux … The RAM disk driver is a way to use main system memory as a block device. It … in order to access the root filesystem (see Documentation/admin-guide/initrd.rst). It can … RAM from the buffer cache. The driver marks the buffers it is using as dirty … the configuration symbol BLK_DEV_RAM_COUNT in the Block drivers config menu … Size of the ramdisk. … This parameter tells the RAM disk driver to set up RAM disks of N k size. The [all …]
|
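The ramdisk.rst entry above describes a parameter that sets up RAM disks of N kilobytes. A sketch of the corresponding boot parameter; the 16 MB size is an example value:

```shell
# Kernel boot parameter (config fragment, not a command):
# request RAM disks of 16384 kB (16 MB) each, as described in the
# ramdisk.rst snippet. 16384 is an illustrative size.
ramdisk_size=16384
```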
| /Documentation/translations/it_IT/process/ |
| D | coding-style.rst | .. include:: ../disclaimer-ita.rst … :Original: :ref:`Documentation/process/coding-style.rst <codingstyle>` … pi to 3. … .. code-block:: c … The same applies, in header files, to functions with a [all …]
|
| /Documentation/mm/ |
| D | slub.rst | slabs that have data in them. See "slabinfo -h" for more options when … gcc -o slabinfo tools/mm/slabinfo.c … slab_debug=<Debug-Options> … slab_debug=<Debug-Options>,<slab name1>,<slab name2>,... … to all slabs except those that match one of the "select slabs" block. Options … A Enable failslab filter mark for the cache … - Switch all debugging off (useful if the kernel is … Trying to find an issue in the dentry cache? Try:: … to only enable debugging on the dentry cache. You may use an asterisk at the [all …]
|
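The slub.rst snippet shows the `slab_debug=<Debug-Options>,<slab name>` form and mentions enabling debugging only for the dentry cache. A sketch of both forms as boot parameters; the option letters and cache pattern are illustrative choices:

```shell
# Kernel boot parameters (config fragments, not commands).
# Enable the default debug options only for the dentry cache, per the
# dentry example referenced in the snippet:
slab_debug=,dentry
# Or pick specific options, e.g. F (sanity checks) and Z (red zoning),
# for every cache whose name starts with "kmalloc":
slab_debug=FZ,kmalloc-*
```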
| /Documentation/fault-injection/ |
| D | fault-injection.rst | See also drivers/md/md-faulty.c and "every_nth" module option for scsi_debug. … - failslab … - fail_page_alloc … - fail_usercopy … - fail_futex … - fail_sunrpc … - fail_make_request … /sys/block/<device>/make-it-fail or /sys/block/<device>/<partition>/make-it-fail. (submit_bio_noacct()) [all …]
|
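The fault-injection.rst snippet names the per-device sysfs knob for fail_make_request. A sketch of flipping that knob; the device and partition names are hypothetical, and writing the knob requires root:

```shell
# Illustrative only: mark a (hypothetical) sda1 partition so that I/O
# submitted to it is eligible for injected failures, via the sysfs path
# pattern /sys/block/<device>/<partition>/make-it-fail from the snippet.
echo 1 > /sys/block/sda/sda1/make-it-fail
```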
| /Documentation/filesystems/xfs/ |
| D | xfs-online-fsck-design.rst | .. SPDX-License-Identifier: GPL-2.0 … Heading 3 uses "----" … - To help kernel distributors understand exactly what the XFS online fsck … - To help people reading the code to familiarize themselves with the relevant … - To help developers maintaining the system by capturing the reasons … - Provide a hierarchy of names through which application programs can associate … - Virtualize physical storage media across those names, and … - Retrieve the named data blobs at any time. … - Examine resource usage. … cross-references different types of metadata records with each other to look [all …]
|
| /Documentation/driver-api/nvdimm/ |
| D | btt.rst | BTT - Block Translation Table … accurately, cache line) granularity. However, we often want to expose such storage as traditional block devices. The block drivers for persistent memory … using stored energy in capacitors to complete in-flight block writes, or perhaps in firmware. We don't have this luxury with persistent memory - if a write is in progress, and we experience a power failure, the block will contain a mix of old … The Block Translation Table (BTT) provides atomic sector update semantics for … being torn can continue to do so. The BTT manifests itself as a stacked block … the heart of it, is an indirection table that re-maps all the blocks on the … next arena). The following depicts the "On-disk" metadata layout:: [all …]
|
| D | nvdimm.rst | LIBNVDIMM: Non-Volatile Devices … libnvdimm - kernel / libndctl - userspace helper library … PMEM-REGIONs, Atomic Sectors, and DAX … LIBNVDIMM/LIBNDCTL: Block Translation Table "btt" … A system-physical-address range where writes are persistent. A block device composed of PMEM is capable of DAX. A PMEM address range … DIMM Physical Address, is a DIMM-relative offset. With one DIMM in the system there would be a 1:1 system-physical-address:DPA association. … system-physical-address. … File system extensions to bypass the page cache and block layer to [all …]
|
| /Documentation/process/ |
| D | coding-style.rst | able to maintain, and I'd prefer it for most other things too. Please … First off, I'd suggest printing out a copy of the GNU coding standards, … a block of control starts and ends. Especially when you've been looking … Now, some people will claim that having 8-character indentations makes … 80-character terminal screen. The answer to that is that if you need … In short, 8-char indents make things easier to read, and have the added … instead of ``double-indenting`` the ``case`` labels. E.g.: … .. code-block:: c [all …]
|
| /Documentation/core-api/ |
| D | dma-api.rst | of the API (and actual examples), see Documentation/core-api/dma-api-howto.rst. … Part II describes extensions for supporting non-consistent memory … non-consistent platforms (this is usually only legacy platforms) you … Part I - dma_API … To get the dma_API, you must #include <linux/dma-mapping.h>. This … Part Ia - Using large DMA-coherent buffers … dma_alloc_coherent(struct device *dev, size_t size, … This routine allocates a region of <size> bytes of consistent memory. [all …]
|