
Searched +full:i +full:- +full:cache +full:- +full:block +full:- +full:size (Results 1 – 25 of 75) sorted by relevance

/Documentation/devicetree/bindings/riscv/
cpus.yaml
1 # SPDX-License-Identifier: (GPL-2.0 OR MIT)
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: RISC-V bindings for 'cpus' DT nodes
10 - Paul Walmsley <paul.walmsley@sifive.com>
11 - Palmer Dabbelt <palmer@sifive.com>
14 This document uses some terminology common to the RISC-V community
18 mandated by the RISC-V ISA: a PC and some registers. This
28 - items:
29 - enum:
[all …]
/Documentation/admin-guide/
bcache.rst
2 A block layer cache (bcache)
6 nice if you could use them as cache... Hence bcache.
10 - http://bcache.evilpiepirate.org
11 - http://evilpiepirate.org/git/linux-bcache.git
12 - http://evilpiepirate.org/git/bcache-tools.git
14 It's designed around the performance characteristics of SSDs - it only allocates
15 in erase block sized buckets, and it uses a hybrid btree/log to track cached
16 extents (which can be anywhere from a single sector to the bucket size). It's
17 designed to avoid random writes at all costs; it fills up an erase block
22 great lengths to protect your data - it reliably handles unclean shutdown. (It
[all …]
ext4.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 (64 bit) in keeping with increasing disk capacities and state-of-the-art
12 Mailing list: linux-ext4@vger.kernel.org
23 - The latest version of e2fsprogs can be found at:
35 - Create a new filesystem using the ext4 filesystem type:
37 # mke2fs -t ext4 /dev/hda1
41 # tune2fs -O extents /dev/hda1
46 # tune2fs -I 256 /dev/hda1
48 - Mounting:
50 # mount -t ext4 /dev/hda1 /wherever
[all …]
md.rst
5 ---------------------------------
16 md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
49 -1 linear mode
55 ``chunk size factor``
58 (raid-0 and raid-1 only)
60 Set the chunk size as 4k << n.
78 --------------------------------------
87 that all auto-detected arrays are assembled as partitionable.
90 -------------------------------------------
102 mdadm --assemble --force ....
[all …]
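
The md.rst excerpt above defines the chunk size factor n in the md= boot parameter as selecting a chunk size of 4k << n. A quick illustration of that arithmetic, as a standalone C sketch (the range of n values is chosen arbitrarily):

	/* Chunk size factor from md.rst: chunk size = 4k << n. */
	#include <stdio.h>

	int main(void)
	{
		for (int n = 0; n <= 4; n++)
			printf("chunk size factor %d -> %d KiB chunks\n", n, 4 << n);
		return 0;
	}
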
/Documentation/admin-guide/device-mapper/
cache.rst
2 Cache
8 dm-cache is a device mapper target written by Joe Thornber, Heinz
11 It aims to improve performance of a block device (eg, a spindle) by
15 This device-mapper solution allows us to insert this caching at
17 a thin-provisioning pool. Caching solutions that are integrated more
20 The target reuses the metadata library used in the thin-provisioning
23 The decision as to what data to migrate and when is left to a plug-in
32 Movement of the primary copy of a logical block from one
39 The origin device always contains a copy of the logical block, which
40 may be out of date or kept in sync with the copy on the cache device
[all …]
dm-clone.rst
1 .. SPDX-License-Identifier: GPL-2.0-only
4 dm-clone
10 dm-clone is a device mapper target which produces a one-to-one copy of an
11 existing, read-only source device into a writable destination device: It
12 presents a virtual block device which makes all data appear immediately, and
15 The main use case of dm-clone is to clone a potentially remote, high-latency,
16 read-only, archival-type block device into a writable, fast, primary-type device
17 for fast, low-latency I/O. The cloned device is visible/mountable immediately
19 background, in parallel with user I/O.
21 For example, one could restore an application backup from a read-only copy,
[all …]
verity.rst
2 dm-verity
5 Device-Mapper's "verity" target provides transparent integrity checking of
6 block devices using a cryptographic digest provided by the kernel crypto API.
7 This target is read-only.
21 This is the type of the on-disk hash format.
25 the rest of the block is padded with zeroes.
40 dm-verity device.
43 The block size on a data device in bytes.
44 Each block corresponds to one digest on the hash device.
47 The size of a hash block in bytes.
[all …]
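
The verity.rst excerpt above says that each data block corresponds to one digest on the hash device and that hash blocks have their own size, with the remainder of each hash block zero-padded. A rough sketch of the resulting hash-tree arithmetic; the 4096-byte hash block size, 32-byte (SHA-256) digest and device size below are assumptions for illustration, not values taken from the document:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t data_blocks     = 1 << 20;  /* assumed data device size in blocks */
		uint64_t hash_block_size = 4096;     /* assumed hash block size in bytes   */
		uint64_t digest_size     = 32;       /* assumed SHA-256 digest size        */

		/* Digests are packed into hash blocks, one per block of the level below. */
		uint64_t fanout = hash_block_size / digest_size;

		uint64_t level_blocks = data_blocks, total_hash_blocks = 0;
		while (level_blocks > 1) {
			level_blocks = (level_blocks + fanout - 1) / fanout;
			total_hash_blocks += level_blocks;
		}
		printf("fanout = %llu digests per hash block, %llu hash blocks total\n",
		       (unsigned long long)fanout,
		       (unsigned long long)total_hash_blocks);
		return 0;
	}
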
thin-provisioning.rst
8 This document describes a collection of device-mapper targets that
9 between them implement thin-provisioning and snapshots.
27 - Improve metadata resilience by storing metadata on a mirrored volume
28 but data on a non-mirrored one.
30 - Improve performance by storing the metadata on SSD.
40 dm-devel@redhat.com with details and we'll try our best to improve
46 a Red Hat distribution it is named 'device-mapper-persistent-data').
52 They use the dmsetup program to control the device-mapper driver
53 directly. End users will be advised to use a higher-level volume
57 -----------
[all …]
/Documentation/filesystems/
squashfs.txt
4 Squashfs is a compressed read-only filesystem for Linux.
7 minimise data overhead. Block sizes greater than 4K are supported up to a
8 maximum of 1Mbytes (default block size 128K).
10 Squashfs is intended for general read-only filesystem use, for archival
11 use (i.e. in cases where a .tar.gz file may be used), and in constrained
12 block device/memory systems (e.g. embedded systems) where low overhead is
15 Mailing list: squashfs-devel@lists.sourceforge.net
19 ----------------------
25 Max filesystem size: 2^64 (Squashfs) vs 256 MiB (Cramfs)
26 Max file size: ~ 2 TiB (Squashfs) vs 16 MiB (Cramfs)
[all …]
erofs.txt
4 EROFS file-system stands for Enhanced Read-Only File System. Different
5 from other read-only file systems, it aims to be designed for flexibility,
9 - read-only storage media or
11 - part of a fully trusted read-only solution, which means it needs to be
12 immutable and bit-for-bit identical to the official golden image for
15 - hope to save some extra storage space with guaranteed end-to-end performance
20 - Little endian on-disk design;
22 - Currently 4KB block size (nobh) and therefore maximum 16TB address space;
24 - Metadata & data could be mixed by design;
26 - 2 inode versions for different requirements:
[all …]
coda.txt
3 Coda -- this document describes the client kernel-Venus interface.
10 To run Coda you need to get a user level cache manager for the client,
29 level filesystem code needed for the operation of the Coda file sys-
148 1. Introduction
152 A key component in the Coda Distributed File System is the cache
160 client cache and makes remote procedure calls to Coda file servers and
179 leads to an almost natural environment for implementing a kernel-level
204 filesystem (VFS) layer, which is named I/O Manager in NT and IFS
209 pre-processing, the VFS starts invoking exported routines in the FS
221 offered by the cache manager Venus. When the replies from Venus have
[all …]
ramfs-rootfs-initramfs.txt
7 --------------
10 mechanisms (the page cache and dentry cache) as a dynamically resizable
11 RAM-based filesystem.
14 backing store (usually the block device the filesystem is mounted on) are kept
19 memory. A similar mechanism (the dentry cache) greatly speeds up access to
23 dentries and page cache as usual, but there's nowhere to write them to.
29 you're mounting the disk cache as a filesystem. Because of this, ramfs is not
34 ------------------
36 The older "ram disk" mechanism created a synthetic block device out of
37 an area of RAM and used it as backing store for a filesystem. This block
[all …]
f2fs.txt
2 WHAT IS Flash-Friendly File System (F2FS)?
5 NAND flash memory-based storage devices, such as SSD, eMMC, and SD cards, have
11 F2FS is a file system exploiting NAND flash memory-based storage devices, which
12 is based on Log-structured File System (LFS). The design has been focused on
16 Since a NAND flash memory-based storage device shows different characteristic
18 F2FS and its tools support various parameters not only for configuring on-disk
23 >> git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git
26 >> linux-f2fs-devel@lists.sourceforge.net
32 Log-structured File System (LFS)
33 --------------------------------
[all …]
/Documentation/filesystems/caching/
backend-api.txt
2 FS-CACHE CACHE BACKEND API
5 The FS-Cache system provides an API by which actual caches can be supplied to
6 FS-Cache for it to then serve out to network filesystems and other interested
9 This API is declared in <linux/fscache-cache.h>.
13 INITIALISING AND REGISTERING A CACHE
16 To start off, a cache definition must be initialised and registered for each
17 cache the backend wants to make available. For instance, CacheFS does this in
20 The cache definition (struct fscache_cache) should be initialised by calling:
22 void fscache_init_cache(struct fscache_cache *cache,
29 (*) "cache" is a pointer to the cache definition;
[all …]
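
The backend-api.txt excerpt above is cut off mid-signature. Under the old (pre-5.17) FS-Cache backend API that this document describes, initialising and registering a cache could look roughly like the sketch below; the ops table, identifier format, tag name and fsdef object here are hypothetical, and their real setup is elided:

	#include <linux/fscache-cache.h>

	static struct fscache_cache my_cache;
	static const struct fscache_cache_ops my_cache_ops;  /* methods filled in elsewhere */
	static struct fscache_object my_fsdef_object;        /* root index object, init elided */

	static int my_backend_register(void)
	{
		/* Initialise the cache definition; the trailing arguments are a
		 * printf-style identifier format. */
		fscache_init_cache(&my_cache, &my_cache_ops, "mycache-%u", 0);

		/* Make the cache available to FS-Cache under the tag "mycache". */
		return fscache_add_cache(&my_cache, &my_fsdef_object, "mycache");
	}
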
netfs-api.txt
2 FS-CACHE NETWORK FILESYSTEM API
5 There's an API by which a network filesystem can make use of the FS-Cache
10 FS-Cache to make finding objects faster and to make retiring of groups of
17 (3) Barring the top-level index (one entry per cached netfs), the index
28 (5) Cache tag lookup
32 (9) Setting the data file size
41 (18) FS-Cache specific page flags.
48 FS-Cache needs a description of the network filesystem. This is specified
67 entire in-cache hierarchy for this netfs will be scrapped and begun
92 a particular key - for instance to mirror the removal of an AFS volume.
[all …]
cachefiles.txt
2 CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM
13 (*) Starting the cache.
17 (*) Cache culling.
19 (*) Cache structure.
34 CacheFiles is a caching backend that's meant to use as a cache a directory on
37 CacheFiles uses a userspace daemon to do some of the cache management - such as
41 The filesystem and data integrity of the cache are only as good as those of the
46 CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
48 and while it is open, a cache is at least partially in existence. The daemon
49 opens this and sends commands down it to control the cache.
[all …]
/Documentation/devicetree/
booting-without-of.txt
2 --------------------------------------------------
7 Freescale Semiconductor, FSL SOC and 32-bit additions
14 I - Introduction
21 II - The DT block format
24 3) Device tree "structure" block
25 4) Device tree "strings" block
27 III - Required content of the device tree
40 IV - "dtc", the device tree compiler
42 V - Recommendations for a bootloader
44 VI - System-on-a-chip devices and nodes
[all …]
/Documentation/ide/
ChangeLog.ide-tape.1995-2002
2 * Ver 0.1 Nov 1 95 Pre-working code :-)
4 * was successful ! (Using tar cvf ... on the block
8 * we received non serial read-ahead requests from the
9 * buffer cache.
17 * ide tapes :-)
59 * not limited in size, as they were before.
62 * as several transfers of the above size.
65 * initialization). I will soon add an ioctl to get
73 * Removed some old (non-active) code which had
74 * to do with supporting buffer cache originated
[all …]
/Documentation/devicetree/bindings/arm/
l2c2x0.yaml
1 # SPDX-License-Identifier: GPL-2.0
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: ARM L2 Cache Controller
10 - Rob Herring <robh@kernel.org>
14 PL220/PL310 and variants) based level 2 cache controller. All these various
15 implementations of the L2 cache controller have compatible programming
16 models (Note 1). Some of the properties that are just prefixed "cache-*" are
22 cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
28 - $ref: /schemas/cache-controller.yaml#
[all …]
/Documentation/block/
biodoc.rst
2 Notes on the Generic Block Layer Rewrite in Linux 2.5
13 - Jens Axboe <jens.axboe@oracle.com>
14 - Suparna Bhattacharya <suparna@in.ibm.com>
18 September 2003: Updated I/O Scheduler portions
19 - Nick Piggin <npiggin@kernel.dk>
24 These are some notes describing some aspects of the 2.5 block layer in the
34 - Jens Axboe <jens.axboe@oracle.com>
36 Many aspects of the generic block layer redesign were driven by and evolved
43 - Christoph Hellwig <hch@infradead.org>
44 - Arjan van de Ven <arjanv@redhat.com>
[all …]
data-integrity.rst
18 support for appending integrity metadata to an I/O. The integrity
22 for some protection schemes also that the I/O is written to the right
28 between adjacent nodes in the I/O path. The interesting thing about
30 is well defined and every node in the I/O path can verify the
31 integrity of the I/O and reject it if corruption is detected. This
54 scatter-gather lists.
58 host memory without changes to the page cache.
60 Also, the 16-bit CRC checksum mandated by both the SCSI and SATA specs
64 lighter-weight checksum to be used when interfacing with the operating
66 The IP checksum received from the OS is converted to the 16-bit CRC
[all …]
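
The data-integrity.rst excerpt above mentions that the block layer may carry the lighter-weight IP checksum and convert it to the 16-bit CRC at the interface to the device. For reference, the "IP checksum" is the standard 16-bit ones' complement Internet checksum (RFC 1071); a generic userspace version (not the kernel's bio-integrity code) might look like:

	#include <stdint.h>
	#include <stddef.h>

	static uint16_t ip_checksum(const void *buf, size_t len)
	{
		const uint8_t *p = buf;
		uint32_t sum = 0;

		while (len > 1) {
			sum += (uint32_t)p[0] << 8 | p[1];  /* 16-bit big-endian words */
			p += 2;
			len -= 2;
		}
		if (len)                                    /* odd trailing byte */
			sum += (uint32_t)p[0] << 8;
		while (sum >> 16)                           /* fold carries back in */
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)~sum;
	}
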
/Documentation/devicetree/bindings/pci/
layerscape-pci.txt
4 and thus inherits all the common properties defined in designware-pcie.txt.
7 which is used to describe the PLL settings at the time of chip-reset.
15 - compatible: should contain the platform identifier such as:
17 "fsl,ls1021a-pcie"
18 "fsl,ls2080a-pcie", "fsl,ls2085a-pcie"
19 "fsl,ls2088a-pcie"
20 "fsl,ls1088a-pcie"
21 "fsl,ls1046a-pcie"
22 "fsl,ls1043a-pcie"
23 "fsl,ls1012a-pcie"
[all …]
/Documentation/
DMA-API.txt
8 of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
10 This API is split into two pieces. Part I describes the basic API.
11 Part II describes extensions for supporting non-consistent memory
13 non-consistent platforms (this is usually only legacy platforms) you
14 should only use the API described in part I.
16 Part I - dma_API
17 ----------------
19 To get the dma_API, you must #include <linux/dma-mapping.h>. This
27 Part Ia - Using large DMA-coherent buffers
28 ------------------------------------------
[all …]
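
Part Ia of DMA-API.txt, referenced above, covers large DMA-coherent buffers via dma_alloc_coherent()/dma_free_coherent(). A minimal sketch, assuming a driver that already holds a valid struct device pointer; the 64 KiB size and the helper names are illustrative, not taken from the document:

	#include <linux/dma-mapping.h>

	#define EXAMPLE_BUF_SIZE	(64 * 1024)

	static void *example_alloc_coherent(struct device *dev, dma_addr_t *dma_handle)
	{
		/* Returns a CPU virtual address; *dma_handle receives the bus
		 * address to program into the device. */
		return dma_alloc_coherent(dev, EXAMPLE_BUF_SIZE, dma_handle, GFP_KERNEL);
	}

	static void example_free_coherent(struct device *dev, void *cpu_addr,
					  dma_addr_t dma_handle)
	{
		dma_free_coherent(dev, EXAMPLE_BUF_SIZE, cpu_addr, dma_handle);
	}
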
/Documentation/ABI/testing/
sysfs-devices-system-cpu
2 Date: pre-git history
3 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
18 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
37 See Documentation/admin-guide/cputopology.rst for more information.
43 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
58 Contact: Linux memory management mailing list <linux-mm@kvack.org>
67 /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
77 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
93 core_siblings_list: human-readable list of the logical CPU
103 thread_siblings_list: human-readable list of cpu#'s hardware
[all …]
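
The sysfs ABI excerpt above lists per-CPU topology attributes such as core_siblings_list. A hypothetical userspace reader (the CPU number and attribute chosen here are just examples of the documented paths):

	#include <stdio.h>

	int main(void)
	{
		char buf[256];
		FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/core_siblings_list", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		if (fgets(buf, sizeof(buf), f))
			printf("cpu0 core siblings: %s", buf);
		fclose(f);
		return 0;
	}
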
/Documentation/vm/
frontswap.rst
9 swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
11 (Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
13 all other supporting code -- the "backends" -- is implemented as drivers.
21 a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
23 in-kernel compressed memory, aka "zcache", or future RAM-like devices);
24 this pseudo-RAM device is not directly accessible or addressable by the
25 kernel and is of unknown and possibly time-varying size. The driver
49 cache" by calling frontswap_writethrough(). In this mode, the reduction
50 in swap device writes is lost (and also a non-trivial performance advantage)
87 and size (such as with compression) or secretly moved (as might be
[all …]
