
Searched for "i-cache-size" (Results 1 – 25 of 115), sorted by relevance


/Documentation/devicetree/bindings/riscv/
cpus.yaml:1 # SPDX-License-Identifier: (GPL-2.0 OR MIT)
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: RISC-V bindings for 'cpus' DT nodes
10 - Paul Walmsley <paul.walmsley@sifive.com>
11 - Palmer Dabbelt <palmer@sifive.com>
14 This document uses some terminology common to the RISC-V community
18 mandated by the RISC-V ISA: a PC and some registers. This
28 - items:
29 - enum:
[all …]
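The cpus.yaml snippet above is cut off before its example. A minimal RISC-V cpu node using the standard per-CPU cache properties (including the searched-for `i-cache-size`) might look like the following sketch; the geometry values are illustrative, not taken from the binding:

```dts
cpu@0 {
        device_type = "cpu";
        compatible = "riscv";
        reg = <0>;
        riscv,isa = "rv64imafdc";
        /* Illustrative cache geometry; real values come from the SoC. */
        i-cache-block-size = <64>;
        i-cache-sets = <64>;
        i-cache-size = <32768>;   /* 32 KiB instruction cache */
        d-cache-block-size = <64>;
        d-cache-sets = <64>;
        d-cache-size = <32768>;   /* 32 KiB data cache */
};
```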
/Documentation/devicetree/bindings/powerpc/fsl/
pamu.txt:5 The PAMU is an I/O MMU that provides device-to-memory access control and
10 - compatible : <string>
11 First entry is a version-specific string, such as
12 "fsl,pamu-v1.0". The second is "fsl,pamu".
13 - ranges : <prop-encoded-array>
15 I/O space utilized by the controller. The size should
16 be set to the total size of the register space of all
18 PAMU v1.0, on an SOC that has five PAMU devices, the size
20 - interrupts : <prop-encoded-array>
25 - #address-cells: <u32>
[all …]
/Documentation/core-api/
cachetlb.rst:2 Cache and TLB Flushing Under Linux
7 This document describes the cache/tlb flushing interfaces called
17 thinking SMP cache/tlb flushing must be so inefficient, this is in
24 "TLB" is abstracted under Linux as something the cpu uses to cache
25 virtual-->physical address translations obtained from the software
27 possible for stale translations to exist in this "TLB" cache.
59 modifications for the address space 'vma->vm_mm' in the range
60 'start' to 'end-1' will be visible to the cpu. That is, after
62 virtual addresses in the range 'start' to 'end-1'.
78 address space is available via vma->vm_mm. Also, one may
[all …]
memory-allocation.rst:19 makes the question "How should I allocate memory?" not that easy to
24 kzalloc(<size>, GFP_KERNEL);
35 :ref:`Documentation/core-api/mm-api.rst <mm-api-gfp-flags>` provides
40 kernel data structures, DMAable memory, inode cache, all these and
78 :ref:`Documentation/core-api/gfp_mask-from-fs-io.rst <gfp_mask_from_fs_io>`.
91 from the :c:func:`kmalloc` family. And, to be on the safe side it's
96 The maximal size of a chunk that can be allocated with `kmalloc` is
99 smaller than page size.
103 alignment is also guaranteed to be at least the respective size.
110 If you are not sure whether the allocation size is too large for
[all …]
/Documentation/admin-guide/device-mapper/
dm-clone.rst:1 .. SPDX-License-Identifier: GPL-2.0-only
4 dm-clone
10 dm-clone is a device mapper target which produces a one-to-one copy of an
11 existing, read-only source device into a writable destination device: It
15 The main use case of dm-clone is to clone a potentially remote, high-latency,
16 read-only, archival-type block device into a writable, fast, primary-type device
17 for fast, low-latency I/O. The cloned device is visible/mountable immediately
19 background, in parallel with user I/O.
21 For example, one could restore an application backup from a read-only copy,
26 When the cloning completes, the dm-clone table can be removed altogether and be
[all …]
cache.rst:2 Cache
8 dm-cache is a device mapper target written by Joe Thornber, Heinz
15 This device-mapper solution allows us to insert this caching at
17 a thin-provisioning pool. Caching solutions that are integrated more
20 The target reuses the metadata library used in the thin-provisioning
23 The decision as to what data to migrate and when is left to a plug-in
40 may be out of date or kept in sync with the copy on the cache device
46 Sub-devices
47 -----------
52 1. An origin device - the big, slow one.
[all …]
/Documentation/admin-guide/
bcache.rst:2 A block layer cache (bcache)
6 nice if you could use them as cache... Hence bcache.
10 - http://bcache.evilpiepirate.org
11 - http://evilpiepirate.org/git/linux-bcache.git
12 - http://evilpiepirate.org/git/bcache-tools.git
14 It's designed around the performance characteristics of SSDs - it only allocates
16 extents (which can be anywhere from a single sector to the bucket size). It's
22 great lengths to protect your data - it reliably handles unclean shutdown. (It
26 Writeback caching can use most of the cache for buffering writes - writing
33 average is above the cutoff it will skip all IO from that task - instead of
[all …]
/Documentation/filesystems/
fuse-io.txt:1 Fuse supports the following I/O modes:
3 - direct-io
4 - cached
5 + write-through
6 + writeback-cache
8 The direct-io mode can be selected with the FOPEN_DIRECT_IO flag in the
11 In direct-io mode the page cache is completely bypassed for reads and writes.
12 No read-ahead takes place. Shared mmap is disabled.
14 In cached mode reads may be satisfied from the page cache, and data may be
15 read-ahead by the kernel to fill the cache. The cache is always kept consistent
[all …]
coda.txt:3 Coda -- this document describes the client kernel-Venus interface.
10 To run Coda you need to get a user level cache manager for the client,
29 level filesystem code needed for the operation of the Coda file sys-
148 1. Introduction
152 A key component in the Coda Distributed File System is the cache
160 client cache and makes remote procedure calls to Coda file servers and
179 leads to an almost natural environment for implementing a kernel-level
204 filesystem (VFS) layer, which is named I/O Manager in NT and IFS
209 pre-processing, the VFS starts invoking exported routines in the FS
221 offered by the cache manager Venus. When the replies from Venus have
[all …]
squashfs.txt:4 Squashfs is a compressed read-only filesystem for Linux.
8 maximum of 1Mbytes (default block size 128K).
10 Squashfs is intended for general read-only filesystem use, for archival
11 use (i.e. in cases where a .tar.gz file may be used), and in constrained
15 Mailing list: squashfs-devel@lists.sourceforge.net
19 ----------------------
25 Max filesystem size:         2^64 (Squashfs)    256 MiB (Cramfs)
26 Max file size:               ~ 2 TiB            16 MiB
30 Max block size:              1 MiB              4 KiB
34 Tail-end packing (fragments): yes               no
[all …]
/Documentation/filesystems/caching/
backend-api.txt:2 FS-CACHE CACHE BACKEND API
5 The FS-Cache system provides an API by which actual caches can be supplied to
6 FS-Cache for it to then serve out to network filesystems and other interested
9 This API is declared in <linux/fscache-cache.h>.
13 INITIALISING AND REGISTERING A CACHE
16 To start off, a cache definition must be initialised and registered for each
17 cache the backend wants to make available. For instance, CacheFS does this in
20 The cache definition (struct fscache_cache) should be initialised by calling:
22 void fscache_init_cache(struct fscache_cache *cache,
29 (*) "cache" is a pointer to the cache definition;
[all …]
fscache.txt:9 This facility is a general purpose cache for network filesystems, though it
12 FS-Cache mediates between cache backends (such as CacheFS) and network
15 +---------+
16 | | +--------------+
17 | NFS |--+ | |
18 | | | +-->| CacheFS |
19 +---------+ | +----------+ | | /dev/hda5 |
20 | | | | +--------------+
21 +---------+ +-->| | |
22 | | | |--+
[all …]
netfs-api.txt:2 FS-CACHE NETWORK FILESYSTEM API
5 There's an API by which a network filesystem can make use of the FS-Cache
10 FS-Cache to make finding objects faster and to make retiring of groups of
17 (3) Barring the top-level index (one entry per cached netfs), the index
28 (5) Cache tag lookup
32 (9) Setting the data file size
41 (18) FS-Cache specific page flags.
48 FS-Cache needs a description of the network filesystem. This is specified
67 entire in-cache hierarchy for this netfs will be scrapped and begun
92 a particular key - for instance to mirror the removal of an AFS volume.
[all …]
cachefiles.txt:2 CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM
13 (*) Starting the cache.
17 (*) Cache culling.
19 (*) Cache structure.
34 CacheFiles is a caching backend that's meant to use as a cache a directory on
37 CacheFiles uses a userspace daemon to do some of the cache management - such as
41 The filesystem and data integrity of the cache are only as good as those of the
46 CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
48 and while it is open, a cache is at least partially in existence. The daemon
49 opens this and sends commands down it to control the cache.
[all …]
/Documentation/devicetree/bindings/cpufreq/
cpufreq-dt.txt:11 - None
14 - operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt for
15 details. OPPs *must* be supplied either via DT, i.e. this property, or
17 - clock-latency: Specify the possible maximum transition latency for clock,
19 - voltage-tolerance: Specify the CPU voltage tolerance in percentage.
20 - #cooling-cells:
26 #address-cells = <1>;
27 #size-cells = <0>;
30 compatible = "arm,cortex-a9";
32 next-level-cache = <&L2>;
[all …]
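The cpufreq-dt.txt snippet breaks off inside its example. Pulling the visible fragments together, a complete consumer node could look roughly like the sketch below; the frequency/voltage pairs and latency are examples only, not values from the binding:

```dts
cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@0 {
                compatible = "arm,cortex-a9";
                device_type = "cpu";
                reg = <0>;
                next-level-cache = <&L2>;
                /* <kHz uV> pairs; illustrative values */
                operating-points = <
                        792000 1100000
                        396000  950000
                        198000  850000
                >;
                clock-latency = <61036>;
        };
};
```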
cpufreq-qcom-hw.txt:8 - compatible
11 Definition: must be "qcom,cpufreq-hw".
13 - clocks
18 - clock-names
23 - reg
25 Value type: <prop-encoded-array>
28 - reg-names
31 Definition: Frequency domain name i.e.
32 "freq-domain0", "freq-domain1".
34 - #freq-domain-cells:
[all …]
/Documentation/
DMA-API.txt:8 of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
10 This API is split into two pieces. Part I describes the basic API.
11 Part II describes extensions for supporting non-consistent memory
13 non-consistent platforms (this is usually only legacy platforms) you
14 should only use the API described in part I.
16 Part I - dma_API
17 ----------------
19 To get the dma_API, you must #include <linux/dma-mapping.h>. This
27 Part Ia - Using large DMA-coherent buffers
28 ------------------------------------------
[all …]
/Documentation/devicetree/bindings/arm/
l2c2x0.yaml:1 # SPDX-License-Identifier: GPL-2.0
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: ARM L2 Cache Controller
10 - Rob Herring <robh@kernel.org>
14 PL220/PL310 and variants) based level 2 cache controller. All these various
15 implementations of the L2 cache controller have compatible programming
16 models (Note 1). Some of the properties that are just prefixed "cache-*" are
22 cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
28 - $ref: /schemas/cache-controller.yaml#
[all …]
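The l2c2x0.yaml snippet is truncated before its example. A typical PL310 node under this binding looks roughly like the following sketch; the unit address and latency tuples are placeholders, not values mandated by the schema:

```dts
L2: cache-controller@fff12000 {
        compatible = "arm,pl310-cache";
        reg = <0xfff12000 0x1000>;   /* placeholder base address */
        cache-unified;
        cache-level = <2>;
        arm,data-latency = <1 1 1>;  /* illustrative read/write/setup latencies */
        arm,tag-latency = <2 2 2>;
};
```

A cpu node would then reference it via `next-level-cache = <&L2>;`, as in the cpufreq-dt example elsewhere in these results.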
/Documentation/devicetree/bindings/mips/cavium/
sata-uctl.txt:3 UCTL is the bridge unit between the I/O interconnect (an internal bus)
5 - provides interfaces for the applications to access the UAHC AHCI
6 registers on the CN71XX I/O space.
7 - provides a bridge for UAHC to fetch AHCI command table entries and data
8 buffers from Level 2 Cache.
9 - posts interrupts to the CIU.
10 - contains registers that:
11 - control the behavior of the UAHC
12 - control the clock/reset generation to UAHC
13 - control endian swapping for all UAHC registers and DMA accesses
[all …]
/Documentation/vm/
zswap.rst:10 Zswap is a lightweight compressed cache for swap pages. It takes pages that are
12 dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles
13 for potentially reduced swap I/O.  This trade-off can also result in a
14 significant performance improvement if reads from the compressed cache are
27 * Overcommitted guests that share a common I/O resource can
28 dramatically reduce their swap I/O pressure, avoiding heavy handed I/O
30 impact to the guest workload and guests sharing the I/O subsystem
32 drastically reducing life-shortening writes.
34 Zswap evicts pages from compressed cache on an LRU basis to the backing swap
35 device when the compressed pool reaches its size limit. This requirement had
[all …]
/Documentation/admin-guide/mm/
numaperf.rst:9 as CPU cache coherence, but may have different performance. For example,
20 +------------------+ +------------------+
21 | Compute Node 0 +-----+ Compute Node 1 |
23 +--------+---------+ +--------+---------+
25 +--------+---------+ +--------+---------+
27 +------------------+ +--------+---------+
30 CPUs or separate memory I/O devices that can initiate memory requests.
35 performance when accessing a given memory target. Each initiator-target
47 # symlinks -v /sys/devices/system/node/nodeX/access0/targets/
48 relative: /sys/devices/system/node/nodeX/access0/targets/nodeY -> ../../nodeY
[all …]
/Documentation/driver-api/usb/
dma.rst:5 over how DMA may be used to perform I/O operations. The APIs are detailed
12 though they still must provide DMA-ready buffers (see
13 ``Documentation/DMA-API-HOWTO.txt``). That's how they've worked through
14 the 2.4 (and earlier) kernels, or they can now be DMA-aware.
16 DMA-aware usb drivers:
18 - New calls enable DMA-aware drivers, letting them allocate dma buffers and
19 manage dma mappings for existing dma-ready buffers (see below).
21 - URBs have an additional "transfer_dma" field, as well as a transfer_flags
25 - "usbcore" will map this DMA address, if a DMA-aware driver didn't do
29 - There's a new "generic DMA API", parts of which are usable by USB device
[all …]
/Documentation/ABI/testing/
sysfs-devices-system-cpu:2 Date: pre-git history
3 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
18 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
37 See Documentation/admin-guide/cputopology.rst for more information.
43 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
58 Contact: Linux memory management mailing list <linux-mm@kvack.org>
67 /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
77 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
93 core_siblings_list: human-readable list of the logical CPU
103 thread_siblings_list: human-readable list of cpu#'s hardware
[all …]
sysfs-kernel-slab:5 Christoph Lameter <cl@linux-foundation.org>
8 internal state of the SLUB allocator for each cache. Certain
9 files may be modified to change the behavior of the cache (and
10 any cache it aliases, if any).
13 What: /sys/kernel/slab/cache/aliases
17 Christoph Lameter <cl@linux-foundation.org>
19 The aliases file is read-only and specifies how many caches
20 have merged into this cache.
22 What: /sys/kernel/slab/cache/align
26 Christoph Lameter <cl@linux-foundation.org>
[all …]
/Documentation/devicetree/bindings/opp/
opp.txt:2 ----------------------------------------------------
4 Devices work at voltage-current-frequency combinations and some implementations
13 Binding 1: operating-points
16 This binding only supports voltage-frequency pairs.
19 - operating-points: An array of 2-tuples items, and each item consists
20 of frequency and voltage like <freq-kHz vol-uV>.
27 compatible = "arm,cortex-a9";
29 next-level-cache = <&L2>;
30 operating-points = <
39 Binding 2: operating-points-v2
[all …]
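The opp.txt snippet stops right at the start of Binding 2. For contrast with the flat `operating-points` array of Binding 1 shown above, an operating-points-v2 table is expressed as a separate node of OPP subnodes; the frequency, voltage, and latency values in this sketch are examples only:

```dts
cpu0_opp_table: opp-table {
        compatible = "operating-points-v2";
        opp-shared;  /* OPPs shared by all CPUs using this table */

        opp-998400000 {
                opp-hz = /bits/ 64 <998400000>;   /* 998.4 MHz */
                opp-microvolt = <975000>;
                clock-latency-ns = <200000>;
        };
};
```

A cpu node then points at the table with `operating-points-v2 = <&cpu0_opp_table>;` instead of carrying an inline array.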
