
Searched +full:i +full:- +full:cache +full:- +full:sets (Results 1 – 25 of 55) sorted by relevance


/Documentation/devicetree/bindings/riscv/
cpus.yaml
1 # SPDX-License-Identifier: (GPL-2.0 OR MIT)
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: RISC-V bindings for 'cpus' DT nodes
10 - Paul Walmsley <paul.walmsley@sifive.com>
11 - Palmer Dabbelt <palmer@sifive.com>
14 This document uses some terminology common to the RISC-V community
18 mandated by the RISC-V ISA: a PC and some registers. This
28 - items:
29 - enum:
[all …]
/Documentation/filesystems/caching/
object.txt
2 IN-KERNEL CACHE OBJECT REPRESENTATION AND MANAGEMENT
13 - Provision of cpu time.
14 - Locking simplification.
25 FS-Cache maintains an in-kernel representation of each object that a netfs is
29 FS-Cache also maintains a separate in-kernel representation of the objects that
30 a cache backend is currently actively caching. Such objects are represented by
31 the fscache_object struct. The cache backends allocate these upon request, and
36 represented by multiple objects - an index may exist in more than one cache -
43 NETFS INDEX TREE : CACHE 1 : CACHE 2
45 : +-----------+ :
[all …]
backend-api.txt
2 FS-CACHE CACHE BACKEND API
5 The FS-Cache system provides an API by which actual caches can be supplied to
6 FS-Cache for it to then serve out to network filesystems and other interested
9 This API is declared in <linux/fscache-cache.h>.
13 INITIALISING AND REGISTERING A CACHE
16 To start off, a cache definition must be initialised and registered for each
17 cache the backend wants to make available. For instance, CacheFS does this in
20 The cache definition (struct fscache_cache) should be initialised by calling:
22 void fscache_init_cache(struct fscache_cache *cache,
29 (*) "cache" is a pointer to the cache definition;
[all …]
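The excerpt above is cut off at the fscache_init_cache() argument list. As a rough sketch only, assuming the fscache_init_cache()/fscache_add_cache() signatures declared by the then-current <linux/fscache-cache.h> (all "mycache" names are hypothetical)::

    #include <linux/fscache-cache.h>

    static struct fscache_cache mycache;                /* hypothetical backend instance */
    static const struct fscache_cache_ops mycache_ops;  /* filled in by the backend */

    static int mycache_register(struct fscache_object *fsdef)
    {
            /* Initialise the cache definition; the trailing printf-style
             * arguments build the cache identifier string. */
            fscache_init_cache(&mycache, &mycache_ops, "mycache");

            /* Offer the cache, with its root index object, to FS-Cache. */
            return fscache_add_cache(&mycache, fsdef, "mycache");
    }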
/Documentation/networking/
pktgen.txt
4 ------------------------------------
6 Enable CONFIG_NET_PKTGEN to compile and build pktgen either in-kernel
29 overload type of benchmarking, as this could hurt the normal use-case.
32 # ethtool -G ethX tx 1024
36 than the CPU's L1/L2 cache, 2) because it allows more queueing in the
41 ring-buffers for various performance reasons, and packets stalling
46 and the cleanup interval is affected by the ethtool --coalesce setting
47 of parameter "rx-usecs".
50 # ethtool -C ethX rx-usecs 30
67 * add_device DEVICE@NAME -- adds a single device
[all …]
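The "ethtool -C ethX rx-usecs 30" step can also be issued programmatically. A hedged sketch using the standard SIOCETHTOOL ioctl with ETHTOOL_GCOALESCE/ETHTOOL_SCOALESCE; the interface name "eth0" is an assumption::

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
            struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
            struct ifreq ifr = { 0 };
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0) { perror("socket"); return 1; }
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
            ifr.ifr_data = (void *)&ec;

            if (ioctl(fd, SIOCETHTOOL, &ifr)) {     /* read current settings */
                    perror("ETHTOOL_GCOALESCE"); return 1;
            }
            ec.cmd = ETHTOOL_SCOALESCE;
            ec.rx_coalesce_usecs = 30;              /* the rx-usecs value above */
            if (ioctl(fd, SIOCETHTOOL, &ifr)) {
                    perror("ETHTOOL_SCOALESCE"); return 1;
            }
            return 0;
    }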
ipvs-sysctl.txt
3 am_droprate - INTEGER
6 It sets the always mode drop rate, which is used in the mode 3
9 amemthresh - INTEGER
12 It sets the available memory threshold (in pages), which is
18 backup_only - BOOLEAN
19 0 - disabled (default)
20 not 0 - enabled
25 conn_reuse_mode - INTEGER
26 1 - default
46 conntrack - BOOLEAN
[all …]
scaling.rst
1 .. SPDX-License-Identifier: GPL-2.0
13 multi-processor systems.
17 - RSS: Receive Side Scaling
18 - RPS: Receive Packet Steering
19 - RFS: Receive Flow Steering
20 - Accelerated Receive Flow Steering
21 - XPS: Transmit Packet Steering
28 (multi-queue). On reception, a NIC can send different packets to different
33 generally known as “Receive-side Scaling” (RSS). The goal of RSS and
35 Multi-queue distribution can also be used for traffic prioritization, but
[all …]
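The excerpt stops before scaling.rst's configuration details; RPS, for instance, is steered by a per-receive-queue cpumask under sysfs. A minimal sketch, assuming device eth0, queue rx-0, and CPUs 0-3::

    #include <stdio.h>

    int main(void)
    {
            const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
            FILE *f = fopen(path, "w");

            if (!f) { perror(path); return 1; }
            fputs("f\n", f);        /* hex cpumask 0xf = CPUs 0-3 */
            return fclose(f) ? 1 : 0;
    }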
/Documentation/devicetree/bindings/arm/
l2c2x0.yaml
1 # SPDX-License-Identifier: GPL-2.0
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: ARM L2 Cache Controller
10 - Rob Herring <robh@kernel.org>
14 PL220/PL310 and variants) based level 2 cache controller. All these various
15 implementations of the L2 cache controller have compatible programming
16 models (Note 1). Some of the properties that are just prefixed "cache-*" are
22 cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
28 - $ref: /schemas/cache-controller.yaml#
[all …]
/Documentation/vm/
hmm.rst
7 Provide infrastructure and helpers to integrate non-conventional memory (device
12 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
19 This document is divided as follows: in the first section I expose the problems
20 related to using device specific memory allocators. In the second section, I
23 CPU page-table mirroring works and the purpose of HMM in this context. The
37 regular file backed memory). From here on I will refer to this aspect as split
38 address space. I use shared address space to refer to the opposite situation:
39 i.e., one in which any application memory region can be used by a device
52 For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
53 for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
[all …]
slub.rst
20 slabs that have data in them. See "slabinfo -h" for more options when
24 gcc -o slabinfo tools/vm/slabinfo.c
32 -------------------------------------------
37 slub_debug=<Debug-Options>
40 slub_debug=<Debug-Options>,<slab name1>,<slab name2>,...
52 A Toggle failslab filter mark for the cache
55 - Switch all debugging off (useful if the kernel is
62 Trying to find an issue in the dentry cache? Try::
66 to only enable debugging on the dentry cache. You may use an asterisk at the
68 example, here's how you can poison the dentry cache as well as all kmalloc
[all …]
frontswap.rst
9 swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
11 (Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
13 all other supporting code -- the "backends" -- is implemented as drivers.
21 a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
23 in-kernel compressed memory, aka "zcache", or future RAM-like devices);
24 this pseudo-RAM device is not directly accessible or addressable by the
25 kernel and is of unknown and possibly time-varying size. The driver
49 cache" by calling frontswap_writethrough(). In this mode, the reduction
50 in swap device writes is lost (and also a non-trivial performance advantage)
88 useful for write-balancing for some RAM-like devices). Swap pages (and
[all …]
/Documentation/admin-guide/
bcache.rst
2 A block layer cache (bcache)
6 nice if you could use them as cache... Hence bcache.
10 - http://bcache.evilpiepirate.org
11 - http://evilpiepirate.org/git/linux-bcache.git
12 - http://evilpiepirate.org/git/bcache-tools.git
14 It's designed around the performance characteristics of SSDs - it only allocates
22 great lengths to protect your data - it reliably handles unclean shutdown. (It
26 Writeback caching can use most of the cache for buffering writes - writing
33 average is above the cutoff it will skip all IO from that task - instead of
35 thus entirely bypass the cache.
[all …]
md.rst
5 ---------------------------------
49 -1 linear mode
58 (raid-0 and raid-1 only)
78 --------------------------------------
87 that all auto-detected arrays are assembled as partitionable.
90 -------------------------------------------
102 mdadm --assemble --force ....
112 md-mod.start_dirty_degraded=1
116 ------------------
119 Currently, it supports superblock formats ``0.90.0`` and the ``md-1`` format
[all …]
ext4.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 (64 bit) in keeping with increasing disk capacities and state-of-the-art
12 Mailing list: linux-ext4@vger.kernel.org
23 - The latest version of e2fsprogs can be found at:
35 - Create a new filesystem using the ext4 filesystem type:
37 # mke2fs -t ext4 /dev/hda1
41 # tune2fs -O extents /dev/hda1
46 # tune2fs -I 256 /dev/hda1
48 - Mounting:
50 # mount -t ext4 /dev/hda1 /wherever
[all …]
kernel-parameters.txt
5 force -- enable ACPI if default was off
6 on -- enable ACPI but allow fallback to DT [arm64]
7 off -- disable ACPI if default was on
8 noirq -- do not use ACPI for IRQ routing
9 strict -- Be less tolerant of platforms that are not
11 rsdt -- prefer RSDT over (default) XSDT
12 copy_dsdt -- copy DSDT to memory
56 Documentation/firmware-guide/acpi/debug.rst for more information about
63 Enable AML "Debug" output, i.e., stores to the Debug
119 Disable auto-serialization of AML methods
[all …]
/Documentation/filesystems/
coda.txt
3 Coda -- this document describes the client kernel-Venus interface.
10 To run Coda you need to get a user level cache manager for the client,
29 level filesystem code needed for the operation of the Coda file sys-
148 1. Introduction
152 A key component in the Coda Distributed File System is the cache
160 client cache and makes remote procedure calls to Coda file servers and
179 leads to an almost natural environment for implementing a kernel-level
204 filesystem (VFS) layer, which is named I/O Manager in NT and IFS
209 pre-processing, the VFS starts invoking exported routines in the FS
221 offered by the cache manager Venus. When the replies from Venus have
[all …]
gfs2-glocks.txt
2 ------------------------------
10 2. A non-blocking bit lock, GLF_LOCK, which is used to prevent other
28 ------------------------------
35 shared lock mode, SH. In GFS2 the DF mode is used exclusively for direct I/O
37 with cache management. The following rules apply for the cache:
39 Glock mode | Cache data | Cache Metadata | Dirty Data | Dirty Metadata
40 --------------------------------------------------------------------------
53 ----------------------------------------------------------------------------
55 go_xmote_bh | Called after remote state change (e.g. to refill cache)
56 go_inval | Called if remote state change requires invalidating the cache
[all …]
vfat.txt
2 ----------------------------------------------------------------------
3 To use the vfat filesystem, use the filesystem type 'vfat'. i.e.
4 mount -t vfat /dev/fd0 /mnt
10 ----------------------------------------------------------------------
11 uid=### -- Set the owner of all files on this filesystem.
14 gid=### -- Set the group of all files on this filesystem.
17 umask=### -- The permission mask (for files and directories, see umask(1)).
20 dmask=### -- The permission mask for the directory.
23 fmask=### -- The permission mask for files.
26 allow_utime=### -- This option controls the permission check of mtime/atime.
[all …]
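The same mount can be expressed through mount(2), with the uid=/gid=/umask= options passed in the data argument; the numeric values below are examples only::

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
            /* Equivalent of: mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/fd0 /mnt */
            if (mount("/dev/fd0", "/mnt", "vfat", 0,
                      "uid=1000,gid=1000,umask=022")) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }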
/Documentation/admin-guide/blockdev/
floppy.rst
19 Example: If your kernel is called linux-2.6.9, type the following line
22 linux-2.6.9 floppy=thinkpad
25 of linux-2.6.9::
31 linux-2.6.9 floppy=daring floppy=two_fdc
62 Sets the bit mask to allow only units 0 and 1. (default)
96 and is thus harder to find, whereas non-dma buffers may be
97 allocated in virtual memory. However, I advise against this if
100 If you use nodma mode, I suggest you also set the FIFO
104 If you have a FIFO-able FDC, the floppy driver automatically
105 falls back on non DMA mode if no DMA-able memory can be found.
[all …]
/Documentation/ABI/testing/
sysfs-devices-system-cpu
2 Date: pre-git history
3 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
18 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
37 See Documentation/admin-guide/cputopology.rst for more information.
43 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
58 Contact: Linux memory management mailing list <linux-mm@kvack.org>
67 /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
77 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
93 core_siblings_list: human-readable list of the logical CPU
103 thread_siblings_list: human-readable list of cpu#'s hardware
[all …]
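These attributes are ordinary sysfs files, so they can be read like any other file. A minimal sketch reading the core_siblings_list attribute mentioned above, with cpu0 chosen as an example::

    #include <stdio.h>

    int main(void)
    {
            char buf[256];
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/core_siblings_list", "r");

            if (!f) { perror("open"); return 1; }
            if (fgets(buf, sizeof(buf), f))
                    printf("cpu0 core siblings: %s", buf);
            fclose(f);
            return 0;
    }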
/Documentation/block/
biodoc.rst
13 - Jens Axboe <jens.axboe@oracle.com>
14 - Suparna Bhattacharya <suparna@in.ibm.com>
18 September 2003: Updated I/O Scheduler portions
19 - Nick Piggin <npiggin@kernel.dk>
34 - Jens Axboe <jens.axboe@oracle.com>
43 - Christoph Hellwig <hch@infradead.org>
44 - Arjan van de Ven <arjanv@redhat.com>
45 - Randy Dunlap <rdunlap@xenotime.net>
46 - Andre Hedrick <andre@linux-ide.org>
49 while it was still work-in-progress:
[all …]
/Documentation/admin-guide/mm/
numa_memory_policy.rst
12 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
18 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
21 programming interface that a NUMA-aware application can take advantage of. When
30 ------------------------
43 not to overload the initial boot node with boot-time
47 this is an optional, per-task policy. When defined for a
63 In a multi-threaded task, task policies apply only to the thread
100 mapping-- i.e., at Copy-On-Write.
103 virtual address space--a.k.a. threads--independent of when
108 are NOT inheritable across exec(). Thus, only NUMA-aware
[all …]
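A task policy of the kind described above is installed with set_mempolicy(2). A hedged sketch that binds the calling task's allocations to node 0, using libnuma's <numaif.h> wrapper::

    #include <stdio.h>
    #include <numaif.h>     /* set_mempolicy(), MPOL_BIND; link with -lnuma */

    int main(void)
    {
            unsigned long nodemask = 1UL << 0;      /* node 0 only */

            if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8)) {
                    perror("set_mempolicy");
                    return 1;
            }
            /* Pages allocated by this task now come from node 0. */
            return 0;
    }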
/Documentation/admin-guide/cgroup-v1/
memory.rst
18 we call it "memory cgroup". When you see git-log and source code, you'll
30 Memory-hungry applications can be isolated and limited to a smaller
42 Current Status: linux-2.6.34-mmotm(development version of 2010/April)
46 - accounting anonymous pages, file caches, swap caches usage and limiting them.
47 - pages are linked to per-memcg LRU exclusively, and there is no global LRU.
48 - optionally, memory+swap usage can be accounted and limited.
49 - hierarchical accounting
50 - soft limit
51 - moving (recharging) account at moving a task is selectable.
52 - usage threshold notifier
[all …]
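Under cgroup-v1 the limits are applied by writing the group's control files. A minimal sketch capping a memcg at 512M; the group name "demo" and the mount point are assumptions::

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");

            if (!f) { perror("open"); return 1; }
            fputs("512M\n", f);     /* memcg accepts K/M/G suffixes */
            return fclose(f) ? 1 : 0;
    }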
cpusets.rst
9 - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
10 - Modified by Paul Jackson <pj@sgi.com>
11 - Modified by Christoph Lameter <cl@linux.com>
12 - Modified by Paul Menage <menage@google.com>
13 - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
26 1.9 How do I use cpusets ?
39 ----------------------
43 an on-line node that contains memory.
52 Documentation/admin-guide/cgroup-v1/cgroups.rst.
71 ----------------------------
[all …]
/Documentation/
DMA-API-HOWTO.txt
10 with example pseudo-code. For a concise description of the API, see
11 DMA-API.txt.
30 I/O devices use a third kind of address: a "bus address". If a device has
39 supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
40 so devices only need to use 32-bit DMA addresses.
49 +-------+ +------+ +------+
52 C +-------+ --------> B +------+ ----------> +------+ A
54 +-----+ | | | | bridge | | +--------+
55 | | | | +------+ | | | |
58 +-----+ +-------+ +------+ +------+ +--------+
[all …]
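A minimal in-kernel sketch of the flow this document describes: constrain the device to 32-bit DMA addresses, then map a buffer to obtain the bus address the device should use. "dev", "buf" and "len" are assumed to come from the surrounding driver::

    #include <linux/dma-mapping.h>

    static int example_dma(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t bus;

            /* Declare that this device can only address 32 bits. */
            if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                    return -EIO;

            /* Map the buffer; "bus" is what gets programmed into the device. */
            bus = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, bus))
                    return -ENOMEM;

            /* ... run the transfer, then unmap when it completes ... */
            dma_unmap_single(dev, bus, len, DMA_TO_DEVICE);
            return 0;
    }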
/Documentation/admin-guide/device-mapper/
thin-provisioning.rst
8 This document describes a collection of device-mapper targets that
9 between them implement thin-provisioning and snapshots.
27 - Improve metadata resilience by storing metadata on a mirrored volume
28 but data on a non-mirrored one.
30 - Improve performance by storing the metadata on SSD.
40 dm-devel@redhat.com with details and we'll try our best to improve
46 a Red Hat distribution it is named 'device-mapper-persistent-data').
52 They use the dmsetup program to control the device-mapper driver
53 directly. End users will be advised to use a higher-level volume
57 -----------
[all …]
