
Searched full:storage (Results 1 – 25 of 274) sorted by relevance


/Documentation/bpf/
map_sk_storage.rst
11 ``BPF_MAP_TYPE_SK_STORAGE`` is used to provide socket-local storage for BPF
12 programs. A map of type ``BPF_MAP_TYPE_SK_STORAGE`` declares the type of storage
14 storage. The values for maps of type ``BPF_MAP_TYPE_SK_STORAGE`` are stored
16 allocating storage for a socket when requested and for freeing the storage when
22 socket-local storage.
37 Socket-local storage for ``map`` can be retrieved from socket ``sk`` using the
39 flag is used then ``bpf_sk_storage_get()`` will create the storage for ``sk``
41 ``BPF_LOCAL_STORAGE_GET_F_CREATE`` to initialize the storage value, otherwise
42 it will be zero initialized. Returns a pointer to the storage on success, or
56 Socket-local storage for ``map`` can be deleted from socket ``sk`` using the
[all …]
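
A minimal sketch of a program using such a map, mirroring the helper and flag
named in the matched lines (the map name ``sk_pkt_count`` and the ``sockops``
attach point are illustrative, not taken from the file)::

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_SK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, long);
  } sk_pkt_count SEC(".maps");

  SEC("sockops")
  int count_packets(struct bpf_sock_ops *ctx)
  {
          struct bpf_sock *sk = ctx->sk;
          long *value;

          if (!sk)
                  return 1;
          /* NULL third argument: create zero-initialized storage for sk
           * on first access. */
          value = bpf_sk_storage_get(&sk_pkt_count, sk, NULL,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (value)
                  __sync_fetch_and_add(value, 1);
          return 1;
  }

  char _license[] SEC("license") = "GPL";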
map_cgroup_storage.rst
9 storage. It is only available with ``CONFIG_CGROUP_BPF``, and to programs that
11 storage is identified by the cgroup the program is attached to.
13 The map provides local storage at the cgroup that the BPF program is attached
38 map will share the same storage. Otherwise, if the type is
42 To access the storage in a program, use ``bpf_get_local_storage``::
51 ``struct bpf_spin_lock`` to synchronize the storage. See
128 storage. The non-per-CPU variant will have the same memory region for each storage.
130 Prior to Linux 5.9, the lifetime of a storage is precisely per-attachment, and
133 multiple attach types, and each attach creates a fresh zeroed storage. The
134 storage is freed upon detach.
[all …]
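
A minimal sketch of the access pattern described above, assuming the shared
(non-per-CPU) variant keyed by ``struct bpf_cgroup_storage_key`` (the program
and its byte counter are illustrative)::

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
          __type(key, struct bpf_cgroup_storage_key);
          __type(value, __u64);
  } cgroup_storage SEC(".maps");

  SEC("cgroup_skb/egress")
  int count_egress_bytes(struct __sk_buff *skb)
  {
          __u64 *bytes;

          /* Storage for this map type is allocated at attach time, so
           * the helper does not return NULL here. */
          bytes = bpf_get_local_storage(&cgroup_storage, 0);
          __sync_fetch_and_add(bytes, skb->len);
          return 1;
  }

  char _license[] SEC("license") = "GPL";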
map_cgrp_storage.rst
9 storage for cgroups. It is only available with ``CONFIG_CGROUPS``.
21 To access the storage in a program, use ``bpf_cgrp_storage_get``::
26 a new local storage will be created if one does not exist.
28 The local storage can be removed with ``bpf_cgrp_storage_delete``::
81 The old cgroup storage map ``BPF_MAP_TYPE_CGROUP_STORAGE`` has been marked as
91 (2). ``BPF_MAP_TYPE_CGRP_STORAGE`` supports local storage for more than one
95 (3). ``BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED`` allocates local storage at attach time so
96 ``bpf_get_local_storage()`` always returns non-NULL local storage.
97 ``BPF_MAP_TYPE_CGRP_STORAGE`` allocates local storage at runtime so
98 it is possible that ``bpf_cgrp_storage_get()`` may return NULL local storage.
[all …]
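
A minimal sketch of the newer per-cgroup pattern, assuming a tracing program
that counts events against the current task's cgroup (the ``vmlinux.h`` CO-RE
setup and all names are illustrative)::

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct {
          __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, long);
  } cgrp_events SEC(".maps");

  SEC("tp_btf/sys_enter")
  int BPF_PROG(count_sys_enter, struct pt_regs *regs, long id)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          long *ptr;

          /* Created on first use; unlike the deprecated map type, this
           * can still return NULL, e.g. on allocation failure. */
          ptr = bpf_cgrp_storage_get(&cgrp_events,
                                     task->cgroups->dfl_cgrp, 0,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (ptr)
                  __sync_fetch_and_add(ptr, 1);
          return 0;
  }

  char _license[] SEC("license") = "GPL";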
/Documentation/usb/
mass-storage.rst
2 Mass Storage Gadget (MSG)
8 Mass Storage Gadget (or MSG) acts as a USB Mass Storage device,
10 multiple logical units (LUNs). Backing storage for each LUN is
27 relation to mass storage function (or MSF) and different gadgets
28 using it, and how it differs from File Storage Gadget (or FSG)
35 The mass storage gadget accepts the following mass storage specific
41 backing storage for each logical unit. There may be at most
45 *BEWARE* that if a file is used as backing storage, it may not
75 true. This has been changed to better match File Storage Gadget
110 MS Windows mounts removable storage in “Removal optimised mode” by
[all …]
/Documentation/devicetree/bindings/mmc/
bluefield-dw-mshc.txt
2 Mobile Storage Host Controller
6 The Synopsys designware mobile storage host controller is used to interface
7 a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
10 specific extensions to the Synopsys Designware Mobile Storage Host Controller.
k3-dw-mshc.txt
2 Storage Host Controller
6 The Synopsys designware mobile storage host controller is used to interface
7 a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
10 extensions to the Synopsys Designware Mobile Storage Host Controller.
starfive,jh7110-mmc.yaml
7 title: StarFive Designware Mobile Storage Host Controller
10 StarFive uses the Synopsys designware mobile storage host controller
11 to interface a SoC with storage medium such as eMMC or SD/MMC cards.
rockchip-dw-mshc.yaml
7 title: Rockchip designware mobile storage host controller
10 Rockchip uses the Synopsys designware mobile storage host controller
11 to interface a SoC with storage medium such as eMMC or SD/MMC cards.
/Documentation/admin-guide/device-mapper/
switch.rst
18 Dell EqualLogic and some other iSCSI storage arrays use a distributed
19 frameless architecture. In this architecture, the storage group
20 consists of a number of distinct storage arrays ("members") each having
21 independent controllers, disk storage and network adapters. When a LUN
23 spreading are hidden from initiators connected to this storage system.
24 The storage group exposes a single target discovery portal, no matter
29 forwarding is invisible to the initiator. The storage layout is also
34 the storage group and initiators. In a multipathing configuration, it
38 robin algorithm to send I/O across all paths and let the storage array
vdo.rst
8 mapper target, it can add these features to the storage stack, compatible
10 corruption, relying instead on integrity protection of the storage below
56 <offset> <logical device size> vdo V4 <storage device>
57 <storage device size> <minimum I/O size> <block map cache size>
72 storage device:
75 storage device size:
121 storage. Threads of this type allow the vdo volume to
146 underlying storage device. At format time, a slab size for
147 the vdo is chosen; the vdo storage device must be large
186 underlying storage device. Additionally, when formatting the vdo device, a
[all …]
zero.rst
13 than the amount of actual storage space available for that device. A user can
16 enough data has been written to fill up the actual storage space, the sparse
36 10GB of actual storage space available. If more than 10GB of data is written
vdo-design.rst
9 can be backed by up to 256TB of storage, and can present a logical size of
19 reference a single 4K of actual storage. It can achieve compression rates
20 of 14:1. All zero blocks consume no storage at all.
30 maps from logical block addresses to the actual storage location of the
78 ultimate goal of deduplication is to reduce storage costs. Since there is a
79 trade-off between the storage saved and the resources expended to achieve
85 that data on the underlying storage. However, it is not possible to
90 storage, or reading and rehashing each block before overwriting it.
112 without having to load the entire chapter from storage. This index uses
135 in memory and is saved to storage only when the vdo target is shut down.
[all …]
dm-log.rst
37 performance. This method can also be used if no storage device is
55 provide a cluster-coherent log for shared-storage. Device-mapper mirroring
56 can be used in a shared-storage environment when the cluster log implementations
/Documentation/block/
writeback_cache_control.rst
8 Many storage devices, especially in the consumer market, come with volatile
10 operating system before data actually has hit the non-volatile storage. This
12 system needs to force data out to the non-volatile storage when it performs
16 control the caching behavior of the storage device. These mechanisms are
24 the filesystem and will make sure the volatile cache of the storage device
27 storage before the flagged bio starts. In addition the REQ_PREFLUSH flag can be
38 signaled after the data has been committed to non-volatile storage.
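
In-kernel submitters request these semantics with the ``REQ_PREFLUSH`` and
``REQ_FUA`` bio flags; a minimal sketch (the wrapper name is illustrative)::

  #include <linux/bio.h>
  #include <linux/blkdev.h>

  /* Make this write durable on completion: flush the device's volatile
   * cache first (REQ_PREFLUSH) and signal completion only after the bio's
   * own data has reached non-volatile storage (REQ_FUA). */
  static void submit_durable_write(struct bio *bio)
  {
          bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;
          submit_bio(bio);
  }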
data-integrity.rst
25 Current storage controllers and devices implement various protective
39 controller and storage device. However, many controllers actually
80 them within the Storage Networking Industry Association.
90 they enable us to protect the entire path from application to storage
96 the storage devices they are accessing. The virtual filesystem switch
117 Some storage devices allow each hardware sector to be tagged with a
241 contain the integrity metadata received from the storage device.
/Documentation/ABI/testing/
pstore
6 Description: Generic interface to platform dependent persistent storage.
28 the file will signal to the underlying persistent storage
41 persistent storage until at least this amount is reached.
/Documentation/devicetree/bindings/arm/
microchip,sparx5.yaml
28 which has both spi-nor and eMMC storage. The modular design
36 either spi-nand or eMMC storage (mount option).
43 either spi-nand or eMMC storage (mount option).
/Documentation/scsi/
ufs.rst
4 Universal Flash Storage
27 Universal Flash Storage (UFS) is a storage specification for flash devices.
28 It aims to provide a universal storage interface for both
29 embedded and removable flash memory-based storage in mobile
203 to specify the intended reference clock frequency for the UFS storage
209 subsystem will ensure the bRefClkFreq attribute of the UFS storage device is
smartpqi.rst
4 SMARTPQI - Microchip Smart Storage SCSI driver
76 Allows "BMIC" and "CISS" commands to be passed through to the Smart Storage Array.
77 These are used extensively by the SSA Array Configuration Utility, SNMP storage
/Documentation/filesystems/caching/
backend-api.rst
18 storage, and each level has its own type of cookie object:
25 Data storage cookie struct fscache_cookie
106 The cache must then go through the data storage objects it has and tell fscache
136 Within a cache, the data storage objects are organised into logical volumes.
171 Data Storage Cookies
174 A volume is a logical group of data storage objects, each of which is
234 The index key is a binary blob, the storage for which is padded out to a
247 Data storage cookies are counted and this is used to block cache withdrawal
305 data storage for a cookie. It is called from a worker thread with a
339 * Change the size of a data storage object [mandatory]::
[all …]
/Documentation/filesystems/
ceph.rst
15 * N-way replication of data across storage nodes
29 storage nodes run entirely as user space daemons. File data is striped
30 across storage nodes in large chunks to distribute workload and
31 facilitate high throughputs. When storage nodes fail, data is
32 re-replicated in a distributed fashion by the storage nodes themselves
41 storage to significantly improve performance for common workloads. In
169 Disable CRC32C calculation for data writes. If set, the storage node
/Documentation/filesystems/iomap/
design.rst
24 This layer tries to obtain mappings of each file range to storage
25 from the filesystem, but the storage information is not necessarily
32 physical extents, but the storage layer information is not necessarily
52 The target audience for this document are filesystem, storage, and
155 byte ranges of a file to byte ranges of a storage device with the
182 * **IOMAP_HOLE**: No storage has been allocated.
197 storage device.
202 storage device, but the space has not yet been initialized.
227 storage.
326 block storage.
[all …]
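
These mapping types are reported by a filesystem's ``->iomap_begin`` method;
a minimal sketch of its shape, assuming a hypothetical ``myfs`` and eliding
the real extent lookup::

  #include <linux/iomap.h>

  static int myfs_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
                              unsigned int flags, struct iomap *iomap,
                              struct iomap *srcmap)
  {
          bool mapped = false;    /* result of the elided extent lookup */

          iomap->offset = pos;
          iomap->length = length;
          if (!mapped) {
                  iomap->type = IOMAP_HOLE;       /* no storage allocated */
                  iomap->addr = IOMAP_NULL_ADDR;
          } else {
                  iomap->type = IOMAP_MAPPED;     /* or IOMAP_UNWRITTEN */
                  iomap->addr = 0;                /* device byte address */
                  iomap->bdev = inode->i_sb->s_bdev;
          }
          return 0;
  }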
operations.rst
238 storage with another file to preemptively copy the shared data to newly
239 allocated storage.
273 This is to prevent dirty folio clots when storage devices fail; an
341 storage device.
366 ioends can only be merged if the file range and storage addresses are
378 storage, bypassing the pagecache.
391 extra work before or after the I/O is issued to storage.
471 cache before issuing the I/O to storage.
483 A direct I/O read initiates a read I/O from the storage device to the
485 Dirty parts of the pagecache are flushed to storage before initiating
[all …]
/Documentation/arch/riscv/
cmodx.rst
8 modified by the program itself. Instruction storage and the instruction cache
16 storage with fence.i, the icache on the new hart will no longer be clean. This
19 instruction storage and icache.
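
On the hart that modified the code, the local synchronization step is a
``fence.i``; a sketch (the helper name is illustrative, and cross-hart
visibility needs the interfaces this document describes)::

  /* Synchronize this hart's instruction fetches with prior writes to
   * instruction storage. This does not make the new code visible to
   * instruction fetches on other harts. */
  static inline void local_icache_sync(void)
  {
          __asm__ __volatile__ ("fence.i" ::: "memory");
  }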
/Documentation/firmware-guide/acpi/
chromeos-acpi-device.rst
270 NV Storage Block Offset //DWORD
271 NV Storage Block Size //DWORD
282 * - NV Storage Block Offset
284 - Offset in CMOS bank 0 of the verified boot non-volatile storage block, counting from
288 * - NV Storage Block Size
290 - Size in bytes of the verified boot non-volatile storage block.
