| /Documentation/filesystems/nfs/ |
| D | pnfs.rst |
      6: reference multiple devices, each of which can reference multiple data servers.
      7: Each data server can be referenced by multiple devices. Each device
     17: Each nfs_inode may hold a pointer to a cache of these layout
     20: We reference the header for the inode pointing to it, across each
     22: LAYOUTCOMMIT), and for each lseg held within.
     24: Each header is also (when non-empty) put on a list associated with
     34: nfs4_deviceid_cache). The cache itself is referenced across each
     36: the lifetime of each lseg referencing them.
     66: layout types: "files", "objects", "blocks", and "flexfiles". For each
|
| /Documentation/hwmon/ |
| D | ibmpowernv.rst |
     18: 'hwmon' populates the 'sysfs' tree with attribute files, each for a given
     21: All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
     45: each OCC. Using this attribute each OCC can be asked to
     58: each OCC. Using this attribute each OCC can be asked to
     69: each OCC. Using this attribute each OCC can be asked to
     80: each OCC. Using this attribute each OCC can be asked to
|
| /Documentation/devicetree/bindings/pinctrl/ |
| D | pinctrl-bindings.txt |
      5: controllers. Each pin controller must be represented as a node in device tree,
      9: designated client devices. Again, each client device must be represented as a
     16: device is inactive. Hence, each client device can define a set of named
     35: For each client device individually, every pin state is assigned an integer
     36: ID. These numbers start at 0, and are contiguous. For each state ID, a unique
     37: property exists to define the pin configuration. Each state may also be
     41: Each client device's own binding determines the set of states that must be
     47: pinctrl-0: List of phandles, each pointing at a pin configuration
     52: from multiple nodes for a single pin controller, each
     65: pinctrl-1: List of phandles, each pointing at a pin configuration
     [all …]
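On the consumer side, the named states listed via pinctrl-names map onto the
kernel's pinctrl consumer API. A minimal sketch, assuming a driver whose device
tree node defines a "sleep" state (the function name is illustrative; the
"default" state is normally applied automatically before probe)::

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/pinctrl/consumer.h>

    /* Look up and apply the "sleep" pin state declared through
     * pinctrl-names/pinctrl-N in this client device's node.
     */
    static int example_enter_sleep_pins(struct device *dev)
    {
            struct pinctrl *p;
            struct pinctrl_state *sleep;

            p = devm_pinctrl_get(dev);      /* parses the pinctrl-N phandle lists */
            if (IS_ERR(p))
                    return PTR_ERR(p);

            sleep = pinctrl_lookup_state(p, "sleep");
            if (IS_ERR(sleep))
                    return PTR_ERR(sleep);

            return pinctrl_select_state(p, sleep);
    }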
|
| D | pinctrl-vt8500.txt |
      3: These SoCs contain a combined Pinmux/GPIO module. Each pin may operate as
     23: Each pin configuration node lists the pin(s) to which it applies, and one or
     25: configuration. Each subnode only affects those parameters that are explicitly
     31: - wm,pins: An array of cells. Each cell contains the ID of a pin.
     44: Each of wm,function and wm,pull may contain either a single value which
     45: will be applied to all pins in wm,pins, or one value for each entry in
|
| D | marvell,mvebu-pinctrl.txt |
      4: (mpp) to a specific function. For each SoC family there is a SoC specific
     12: be used for a specific device or function. Each node requires one or more
     17: Please refer to each marvell,<soc>-pinctrl.txt binding doc for supported SoCs.
     24: valid pin/pin group names and available function names for each SoC.
|
| D | lantiq,pinctrl-falcon.txt |
     13: subnodes. Each of these subnodes represents some desired configuration for a
     18: The name of each subnode is not important as long as it is unique; all subnodes
     21: Each subnode only affects those parameters that are explicitly listed. In
     32: - lantiq,groups : An array of strings. Each string contains the name of a group.
     50: - lantiq,pins : An array of strings. Each string contains the name of a pin.
|
| /Documentation/ABI/testing/ |
| D | debugfs-scmi-raw |
     11: Each write to the entry causes one command request to be built
     13: (receiving an EOF at each message boundary).
     29: Each write to the entry causes one command request to be built
     31: (receiving an EOF at each message boundary).
     41: Each read gives back one message at a time (receiving an EOF at
     42: each message boundary).
     52: Each read gives back one message at a time (receiving an EOF at
     53: each message boundary).
     80: Each write to the entry causes one command request to be built
     82: (receiving an EOF at each message boundary).
     [all …]
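The per-message EOF behavior quoted above suggests a simple drain loop. A
minimal user-space sketch, assuming the instance-0 debugfs path below and that
a zero-byte read() marks a message boundary rather than a final end of stream
(both are assumptions about this ABI, not guarantees)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[1024];
            /* instance number 0 is illustrative */
            int fd = open("/sys/kernel/debug/scmi/0/raw/message", O_RDONLY);

            if (fd < 0)
                    return 1;

            for (int i = 0; i < 8; i++) {   /* try to fetch a few messages */
                    ssize_t n = read(fd, buf, sizeof(buf));

                    if (n > 0)
                            printf("got a %zd-byte message\n", n);
                    else if (n == 0)
                            continue;       /* EOF: message boundary, read on */
                    else
                            break;
            }
            close(fd);
            return 0;
    }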
|
| D | sysfs-firmware-sgi_uv |
     31: machines, with each partition running a unique copy
     32: of the operating system. Each partition will have a unique
     55: The hubs directory contains a number of hub objects, each representing
     56: a UV Hub visible to the BIOS. Each hub object's name is appended by a
     59: Each hub object directory contains a number of read-only attributes::
     94: Each hub object directory also contains a number of port objects,
     95: each representing a fabric port on the corresponding hub.
     99: Each port object directory contains a number of read-only attributes::
    125: Each PCI bus object's name is appended by its PCI bus address.
    128: Each pcibus object has a number of possible read-only attributes::
|
| /Documentation/devicetree/bindings/dma/stm32/ |
| D | st,stm32-mdma.yaml |
     13: described in the dma.txt file, using a five-cell specifier for each channel:
     24: 0x2: Source address pointer is incremented after each data transfer
     25: 0x3: Source address pointer is decremented after each data transfer
     28: 0x2: Destination address pointer is incremented after each data transfer
     29: 0x3: Destination address pointer is decremented after each data transfer
     43: 0x00: Each MDMA request triggers a buffer transfer (max 128 bytes)
     44: 0x1: Each MDMA request triggers a block transfer (max 64K bytes)
     45: 0x2: Each MDMA request triggers a repeated block transfer
     46: 0x3: Each MDMA request triggers a linked list transfer
|
| /Documentation/gpu/ |
| D | msm-crash-dump.rst |
     11: Each entry is in the form key: value. Section headers will not have a value
     13: Each section might have multiple array entries, the start of which is designated
     43: Section containing the contents of each ringbuffer. Each ringbuffer is
     47: Ringbuffer ID (0 based index). Each ringbuffer in the section
     73: Each buffer object will have a unique iova.
     86: Set of register values. Each entry is on its own line enclosed
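Since each entry is a key: value pair and section headers carry no value, a
few lines of parsing cover the format. A minimal sketch; the sample keys are
illustrative, not taken from a real dump::

    #include <stdio.h>
    #include <string.h>

    /* Split one crash-dump line at the first colon; a header line has
     * nothing (or only whitespace) after it.
     */
    static void parse_line(char *line)
    {
            char *colon = strchr(line, ':');

            if (!colon)
                    return;                 /* not a key: value entry */
            *colon = '\0';

            char *value = colon + 1;
            while (*value == ' ')
                    value++;

            if (*value)
                    printf("key '%s' = '%s'\n", line, value);
            else
                    printf("section header '%s'\n", line);
    }

    int main(void)
    {
            char entry[] = "kernel: 6.10.0";    /* illustrative */
            char header[] = "registers:";

            parse_line(entry);
            parse_line(header);
            return 0;
    }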
|
| /Documentation/scheduler/ |
| D | sched-domains.rst |
      5: Each CPU has a "base" scheduling domain (struct sched_domain). The domain
     10: Each scheduling domain spans a number of CPUs (stored in the ->span field).
     13: i. The top domain for each CPU will generally span all CPUs in the system
     19: Each scheduling domain must have one or more CPU groups (struct sched_group)
     29: Balancing within a sched domain occurs between groups. That is, each group
     31: load of each of its member CPUs, and only when the load of a group becomes
     34: In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
     59: of SMT, you'll span all siblings of the physical CPU, with each group being
     63: node. Each group being a single physical CPU. Then with NUMA, the parent
     64: of the SMP domain will span the entire machine, with each group having the
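The span/parent relationships excerpted above can be modeled in a few lines. A
toy sketch in plain user-space C with illustrative names and masks, not the
kernel's actual struct sched_domain::

    #include <stdio.h>

    struct domain {
            const char *name;
            unsigned long span;     /* bitmask of CPUs this domain covers */
            struct domain *parent;  /* NULL at the top of the hierarchy */
    };

    int main(void)
    {
            struct domain numa = { "NUMA", 0xff, NULL };    /* CPUs 0-7 */
            struct domain smp  = { "SMP",  0x0f, &numa };   /* CPUs 0-3 */
            struct domain smt  = { "SMT",  0x03, &smp };    /* CPUs 0-1 */

            /* walk upward from CPU 0's base domain, as balancing does;
             * each parent spans a superset of its child's CPUs */
            for (struct domain *sd = &smt; sd; sd = sd->parent)
                    printf("%-4s spans 0x%02lx\n", sd->name, sd->span);
            return 0;
    }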
|
| /Documentation/devicetree/bindings/phy/ |
| D | apm-xgene-phy.txt |
      3: PHY nodes are defined to describe on-chip 15Gbps Multi-purpose PHY. Each
     19: Two sets of 3-tuple settings for each (up to 3)
     25: Two sets of 3-tuple settings for each (up to 3)
     28: gain control. Two sets of 3-tuple settings for each
     32: each (up to 3) supported link speed on the host.
     36: 3-tuple setting for each (up to 3) supported link
     40: 3-tuple setting for each (up to 3) supported link
     46: - apm,tx-speed : Tx operating speed. One set of 3-tuples for each
|
| D | phy-tegra194-p2u.yaml |
     14: Speed) each interfacing with 12 and 8 P2U instances respectively.
     16: each interfacing with 8, 8 and 8 P2U instances respectively.
     18: interface and PHY of HSIO/NVHS/GBE bricks. Each P2U instance represents one
     29: description: Should be the physical address space and length of each respective P2U instance.
|
| /Documentation/filesystems/ |
| D | qnx6.rst |
     42: Each qnx6fs has two superblocks, each one having a 64bit serial number.
     44: In write mode, with each new snapshot (after each synchronous write), the
     53: Each superblock holds a set of root inodes for the different filesystem
     55: Each of these root nodes holds information like total size of the stored
     57: If the level value is 0, up to 16 direct blocks can be addressed by each
     60: Level 1 adds an additional indirect addressing level where each indirect
     79: 0x1000 is the size reserved for each superblock - regardless of the
     85: Each object in the filesystem is represented by an inode (index node).
    107: It is a specially formatted file containing records which associate each
    146: Each data block (tree leaves) holds one long filename. That filename is
    [all …]
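The level field quoted above (16 direct blocks at level 0, one more layer of
indirection per level) gives a quick capacity rule of thumb. A minimal sketch,
assuming 4-byte block pointers and a 1 KiB blocksize (both assumptions for
illustration)::

    #include <stdio.h>

    int main(void)
    {
            unsigned long blocksize = 1024;
            unsigned long ptrs_per_block = blocksize / 4;   /* assumed 32-bit pointers */

            for (int level = 0; level <= 2; level++) {
                    unsigned long blocks = 16;              /* direct pointers */

                    for (int i = 0; i < level; i++)
                            blocks *= ptrs_per_block;       /* one indirection layer */
                    printf("level %d: %lu blocks, %lu bytes max\n",
                           level, blocks, blocks * blocksize);
            }
            return 0;
    }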
|
| /Documentation/devicetree/bindings/gpio/ |
| D | nvidia,tegra186-gpio.yaml |
     37: aliases" in address space, each of which accesses the same underlying
     42: implemented by the SoC. Each GPIO is assigned to a port, and a port may
     43: control a number of GPIOs. Thus, each GPIO is named according to an
     47: The number of ports implemented by each GPIO controller varies. The number
     48: of implemented GPIOs within each port varies. GPIO registers within a
     60: Each GPIO controller can generate a number of interrupt signals. Each
     67: Each GPIO controller in fact generates multiple interrupt signals for
     68: each set of ports. Each GPIO may be configured to feed into a specific
     70: for each generated signal to be routed to a different CPU, thus allowing
     71: different CPUs to each handle subsets of the interrupts within a port.
     [all …]
|
| D | gpio-max3191x.txt |
     18: - maxim,modesel-gpios: GPIO pins to configure modesel of each chip.
     20: (if each chip is driven by a separate pin) or 1
     22: - maxim,fault-gpios: GPIO pins to read fault of each chip.
     25: - maxim,db0-gpios: GPIO pins to configure debounce of each chip.
     28: - maxim,db1-gpios: GPIO pins to configure debounce of each chip.
|
| /Documentation/core-api/ |
| D | protection-keys.rst |
     19: Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
     22: Protections for each key are defined with a per-CPU user-accessible register
     23: (PKRU). Each of these is a 32-bit register storing two bits (Access Disable
     24: and Write Disable) for each of 16 keys.
     26: Being a CPU register, PKRU is inherently thread-local, potentially giving each
     37: Pkeys use 3 bits in each page table entry, to encode a "protection key index",
     40: Protections for each key are defined with a per-CPU user-writable system
     42: overlay permissions for each protection key index.
     45: each thread a different set of protections from every other thread.
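The per-thread register model above is driven from user space through the pkey
syscalls. A minimal sketch, assuming a pkey-capable CPU and kernel and the
glibc 2.27+ wrappers (the page and rights values are illustrative)::

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* allocate a key whose initial rights deny writes */
            int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

            if (pkey < 0) {
                    perror("pkey_alloc");   /* no hardware/kernel support */
                    return 1;
            }

            void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
            if (page == MAP_FAILED)
                    return 1;

            /* tag the page with the key: reads stay allowed, but writes
             * from this thread now fault until the rights are relaxed */
            if (pkey_mprotect(page, 4096, PROT_READ | PROT_WRITE, pkey))
                    perror("pkey_mprotect");

            pkey_free(pkey);
            return 0;
    }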
|
| /Documentation/admin-guide/mm/damon/ |
| D | usage.rst |
     60: figure, parent-child relations are represented with indentations, each
     61: directory has a ``/`` suffix, and files in each directory are separated by
    121: child directories named ``0`` to ``N-1``. Each directory represents one
    129: In each kdamond directory, two files (``state`` and ``pid``) and one directory
    143: - ``update_schemes_stats``: Update the contents of stats files for each
    147: action tried regions directory for each DAMON-based operation scheme of the
    154: action tried regions directory for each DAMON-based operation scheme of the
    157: ``effective_bytes`` files for each DAMON-based operation scheme of the
    172: ``0`` to ``N-1``. Each directory represents one monitoring context (refer to
    182: In each context directory, two files (``avail_operations`` and ``operations``)
    [all …]
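The ``state`` file noted above is how a kdamond is switched on and off. A
minimal sketch, assuming kdamond ``0`` has already been created under the
default sysfs mount point (path per this document's hierarchy)::

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/kernel/mm/damon/admin/kdamonds/0/state", "w");

            if (!f) {
                    perror("fopen");        /* kdamond 0 not set up? */
                    return 1;
            }
            fputs("on", f);                 /* writing "off" stops it again */
            fclose(f);
            return 0;
    }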
|
| /Documentation/userspace-api/media/v4l/ |
| D | ext-ctrls-detect.rst |
     37: - The image is divided into a grid, each cell with its own motion
     41: - The image is divided into a grid, each cell with its own region
     43: should be used. Each region has its own thresholds. How these
     55: Sets the motion detection thresholds for each cell in the grid. To
     61: Sets the motion detection region value for each cell in the grid. To
|
| /Documentation/devicetree/bindings/cpufreq/ |
| D | qemu,virtual-cpufreq.yaml |
     15: selection of its vCPUs as a hint to the host through MMIO regions. Each vCPU
     17: Each performance domain has its own set of registers for performance controls.
     26: Address and size of region containing performance controls for each of the
     27: performance domains. Regions for each performance domain are placed
|
| /Documentation/networking/ |
| D | scaling.rst |
     30: applying a filter to each packet that assigns it to one of a small number
     31: of logical flows. Packets for each flow are steered to a separate receive
     41: implementation of RSS uses a 128-entry indirection table where each entry
     75: for each CPU if the device supports enough queues, or otherwise at least
     76: one for each memory domain, where a memory domain is a set of CPUs that
     91: Each receive queue has a separate IRQ associated with it. The NIC triggers
     94: that can route each interrupt to a particular CPU. The active mapping
     99: affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
    115: interrupts (and thus work) grows with each additional queue.
    118: processors with hyperthreading (HT), each hyperthread is represented as
    [all …]
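The 128-entry indirection table mentioned above reduces RSS steering to a mask
and a table lookup. A toy sketch (table contents and hash value are
illustrative)::

    #include <stdio.h>

    int main(void)
    {
            unsigned char table[128];

            /* spread four receive queues evenly across the table */
            for (int i = 0; i < 128; i++)
                    table[i] = i % 4;

            unsigned int hash = 0x9e3779b9;         /* example flow hash */
            unsigned int queue = table[hash & 127]; /* low-order bits index */

            printf("flow hash 0x%08x -> queue %u\n", hash, queue);
            return 0;
    }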
|
| /Documentation/userspace-api/ |
| D | dma-buf-alloc-exchange.rst |
     60: A tuple of numbers, representing a color. Each element in the tuple is a
     94: Each buffer must have an underlying format. This format describes the color
     95: values provided for each pixel. Although each subsystem has its own format
    101: Each ``DRM_FORMAT_*`` token describes the translation between a pixel
    104: whether they are RGB or YUV, integer or floating-point, the size of each channel
    108: For example, ``DRM_FORMAT_ARGB8888`` describes a format in which each pixel has
    118: sample is stored for each 2x2 pixel grouping).
    122: modifier is ``DRM_FORMAT_MOD_LINEAR``, describing a scheme in which each plane
    149: Each pixel buffer must be accompanied by logical pixel dimensions. This refers
    160: Each plane must therefore be described with an ``offset`` in bytes, which will be
    [all …]
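For a linear modifier, the offset/pitch description above boils down to simple
arithmetic. A minimal sketch for a single ``DRM_FORMAT_ARGB8888`` plane
(buffer size and pitch are illustrative)::

    #include <stdint.h>
    #include <stdio.h>

    /* Address of pixel (x, y) in a DRM_FORMAT_MOD_LINEAR ARGB8888
     * plane: 4 bytes per pixel, rows laid out pitch bytes apart.
     */
    static uint32_t *pixel_at(void *plane, uint32_t offset, uint32_t pitch,
                              uint32_t x, uint32_t y)
    {
            return (uint32_t *)((uint8_t *)plane + offset + y * pitch + x * 4);
    }

    int main(void)
    {
            uint8_t buf[64 * 64 * 4] = { 0 };

            /* pitch may exceed width * bpp for alignment; here it is 64 * 4 */
            *pixel_at(buf, 0, 64 * 4, 10, 20) = 0xffff0000; /* opaque red */
            printf("pixel (10,20) = 0x%08x\n",
                   *pixel_at(buf, 0, 64 * 4, 10, 20));
            return 0;
    }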
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo-design.rst |
     46: normal operation, each zone is assigned to a specific thread, and only that
     48: Associated with each thread is a work queue. Each bio is associated with a
     54: each zone has an implicit lock on the structures it manages for all its
     58: Although each structure is divided into zones, this division is not
     59: reflected in the on-disk representation of each data structure. Therefore,
     60: the number of zones for each structure, and hence the number of threads,
     61: can be reconfigured each time a vdo target is started.
     83: Each block of data is hashed to produce a 16-byte block name. An index
     90: storage, or reading and rehashing each block before overwriting it.
     95: index as hints, and reads each indicated block to verify that it is indeed
     [all …]
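The hint-and-verify step excerpted above can be modeled in toy form: the index
entry is treated only as a hint, and the candidate block is compared
byte-for-byte before being deduplicated (the hash and block size here are
illustrative stand-ins, not VDO's real 16-byte block name)::

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8

    /* toy block name: a stand-in for VDO's real hash */
    static unsigned long toy_name(const char *b)
    {
            unsigned long h = 5381;

            for (int i = 0; i < BLOCK; i++)
                    h = h * 33 + (unsigned char)b[i];
            return h;
    }

    int main(void)
    {
            char store[2][BLOCK] = { "AAAAAAA", "AAAAAAA" };

            /* index hints block 1 may duplicate block 0: verify, don't trust */
            if (toy_name(store[1]) == toy_name(store[0]) &&
                memcmp(store[1], store[0], BLOCK) == 0)
                    printf("verified duplicate: share physical block 0\n");
            return 0;
    }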
|
| D | statistics.rst |
     10: Each user-defined region specifies a starting sector, length and step.
     11: Individual statistics will be collected for each step-sized area within
     14: The I/O statistics counters for each step-sized area of a region are
     26: Each region has a corresponding unique identifier, which we call a
     31: on each other's data.
     55: the range is subdivided into areas each containing
     78: nanoseconds. For each range, the kernel will report the
    133: Print counters for each step-sized area of a region.
    146: Output format for each step-sized area of a region:
    210: Set the auxiliary data string to "foo bar baz" (the escape for each
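The step-sized subdivision described above determines which counters an I/O
lands in. A minimal sketch of the arithmetic (sector numbers are
illustrative)::

    #include <stdio.h>

    int main(void)
    {
            unsigned long long start = 1024, len = 4096, step = 512;
            unsigned long long nr_areas = (len + step - 1) / step;
            unsigned long long sector = 2300;

            if (sector >= start && sector < start + len)
                    printf("sector %llu -> area %llu of %llu\n",
                           sector, (sector - start) / step, nr_areas);
            return 0;
    }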
|
| /Documentation/bpf/ |
| D | map_cgroup_storage.rst |
    127: per-CPU variant will have different memory regions for each CPU for each
    128: storage. The non-per-CPU will have the same memory region for each storage.
    133: multiple attach types, and each attach creates a fresh zeroed storage. The
    136: There is a one-to-one association between the map of each type (per-CPU and
    138: each map can only be used by one BPF program and each BPF program can only use
    139: one storage map of each type. Because a map can only be used by one BPF
    153: However, the BPF program can still only associate with one map of each type
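The one-map-per-program pairing above looks like this in practice. A minimal
sketch using libbpf's BTF-style map declaration, assuming clang/libbpf
tooling; the counting logic is illustrative::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* One per-cgroup counter: the verifier ties this map to exactly one
     * program, and each attached cgroup gets its own zeroed storage.
     */
    struct {
            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
            __type(key, struct bpf_cgroup_storage_key);
            __type(value, __u64);
    } counter SEC(".maps");

    SEC("cgroup_skb/egress")
    int count_egress(struct __sk_buff *skb)
    {
            __u64 *val = bpf_get_local_storage(&counter, 0);

            __sync_fetch_and_add(val, 1);   /* one count per packet */
            return 1;                       /* allow the packet */
    }

    char _license[] SEC("license") = "GPL";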
|