
Searched full:each (Results 1 – 25 of 1839) sorted by relevance


/Documentation/hwmon/
ibmpowernv.rst:18 'hwmon' populates the 'sysfs' tree having attribute files, each for a given
21 All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
45 each OCC. Using this attribute each OCC can be asked to
58 each OCC. Using this attribute each OCC can be asked to
69 each OCC. Using this attribute each OCC can be asked to
80 each OCC. Using this attribute each OCC can be asked to
/Documentation/filesystems/nfs/
pnfs.rst:6 reference multiple devices, each of which can reference multiple data servers.
7 Each data server can be referenced by multiple devices. Each device
17 Each nfs_inode may hold a pointer to a cache of these layout
20 We reference the header for the inode pointing to it, across each
22 LAYOUTCOMMIT), and for each lseg held within.
24 Each header is also (when non-empty) put on a list associated with
34 nfs4_deviceid_cache). The cache itself is referenced across each
36 the lifetime of each lseg referencing them.
66 layout types: "files", "objects", "blocks", and "flexfiles". For each
/Documentation/devicetree/bindings/pinctrl/
pinctrl-bindings.txt:5 controllers. Each pin controller must be represented as a node in device tree,
9 designated client devices. Again, each client device must be represented as a
16 device is inactive. Hence, each client device can define a set of named
35 For each client device individually, every pin state is assigned an integer
36 ID. These numbers start at 0, and are contiguous. For each state ID, a unique
37 property exists to define the pin configuration. Each state may also be
41 Each client device's own binding determines the set of states that must be
47 pinctrl-0: List of phandles, each pointing at a pin configuration
52 from multiple nodes for a single pin controller, each
65 pinctrl-1: List of phandles, each pointing at a pin configuration
[all …]
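The client-device convention quoted above from pinctrl-bindings.txt can be sketched as follows; the node, compatible string, and phandle labels are placeholders, not taken from any real binding:

	/* Hypothetical client device with two named states. Each pinctrl-N
	 * property holds a list of phandles to pin configuration nodes;
	 * pinctrl-names assigns state ID 0 "default" and state ID 1 "sleep". */
	serial@1000 {
		compatible = "vendor,example-uart";	/* placeholder */
		pinctrl-names = "default", "sleep";
		pinctrl-0 = <&uart0_default>;
		pinctrl-1 = <&uart0_sleep>;
	};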
pinctrl-vt8500.txt:3 These SoCs contain a combined Pinmux/GPIO module. Each pin may operate as
23 Each pin configuration node lists the pin(s) to which it applies, and one or
25 configuration. Each subnode only affects those parameters that are explicitly
31 - wm,pins: An array of cells. Each cell contains the ID of a pin.
44 Each of wm,function and wm,pull may contain either a single value which
45 will be applied to all pins in wm,pins, or one value for each entry in
marvell,mvebu-pinctrl.txt:4 (mpp) to a specific function. For each SoC family there is a SoC specific
12 be used for a specific device or function. Each node requires one or more
17 Please refer to each marvell,<soc>-pinctrl.txt binding doc for supported SoCs.
24 valid pin/pin group names and available function names for each SoC.
lantiq,pinctrl-falcon.txt:13 subnodes. Each of these subnodes represents some desired configuration for a
18 The name of each subnode is not important as long as it is unique; all subnodes
21 Each subnode only affects those parameters that are explicitly listed. In
32 - lantiq,groups : An array of strings. Each string contains the name of a group.
50 - lantiq,pins : An array of strings. Each string contains the name of a pin.
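A rough sketch of the string-based subnode scheme described above; the group and pin names are illustrative, and lantiq,function and lantiq,pull are assumed companion properties from the same binding:

	asc0_pins: asc0 {
		lantiq,groups = "asc0";		/* array of group-name strings */
		lantiq,function = "asc";	/* assumed property; illustrative value */
	};
	io_pins: io {
		lantiq,pins = "io42", "io43";	/* array of pin-name strings */
		lantiq,pull = <2>;		/* illustrative value */
	};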
brcm,bcm2835-gpio.txt:39 Each pin configuration node lists the pin(s) to which it applies, and one or
41 configuration. Each subnode only affects those parameters that are explicitly
47 For details on each property, you can refer to ./pinctrl-bindings.txt.
65 - brcm,pins: An array of cells. Each cell contains the ID of a pin. Valid IDs
83 Each of brcm,function and brcm,pull may contain either a single value which
84 will be applied to all pins in brcm,pins, or one value for each entry in
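A minimal sketch of such a pin configuration node, assuming the usual dt-bindings header; the pin IDs and values are illustrative:

	#include <dt-bindings/pinctrl/bcm2835.h>

	uart0_pins: uart0 {
		brcm,pins = <14 15>;			/* one cell per pin ID */
		brcm,function = <BCM2835_FSEL_ALT0>;	/* single value, applied to both pins */
		brcm,pull = <0 2>;			/* one value per entry: none, pull-up */
	};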
fsl,imx-pinctrl.txt:5 multiplexing the PAD input/output signals. For each PAD there are up to
22 Please refer to each fsl,<soc>-pinctrl.txt binding doc for supported SoCs.
25 - fsl,pins: each entry consists of 6 integers and represents the mux and config
41 Please refer to each fsl,<soc>-pinctrl.txt binding doc for SoC specific part
57 4. Each pin configuration node should have a phandle; devices can set pins
93 Users should refer to each SoC spec to set the correct value.
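In SoC dts files, each six-integer fsl,pins entry is usually written as a pad macro from the SoC's pinfunc header, which expands to five of the six integers, followed by the pad config value. A sketch (pads and config value are illustrative):

	pinctrl_uart1: uart1grp {
		fsl,pins = <
			/* mux_reg conf_reg input_reg mux_val input_val come
			 * from the MX6QDL_PAD_* macro; the trailing integer
			 * is the pad configuration value */
			MX6QDL_PAD_CSI0_DAT10__UART1_TX_DATA	0x1b0b1
			MX6QDL_PAD_CSI0_DAT11__UART1_RX_DATA	0x1b0b1
		>;
	};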
/Documentation/devicetree/bindings/dma/
st,stm32-mdma.yaml:13 described in the dma.txt file, using a five-cell specifier for each channel:
24 0x2: Source address pointer is incremented after each data transfer
25 0x3: Source address pointer is decremented after each data transfer
28 0x2: Destination address pointer is incremented after each data transfer
29 0x3: Destination address pointer is decremented after each data transfer
43 0x0: Each MDMA request triggers a buffer transfer (max 128 bytes)
44 0x1: Each MDMA request triggers a block transfer (max 64K bytes)
45 0x2: Each MDMA request triggers a repeated block transfer
46 0x3: Each MDMA request triggers a linked list transfer
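A client node using the five-cell specifier might look like the sketch below; all specifier values are placeholders, since the meaning of each cell is defined by st,stm32-mdma.yaml itself:

	some-device@48000000 {
		/* &mdma1 is a phandle to the MDMA controller; the five cells
		 * carry the request line and the transfer configuration words
		 * described in the binding (values here are placeholders) */
		dmas = <&mdma1 0 0x0 0x00000002 0x0 0x0>;
		dma-names = "rx";
	};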
/Documentation/gpu/
msm-crash-dump.rst:11 Each entry is in the form key: value. Section headers will not have a value
13 Each section might have multiple array entries, the start of which is designated
43 Section containing the contents of each ringbuffer. Each ringbuffer is
47 Ringbuffer ID (0 based index). Each ringbuffer in the section
73 Each buffer object will have a unique iova.
86 Set of register values. Each entry is on its own line enclosed
/Documentation/scheduler/
sched-domains.rst:5 Each CPU has a "base" scheduling domain (struct sched_domain). The domain
10 Each scheduling domain spans a number of CPUs (stored in the ->span field).
13 i. The top domain for each CPU will generally span all CPUs in the system
19 Each scheduling domain must have one or more CPU groups (struct sched_group)
29 Balancing within a sched domain occurs between groups. That is, each group
31 load of each of its member CPUs, and only when the load of a group becomes
34 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
59 of SMT, you'll span all siblings of the physical CPU, with each group being
63 node. Each group being a single physical CPU. Then with NUMA, the parent
64 of the SMP domain will span the entire machine, with each group having the
/Documentation/devicetree/bindings/gpio/
nvidia,tegra186-gpio.txt:26 address space, each of which accesses the same underlying state. See the hardware
31 implemented by the SoC. Each GPIO is assigned to a port, and a port may control
32 a number of GPIOs. Thus, each GPIO is named according to an alphabetical port
36 The number of ports implemented by each GPIO controller varies. The number of
37 implemented GPIOs within each port varies. GPIO registers within a controller
48 Each GPIO controller can generate a number of interrupt signals. Each signal
54 Each GPIO controller in fact generates multiple interrupt signals for each set
55 of ports. Each GPIO may be configured to feed into a specific one of the
56 interrupt signals generated by a set-of-ports. The intent is for each generated
57 signal to be routed to a different CPU, thus allowing different CPUs to each
[all …]
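A consumer referencing one of these port-based GPIOs might look like the sketch below, assuming the TEGRA186_MAIN_GPIO(port, offset) macro from the dt-bindings header; the controller label, port, and offset are illustrative:

	#include <dt-bindings/gpio/tegra186-gpio.h>
	#include <dt-bindings/gpio/gpio.h>

	power-button {
		/* each GPIO is named by its alphabetical port plus an offset
		 * within that port */
		gpios = <&gpio_main TEGRA186_MAIN_GPIO(A, 3) GPIO_ACTIVE_LOW>;
	};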
gpio-max3191x.txt:18 - maxim,modesel-gpios: GPIO pins to configure modesel of each chip.
20 (if each chip is driven by a separate pin) or 1
22 - maxim,fault-gpios: GPIO pins to read fault of each chip.
25 - maxim,db0-gpios: GPIO pins to configure debounce of each chip.
28 - maxim,db1-gpios: GPIO pins to configure debounce of each chip.
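Sketched as a device node; the compatible string, chip select, GPIO phandles, and flags are placeholders, while the property names are the ones listed above:

	#include <dt-bindings/gpio/gpio.h>

	gpio@0 {
		compatible = "maxim,max31913";	/* illustrative part number */
		reg = <0>;			/* SPI chip select */
		gpio-controller;
		#gpio-cells = <2>;
		maxim,modesel-gpios = <&gpio2 23 GPIO_ACTIVE_HIGH>;
		maxim,fault-gpios   = <&gpio2 24 GPIO_ACTIVE_LOW>;
		maxim,db0-gpios     = <&gpio2 25 GPIO_ACTIVE_HIGH>;
		maxim,db1-gpios     = <&gpio2 26 GPIO_ACTIVE_HIGH>;
	};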
/Documentation/devicetree/bindings/phy/
apm-xgene-phy.txt:3 PHY nodes are defined to describe on-chip 15Gbps Multi-purpose PHY. Each
19 Two sets of 3-tuple settings for each (up to 3)
25 Two sets of 3-tuple settings for each (up to 3)
28 gain control. Two sets of 3-tuple settings for each
32 each (up to 3) supported link speed on the host.
36 3-tuple setting for each (up to 3) supported link
40 3-tuple setting for each (up to 3) supported link
46 - apm,tx-speed : Tx operating speed. One set of 3-tuple for each
phy-tegra194-p2u.txt:4 Speed) each interfacing with 12 and 8 P2U instances respectively.
6 interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
11 - reg: Should be the physical address space and length of each respective P2U
/Documentation/filesystems/
qnx6.rst:42 Each qnx6fs has two superblocks, each one having a 64bit serial number.
44 In write mode, with each new snapshot (after each synchronous write), the
53 Each superblock holds a set of root inodes for the different filesystem
55 Each of these root nodes holds information like total size of the stored
57 If the level value is 0, up to 16 direct blocks can be addressed by each
60 Level 1 adds an additional indirect addressing level where each indirect
79 0x1000 is the size reserved for each superblock - regardless of the
85 Each object in the filesystem is represented by an inode (index node).
107 It is a specially formatted file containing records which associate each
146 Each data block (tree leaves) holds one long filename. That filename is
[all …]
/Documentation/userspace-api/media/v4l/
ext-ctrls-detect.rst:37 - The image is divided into a grid, each cell with its own motion
41 - The image is divided into a grid, each cell with its own region
43 should be used. Each region has its own thresholds. How these
55 Sets the motion detection thresholds for each cell in the grid. To
61 Sets the motion detection region value for each cell in the grid. To
/Documentation/devicetree/bindings/display/tegra/
nvidia,tegra20-host1x.txt:7 For Tegra186, one entry for each entry in reg-names:
18 - resets: Must contain an entry for each entry in reset-names.
23 The host1x top-level node defines a number of children, each representing one
34 - resets: Must contain an entry for each entry in reset-names.
48 - resets: Must contain an entry for each entry in reset-names.
56 vi can have an optional ports node; a maximum of 6 ports is supported. Each port
77 A maximum of 6 channels is supported, with each csi brick as either x4 or x2
86 Each channel node must contain 2 port nodes which can be grouped
87 under 'ports' node and each port should have a single child 'endpoint'
124 - resets: Must contain an entry for each entry in reset-names.
[all …]
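The resets/reset-names pairing quoted repeatedly above follows the generic convention sketched here; the unit address, specifier values, and names are placeholders:

	vi@54080000 {
		/* one resets entry for each entry in reset-names, in order */
		resets = <&tegra_car 20>, <&tegra_car 50>;
		reset-names = "vi", "csi";
	};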
/Documentation/networking/
scaling.rst:30 applying a filter to each packet that assigns it to one of a small number
31 of logical flows. Packets for each flow are steered to a separate receive
41 implementation of RSS uses a 128-entry indirection table where each entry
60 for each CPU if the device supports enough queues, or otherwise at least
61 one for each memory domain, where a memory domain is a set of CPUs that
76 Each receive queue has a separate IRQ associated with it. The NIC triggers
79 that can route each interrupt to a particular CPU. The active mapping
84 affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
100 interrupts (and thus work) grows with each additional queue.
103 processors with hyperthreading (HT), each hyperthread is represented as
[all …]
/Documentation/bpf/
map_cgroup_storage.rst:127 per-CPU variant will have different memory regions for each CPU for each
128 storage. The non-per-CPU variant will have the same memory region for each storage.
133 multiple attach types, and each attach creates a fresh zeroed storage. The
136 There is a one-to-one association between the map of each type (per-CPU and
138 each map can only be used by one BPF program and each BPF program can only use
139 one storage map of each type. Because a map can only be used by one BPF
153 However, the BPF program can still only associate with one map of each type
/Documentation/ABI/testing/
sysfs-kernel-mm-cma:5 /sys/kernel/mm/cma/ contains a subdirectory for each CMA
8 Each CMA heap subdirectory (that is, each
/Documentation/devicetree/bindings/c6x/
dscr.txt:19 For device state control (enable/disable), each device control is assigned an
46 a lock register. Each tuple consists of the register offset, lock register
56 MAC addresses are contained in two registers. Each element of a MAC address
57 is contained in a single byte. This property has two tuples. Each tuple has
65 Each tuple describes a range of identical bitfields used to control one or
66 more devices (one bitfield per device). The layout of each tuple is:
81 for device states controlled by the DSCR. Each tuple describes a range of
83 bitfield per device). The layout of each tuple is:
/Documentation/devicetree/bindings/display/
arm,komeda.txt:7 - clocks: A list of phandle + clock-specifier pairs, one for each entry
19 Each device contains one or two pipeline sub-nodes (at least one), each
22 - clocks: A list of phandle + clock-specifier pairs, one for each entry
27 - port: each pipeline connects to an encoder input port. The connection is
/Documentation/admin-guide/device-mapper/
statistics.rst:10 Each user-defined region specifies a starting sector, length and step.
11 Individual statistics will be collected for each step-sized area within
14 The I/O statistics counters for each step-sized area of a region are
26 Each region has a corresponding unique identifier, which we call a
31 on each other's data.
55 the range is subdivided into areas each containing
78 nanoseconds. For each range, the kernel will report the
133 Print counters for each step-sized area of a region.
146 Output format for each step-sized area of a region:
210 Set the auxiliary data string to "foo bar baz" (the escape for each
/Documentation/sound/hd-audio/
dp-mst.rst:9 from legacy is that DP MST introduces the device entry. Each pin can contain
10 several device entries. Each device entry behaves as a pin.
12 As each pin may contain several device entries and each codec may contain
24 Each pin may have several device entries (virtual pins). On Intel platform,
56 a member of hdmi_pcm. Each pin has one struct hdmi_pcm * pcm pointer.
