Searched full:group (Results 1 – 25 of 438) sorted by relevance
/Documentation/filesystems/ext4/ |
D | blockgroup.rst | 6 The layout of a standard block group is approximately as follows (each 13 * - Group 0 Padding 15 - Group Descriptors 30 For the special case of block group 0, the first 1024 bytes are unused, 37 The ext4 driver primarily works with the superblock and the group 38 descriptors that are found in block group 0. Redundant copies of the 39 superblock and group descriptors are written to some of the block groups 42 paragraph for more details). If the group does not have a redundant 43 copy, the block group begins with the data block bitmap. Note also that 45 GDT block” space after the block group descriptors and before the start [all …]
|
D | group_descr.rst | 3 Block Group Descriptors 6 Each block group on the filesystem has one of these descriptors 7 associated with it. As noted in the Layout section above, the group 8 descriptors (if present) are the second item in the block group. The 9 standard configuration is for each block group to contain a full copy of 10 the block group descriptor table unless the sparse\_super feature flag 13 Notice how the group descriptor records the location of both bitmaps and 15 group, the only data structures with fixed locations are the superblock 16 and the group descriptor table. The flex\_bg mechanism uses this 17 property to group several block groups into a flex group and lay out all [all …]
|
D | bitmaps.rst | 7 group. 12 block or inode table entry. This implies a block group size of 8 \* 15 NOTE: If ``BLOCK_UNINIT`` is set for a given block group, various parts 17 zeros (i.e. all blocks in the group are free). However, it is not 19 the bitmaps and group descriptor live inside the group. Unfortunately, 25 Inode tables are statically allocated at mkfs time. Each block group 27 the number of inodes per group. See the section on inodes for more
|
D | overview.rst | 8 very hard to keep each file's blocks within the same group, thereby 9 reducing seek times. The size of a block group is specified in 11 ``block_size_in_bytes``. With the default block size of 4KiB, each group 13 groups is the size of the device divided by the size of a block group.
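The group-size arithmetic implied by the bitmaps and overview snippets above (one bitmap block, 8 bits per byte, hence 8 × block_size blocks per group) can be checked with a short calculation. This is a sketch; the function name is illustrative and not part of the kernel sources:

```python
def ext4_group_geometry(block_size_in_bytes: int):
    # One block bitmap occupies a single block and tracks every block in
    # the group; each byte holds 8 bits, so a group spans
    # 8 * block_size_in_bytes blocks.
    blocks_per_group = 8 * block_size_in_bytes
    group_size_bytes = blocks_per_group * block_size_in_bytes
    return blocks_per_group, group_size_bytes

blocks, size = ext4_group_geometry(4096)
print(blocks, size // (1024 * 1024))  # 32768 blocks, 128 MiB per group
```

With the default 4KiB block size this reproduces the 32768-block, 128MiB groups the overview.rst snippet refers to.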
|
D | blocks.rst | 6 ext4 allocates storage space in units of “blocks”. A block is a group of 43 * - Blocks Per Block Group 48 * - Inodes Per Block Group 53 * - Block Group Size 105 * - Blocks Per Block Group 110 * - Inodes Per Block Group 115 * - Block Group Size
|
/Documentation/devicetree/bindings/pinctrl/ |
D | marvell,armada-37xx-pinctrl.txt | 32 group: jtag 36 group sdio0 40 group emmc_nb 44 group pwm0 48 group pwm1 52 group pwm2 56 group pwm3 60 group pmic1 64 group pmic0 68 group i2c2 [all …]
|
D | fsl,mxs-pinctrl.txt | 18 a group of pins, and only affects those parameters that are explicitly listed. 26 One is to set up a group of pins for a function, both mux selection and pin 27 configurations, and it's called group node in the binding document. The other 29 different configuration than what is defined in group node. The binding 32 On mxs, there is no hardware pin group. The pin group in this binding only 33 means a group of pins put together for a particular peripheral to work in 35 group node should include all the pins needed for one function rather than 36 having these pins defined in several group nodes. It also means each of 37 "pinctrl-*" phandle in client device node should only have one group node 39 there to adjust configurations for some pins in the group. [all …]
|
D | pinctrl-sirf.txt | 6 - interrupts : Interrupts used by every GPIO group 17 Each of these subnodes represents some desired configuration for a group of pins. 20 - sirf,pins : An array of strings. Each string contains the name of a group. 22 group. 24 Valid values for group and function names can be found from looking at the 25 group and function arrays in driver files:
|
D | cnxt,cx92755-pinctrl.txt | 37 container of an arbitrary number of subnodes, called pin group nodes in this 44 === Pin Group Node === 46 A pin group node specifies the desired pin mux for an arbitrary number of 47 pins. The name of the pin group node is optional and not used. 49 A pin group node only affects the properties specified in the node, and has no 52 The pin group node accepts a subset of the generic pin config properties. For 56 Required Pin Group Node Properties: 84 In the example above, a single pin group configuration node defines the
|
D | nvidia,tegra194-pinmux.txt | 15 pin, a group, or a list of pins or groups. This configuration can include the 16 mux function to select on those pin(s)/group(s), and various pin configuration 19 See the TRM to determine which properties and values apply to each pin/group. 25 group. Valid values for these names are listed below. 29 pin or group. 58 Valid values for pin and group names (nvidia,pin) are: 71 These registers control a single pin for which a mux group exists.
|
/Documentation/devicetree/bindings/powerpc/opal/ |
D | sensor-groups.txt | 7 servers. Each child node indicates a sensor group. 9 - compatible : Should be "ibm,opal-sensor-group" 13 - type : String to indicate the type of sensor-group 15 - sensor-group-id: Abstract unique identifier provided by firmware of 16 type <u32> which is used for sensor-group 18 sensors belonging to the group. 23 belonging to this group 27 group.
|
/Documentation/scheduler/ |
D | sched-rt-group.rst | 2 Real-Time group scheduling 42 Realtime scheduling is all about determinism, a group has to be able to rely on 44 multiple groups of realtime tasks, each group must be assigned a fixed portion 45 of the CPU time available. Without a minimum guarantee a realtime group can 53 in a given period. We allocate this "run time" for each realtime group which 56 Any time not allocated to a realtime group will be used to run normal priority 63 time dedicated for the graphics. We can then give this group a run time of 0.8 66 This way the graphics group will have a 0.04s period with a 0.032s run time 69 0.00015s. So this group can be scheduled with a period of 0.005s and a run time 114 By default all bandwidth is assigned to the root group and new groups get the [all …]
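The run-time/period figures in the sched-rt-group.rst snippet are plain ratios; a tiny sketch (illustrative names only, no kernel interfaces) makes the bandwidth fractions explicit:

```python
def rt_bandwidth(run_time_s: float, period_s: float) -> float:
    # Fraction of each period the realtime group is guaranteed to run.
    return run_time_s / period_s

graphics = rt_bandwidth(0.032, 0.04)   # 0.032s of run time every 0.04s period
audio = rt_bandwidth(0.00015, 0.005)   # 0.15ms of run time every 5ms period
print(graphics, audio)  # ~0.8 (80%) and ~0.03 (3%) of the CPU
```

These match the figures quoted in the excerpt: a 0.04s period with a 0.032s run time is an 80% share, and 0.00015s per 0.005s period is 3%.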
|
D | sched-bwc.rst | 6 The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst ] 9 specification of the maximum CPU bandwidth available to a group or hierarchy. 11 The bandwidth allowed for a group is specified using a quota and period. Within 12 each given "period" (microseconds), a task group is allocated up to "quota" 19 A group's unassigned quota is globally tracked, being refreshed back to 37 A value of -1 for cpu.cfs_quota_us indicates that the group does not have any 38 bandwidth restriction in place, such a group is described as an unconstrained 39 bandwidth group. This represents the traditional work-conserving behavior for 49 and return the group to an unconstrained state once more. 51 Any updates to a group's bandwidth specification will result in it becoming [all …]
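The quota/period semantics in the sched-bwc.rst snippet, including the -1 "unconstrained" sentinel for cpu.cfs_quota_us, can be sketched as pure arithmetic (no cgroup filesystem I/O; the helper name is made up for illustration):

```python
def cfs_cpu_limit(quota_us: int, period_us: int):
    # quota == -1 marks an unconstrained (work-conserving) group,
    # as described for cpu.cfs_quota_us in the snippet above.
    if quota_us == -1:
        return None
    # Otherwise the group may consume quota_us of CPU time per
    # period_us, i.e. quota/period CPUs' worth of runtime.
    return quota_us / period_us

print(cfs_cpu_limit(50000, 100000))  # 0.5 -> half a CPU each period
print(cfs_cpu_limit(-1, 100000))     # None -> unconstrained group
```

A quota larger than the period (e.g. 200000/100000) simply expresses a multi-CPU allowance of 2.0 CPUs.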
|
D | sched-domains.rst | 22 domain's span. The group pointed to by the ->groups pointer MUST contain the CPU 29 Balancing within a sched domain occurs between groups. That is, each group 30 is treated as one entity. The load of a group is defined as the sum of the 31 load of each of its member CPUs, and only when the load of a group becomes 48 Initially, load_balance() finds the busiest group in the current sched domain. 50 that group. If it manages to find such a runqueue, it locks both our initial 59 of SMT, you'll span all siblings of the physical CPU, with each group being 63 node. Each group being a single physical CPU. Then with NUMA, the parent 64 of the SMP domain will span the entire machine, with each group having the
|
/Documentation/admin-guide/cgroup-v1/ |
D | cpuacct.rst | 5 The CPU accounting controller is used to group tasks using cgroups and 9 group accumulates the CPU usage of all of its child groups and the tasks 10 directly present in its group. 16 With the above step, the initial or the parent accounting group becomes 17 visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in 20 by this group which is essentially the CPU time obtained by all the tasks 23 New accounting groups can be created under the parent group /sys/fs/cgroup:: 29 The above steps create a new group g1 and move the current shell
|
D | net_prio.rst | 16 This cgroup allows an administrator to assign a process to a group which defines 22 With the above step, the initial group acting as the parent accounting group 23 becomes visible at '/sys/fs/cgroup/net_prio'. This group includes all tasks in 35 from processes in this group and egressing the system on various interfaces. 44 said traffic set to the value 5. The parent accounting group also has a
|
/Documentation/virt/kvm/devices/ |
D | vfio.rst | 16 VFIO-group is held by KVM. 22 KVM_DEV_VFIO_GROUP_ADD: Add a VFIO group to VFIO-KVM device tracking 24 for the VFIO group. 25 KVM_DEV_VFIO_GROUP_DEL: Remove a VFIO group from VFIO-KVM device tracking 27 for the VFIO group. 39 - @groupfd is a file descriptor for a VFIO group;
|
/Documentation/x86/ |
D | resctrl_ui.rst | 104 "shareable_bits" but no resource group will 110 well as a resource group's allocation. 116 one resource group. No sharing allowed. 187 system. The default group is the root directory which, immediately 199 group that is their ancestor. These are called "MON" groups in the rest 202 Removing a directory will move all tasks and cpus owned by the group it 210 this group. Writing a task id to the file will add a task to the 211 group. If the group is a CTRL_MON group the task is removed from 212 whichever previous CTRL_MON group owned the task and also from 213 any MON group that owned the task. If the group is a MON group, [all …]
|
/Documentation/admin-guide/aoe/ |
D | udev.txt | 19 SUBSYSTEM=="aoe", KERNEL=="discover", NAME="etherd/%k", GROUP="disk", MODE="0220" 20 SUBSYSTEM=="aoe", KERNEL=="err", NAME="etherd/%k", GROUP="disk", MODE="0440" 21 SUBSYSTEM=="aoe", KERNEL=="interfaces", NAME="etherd/%k", GROUP="disk", MODE="0220" 22 SUBSYSTEM=="aoe", KERNEL=="revalidate", NAME="etherd/%k", GROUP="disk", MODE="0220" 23 SUBSYSTEM=="aoe", KERNEL=="flush", NAME="etherd/%k", GROUP="disk", MODE="0220" 26 KERNEL=="etherd*", GROUP="disk"
|
/Documentation/driver-api/ |
D | vfio.rst | 69 IOMMU API therefore supports a notion of IOMMU groups. A group is 73 While the group is the minimum granularity that must be used to 85 The user needs to add a group into the container for the next level 87 group associated with the desired device. This can be done using 90 VFIO group will appear for the group as /dev/vfio/$GROUP, where 91 $GROUP is the IOMMU group number of which the device is a member. 92 If the IOMMU group contains multiple devices, each will need to 93 be bound to a VFIO driver before operations on the VFIO group 96 group available, but not that particular device). TBD - interface 99 Once the group is ready, it may be added to the container by opening [all …]
|
/Documentation/filesystems/ |
D | configfs.rst | 33 symlink(2) can be used to group items together. Unlike sysfs, the 127 Items are created and destroyed inside a config_group. A group is a 130 handles that. The group has a set of operations to perform these tasks 275 void config_group_init(struct config_group *group); 276 void config_group_init_type_name(struct config_group *group, 282 that item means that a group can behave as an item in its own right. 284 accomplished via the group operations specified on the group's 288 struct config_item *(*make_item)(struct config_group *group, 290 struct config_group *(*make_group)(struct config_group *group, 293 void (*disconnect_notify)(struct config_group *group, [all …]
|
/Documentation/ABI/testing/ |
D | sysfs-kernel-iommu_groups | 6 directories, each representing an IOMMU group. The 8 for the group, which is an integer value. Within each 10 links to the sysfs devices contained in this group. 11 The group directory also optionally contains a "name" 13 common name for the group.
|
D | sysfs-firmware-opal-sensor-groups | 6 Each folder in this directory contains a sensor group 9 can also indicate the group of sensors belonging to 16 belonging to the group. 19 maximum values of all the sensors in the group.
|
/Documentation/trace/postprocess/ |
D | decode_msr.py | 13 msrs[int(m.group(2), 16)] = m.group(1) 25 num = int(m.group(2), 16) 34 j = j.replace(" " + m.group(2), " " + r + "(" + m.group(2) + ")")
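The decode_msr.py lines above use re match groups to build an MSR lookup table; here is a self-contained illustration of the same idiom. The sample input line and the exact regex are assumptions chosen to mirror a typical msr-index.h define, not quoted from the script:

```python
import re

line = "#define MSR_IA32_TSC 0x00000010"
m = re.match(r"#define\s+(\w+)\s+(0x[0-9a-fA-F]+)", line)
msrs = {}
if m:
    # Same idiom as the script: group(2) holds the hex value, which is
    # parsed base-16; group(1) holds the symbolic MSR name.
    msrs[int(m.group(2), 16)] = m.group(1)
print(msrs)  # {16: 'MSR_IA32_TSC'}
```

Note the base argument to int(): the `16` is what lets strings like "0x00000010" decode to the integer register number the trace events report.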
|
/Documentation/devicetree/bindings/interrupt-controller/ |
D | st,spear3xx-shirq.txt | 5 interrupt controller (VIC) on behalf of a group of devices. 8 exceeding 4. The number of devices in a group can differ, further they 10 bit masks. Also in some cases the group may not have enable or other 14 interrupt multiplexor (one node for all groups). A group in the 29 then connected to a parent interrupt controller. Each group is
|