Searched full:cpus (Results 1 – 25 of 293) sorted by relevance

/Documentation/devicetree/bindings/csky/
cpus.txt
5 The device tree allows the layout of CPUs in a system to be described through
6 the "cpus" node, which in turn contains a number of subnodes (i.e. "cpu")
9 Only SMP systems need to care about the cpus node; a single-processor system
10 needn't define a cpus node at all.
13 cpus and cpu node bindings definition
16 - cpus node
20 The node name must be "cpus".
22 A cpus node must define the following properties:
59 cpus {
/Documentation/timers/
no_hz.rst
19 2. Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
23 3. Omit scheduling-clock ticks on CPUs that are either idle or that
65 Omit Scheduling-Clock Ticks For Idle CPUs
74 scheduling-clock interrupts to idle CPUs, which is critically important
82 idle CPUs. That said, dyntick-idle mode is not free:
104 Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
109 Note that omitting scheduling-clock ticks for CPUs with only one runnable
110 task implies also omitting them for idle CPUs.
113 sending scheduling-clock interrupts to CPUs with a single runnable task,
114 and such CPUs are said to be "adaptive-ticks CPUs". This is important
[all …]
/Documentation/admin-guide/cgroup-v1/
cpusets.rst
31 2.2 Adding/removing cpus
43 Cpusets provide a mechanism for assigning a set of CPUs and Memory
57 include CPUs in its CPU affinity mask, and using the mbind(2) and
60 CPUs or Memory Nodes not in that cpuset. The scheduler will not
67 cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
75 The management of large computer systems, with many processors (CPUs),
113 Cpusets provide a Linux kernel mechanism to constrain which CPUs and
117 CPUs a task may be scheduled (sched_setaffinity) and on which Memory
122 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
126 - Calls to sched_setaffinity are filtered to just those CPUs
[all …]
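
The cpusets snippet above notes that sched_setaffinity(2) calls are filtered
to the CPUs allowed in a task's cpuset. A minimal user-space sketch of
requesting an affinity mask (the CPU numbers are arbitrary, and the cpuset may
restrict the request further)::

    /* Ask to run only on CPUs 0 and 1; the task's cpuset still applies. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        CPU_SET(1, &mask);

        /* pid 0 means "the calling thread" */
        if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        return 0;
    }
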
/Documentation/admin-guide/
cputopology.rst
41 internal kernel map of CPUs within the same core.
46 human-readable list of CPUs within the same core.
51 internal kernel map of the CPUs sharing the same physical_package_id.
56 human-readable list of CPUs sharing the same physical_package_id.
61 internal kernel map of CPUs within the same die.
65 human-readable list of CPUs within the same die.
137 offline: CPUs that are not online because they have been
139 of CPUs allowed by the kernel configuration (kernel_max
140 above). [~cpu_online_mask + cpus >= NR_CPUS]
142 online: CPUs that are online and being scheduled [cpu_online_mask]
[all …]
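
The maps and lists described above are exported per CPU under
/sys/devices/system/cpu/cpuN/topology/. A small sketch that prints the CPUs
sharing a core with CPU 0, assuming a kernel recent enough to expose
core_cpus_list (older kernels name it thread_siblings_list)::

    /* Print the human-readable list of CPUs in the same core as cpu0. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/topology/core_cpus_list";
        char buf[256];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("cpu0 core siblings: %s", buf);
        fclose(f);
        return 0;
    }
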
kernel-per-CPU-kthreads.rst
13 - Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs.
15 - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
18 of CPUs.
21 call to bind tasks to sets of CPUs.
50 2. Do all eHCA-Infiniband-related work on other CPUs, including
53 provisioned only on selected CPUs.
101 with multiple CPUs, force them all offline before bringing the
102 first one back online. Once you have onlined the CPUs in question,
103 do not offline any other CPUs, because doing so could force the
104 timer back onto one of the CPUs in question.
[all …]
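
The offline/online dance described above uses CPU hotplug. A minimal sketch of
taking one CPU offline and bringing it back via its sysfs "online" file (the
CPU number is arbitrary; requires root and CONFIG_HOTPLUG_CPU)::

    /* Offline CPU 3, then online it again, through sysfs CPU hotplug. */
    #include <stdio.h>

    static int set_cpu_online(int cpu, int online)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%d\n", online);
        return fclose(f);   /* fclose() flushes and reports write errors */
    }

    int main(void)
    {
        if (set_cpu_online(3, 0))   /* force CPU 3 offline */
            return 1;
        if (set_cpu_online(3, 1))   /* bring it back online */
            return 1;
        return 0;
    }
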
/Documentation/scheduler/
sched-domains.rst
10 Each scheduling domain spans a number of CPUs (stored in the ->span field).
13 i. The top domain for each CPU will generally span all CPUs in the system
15 CPUs will never be given tasks to run unless the CPUs allowed mask is
17 CPUs".
23 to which the domain belongs. Groups may be shared among CPUs as they contain
27 shared between CPUs.
31 load of each of its member CPUs, and only when the load of a group becomes
49 If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
62 In SMP, the parent of the base domain will span all physical CPUs in the
80 CPUs using cpu_attach_domain.
sched-energy.rst
9 the impact of its decisions on the energy consumed by CPUs. EAS relies on an
10 Energy Model (EM) of the CPUs to select an energy efficient CPU for each task,
59 In short, EAS changes the way CFS tasks are assigned to CPUs. When it is time
64 knowledge about the platform's topology, which include the 'capacity' of CPUs,
72 differentiate CPUs with different computing throughput. The 'capacity' of a CPU
76 tasks and CPUs computed by the Per-Entity Load Tracking (PELT) mechanism. Thanks
79 energy trade-offs. The capacity of CPUs is provided via arch-specific code
99 Let us consider a platform with 12 CPUs, split in 3 performance domains
102 CPUs: 0 1 2 3 4 5 6 7 8 9 10 11
108 containing 6 CPUs. The two root domains are denoted rd1 and rd2 in the
[all …]
/Documentation/power/
suspend-and-cpuhotplug.rst
27 |tasks | | cpus | | | | cpus | |tasks|
59 online CPUs
75 Note down these cpus in | P
100 | Call _cpu_up() [for all those cpus in the frozen_cpus mask, in a loop]
158 the non-boot CPUs are offlined or onlined, the _cpu_*() functions are called
177 update on the CPUs, as discussed below:
184 a. When all the CPUs are identical:
187 to apply the same microcode revision to each of the CPUs.
192 all CPUs, in order to handle case 'b' described below.
195 b. When some of the CPUs are different than the rest:
[all …]
/Documentation/devicetree/bindings/clock/
allwinner,sun9i-a80-cpus-clk.yaml
4 $id: http://devicetree.org/schemas/clock/allwinner,sun9i-a80-cpus-clk.yaml#
7 title: Allwinner A80 CPUS Clock Device Tree Bindings
20 const: allwinner,sun9i-a80-cpus-clk
45 compatible = "allwinner,sun9i-a80-cpus-clk";
49 clock-output-names = "cpus";
/Documentation/devicetree/bindings/arm/cpu-enable-method/
marvell,berlin-smp
6 CPUs. To apply to all CPUs, a single "marvell,berlin-smp" enable method should
7 be defined in the "cpus" node.
11 Compatible CPUs: "marvell,pj4b" and "arm,cortex-a9"
20 cpus {
al,alpine-smp
6 enabling secondary CPUs. To apply to all CPUs, a single
8 "cpus" node.
12 Compatible CPUs: "arm,cortex-a15"
33 system fabric, like powering CPUs off.
42 cpus {
nuvoton,npcm750-smp
5 To apply to all CPUs, a single "nuvoton,npcm750-smp" enable method should be
6 defined in the "cpus" node.
10 Compatible CPUs: "arm,cortex-a9"
19 cpus {
/Documentation/devicetree/bindings/mips/brcm/
brcm,bmips.txt
1 * Broadcom MIPS (BMIPS) CPUs
7 - mips-hpt-frequency: This is common to all CPUs in the system so it lives
8 under the "cpus" node.
/Documentation/arm/
cluster-pm-race-avoidance.rst
18 In a system containing multiple CPUs, it is desirable to have the
19 ability to turn off individual CPUs when the system is idle, reducing
22 In a system containing multiple clusters of CPUs, it is also desirable
27 of independently running CPUs, while the OS continues to run. This
92 CPUs in the cluster simultaneously modifying the state. The cluster-
104 referred to as a "CPU". CPUs are assumed to be single-threaded:
107 This means that CPUs fit the basic model closely.
216 A cluster is a group of connected CPUs with some common resources.
217 Because a cluster contains multiple CPUs, it can be doing multiple
272 which exact CPUs within the cluster play these roles. This must
[all …]
vlocks.rst
9 These are intended to be used to coordinate critical activity among CPUs
68 The currently_voting[] array provides a way for the CPUs to determine
77 As long as the last_vote variable is globally visible to all CPUs, it
94 number of CPUs.
97 if necessary, as in the following hypothetical example for 4096 CPUs::
127 of CPUs potentially contending the lock is small enough). This
157 If there are too many CPUs to read the currently_voting array in
169 * vlocks are currently only used to coordinate between CPUs which are
175 memory unless all CPUs contending the lock are cache-coherent, due
177 CPUs. (Though if all the CPUs are cache-coherent, you should be
/Documentation/devicetree/bindings/arm/
arm-dsu-pmu.txt
18 - cpus : List of phandles for the CPUs connected to this DSU instance.
26 cpus = <&cpu_0>, <&cpu_1>;
nvidia,tegra194-ccplex.yaml
16 symmetric cores. Compatible string in "cpus" node represents the CPU
21 const: cpus
31 operating point data for all CPUs.
37 cpus {
cpu-capacity.txt
2 ARM CPUs capacity bindings
9 ARM systems may be configured to have cpus with different power/performance
18 CPU capacity is a number that provides the scheduler information about CPUs
20 (e.g., ARM big.LITTLE systems) or maximum frequency at which CPUs can run
23 capture a first-order approximation of the relative performance of CPUs.
68 cpus {
196 cpus 0,1@1GHz, cpus 2,3@500MHz):
200 cpus {
237 [1] ARM Linux Kernel documentation - CPUs bindings
238 Documentation/devicetree/bindings/arm/cpus.yaml
/Documentation/devicetree/bindings/cpufreq/
cpufreq-spear.txt
6 which share clock across all CPUs.
17 /cpus/cpu@0.
21 cpus {
/Documentation/networking/
scaling.rst
29 queues to distribute processing among CPUs. The NIC distributes packets by
32 queue, which in turn can be processed by separate CPUs. This mechanism is
61 one for each memory domain, where a memory domain is a set of CPUs that
83 to spread receive interrupts between CPUs. To manually adjust the IRQ
93 interrupt processing forms a bottleneck. Spreading load between CPUs
95 is to allocate as many queues as there are CPUs in the system (or the
140 Each receive hardware queue has an associated list of CPUs to which
145 the end of the bottom half routine, IPIs are sent to any CPUs for which
156 explicitly configured. The list of CPUs to which RPS may forward traffic
161 This file implements a bitmap of CPUs. RPS is disabled when it is zero
[all …]
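
The RPS bitmap mentioned above lives at
/sys/class/net/<dev>/queues/rx-<n>/rps_cpus. A minimal sketch that steers one
queue's RPS work to CPUs 0-3; the interface name and queue number are
placeholders::

    /* Enable RPS on eth0's rx-0 queue for CPUs 0-3 (hex mask 0xf). */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "f\n");          /* bitmap of CPUs 0,1,2,3 */
        return fclose(f) ? 1 : 0;
    }
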
/Documentation/core-api/
cpu_hotplug.rst
20 Such advances require CPUs available to a kernel to be removed either for
33 Restrict boot time CPUs to *n*. Say if you have four CPUs, using
35 other CPUs later online.
38 Restrict the total number of CPUs the kernel will support. If the number
39 supplied here is lower than the number of physically available CPUs, then
40 those CPUs can not be brought online later.
43 Use this to limit hotpluggable CPUs. This option sets
62 Bitmap of possible CPUs that can ever be available in the
64 that aren't designed to grow/shrink as CPUs are made available or removed.
70 Bitmap of all CPUs currently online. It is set in ``__cpu_up()``
[all …]
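
The cpu_possible_mask and cpu_online_mask bitmaps mentioned above have
user-visible counterparts in sysfs. A short sketch that prints both ranges::

    /* Print the kernel's possible and online CPU ranges. */
    #include <stdio.h>

    static void print_cpus(const char *name)
    {
        char path[64], buf[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/devices/system/cpu/%s", name);
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", name, buf);
        fclose(f);
    }

    int main(void)
    {
        print_cpus("possible");
        print_cpus("online");
        return 0;
    }
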
/Documentation/core-api/irq/
irq-affinity.rst
11 which target CPUs are permitted for a given IRQ source. It's a bitmask
12 (smp_affinity) or cpu list (smp_affinity_list) of allowed CPUs. It's not
13 allowed to turn off all CPUs, and if an IRQ controller does not support
14 IRQ affinity then the value will not change from the default of all cpus.
63 Here is an example of limiting that same irq (44) to cpus 1024 to 1031::
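
Continuing the quoted irq 44 example, the same restriction can be applied from
a program by writing the CPU list to procfs (the IRQ number and CPU range are
taken from the example and will differ on a real system)::

    /* Restrict IRQ 44 to CPUs 1024-1031 via smp_affinity_list. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/proc/irq/44/smp_affinity_list";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "1024-1031\n");
        /* The write fails if the IRQ controller does not support affinity. */
        return fclose(f) ? 1 : 0;
    }
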
/Documentation/admin-guide/pm/
cpufreq.rst
36 cases, there are hardware interfaces allowing CPUs to be switched between
41 capacity, so as to decide which P-states to put the CPUs into. Of course, since
89 CPUs. That is, for example, the same register (or set of registers) is used to
90 control the P-state of multiple CPUs at the same time and writing to it affects
91 all of those CPUs simultaneously.
93 Sets of CPUs sharing hardware P-state control interfaces are represented by
99 every CPU in the system, including CPUs that are currently offline. If multiple
100 CPUs share the same hardware P-state control interface, all of the pointers
112 driver is expected to be able to handle all CPUs in the system.
115 CPUs are registered earlier, the driver core invokes the ``CPUFreq`` core to
[all …]
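
Each policy described above is visible per CPU under cpufreq sysfs. A small
sketch that reports CPU 0's current governor and frequency; for CPUs sharing a
hardware P-state interface the values are policy-wide::

    /* Print cpu0's cpufreq governor and current frequency (kHz). */
    #include <stdio.h>

    static void show(const char *attr)
    {
        char path[96], buf[128];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpufreq/%s", attr);
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", attr, buf);
        fclose(f);
    }

    int main(void)
    {
        show("scaling_governor");
        show("scaling_cur_freq");
        return 0;
    }
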
/Documentation/devicetree/bindings/openrisc/opencores/
or1ksim.txt
15 A "cpus" node is required. Required properties:
19 be probed via CPS, it is not necessary to specify secondary CPUs. Required
25 cpus {
/Documentation/devicetree/bindings/mips/ingenic/
ingenic,cpu.yaml
7 title: Bindings for Ingenic XBurst family CPUs
13 Ingenic XBurst family CPUs shall have the following properties.
49 cpus {
