
Searched full:cpus (Results 1 – 25 of 272) sorted by relevance


/Documentation/devicetree/bindings/csky/
cpus.txt
5 The device tree allows the layout of CPUs in a system to be described through
6 the "cpus" node, which in turn contains a number of subnodes (i.e. "cpu")
9 Only SMP systems need to care about the cpus node; a single-processor
10 system needn't define a cpus node at all.
13 cpus and cpu node bindings definition
16 - cpus node
20 The node name must be "cpus".
22 A cpus node must define the following properties:
59 cpus {
/Documentation/timers/
no_hz.rst
19 2. Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
23 3. Omit scheduling-clock ticks on CPUs that are either idle or that
65 Omit Scheduling-Clock Ticks For Idle CPUs
74 scheduling-clock interrupts to idle CPUs, which is critically important
82 idle CPUs. That said, dyntick-idle mode is not free:
104 Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
109 Note that omitting scheduling-clock ticks for CPUs with only one runnable
110 task implies also omitting them for idle CPUs.
113 sending scheduling-clock interrupts to CPUs with a single runnable task,
114 and such CPUs are said to be "adaptive-ticks CPUs". This is important
[all …]
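
A minimal sketch of how to check the adaptive-ticks setup described above from user space, assuming a kernel built with CONFIG_NO_HZ_FULL (which exposes the adaptive-ticks CPU list in /sys/devices/system/cpu/nohz_full)::

  #include <stdio.h>

  int main(void)
  {
      char buf[256];
      FILE *f = fopen("/sys/devices/system/cpu/nohz_full", "r");

      if (!f) {
          perror("nohz_full");   /* kernel likely lacks CONFIG_NO_HZ_FULL */
          return 1;
      }
      /* Prints a CPU list such as "1-7"; empty when no CPUs run in
       * adaptive-ticks mode. */
      if (fgets(buf, sizeof(buf), f))
          printf("adaptive-ticks CPUs: %s", buf);
      fclose(f);
      return 0;
  }
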
/Documentation/admin-guide/cgroup-v1/
cpusets.rst
29 2.2 Adding/removing cpus
41 Cpusets provide a mechanism for assigning a set of CPUs and Memory
55 include CPUs in its CPU affinity mask, and using the mbind(2) and
58 CPUs or Memory Nodes not in that cpuset. The scheduler will not
65 cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
73 The management of large computer systems, with many processors (CPUs),
111 Cpusets provide a Linux kernel mechanism to constrain which CPUs and
115 CPUs a task may be scheduled (sched_setaffinity) and on which Memory
120 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
124 - Calls to sched_setaffinity are filtered to just those CPUs
[all …]
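
A minimal sketch of the affinity-call filtering mentioned above; CPU number 2 is an arbitrary example. The kernel intersects a sched_setaffinity(2) request with the CPUs allowed by the caller's cpuset, so the call fails if CPU 2 is outside it::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
      cpu_set_t set;

      CPU_ZERO(&set);
      CPU_SET(2, &set);                /* request CPU 2 only */

      /* EINVAL here means CPU 2 is not among the CPUs our cpuset allows. */
      if (sched_setaffinity(0, sizeof(set), &set) == -1) {
          perror("sched_setaffinity");
          return 1;
      }
      printf("now restricted to CPU 2\n");
      return 0;
  }
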
/Documentation/admin-guide/
cputopology.rst
41 internal kernel map of CPUs within the same core.
46 human-readable list of CPUs within the same core.
51 internal kernel map of the CPUs sharing the same physical_package_id.
56 human-readable list of CPUs sharing the same physical_package_id.
61 internal kernel map of CPUs within the same die.
65 human-readable list of CPUs within the same die.
137 offline: CPUs that are not online because they have been
139 of CPUs allowed by the kernel configuration (kernel_max
140 above). [~cpu_online_mask + cpus >= NR_CPUS]
142 online: CPUs that are online and being scheduled [cpu_online_mask]
[all …]
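
The attributes quoted above live under /sys/devices/system/cpu/cpuN/topology/. A short sketch reading two of them for cpu0 (attribute availability varies with kernel version and architecture)::

  #include <stdio.h>

  static void show(const char *path)
  {
      char buf[256];
      FILE *f = fopen(path, "r");

      if (f && fgets(buf, sizeof(buf), f))
          printf("%s: %s", path, buf);
      if (f)
          fclose(f);
  }

  int main(void)
  {
      show("/sys/devices/system/cpu/cpu0/topology/physical_package_id");
      show("/sys/devices/system/cpu/cpu0/topology/core_siblings_list");
      return 0;
  }
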
kernel-per-CPU-kthreads.rst
13 - Documentation/IRQ-affinity.txt: Binding interrupts to sets of CPUs.
15 - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
18 of CPUs.
21 call to bind tasks to sets of CPUs.
50 2. Do all eHCA-Infiniband-related work on other CPUs, including
53 provisioned only on selected CPUs.
101 with multiple CPUs, force them all offline before bringing the
102 first one back online. Once you have onlined the CPUs in question,
103 do not offline any other CPUs, because doing so could force the
104 timer back onto one of the CPUs in question.
[all …]
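
The offline-then-online dance recommended above can be driven by writing "0" or "1" to each CPU's online file. A hedged sketch; CPU 3 is an arbitrary example, root is required, and the boot CPU is often not removable::

  #include <stdio.h>

  static int set_online(int cpu, int online)
  {
      char path[64];
      FILE *f;

      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu%d/online", cpu);
      f = fopen(path, "w");
      if (!f) {
          perror(path);
          return -1;
      }
      fprintf(f, "%d", online);
      fclose(f);
      return 0;
  }

  int main(void)
  {
      if (set_online(3, 0) == 0)   /* force CPU 3 offline... */
          set_online(3, 1);        /* ...then bring it back online */
      return 0;
  }
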
/Documentation/scheduler/
sched-energy.rst
9 the impact of its decisions on the energy consumed by CPUs. EAS relies on an
10 Energy Model (EM) of the CPUs to select an energy efficient CPU for each task,
59 In short, EAS changes the way CFS tasks are assigned to CPUs. When it is time
64 knowledge about the platform's topology, which includes the 'capacity' of CPUs,
72 differentiate CPUs with different computing throughput. The 'capacity' of a CPU
76 tasks and CPUs computed by the Per-Entity Load Tracking (PELT) mechanism. Thanks
79 energy trade-offs. The capacity of CPUs is provided via arch-specific code
99 Let us consider a platform with 12 CPUs, split into 3 performance domains
102 CPUs: 0 1 2 3 4 5 6 7 8 9 10 11
108 containing 6 CPUs. The two root domains are denoted rd1 and rd2 in the
[all …]
sched-domains.rst
10 Each scheduling domain spans a number of CPUs (stored in the ->span field).
13 i. The top domain for each CPU will generally span all CPUs in the system
15 CPUs will never be given tasks to run unless the CPUs allowed mask is
17 CPUs".
25 CPUs as they contain read-only data after they have been set up.
29 load of each of its member CPUs, and only when the load of a group becomes
47 If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
60 In SMP, the parent of the base domain will span all physical CPUs in the
78 CPUs using cpu_attach_domain.
/Documentation/power/
suspend-and-cpuhotplug.rst
27 [suspend/hotplug flowchart row: |tasks| |cpus| … |cpus| |tasks|]
59 online CPUs
75 Note down these cpus in …
100 Call _cpu_up() [for all those cpus in the frozen_cpus mask, in a loop]
158 the non-boot CPUs are offlined or onlined, the _cpu_*() functions are called
176 update on the CPUs, as discussed below:
183 a. When all the CPUs are identical:
186 to apply the same microcode revision to each of the CPUs.
191 all CPUs, in order to handle case 'b' described below.
194 b. When some of the CPUs are different than the rest:
[all …]
energy-model.rst
2 Energy Model of CPUs
9 the power consumed by CPUs at various performance levels, and the kernel
12 The source of the information about the power consumed by CPUs can vary greatly
51 system. A performance domain is a group of CPUs whose performance is scaled
53 policies. All CPUs in a performance domain are required to have the same
54 micro-architecture. CPUs in different performance domains can have different
76 Drivers must specify the CPUs of the performance domains using the cpumask
144 em_register_perf_domain(policy->cpus, nr_opp, &em_cb);
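
For context around the em_register_perf_domain() call quoted above, here is a hedged kernel-side sketch of a driver registering an Energy Model. The EM_DATA_CB() helper and the active_power() callback signature match the era of this API (later kernels replaced it with em_dev_register_perf_domain()), and every power/frequency value below is a made-up placeholder::

  #include <linux/cpufreq.h>
  #include <linux/energy_model.h>

  /* Hypothetical power estimate: both numbers are placeholders. */
  static int est_power(unsigned long *mW, unsigned long *KHz, int cpu)
  {
      *KHz = 1000000;        /* pretend the only OPP is 1 GHz... */
      *mW  = 450;            /* ...costing 450 mW on this CPU */
      return 0;
  }

  static struct em_data_callback em_cb = EM_DATA_CB(est_power);

  static void register_em(struct cpufreq_policy *policy, int nr_opp)
  {
      /* Same call as the one quoted from energy-model.rst above. */
      em_register_perf_domain(policy->cpus, nr_opp, &em_cb);
  }
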
/Documentation/devicetree/bindings/mips/brcm/
brcm,bmips.txt
1 * Broadcom MIPS (BMIPS) CPUs
7 - mips-hpt-frequency: This is common to all CPUs in the system so it lives
8 under the "cpus" node.
/Documentation/devicetree/bindings/arm/cpu-enable-method/
marvell,berlin-smp
6 CPUs. To apply to all CPUs, a single "marvell,berlin-smp" enable method should
7 be defined in the "cpus" node.
11 Compatible CPUs: "marvell,pj4b" and "arm,cortex-a9"
20 cpus {
al,alpine-smp
6 enabling secondary CPUs. To apply to all CPUs, a single
8 "cpus" node.
12 Compatible CPUs: "arm,cortex-a15"
33 system fabric, like powering CPUs off.
42 cpus {
nuvoton,npcm750-smp
5 To apply to all CPUs, a single "nuvoton,npcm750-smp" enable method should be
6 defined in the "cpus" node.
10 Compatible CPUs: "arm,cortex-a9"
19 cpus {
/Documentation/arm/
cluster-pm-race-avoidance.rst
18 In a system containing multiple CPUs, it is desirable to have the
19 ability to turn off individual CPUs when the system is idle, reducing
22 In a system containing multiple clusters of CPUs, it is also desirable
27 of independently running CPUs, while the OS continues to run. This
92 CPUs in the cluster simultaneously modifying the state. The cluster-
104 referred to as a "CPU". CPUs are assumed to be single-threaded:
107 This means that CPUs fit the basic model closely.
216 A cluster is a group of connected CPUs with some common resources.
217 Because a cluster contains multiple CPUs, it can be doing multiple
272 which exact CPUs within the cluster play these roles. This must
[all …]
vlocks.rst
9 These are intended to be used to coordinate critical activity among CPUs
68 The currently_voting[] array provides a way for the CPUs to determine
77 As long as the last_vote variable is globally visible to all CPUs, it
94 number of CPUs.
97 if necessary, as in the following hypothetical example for 4096 CPUs::
127 of CPUs potentially contending the lock is small enough). This
157 If there are too many CPUs to read the currently_voting array in
169 * vlocks are currently only used to coordinate between CPUs which are
175 memory unless all CPUs contending the lock are cache-coherent, due
177 CPUs. (Though if all the CPUs are cache-coherent, you should be
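
The election scheme that vlocks.rst builds from currently_voting[] and last_vote can be modeled in portable C11 atomics. This is only an illustrative model: the real implementation is ARM assembly with explicit barriers and the cache restrictions noted above; here sequentially consistent atomics stand in for the barriers, and NR_CPUS/VOTE_NONE are local stand-ins::

  #include <stdatomic.h>
  #include <stdbool.h>

  #define NR_CPUS   4
  #define VOTE_NONE (-1)

  static atomic_int currently_voting[NR_CPUS];
  static atomic_int last_vote = VOTE_NONE;

  /* One-shot election: returns true on exactly one contending CPU. */
  static bool vlock_trylock(int cpu)
  {
      atomic_store(&currently_voting[cpu], 1);

      if (atomic_load(&last_vote) != VOTE_NONE) {
          /* Someone already cast a vote; we cannot win. */
          atomic_store(&currently_voting[cpu], 0);
          return false;
      }

      atomic_store(&last_vote, cpu);          /* propose ourselves */
      atomic_store(&currently_voting[cpu], 0);

      /* Wait until every CPU has finished voting... */
      for (int i = 0; i < NR_CPUS; i++)
          while (atomic_load(&currently_voting[i]))
              ;   /* spin */

      /* ...then the surviving value of last_vote names the winner. */
      return atomic_load(&last_vote) == cpu;
  }
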
/Documentation/networking/
scaling.rst
29 queues to distribute processing among CPUs. The NIC distributes packets by
32 queue, which in turn can be processed by separate CPUs. This mechanism is
61 one for each memory domain, where a memory domain is a set of CPUs that
83 to spread receive interrupts between CPUs. To manually adjust the IRQ
93 interrupt processing forms a bottleneck. Spreading load between CPUs
95 is to allocate as many queues as there are CPUs in the system (or the
140 Each receive hardware queue has an associated list of CPUs to which
145 the end of the bottom half routine, IPIs are sent to any CPUs for which
156 explicitly configured. The list of CPUs to which RPS may forward traffic
161 This file implements a bitmap of CPUs. RPS is disabled when it is zero
[all …]
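
The rps_cpus bitmap described above can be written from C as well as from the shell. A sketch, with "eth0", queue rx-0, and mask 0xc (CPUs 2 and 3) as arbitrary example values; root is required::

  #include <stdio.h>

  int main(void)
  {
      const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
      FILE *f = fopen(path, "w");

      if (!f) {
          perror(path);
          return 1;
      }
      fprintf(f, "c\n");   /* hex bitmap: bits 2 and 3 -> CPUs 2,3 */
      fclose(f);
      return 0;
  }
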
/Documentation/devicetree/bindings/arm/
arm-dsu-pmu.txt
18 - cpus : List of phandles for the CPUs connected to this DSU instance.
26 cpus = <&cpu_0>, <&cpu_1>;
cpu-capacity.txt
2 ARM CPUs capacity bindings
9 ARM systems may be configured to have cpus with different power/performance
18 CPU capacity is a number that provides the scheduler information about CPUs
20 (e.g., ARM big.LITTLE systems) or maximum frequency at which CPUs can run
23 capture a first-order approximation of the relative performance of CPUs.
68 cpus {
196 cpus 0,1@1GHz, cpus 2,3@500MHz):
200 cpus {
237 [1] ARM Linux Kernel documentation - CPUs bindings
238 Documentation/devicetree/bindings/arm/cpus.yaml
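
On arm/arm64 kernels, the capacity the kernel derives from these bindings can be read back through sysfs. A small sketch, assuming the /sys/devices/system/cpu/cpuN/cpu_capacity attribute is present::

  #include <stdio.h>

  int main(void)
  {
      char buf[64];
      FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpu_capacity", "r");

      if (f && fgets(buf, sizeof(buf), f))
          printf("cpu0 capacity: %s", buf);   /* 1024 = most capable CPU */
      if (f)
          fclose(f);
      return 0;
  }
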
pmu.yaml
46 # Don't know how many CPUs, so no constraints to specify
57 nodes corresponding to the set of CPUs which have
81 not valid for non-ARMv7 CPUs or ARMv7 CPUs booting Linux
/Documentation/devicetree/bindings/cpufreq/
cpufreq-spear.txt
6 which share a clock across all CPUs.
17 /cpus/cpu@0.
21 cpus {
/Documentation/
IRQ-affinity.txt
11 which target CPUs are permitted for a given IRQ source. It's a bitmask
12 (smp_affinity) or cpu list (smp_affinity_list) of allowed CPUs. It's not
13 allowed to turn off all CPUs, and if an IRQ controller does not support
14 IRQ affinity then the value will not change from the default of all cpus.
63 Here is an example of limiting that same irq (44) to cpus 1024 to 1031::
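
The shell example itself is truncated in this excerpt; a C equivalent writes the same CPU list to procfs (root required, and the write fails if the IRQ controller does not support affinity)::

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/irq/44/smp_affinity_list", "w");

      if (!f) {
          perror("smp_affinity_list");
          return 1;
      }
      fprintf(f, "1024-1031\n");   /* the document's example CPU range */
      fclose(f);
      return 0;
  }
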
/Documentation/core-api/
cpu_hotplug.rst
20 Such advances require CPUs available to a kernel to be removed either for
33 Restrict boot time CPUs to *n*. Say if you have four CPUs, using
35 other CPUs later online.
38 Restrict the total amount of CPUs the kernel will support. If the number
39 supplied here is lower than the number of physically available CPUs, then
40 those CPUs cannot be brought online later.
43 Use this to limit hotpluggable CPUs. This option sets
69 Bitmap of possible CPUs that can ever be available in the
71 that aren't designed to grow/shrink as CPUs are made available or removed.
77 Bitmap of all CPUs currently online. It's set in ``__cpu_up()``
[all …]
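
The cpu_possible_mask/cpu_online_mask bitmaps quoted above are exported as CPU-list files under /sys/devices/system/cpu/. A short sketch that dumps them::

  #include <stdio.h>

  static void show(const char *name)
  {
      char path[80], buf[256];
      FILE *f;

      snprintf(path, sizeof(path), "/sys/devices/system/cpu/%s", name);
      f = fopen(path, "r");
      if (f && fgets(buf, sizeof(buf), f))
          printf("%-8s %s", name, buf);
      if (f)
          fclose(f);
  }

  int main(void)
  {
      show("possible");
      show("online");
      show("offline");
      return 0;
  }
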
/Documentation/devicetree/bindings/openrisc/opencores/
or1ksim.txt
15 A "cpus" node is required. Required properties:
19 be probed via CPS, it is not necessary to specify secondary CPUs. Required
25 cpus {
/Documentation/admin-guide/pm/
cpufreq.rst
37 cases, there are hardware interfaces allowing CPUs to be switched between
42 capacity, so as to decide which P-states to put the CPUs into. Of course, since
90 CPUs. That is, for example, the same register (or set of registers) is used to
91 control the P-state of multiple CPUs at the same time and writing to it affects
92 all of those CPUs simultaneously.
94 Sets of CPUs sharing hardware P-state control interfaces are represented by
100 every CPU in the system, including CPUs that are currently offline. If multiple
101 CPUs share the same hardware P-state control interface, all of the pointers
113 driver is expected to be able to handle all CPUs in the system.
116 CPUs are registered earlier, the driver core invokes the ``CPUFreq`` core to
[all …]
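
The policy objects described above surface per CPU in sysfs; related_cpus lists every CPU sharing cpu0's hardware P-state control interface, including offline ones, while affected_cpus lists only the online ones. A sketch, assuming the standard cpufreq sysfs layout::

  #include <stdio.h>

  static void show(const char *attr)
  {
      char path[96], buf[256];
      FILE *f;

      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu0/cpufreq/%s", attr);
      f = fopen(path, "r");
      if (f && fgets(buf, sizeof(buf), f))
          printf("%-16s %s", attr, buf);
      if (f)
          fclose(f);
  }

  int main(void)
  {
      show("scaling_governor");
      show("related_cpus");
      show("affected_cpus");
      return 0;
  }
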
/Documentation/devicetree/bindings/mips/img/
pistachio.txt
10 A "cpus" node is required. Required properties:
14 be probed via CPS, it is not necessary to specify secondary CPUs. Required
22 cpus {
