Searched full-text for "cpus" (results 1-25 of 382, sorted by relevance)
/Documentation/timers/
  no_hz.rst:
    19: 2. Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
    23: 3. Omit scheduling-clock ticks on CPUs that are either idle or that
    65: Omit Scheduling-Clock Ticks For Idle CPUs
    78: scheduling-clock interrupts to idle CPUs, which is critically important
    86: idle CPUs. That said, dyntick-idle mode is not free:
    104: Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
    109: Note that omitting scheduling-clock ticks for CPUs with only one runnable
    110: task implies also omitting them for idle CPUs.
    113: sending scheduling-clock interrupts to CPUs with a single runnable task,
    114: and such CPUs are said to be "adaptive-ticks CPUs". This is important
    [all …]

/Documentation/devicetree/bindings/csky/
  cpus.txt:
    5: The device tree allows describing the layout of CPUs in a system through
    6: the "cpus" node, which in turn contains a number of subnodes (i.e. "cpu")
    9: Only SMP systems need to care about the cpus node; a single processor
    10: needn't define a cpus node at all.
    13: cpus and cpu node bindings definition
    16: - cpus node
    20: The node name must be "cpus".
    22: A cpus node must define the following properties:
    59: cpus {

/Documentation/arch/arm64/
  cpu-hotplug.rst:
    9: CPUs online/offline using PSCI. This document is about ACPI firmware allowing
    10: CPUs that were not available during boot to be added to the system later.
    15: CPU Hotplug on physical systems - CPUs not present at boot
    24: In the arm64 world CPUs are not a single device but a slice of the system.
    25: There are no systems that support the physical addition (or removal) of CPUs
    29: e.g. New CPUs come with new caches, but the platform's cache topology is
    30: described in a static table, the PPTT. How caches are shared between CPUs is
    42: CPU Hotplug on virtual systems - CPUs not enabled at boot
    50: CPU Hotplug as all resources are described as ``present``, but CPUs may be
    53: single CPU, and additional CPUs are added once a cloud orchestrator deploys
    [all …]

  asymmetric-32bit.rst:
    16: of the CPUs are capable of executing 32-bit user applications. On such
    56: The subset of CPUs capable of running 32-bit tasks is described in
    60: **Note:** CPUs are advertised by this file as they are detected and so
    61: late-onlining of 32-bit-capable CPUs can result in the file contents
    62: being modified by the kernel at runtime. Once advertised, CPUs are never
    71: affinity mask contains 64-bit-only CPUs. In this situation, the kernel
    88: of all 32-bit-capable CPUs of which the kernel is aware.
    98: the 32-bit-capable CPUs of the requested affinity mask. On success, the
    112: 64-bit-only CPUs and admission control is enabled. Concurrent offlining
    113: of 32-bit-capable CPUs may still necessitate the procedure described in
    [all …]

  booting.rst:
    193: be programmed with a consistent value on all CPUs. If entering the
    199: All CPUs to be booted by the kernel must be part of the same coherency
    214: - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
    229: all CPUs the kernel is executing on, and must stay constant
    252: For CPUs with pointer authentication functionality:
    264: For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:
    282: For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:
    288: For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:
    294: For CPUs with support for HCRX_EL2 (FEAT_HCX) present:
    300: For CPUs with Advanced SIMD and floating point support:
    [all …]

/Documentation/admin-guide/cgroup-v1/
  cpusets.rst:
    31: 2.2 Adding/removing cpus
    43: Cpusets provide a mechanism for assigning a set of CPUs and Memory
    57: include CPUs in its CPU affinity mask, and using the mbind(2) and
    60: CPUs or Memory Nodes not in that cpuset. The scheduler will not
    67: cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
    75: The management of large computer systems, with many processors (CPUs),
    113: Cpusets provide a Linux kernel mechanism to constrain which CPUs and
    117: CPUs a task may be scheduled (sched_setaffinity) and on which Memory
    122: - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
    126: - Calls to sched_setaffinity are filtered to just those CPUs
    [all …]

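The last two matches describe how sched_setaffinity(2) interacts with cpusets: the requested mask is filtered down to the CPUs the task's cpuset allows. A minimal C sketch of that call (the CPU numbers 0 and 1 are arbitrary illustrations, not something the document prescribes):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* ask for CPUs 0 and 1 ... */
        CPU_SET(1, &set);

        /* ... the kernel filters this against the task's cpuset */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }

        /* read back the mask that was actually granted */
        if (sched_getaffinity(0, sizeof(set), &set) == 0)
            printf("allowed on %d CPUs\n", CPU_COUNT(&set));
        return 0;
    }

If the requested mask has no overlap with the cpuset's allowed CPUs, the call fails rather than granting an empty mask, which is why reading the mask back is the reliable way to see what was applied.
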
/Documentation/admin-guide/
  kernel-per-CPU-kthreads.rst:
    13: - Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs.
    15: - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
    18: of CPUs.
    21: call to bind tasks to sets of CPUs.
    50: 2. Do all eHCA-Infiniband-related work on other CPUs, including
    53: provisioned only on selected CPUs.
    101: with multiple CPUs, force them all offline before bringing the
    102: first one back online. Once you have onlined the CPUs in question,
    103: do not offline any other CPUs, because doing so could force the
    104: timer back onto one of the CPUs in question.
    [all …]

  cputopology.rst:
    61: offline: CPUs that are not online because they have been
    62: HOTPLUGGED off or exceed the limit of CPUs allowed by the
    64: [~cpu_online_mask + cpus >= NR_CPUS]
    66: online: CPUs that are online and being scheduled [cpu_online_mask]
    68: possible: CPUs that have been allocated resources and can be
    71: present: CPUs that have been identified as being present in the
    78: In this example, there are 64 CPUs in the system but cpus 32-63 exceed
    80: being 32. Note also that CPUs 2 and 4-31 are not online but could be
    90: started with possible_cpus=144. There are 4 CPUs in the system and cpu2

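The offline/online/possible/present masks excerpted from cputopology.rst are exposed as one-line text files under /sys/devices/system/cpu, so they can be inspected programmatically. A small sketch, assuming only that standard sysfs layout:

    #include <stdio.h>

    /* print one of the kernel's CPU masks, e.g. "0-31" or "2,4-31,32-63" */
    static void show_mask(const char *name)
    {
        char path[128], buf[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/devices/system/cpu/%s", name);
        f = fopen(path, "r");
        if (f && fgets(buf, sizeof(buf), f))
            printf("%-9s %s", name, buf);   /* buf keeps its newline */
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show_mask("online");
        show_mask("offline");
        show_mask("possible");
        show_mask("present");
        return 0;
    }
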
/Documentation/power/
  suspend-and-cpuhotplug.rst:
    27: |tasks | | cpus | | | | cpus | |tasks|
    59: online CPUs
    75: Note down these cpus in | P
    100: | Call _cpu_up() [for all those cpus in the frozen_cpus mask, in a loop]
    158: the non-boot CPUs are offlined or onlined, the _cpu_*() functions are called
    177: update on the CPUs, as discussed below:
    184: a. When all the CPUs are identical:
    187: to apply the same microcode revision to each of the CPUs.
    192: all CPUs, in order to handle case 'b' described below.
    195: b. When some of the CPUs are different than the rest:
    [all …]

/Documentation/devicetree/bindings/clock/
  allwinner,sun9i-a80-cpus-clk.yaml:
    4: $id: http://devicetree.org/schemas/clock/allwinner,sun9i-a80-cpus-clk.yaml#
    7: title: Allwinner A80 CPUS Clock
    20: const: allwinner,sun9i-a80-cpus-clk
    45: compatible = "allwinner,sun9i-a80-cpus-clk";
    49: clock-output-names = "cpus";

/Documentation/scheduler/
  sched-energy.rst:
    9: the impact of its decisions on the energy consumed by CPUs. EAS relies on an
    10: Energy Model (EM) of the CPUs to select an energy efficient CPU for each task,
    59: In short, EAS changes the way CFS tasks are assigned to CPUs. When it is time
    64: knowledge about the platform's topology, which include the 'capacity' of CPUs,
    72: differentiate CPUs with different computing throughput. The 'capacity' of a CPU
    76: tasks and CPUs computed by the Per-Entity Load Tracking (PELT) mechanism. Thanks
    79: energy trade-offs. The capacity of CPUs is provided via arch-specific code
    99: Let us consider a platform with 12 CPUs, split in 3 performance domains
    102: CPUs: 0 1 2 3 4 5 6 7 8 9 10 11
    108: containing 6 CPUs. The two root domains are denoted rd1 and rd2 in the
    [all …]

  sched-domains.rst:
    10: Each scheduling domain spans a number of CPUs (stored in the ->span field).
    13: i. The top domain for each CPU will generally span all CPUs in the system
    15: CPUs will never be given tasks to run unless the CPUs allowed mask is
    17: CPUs".
    23: to which the domain belongs. Groups may be shared among CPUs as they contain
    27: shared between CPUs.
    31: load of each of its member CPUs, and only when the load of a group becomes
    49: If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
    62: In SMP, the parent of the base domain will span all physical CPUs in the

/Documentation/devicetree/bindings/loongarch/
  cpus.yaml:
    4: $id: http://devicetree.org/schemas/loongarch/cpus.yaml#
    7: title: LoongArch CPUs
    14: it describes the layout of CPUs in a system through the "cpus" node.
    42: cpus {

/Documentation/devicetree/bindings/arm/cpu-enable-method/
  marvell,berlin-smp:
    6: CPUs. To apply to all CPUs, a single "marvell,berlin-smp" enable method should
    7: be defined in the "cpus" node.
    11: Compatible CPUs: "marvell,pj4b" and "arm,cortex-a9"
    20: cpus {

  al,alpine-smp:
    6: enabling secondary CPUs. To apply to all CPUs, a single
    8: "cpus" node.
    12: Compatible CPUs: "arm,cortex-a15"
    32: cpus {

  nuvoton,npcm750-smp:
    5: To apply to all CPUs, a single "nuvoton,npcm750-smp" enable method should be
    6: defined in the "cpus" node.
    10: Compatible CPUs: "arm,cortex-a9"
    19: cpus {

/Documentation/devicetree/bindings/mips/
  cpus.yaml:
    4: $id: http://devicetree.org/schemas/mips/cpus.yaml#
    7: title: MIPS CPUs
    14: The device tree allows describing the layout of CPUs in a system through
    15: the "cpus" node, which in turn contains a number of subnodes (i.e. "cpu")
    75: cpus {
    96: cpus {

/Documentation/devicetree/bindings/mips/brcm/
  soc.yaml:
    42: cpus:
    54: This is common to all CPUs in the system so it lives
    55: under the "cpus" node.
    71: $ref: /schemas/mips/cpus.yaml#
    87: cpus:
    101: cpus {

/Documentation/arch/arm/
  cluster-pm-race-avoidance.rst:
    18: In a system containing multiple CPUs, it is desirable to have the
    19: ability to turn off individual CPUs when the system is idle, reducing
    22: In a system containing multiple clusters of CPUs, it is also desirable
    27: of independently running CPUs, while the OS continues to run. This
    92: CPUs in the cluster simultaneously modifying the state. The cluster-
    104: referred to as a "CPU". CPUs are assumed to be single-threaded:
    107: This means that CPUs fit the basic model closely.
    216: A cluster is a group of connected CPUs with some common resources.
    217: Because a cluster contains multiple CPUs, it can be doing multiple
    272: which exact CPUs within the cluster play these roles. This must
    [all …]

  vlocks.rst:
    9: These are intended to be used to coordinate critical activity among CPUs
    68: The currently_voting[] array provides a way for the CPUs to determine
    77: As long as the last_vote variable is globally visible to all CPUs, it
    94: number of CPUs.
    97: if necessary, as in the following hypothetical example for 4096 CPUs::
    127: of CPUs potentially contending the lock is small enough). This
    157: If there are too many CPUs to read the currently_voting array in
    169: * vlocks are currently only used to coordinate between CPUs which are
    175: memory unless all CPUs contending the lock are cache-coherent, due
    177: CPUs. (Though if all the CPUs are cache-coherent, you should be

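For orientation, the currently_voting[]/last_vote scheme these vlocks.rst excerpts refer to amounts to roughly the following C rendering (a simplified sketch: the memory barriers and cache maintenance the excerpts caution about are deliberately omitted, and NR_CPUS is a stand-in constant):

    #define NR_CPUS 4            /* stand-in value for illustration */
    #define NO_VOTE (-1)

    static volatile int currently_voting[NR_CPUS];
    static volatile int last_vote = NO_VOTE;

    /* Returns nonzero if this CPU won the election; barriers omitted. */
    static int vlock_trylock(int this_cpu)
    {
        int i;

        currently_voting[this_cpu] = 1;
        if (last_vote != NO_VOTE) {        /* somebody already volunteered */
            currently_voting[this_cpu] = 0;
            return 0;
        }

        last_vote = this_cpu;              /* cast our vote */
        currently_voting[this_cpu] = 0;

        /* wait until every CPU has finished voting */
        for (i = 0; i < NR_CPUS; i++)
            while (currently_voting[i])
                ;

        return last_vote == this_cpu;      /* the surviving vote wins */
    }

The point of the two-phase structure is that last_vote settles to a single winner once nobody is voting any more, which is exactly the property the excerpts discuss.
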
/Documentation/ABI/stable/
  sysfs-devices-system-cpu:
    6: /sys/devices/system/cpu/cpuN/dscr on all CPUs.
    64: Description: internal kernel map of CPUs within the same core.
    69: Description: human-readable list of CPUs within the same core.
    75: Description: internal kernel map of the CPUs sharing the same physical_package_id.
    80: Description: human-readable list of CPUs sharing the same physical_package_id.
    86: Description: internal kernel map of CPUs within the same die.
    94: Description: human-readable list of CPUs within the same die.
    99: Description: internal kernel map of CPUs within the same cluster.
    103: Description: human-readable list of CPUs within the same cluster.

/Documentation/devicetree/bindings/cpufreq/
  cpufreq-spear.txt:
    6: which share clock across all CPUs.
    17: /cpus/cpu@0.
    21: cpus {

/Documentation/networking/
  scaling.rst:
    29: queues to distribute processing among CPUs. The NIC distributes packets by
    32: queue, which in turn can be processed by separate CPUs. This mechanism is
    76: one for each memory domain, where a memory domain is a set of CPUs that
    98: to spread receive interrupts between CPUs. To manually adjust the IRQ
    108: interrupt processing forms a bottleneck. Spreading load between CPUs
    110: is to allocate as many queues as there are CPUs in the system (or the
    197: Each receive hardware queue has an associated list of CPUs to which
    202: the end of the bottom half routine, IPIs are sent to any CPUs for which
    213: explicitly configured. The list of CPUs to which RPS may forward traffic
    218: This file implements a bitmap of CPUs. RPS is disabled when it is zero
    [all …]

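The per-queue CPU bitmap described in the final matches lives under each receive queue's sysfs directory and holds a hex CPU mask. A hedged C sketch of setting it (the eth0/rx-0 names and the 0xf mask are illustrative assumptions; run as root):

    #include <stdio.h>

    int main(void)
    {
        /* eth0 and rx-0 are example names; adjust for your NIC and queue */
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        /* hex bitmap of CPUs: 0xf = CPUs 0-3 may do RPS for this queue */
        fprintf(f, "f\n");
        fclose(f);
        return 0;
    }

As the excerpt notes, writing zero disables RPS for the queue.
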
/Documentation/devicetree/bindings/arm/
  nvidia,tegra194-ccplex.yaml:
    16: symmetric cores. Compatible string in "cpus" node represents the CPU
    21: const: cpus
    31: operating point data for all CPUs.
    37: cpus {

/Documentation/core-api/irq/
  irq-affinity.rst:
    11: which target CPUs are permitted for a given IRQ source. It's a bitmask
    12: (smp_affinity) or cpu list (smp_affinity_list) of allowed CPUs. It's not
    13: allowed to turn off all CPUs, and if an IRQ controller does not support
    14: IRQ affinity then the value will not change from the default of all cpus.
    63: Here is an example of limiting that same irq (44) to cpus 1024 to 1031::

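Match 63 points at an example of restricting IRQ 44 to CPUs 1024-1031; that boils down to a single write to procfs. A minimal sketch using the smp_affinity_list (cpu-list) interface named in matches 11-14 (requires root; the IRQ number and CPU range are taken from the quoted example):

    #include <stdio.h>

    int main(void)
    {
        /* cpu-list form; the bitmask form lives in .../smp_affinity instead */
        FILE *f = fopen("/proc/irq/44/smp_affinity_list", "w");

        if (!f) {
            perror("/proc/irq/44/smp_affinity_list");
            return 1;
        }
        fprintf(f, "1024-1031\n");
        fclose(f);
        return 0;
    }
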