CFS Bandwidth Control
=====================

[ This document only discusses CPU bandwidth control for SCHED_NORMAL.
  The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst ]

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned, any additional requests for quota will result in those threads being
throttled until the quota is refreshed at the next period boundary.

A group's unassigned quota is globally tracked. As threads consume bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".
Management
----------
Quota and period are managed within the cpu subsystem via cgroupfs.

cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
cpu.cfs_period_us: the length of a period (in microseconds)
cpu.stat: exports throttling statistics [explained further below]

The default values are::

	cpu.cfs_period_us=100ms
	cpu.cfs_quota_us=-1

A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.
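As a minimal sketch of the management flow above, the script below computes a
quota for a desired CPU share and writes it into a hypothetical cgroup-v1
group named "demo"; the mount point /sys/fs/cgroup/cpu, the group name, and
the 1.5-CPU target are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch: cap a hypothetical v1 cgroup "demo" at 1.5 CPUs per 100ms period.
# Assumes the v1 cpu controller is mounted at /sys/fs/cgroup/cpu.
CG=/sys/fs/cgroup/cpu/demo
PERIOD_US=100000              # 100ms period
CPUS_X100=150                 # 1.5 CPUs, scaled by 100 (sh has no floats)
QUOTA_US=$((PERIOD_US * CPUS_X100 / 100))
echo "quota=${QUOTA_US}us period=${PERIOD_US}us"

# The writes below need root and a mounted v1 hierarchy; skipped otherwise.
if [ -d "$CG" ] || mkdir -p "$CG" 2>/dev/null; then
	echo "$PERIOD_US" > "$CG/cpu.cfs_period_us"
	echo "$QUOTA_US"  > "$CG/cpu.cfs_quota_us"
fi
```

Writing -1 back to cpu.cfs_quota_us would return the group to the
unconstrained state described above.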
System wide settings
--------------------
For efficiency, run-time is transferred between the global pool and CPU local
"silos" in a batch fashion; the amount transferred in each such update is the
"slice". Larger slice values reduce transfer overheads, while smaller values
allow for more fine-grained consumption.
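The slice size is exposed as a system-wide procfs tunable
(sched_cfs_bandwidth_slice_us, default 5ms); a minimal sketch of inspecting
it, guarded in case the tunable is absent on the running kernel:

```shell
#!/bin/sh
# Sketch: read the CFS bandwidth slice tunable (default 5000us = 5ms).
f=/proc/sys/kernel/sched_cfs_bandwidth_slice_us
if [ -r "$f" ]; then
	echo "current slice: $(cat "$f")us"
else
	echo "current slice: unknown (tunable not present)"
fi
```

Writing to the same path (as root) adjusts the batch size system-wide.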
Statistics
----------
A group's bandwidth statistics are exported via 3 fields in cpu.stat.

cpu.stat:

- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.

This interface is read-only.
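A common use of these fields is computing how often a group is hitting its
limit. The sketch below parses cpu.stat-style output; the sample values are
made up for illustration rather than read from a live cgroup:

```shell
#!/bin/sh
# Sketch: fraction of enforcement periods in which the group was throttled.
# The stat text below mimics cpu.stat; real use would `cat` the file.
stat='nr_periods 1000
nr_throttled 250
throttled_time 3500000000'
periods=$(echo "$stat" | awk '/^nr_periods/ {print $2}')
throttled=$(echo "$stat" | awk '/^nr_throttled/ {print $2}')
echo "throttled in $((100 * throttled / periods))% of periods"
# -> throttled in 25% of periods
```

A persistently high ratio suggests the quota is too small for the workload
(or the period too short).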
Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy.
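The rule can be illustrated numerically: each child quota c_i must fit within
the parent's C, but the children may sum past C. The values below are made up
for the sketch:

```shell
#!/bin/sh
# Sketch: per-child attainability (max(c_i) <= C) with aggregate
# over-subscription allowed. All quotas in microseconds per period.
C=200000                    # parent quota
children="150000 150000"    # each child <= C, but sum > C: still allowed
ok=1; sum=0
for c in $children; do
	[ "$c" -le "$C" ] || ok=0
	sum=$((sum + c))
done
echo "per-child attainable: $ok (sum=${sum}us vs parent C=${C}us)"
# -> per-child attainable: 1 (sum=300000us vs parent C=200000us)
```

Since the sum exceeds C, the children will contend: the parent's quota, not
the sum of the children's, bounds what the subtree can actually consume.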
CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However, all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable; the 1ms floor is set at compile time by min_cfs_rq_runtime.

The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.

For cgroup cpu constrained applications that are cpu limited this is a
relatively moot point because they will naturally consume the entirety of their
quota as well as the entirety of each cpu-local slice in each period. As a
result, nr_throttled is expected to roughly track nr_periods for such groups.

For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu, or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. The burst amount is not transferred between cores, so the
mechanism still strictly limits the task group to its quota on average, albeit
over a longer window than a single period, and it limits the burst ability to
no more than 1ms per cpu. This provides better, more predictable behavior for
highly threaded applications with small quota limits on high core count
machines. It also eliminates the propensity to throttle these applications
while they are simultaneously using less than their quota amounts of cpu.
Another way to say this is that by allowing the unused portion of a slice to
remain valid across periods we have decreased the possibility of wastefully
expiring quota on cpu-local silos that don't need a full slice's amount of
cpu time.
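The worst-case single-period burst described above is easy to bound: 1ms
(min_cfs_rq_runtime) per cpu the group previously ran on. A sketch, with the
cpu count chosen arbitrarily for illustration:

```shell
#!/bin/sh
# Sketch: upper bound on one-period burst from non-expiring cpu-local slices.
NCPUS=64                     # assumed machine size, for illustration
MIN_CFS_RQ_RUNTIME_US=1000   # 1ms per-cpu floor (compile-time default)
echo "max burst: $((NCPUS * MIN_CFS_RQ_RUNTIME_US))us over quota"
# -> max burst: 64000us over quota
```

Even on a 64-core machine the burst is bounded at 64ms total, and only occurs
when prior periods left unused slice stranded on those cpus.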
The interaction between cpu-bound and non-cpu-bound-interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU, it is theoretically possible that the non-cpu bound application
will use up to 1ms of additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application runs, as both will be runnable with remaining quota.
Examples
--------
1. Limit a group to 1 CPU worth of runtime.

   If period is 250ms and quota is also 250ms, the group will get
   1 CPU worth of runtime every 250ms::

	# echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
	# echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

	# echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
	# echo 500000 > cpu.cfs_period_us /* period = 500ms */

3. Limit a group to 20% of 1 CPU.

   With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU::

	# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
	# echo 50000 > cpu.cfs_period_us /* period = 50ms */
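The examples above all follow one formula: quota = (desired CPU share) x
period. A small helper, with the function name being an invention for this
sketch:

```shell
#!/bin/sh
# Sketch: compute cpu.cfs_quota_us from a target share and period.
# quota_us <percent-of-one-cpu> <period_us>
quota_us() {
	echo $(( $1 * $2 / 100 ))
}
quota_us 100 250000   # example 1: 1 CPU   -> 250000
quota_us 200 500000   # example 2: 2 CPUs  -> 1000000
quota_us 20  50000    # example 3: 20%     -> 10000
```

Note the trade-off visible in examples 2 and 3: a longer period permits more
burst within each interval, while a shorter period gives a tighter, more
consistent latency bound.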