
Lines Matching +full:cpu +full:- +full:capacity

32 worker thread per CPU and a single threaded (ST) wq had one worker
33 thread system-wide. A single MT wq needed to keep around the same
35 wq users over the years and with the number of CPU cores continuously
42 worker pool. An MT wq could provide only one execution context per CPU
60 * Use per-CPU unified worker pools shared by all wq to provide
83 called worker-pools.
85 The cmwq design differentiates between the user-facing workqueues that
87 which manages worker-pools and processes the queued work items.
89 There are two worker-pools, one for normal work items and the other
90 for high priority ones, for each possible CPU and some extra
91 worker-pools to serve work items queued on unbound workqueues - the
98 things like CPU locality, concurrency limits, priority and more. To
102 When a work item is queued to a workqueue, the target worker-pool is
104 and appended on the shared worklist of the worker-pool. For example,
106 be queued on the worklist of either normal or highpri worker-pool that
107 is associated with the CPU the issuer is running on.
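
To make the queueing step above concrete, here is a minimal sketch, not part of this document, in which the wq name, work item and function are all invented. It defines a work item and queues it on a bound wq; unless overridden, the item lands on the worklist of the normal worker-pool of the CPU ``queue_work()`` is called from::

    #include <linux/module.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;         /* hypothetical name */

    static void example_work_fn(struct work_struct *work)
    {
            /* runs in process context on a worker of the issuing CPU's pool */
    }
    static DECLARE_WORK(example_work, example_work_fn);

    static int __init example_init(void)
    {
            /* bound (per-CPU) wq: no WQ_UNBOUND in @flags */
            example_wq = alloc_workqueue("example_wq", 0, 0);
            if (!example_wq)
                    return -ENOMEM;

            /* appended to the worklist of this CPU's normal worker-pool */
            queue_work(example_wq, &example_work);
            return 0;
    }

    static void __exit example_exit(void)
    {
            destroy_workqueue(example_wq);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");
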
113 its full capacity.
115 Each worker-pool bound to an actual CPU implements concurrency
116 management by hooking into the scheduler. The worker-pool is notified
119 not expected to hog a CPU and consume many cycles. That means
122 workers on the CPU, the worker-pool doesn't start execution of a new
124 schedules a new worker so that the CPU doesn't sit idle while there
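
The concurrency rule described above can be modelled in a few lines. The following is a user-space toy model, not kernel code, and every name in it is invented: a new worker is woken only when the last runnable worker of a per-CPU pool goes to sleep while work items are still pending::

    #include <stdbool.h>
    #include <stdio.h>

    /* toy model of one per-CPU worker-pool */
    struct pool_model {
            int nr_runnable;        /* workers currently runnable on this CPU */
            int nr_pending;         /* work items waiting on the worklist */
    };

    /* a worker is about to sleep inside a work item */
    static bool should_wake_new_worker(struct pool_model *p)
    {
            p->nr_runnable--;
            /* last runnable worker sleeping: don't let the CPU sit idle */
            return p->nr_runnable == 0 && p->nr_pending > 0;
    }

    int main(void)
    {
            struct pool_model p = { .nr_runnable = 1, .nr_pending = 2 };

            if (should_wake_new_worker(&p))
                    printf("wake another worker\n");
            return 0;
    }
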
144 wq's that have a rescue-worker reserved for execution under memory
145 pressure. Else it is possible that the worker-pool deadlocks waiting
154 removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
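
For reference, a call along the following lines allocates such a wq; the name and the use of ``WQ_MEM_RECLAIM`` are only an example, not something this document prescribes. Passing ``WQ_MEM_RECLAIM`` reserves the rescue-worker mentioned above, which any wq that may run during memory reclaim should do::

    #include <linux/workqueue.h>

    static struct workqueue_struct *io_wq;               /* hypothetical wq */

    static int example_setup(void)
    {
            /* @name, @flags, @max_active; 0 selects the default max_active */
            io_wq = alloc_workqueue("example_io_wq", WQ_MEM_RECLAIM, 0);
            if (!io_wq)
                    return -ENOMEM;
            return 0;
    }
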
165 ---------
169 worker-pools which host workers which are not bound to any
170 specific CPU. This makes the wq behave as a simple execution
172 worker-pools try to start execution of work items as soon as
181 * Long running CPU intensive workloads which can be better
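
A hedged sketch of allocating such an unbound wq (the wq name and function are invented) for long running, CPU intensive work that is better left to the system scheduler::

    #include <linux/workqueue.h>

    static struct workqueue_struct *crunch_wq;           /* hypothetical */

    static int crunch_setup(void)
    {
            /* workers are not bound to any CPU; no concurrency management */
            crunch_wq = alloc_workqueue("example_crunch_wq", WQ_UNBOUND, 0);
            return crunch_wq ? 0 : -ENOMEM;
    }
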
196 worker-pool of the target CPU. Highpri worker-pools are
199 Note that normal and highpri worker-pools don't interact with
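
A corresponding sketch for a high priority wq (name invented); its work items are served from the highpri worker-pool of the target CPU by workers with an elevated nice level::

    #include <linux/workqueue.h>

    static struct workqueue_struct *urgent_wq;           /* hypothetical */

    static int urgent_setup(void)
    {
            urgent_wq = alloc_workqueue("example_urgent_wq", WQ_HIGHPRI, 0);
            return urgent_wq ? 0 : -ENOMEM;
    }
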
204 Work items of a CPU intensive wq do not contribute to the
205 concurrency level. In other words, runnable CPU intensive
207 worker-pool from starting execution. This is useful for bound
208 work items which are expected to hog CPU cycles so that their
211 Although CPU intensive work items don't contribute to the
214 non-CPU-intensive work items can delay execution of CPU
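
A sketch of a CPU intensive wq (name invented); while its items are running they do not count towards the pool's concurrency level, so other bound work items are not held back behind them::

    #include <linux/workqueue.h>

    static struct workqueue_struct *checksum_wq;         /* hypothetical */

    static int checksum_setup(void)
    {
            checksum_wq = alloc_workqueue("example_checksum_wq",
                                          WQ_CPU_INTENSIVE, 0);
            return checksum_wq ? 0 : -ENOMEM;
    }
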
220 workqueues are now non-reentrant - any work item is guaranteed to be
221 executed by at most one worker system-wide at any given time.
225 --------------
228 per CPU which can be assigned to the work items of a wq. For example,
230 executing at the same time per CPU.
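
As a sketch of the above (wq name invented), a ``@max_active`` of 16 allows at most 16 work items of the wq to execute at the same time per CPU::

    #include <linux/workqueue.h>

    static struct workqueue_struct *batch_wq;            /* hypothetical */

    static int batch_setup(void)
    {
            batch_wq = alloc_workqueue("example_batch_wq", 0, 16);
            return batch_wq ? 0 : -ENOMEM;
    }
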
248 unbound worker-pools and only one work item could be active at any given
253 be used to achieve system-wide ST behavior.
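
A sketch of the recommended way to get that ordering (name invented); an ordered wq executes at most one work item at any given time, in queueing order, system-wide::

    #include <linux/workqueue.h>

    static struct workqueue_struct *ordered_wq;          /* hypothetical */

    static int ordered_setup(void)
    {
            ordered_wq = alloc_ordered_workqueue("example_ordered_wq", 0);
            return ordered_wq ? 0 : -ENOMEM;
    }
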
262 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
263 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
264 again before finishing. w1 and w2 burn CPU for 5ms then sleep for
272 0 w0 starts and burns CPU
274 15 w0 wakes up and burns CPU
276 20 w1 starts and burns CPU
279 35 w2 starts and burns CPU
286 0 w0 starts and burns CPU
288 5 w1 starts and burns CPU
290 10 w2 starts and burns CPU
292 15 w0 wakes up and burns CPU
300 0 w0 starts and burns CPU
302 5 w1 starts and burns CPU
304 15 w0 wakes up and burns CPU
307 20 w2 starts and burns CPU
315 0 w0 starts and burns CPU
317 5 w1 and w2 start and burn CPU
320 15 w0 wakes up and burns CPU
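
If it helps to map the scenarios above onto code, work functions behaving like w0 and like w1/w2 could look roughly as follows (a sketch only; ``mdelay()`` busy-waits to "burn CPU" and ``msleep()`` sleeps)::

    #include <linux/delay.h>
    #include <linux/workqueue.h>

    static void w0_fn(struct work_struct *work)
    {
            mdelay(5);      /* burn CPU for 5ms */
            msleep(10);     /* sleep for 10ms */
            mdelay(5);      /* burn CPU for 5ms again */
    }

    static void w1_fn(struct work_struct *work)  /* w2 behaves the same */
    {
            mdelay(5);      /* burn CPU for 5ms */
            msleep(10);     /* sleep for 10ms */
    }
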
350 * Unless work items are expected to consume a huge amount of CPU
369 If kworkers are consuming too much CPU, there are two types
373 2. A single work item that consumes lots of CPU cycles
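
As an invented illustration of the second problem type, a single work item that spins for a long time shows up as a kworker monopolizing a CPU; the names below are hypothetical::

    #include <linux/delay.h>
    #include <linux/workqueue.h>

    static void hog_fn(struct work_struct *work)
    {
            /* burns roughly one second of CPU inside a kworker */
            mdelay(1000);
    }
    static DECLARE_WORK(hog_work, hog_fn);
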
398 .. kernel-doc:: include/linux/workqueue.h