32 worker thread per CPU and a single threaded (ST) wq had one worker
33 thread system-wide. A single MT wq needed to keep around the same
35 wq users over the years and with the number of CPU cores continuously
42 worker pool. An MT wq could provide only one execution context per CPU
60 * Use per-CPU unified worker pools shared by all wq to provide
85 worker-pools.
87 The cmwq design differentiates between the user-facing workqueues that
89 which manages worker-pools and processes the queued work items.
91 There are two worker-pools, one for normal work items and the other
92 for high priority ones, for each possible CPU and some extra
93 worker-pools to serve work items queued on unbound workqueues - the
98 Each per-CPU BH worker pool contains only one pseudo worker which represents
106 things like CPU locality, concurrency limits, priority and more. To
110 When a work item is queued to a workqueue, the target worker-pool is
112 and appended on the shared worklist of the worker-pool. For example,
114 be queued on the worklist of either normal or highpri worker-pool that
115 is associated with the CPU the issuer is running on.
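For example, a minimal sketch of this queueing path (the work item and its
function are made up for illustration; ``system_wq`` is one of the bound
system workqueues)::

  #include <linux/printk.h>
  #include <linux/smp.h>
  #include <linux/workqueue.h>

  static void my_work_fn(struct work_struct *work)
  {
          pr_info("ran on CPU %d\n", raw_smp_processor_id());
  }

  static DECLARE_WORK(my_work, my_work_fn);

  static void my_submit(void)
  {
          /*
           * Bound wq: the item is appended to the worklist of the normal
           * worker-pool of the CPU this thread is currently running on.
           * queue_work_on(cpu, system_wq, &my_work) would target a
           * specific CPU's worker-pool instead.
           */
          queue_work(system_wq, &my_work);
  }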
123 Each worker-pool bound to an actual CPU implements concurrency
124 management by hooking into the scheduler. The worker-pool is notified
127 not expected to hog a CPU and consume many cycles. That means
130 workers on the CPU, the worker-pool doesn't start execution of a new
132 schedules a new worker so that the CPU doesn't sit idle while there
152 wq's that have a rescue-worker reserved for execution under memory
153 pressure. Else it is possible that the worker-pool deadlocks waiting
162 removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
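A sketch of such an allocation (the remaining two arguments are ``@flags``
and ``@max_active``), using a hypothetical name; ``WQ_MEM_RECLAIM`` is the
flag that reserves the rescue-worker mentioned above::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *my_reclaim_wq;

  static int my_wq_setup(void)
  {
          /* @name, @flags, @max_active; 0 selects the default max_active. */
          my_reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 0);
          if (!my_reclaim_wq)
                  return -ENOMEM;
          return 0;
  }

  static void my_wq_teardown(void)
  {
          destroy_workqueue(my_reclaim_wq);
  }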
172 ``flags``
173 ---------
177 workqueues are always per-CPU and all BH work items are executed in the
178 queueing CPU's softirq context in the queueing order.
188 worker-pools which host workers which are not bound to any
189 specific CPU. This makes the wq behave as a simple execution
191 worker-pools try to start execution of work items as soon as
200 * Long running CPU intensive workloads which can be better
215 worker-pool of the target cpu. Highpri worker-pools are
218 Note that normal and highpri worker-pools don't interact with
223 Work items of a CPU intensive wq do not contribute to the
224 concurrency level. In other words, runnable CPU intensive
226 worker-pool from starting execution. This is useful for bound
227 work items which are expected to hog CPU cycles so that their
230 Although CPU intensive work items don't contribute to the
233 non-CPU-intensive work items can delay execution of CPU
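How these flags are passed at allocation time can be sketched as follows;
the workqueue names are made up and the combinations are only illustrative
(the ``WQ_BH`` line assumes a kernel recent enough to have BH workqueues)::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *bh_wq, *poll_wq, *fast_wq, *crunch_wq;

  static int my_create_wqs(void)
  {
          /* Per-CPU, executes in the queueing CPU's softirq context. */
          bh_wq = alloc_workqueue("my_bh_wq", WQ_BH, 0);

          /* Unbound: plain execution contexts, no concurrency management. */
          poll_wq = alloc_workqueue("my_poll_wq", WQ_UNBOUND, 0);

          /* Per-CPU, served by the highpri (elevated nice level) pools. */
          fast_wq = alloc_workqueue("my_fast_wq", WQ_HIGHPRI, 0);

          /*
           * Per-CPU work expected to hog the CPU: it doesn't count against
           * the pool's concurrency level, so it can't hold up other items.
           */
          crunch_wq = alloc_workqueue("my_crunch_wq", WQ_CPU_INTENSIVE, 0);

          if (!bh_wq || !poll_wq || !fast_wq || !crunch_wq)
                  return -ENOMEM;  /* error unwinding elided for brevity */
          return 0;
  }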
239 ``max_active``
240 --------------
243 CPU which can be assigned to the work items of a wq. For example, with
245 at the same time per CPU. This is always a per-CPU attribute, even for unbound workqueues.
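For instance, with illustrative names and values, a wq capped at 16
concurrent work items per CPU and an ordered wq that executes at most one
item at a time could be allocated as follows::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *copy_wq, *ordered_wq;

  static int my_setup(void)
  {
          /* At most 16 work items of copy_wq may execute concurrently per CPU. */
          copy_wq = alloc_workqueue("my_copy_wq", 0, 16);

          /* One work item at a time, in queueing order. */
          ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);

          if (!copy_wq || !ordered_wq)
                  return -ENOMEM;  /* error unwinding elided for brevity */
          return 0;
  }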
272 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
273 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
274 again before finishing. w1 and w2 burn CPU for 5ms then sleep for
282 0 w0 starts and burns CPU
284 15 w0 wakes up and burns CPU
286 20 w1 starts and burns CPU
289 35 w2 starts and burns CPU
293 And with cmwq with ``@max_active`` >= 3, ::
296 0 w0 starts and burns CPU
298 5 w1 starts and burns CPU
300 10 w2 starts and burns CPU
302 15 w0 wakes up and burns CPU
310 0 w0 starts and burns CPU
312 5 w1 starts and burns CPU
314 15 w0 wakes up and burns CPU
317 20 w2 starts and burns CPU
325 0 w0 starts and burns CPU
327 5 w1 and w2 start and burn CPU
330 15 w0 wakes up and burns CPU
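The w0/w1/w2 behavior used in the timelines above can be sketched as work
functions, assuming a hypothetical wq ``q0``; ``mdelay()`` busy-waits (burns
CPU) while ``msleep()`` sleeps, which is what lets the concurrency
management start the next item::

  #include <linux/delay.h>
  #include <linux/workqueue.h>

  static void w0_fn(struct work_struct *work)
  {
          mdelay(5);      /* burn CPU for 5ms */
          msleep(10);     /* sleep for 10ms */
          mdelay(5);      /* burn CPU for 5ms again */
  }

  static void w1_fn(struct work_struct *work)
  {
          mdelay(5);      /* burn CPU for 5ms */
          msleep(10);     /* sleep for 10ms */
  }

  static DECLARE_WORK(w0, w0_fn);
  static DECLARE_WORK(w1, w1_fn);
  static DECLARE_WORK(w2, w1_fn);   /* w2 behaves the same as w1 */

  static void queue_them(struct workqueue_struct *q0)
  {
          /* All three land on the worker-pool of the current CPU. */
          queue_work(q0, &w0);
          queue_work(q0, &w1);
          queue_work(q0, &w2);
  }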
360 * Unless work items are expected to consume a huge amount of CPU
372 on one of the CPUs which share the last level cache with the issuing CPU.
382 ``cpu``
383 CPUs are not grouped. A work item issued on one CPU is processed by a
384 worker on the same CPU. This makes unbound workqueues behave as per-cpu
389 logical threads of each physical CPU core are grouped together.
401 work item on a CPU close to the issuing CPU.
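As a sketch (the workqueue name is made up), an unbound wq can be created
with ``WQ_SYSFS`` so that its affinity scope can be examined and changed at
runtime through sysfs instead of being fixed at allocation time::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *enc_wq;

  static int my_setup(void)
  {
          /* Unbound wq whose attributes are exposed under sysfs. */
          enc_wq = alloc_workqueue("my_enc_wq", WQ_UNBOUND | WQ_SYSFS, 0);
          if (!enc_wq)
                  return -ENOMEM;

          /*
           * The scope can then be inspected or adjusted via
           * /sys/devices/virtual/workqueue/my_enc_wq/affinity_scope
           * (and affinity_strict) without rebuilding the kernel.
           */
          return 0;
  }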
419 item starts execution, workqueue makes a best-effort attempt to ensure
438 kernel, there exists a pronounced trade-off between locality and utilization
442 the same number of consumed CPU cycles. However, higher locality may also
445 testing with dm-crypt clearly illustrates this trade-off.
447 The tests are run on a CPU with 12-cores/24-threads split across four L3
448 caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency.
449 ``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and
453 Scenario 1: Enough issuers and work spread across the machine
454 -------------------------------------------------------------
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512
462 There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
465 are the read bandwidths and CPU utilizations depending on different affinity
467 MiBps, and CPU util in percent.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
491 machine but the cache-affine ones outperform by 0.6% thanks to improved
495 Scenario 2: Fewer issuers, enough work for saturation
496 -----------------------------------------------------
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
504 The only difference from the previous scenario is ``--numjobs=8``. There are
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
530 less CPU but the better efficiency puts it at the same bandwidth as
538 Scenario 3: Even fewer issuers, not enough work to saturate
539 -----------------------------------------------------------
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
547 Again, the only difference is ``--numjobs=4``. With the number of issuers
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
575 Conclusion and Recommendations
576 ------------------------------
583 While the loss of work-conservation in certain scenarios hurts, it is a lot
589 that may consume a significant amount of CPU are recommended to configure
593 * An unbound workqueue with strict "cpu" affinity scope behaves the same as
594 a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the
600 * The loss of work-conservation in non-strict affinity scopes is likely
603 work-conservation in most cases. As such, it is possible that future
610 Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
618 CPU
620 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
621 pod_node [0]=0 [1]=0 [2]=1 [3]=1
622 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3
626 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
627 pod_node [0]=0 [1]=0 [2]=1 [3]=1
628 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3
634 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1
640 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1
645 pod_node [0]=-1
646 cpu_pod [0]=0 [1]=0 [2]=0 [3]=0
650 pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0
651 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
652 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1
653 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
654 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2
655 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
656 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3
657 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
659 pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003
661 pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
662 pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
663 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c
665 Workqueue CPU -> pool
667 [ workqueue \ CPU 0 1 2 3 dfl]
669 events_highpri percpu 1 3 5 7
691 events 18545 0 6.1 0 5 - -
692 events_highpri 8 0 0.0 0 0 - -
693 events_long 3 0 0.0 0 0 - -
694 events_unbound 38306 0 0.1 - 7 - -
695 events_freezable 0 0 0.0 0 0 - -
696 events_power_efficient 29598 0 0.2 0 0 - -
697 events_freezable_pwr_ef 10 0 0.0 0 0 - -
698 sock_diag_events 0 0 0.0 0 0 - -
701 events 18548 0 6.1 0 5 - -
702 events_highpri 8 0 0.0 0 0 - -
703 events_long 3 0 0.0 0 0 - -
704 events_unbound 38322 0 0.1 - 7 - -
705 events_freezable 0 0 0.0 0 0 - -
706 events_power_efficient 29603 0 0.2 0 0 - -
707 events_freezable_pwr_ef 10 0 0.0 0 0 - -
708 sock_diag_events 0 0 0.0 0 0 - -
729 If kworkers are going crazy (using too much cpu), there are two types
733 2. A single work item that consumes lots of cpu cycles
755 Non-reentrance Conditions
756 =========================
758 Workqueue guarantees that a work item cannot be re-entrant if the following
763 3. The work item hasn't been reinitiated.
766 executed by at most one worker system-wide at any given time.
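A sketch of what the guarantee permits, using hypothetical names;
``more_to_do()`` is an assumed helper, not a kernel API. Requeueing the same
item from its own handler preserves all three conditions, whereas also
queueing it on a second workqueue would violate condition 2::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;
  static struct work_struct my_work;

  static bool more_to_do(void);   /* hypothetical producer-side check */

  static void my_work_fn(struct work_struct *work)
  {
          /*
           * Self-requeueing is fine: the work function and the target wq
           * never change and the item isn't reinitialized, so at most one
           * worker system-wide executes it at any given time.
           */
          if (more_to_do())
                  queue_work(my_wq, work);
  }

  static int my_setup(void)
  {
          my_wq = alloc_workqueue("my_wq", 0, 0);
          if (!my_wq)
                  return -ENOMEM;
          INIT_WORK(&my_work, my_work_fn);
          queue_work(my_wq, &my_work);
          return 0;
  }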
776 .. kernel-doc:: include/linux/workqueue.h
778 .. kernel-doc:: kernel/workqueue.c