An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, a workqueue with the affinity scope "cache"
groups CPUs according to last level cache boundaries, and a work item is
expected to be processed by a worker on a CPU sharing the last level cache
with the issuing CPU. Once started, the worker may or may not be allowed to
move outside the scope depending on the ``affinity_strict`` setting of the
scope.
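
To make the terminology concrete, here is a minimal sketch of a driver that
creates an unbound workqueue and issues work on it. The names are
hypothetical; the snippet only relies on the long-standing
``alloc_workqueue()`` and ``queue_work()`` interfaces::

  /* Hypothetical example module; names are illustrative only. */
  #include <linux/module.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *example_wq;

  static void example_work_fn(struct work_struct *work)
  {
          /*
           * Runs on an unbound worker chosen from the issuing CPU's
           * affinity-scope group (best effort unless the scope is strict).
           */
  }
  static DECLARE_WORK(example_work, example_work_fn);

  static int __init example_init(void)
  {
          /* WQ_UNBOUND: execution is not tied to the issuing CPU. */
          example_wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 0);
          if (!example_wq)
                  return -ENOMEM;

          queue_work(example_wq, &example_work);
          return 0;
  }

  static void __exit example_exit(void)
  {
          destroy_workqueue(example_wq);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");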

Supported affinity scopes include:

``default``
  Use the scope selected by the module parameter
  ``workqueue.default_affinity_scope``.

``cache``
  CPUs are grouped according to last level cache boundaries. L3 is used in a
  lot of cases. This is the default affinity scope.

The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope``, and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.
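
As an illustration, the sketch below switches one unbound workqueue to the
"cache" scope at runtime. It assumes the field and constant names found in
recent kernels (``affn_scope``/``affn_strict`` in ``struct workqueue_attrs``
and the ``WQ_AFFN_*`` values) and that the caller is built-in kernel code;
verify both against the kernel version you are targeting::

  #include <linux/cpu.h>
  #include <linux/workqueue.h>

  /*
   * Sketch only: the helper name is hypothetical and the attrs field and
   * constant names are taken from recent kernels; verify before use.
   */
  static int example_set_cache_scope(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          attrs->affn_scope = WQ_AFFN_CACHE;  /* group by last level cache */
          attrs->affn_strict = false;         /* non-strict, repatriation only */

          cpus_read_lock();                   /* recent kernels expect the CPU
                                                 hotplug read lock to be held */
          ret = apply_workqueue_attrs(wq, attrs);
          cpus_read_unlock();

          free_workqueue_attrs(attrs);
          return ret;
  }

Note that ``apply_workqueue_attrs()`` may not be exported to modules
depending on the kernel version; from userspace, the sysfs files described
next are the usual way to adjust a workqueue's scope.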

If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
related interface files under its ``/sys/devices/virtual/workqueue/WQ_NAME/``
directory.

``affinity_scope``
  Read to see the current affinity scope. Write to change.

  When "default" is the current scope, reading this file will also show the
  current effective scope in parentheses, for example, ``default (cache)``.

``affinity_strict``
  0 by default, indicating that the affinity scope is not strict. When a work
  item starts execution, workqueue makes a best-effort attempt to ensure that
  the worker is inside its affinity scope, which is called repatriation. Once
  started, the scheduler is free to move the worker anywhere in the system as
  it sees fit. This enables benefiting from scope locality while still being
  able to utilize other CPUs if necessary and available.

  If set to 1, all workers of the scope are guaranteed always to be in the
  scope. This may be useful when crossing affinity scopes has other
  implications, for example, in terms of power consumption or workload
  isolation. Strict NUMA scope can also be used to match the workqueue
  behavior of older kernels.
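
For example, these knobs can be inspected and tightened from userspace
through sysfs. The sketch below assumes a workqueue named ``writeback`` that
was registered with ``WQ_SYSFS``; substitute whichever workqueue you care
about, and note that writing requires appropriate privileges::

  /* Sketch: read the current scope and make it strict via the sysfs files.
     The "writeback" workqueue name is only an example. */
  #include <stdio.h>

  int main(void)
  {
          const char *dir = "/sys/devices/virtual/workqueue/writeback";
          char path[256], line[64];
          FILE *f;

          /* May print e.g. "default (cache)" while the scope is "default". */
          snprintf(path, sizeof(path), "%s/affinity_scope", dir);
          f = fopen(path, "r");
          if (f && fgets(line, sizeof(line), f))
                  printf("affinity_scope: %s", line);
          if (f)
                  fclose(f);

          /* Keep all workers of the scope inside the scope at all times. */
          snprintf(path, sizeof(path), "%s/affinity_strict", dir);
          f = fopen(path, "w");
          if (!f)
                  return 1;
          fputs("1\n", f);
          fclose(f);
          return 0;
  }

Writing a scope name such as ``numa`` into ``affinity_scope`` works the same
way.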

The dm-crypt benchmark compares different affinity scope settings on
``kcryptd``, measured over five runs; bandwidths are in MiBps and CPU
utilization in percents.

Eight issuers moving around over four L3 cache scopes still allow "cache
(strict)" to keep the machine mostly saturated, but the loss of work
conservation starts to hurt.

The performance gain of the "cache" affinity scope over "system" is, while
consistent and noticeable, small. However, the impact may be more pronounced
on processors with more complex topologies, which is part of why "cache" was
chosen as the default affinity scope for unbound pools.

* An unbound workqueue with strict "cpu" affinity scope behaves the same as a
  per-cpu workqueue with ``WQ_CPU_INTENSIVE``.

* To emulate the behavior of kernels before affinity scopes were introduced,
  use strict "numa" affinity scope, as sketched below.
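
A minimal in-kernel sketch of that emulation for a single workqueue, under
the same assumptions as the earlier ``apply_workqueue_attrs()`` example
(recent field and constant names, hypothetical function name)::

  #include <linux/cpu.h>
  #include <linux/workqueue.h>

  /* Sketch: pin an unbound workqueue's workers to NUMA-node boundaries.
     Field and constant names are from recent kernels; verify before use. */
  static int example_emulate_old_numa_behavior(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          attrs->affn_scope = WQ_AFFN_NUMA;   /* group CPUs by NUMA boundaries */
          attrs->affn_strict = true;          /* workers never leave the scope */

          cpus_read_lock();
          ret = apply_workqueue_attrs(wq, attrs);
          cpus_read_unlock();

          free_workqueue_attrs(attrs);
          return ret;
  }

System-wide, the default scope can instead be selected at boot with
``workqueue.default_affinity_scope=numa`` on the kernel command line.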