
Lines matching full:workqueues in kernel/workqueue.c

22  * pools for workqueues which are not bound to any specific CPU - the
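
A hedged sketch of what line 22 describes, with an invented queue name: allocating a workqueue serviced by the shared unbound pools rather than by any per-CPU pool.

#include <linux/workqueue.h>

static struct workqueue_struct *my_unbound_wq;  /* illustrative name */

static int __init my_driver_init(void)
{
        /* WQ_UNBOUND routes work items to the shared unbound pools */
        my_unbound_wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND, 0);
        if (!my_unbound_wq)
                return -ENOMEM;
        return 0;
}
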
287 struct list_head list; /* PR: list of all workqueues */
318 * the workqueues list without grabbing wq_pool_mutex.
319 * This is used to dump all workqueues from sysrq.
331 * Each pod type describes how CPUs should be grouped for unbound workqueues.
372 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
378 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
1161 * workqueues as appropriate. To avoid flooding the console, each violating work
1900 * The current implementation is specific to unbound workqueues. in queue_work_node()
2630 * workqueues), so hiding them isn't a problem. in process_one_work()
2732 * exception is work items which belong to workqueues with a rescuer which
2825 * workqueues which have works queued on the pool and let them process
3403 * For single threaded workqueues the deadlock happens when the work in start_flush_work()
3405 * workqueues the deadlock happens when the rescuer stalls, blocking in start_flush_work()
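
The two fragments above are from the deadlock detection in start_flush_work(). A minimal sketch of the single-threaded case it guards against, all names invented: a work item that flushes another item queued on the same ordered workqueue can never finish, because the queue's only worker is busy running the flusher.

static struct workqueue_struct *ord_wq;  /* from alloc_ordered_workqueue() */
static struct work_struct work_b;

static void work_a_fn(struct work_struct *work)
{
        queue_work(ord_wq, &work_b);
        /* deadlocks: work_b waits for the single worker stuck right here */
        flush_work(&work_b);
}
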
4455 /* only unbound workqueues can change attributes */ in apply_workqueue_attrs_locked()
4527 * may execute on any CPU. This is similar to how per-cpu workqueues behave on
4659 * Workqueues which may be used during memory reclaim should have a rescuer
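
A minimal sketch of the rule stated at line 4659, assuming an invented writeback queue: passing WQ_MEM_RECLAIM creates a rescuer thread at allocation time, which guarantees forward progress when new workers cannot be spawned under memory pressure.

#include <linux/workqueue.h>

static struct workqueue_struct *fs_writeback_wq;  /* invented name */

static int __init fs_init(void)
{
        /* WQ_MEM_RECLAIM => a rescuer kthread is created up front */
        fs_writeback_wq = alloc_workqueue("fs_writeback", WQ_MEM_RECLAIM, 0);
        if (!fs_writeback_wq)
                return -ENOMEM;
        return 0;
}
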
4757 * wq_pool_mutex protects global freeze state and workqueues list. in alloc_workqueue()
4758 * Grab it, adjust max_active and add the new @wq to workqueues in alloc_workqueue()
4768 list_add_tail_rcu(&wq->list, &workqueues); in alloc_workqueue()
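
Lines 372, 378, and 4768 show the locking scheme for the global list, and the sysrq hits (lines 318-319, 5336) show its payoff. Sketched together, in workqueue.c context: writers serialize on wq_pool_mutex and publish with list_add_tail_rcu(); dump paths walk the list under RCU alone.

/* writer side: serialized by the mutex, RCU-safe publication */
mutex_lock(&wq_pool_mutex);
list_add_tail_rcu(&wq->list, &workqueues);
mutex_unlock(&wq_pool_mutex);

/* reader side (e.g. the sysrq dump): no mutex, just an RCU read section */
rcu_read_lock();
list_for_each_entry_rcu(wq, &workqueues, list)
        show_one_workqueue(wq);
rcu_read_unlock();
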
4905 /* disallow meddling with max_active for ordered workqueues */ in workqueue_set_max_active()
4965 * With the exception of ordered workqueues, all workqueues have per-cpu
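
A hedged sketch of what lines 4905 and 4965 imply, queue names invented: workqueue_set_max_active() tunes concurrency on a regular workqueue, but an ordered workqueue must keep max_active at 1 or it would lose its one-at-a-time guarantee, so the setter refuses it.

#include <linux/workqueue.h>

static void tune_sketch(void)
{
        struct workqueue_struct *wq = alloc_workqueue("my_wq", 0, 0);
        struct workqueue_struct *ord = alloc_ordered_workqueue("my_ord", 0);

        workqueue_set_max_active(wq, 16);  /* fine on a regular queue */
        /* workqueue_set_max_active(ord, 16) is disallowed: ordered queue */
}
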
5324 * Called from a sysrq handler and prints out all busy workqueues and pools.
5334 pr_info("Showing busy workqueues and worker pools:\n"); in show_all_workqueues()
5336 list_for_each_entry_rcu(wq, &workqueues, list) in show_all_workqueues()
5348 * Called from try_to_freeze_tasks() and prints out all freezable workqueues
5357 pr_info("Showing freezable workqueues that are still busy:\n"); in show_freezable_workqueues()
5359 list_for_each_entry_rcu(wq, &workqueues, list) { in show_freezable_workqueues()
5588 /* update pod affinity of unbound workqueues */ in workqueue_online_cpu()
5589 list_for_each_entry(wq, &workqueues, list) { in workqueue_online_cpu()
5615 /* update pod affinity of unbound workqueues */ in workqueue_offline_cpu()
5617 list_for_each_entry(wq, &workqueues, list) { in workqueue_offline_cpu()
5701 * freeze_workqueues_begin - begin freezing workqueues
5703 * Start freezing workqueues. After this function returns, all freezable
5704 * workqueues will queue new works to their inactive_works list instead of
5720 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_begin()
5731 * freeze_workqueues_busy - are freezable workqueues still busy?
5740 * %true if some freezable workqueues are still busy. %false if freezing
5753 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_busy()
5777 * thaw_workqueues - thaw workqueues
5779 * Thaw workqueues. Normal queueing is restored and all collected
5798 list_for_each_entry(wq, &workqueues, list) { in thaw_workqueues()
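
A simplified sketch of how the power-management freezer drives the trio at lines 5701-5798 (the timeout is illustrative): begin freezing, poll freeze_workqueues_busy() until every freezable queue drains, and thaw on failure or on resume.

#include <linux/workqueue.h>
#include <linux/delay.h>
#include <linux/jiffies.h>

static int freeze_sketch(void)
{
        unsigned long timeout = jiffies + msecs_to_jiffies(20000);

        freeze_workqueues_begin();      /* new works park on inactive_works */
        while (freeze_workqueues_busy()) {
                if (time_after(jiffies, timeout)) {
                        thaw_workqueues();      /* give up, restore queueing */
                        return -EBUSY;
                }
                msleep(10);
        }
        return 0;       /* thaw_workqueues() runs later, on resume */
}
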
5819 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
5853 * The low-level workqueues cpumask is a global cpumask that limits
5854 * the affinity of all unbound workqueues. This function checks the @cpumask
5855 * and applies it to all unbound workqueues and updates all of their pwqs.
5913 list_for_each_entry(wq, &workqueues, list) { in wq_affn_dfl_set()
5939 * Workqueues with the WQ_SYSFS flag set are visible to userland via
5940 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
5946 * Unbound workqueues have the following extra attributes.
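
A sketch of the visibility knob described just above, queue name invented: OR in WQ_SYSFS at allocation and the queue appears under /sys/bus/workqueue/devices/; for unbound queues that includes writable attributes such as cpumask and nice.

#include <linux/workqueue.h>

static struct workqueue_struct *my_tuned_wq;

static int __init tuned_init(void)
{
        my_tuned_wq = alloc_workqueue("my_tuned", WQ_UNBOUND | WQ_SYSFS, 0);
        return my_tuned_wq ? 0 : -ENOMEM;
}

Userspace can then retune it, e.g. echo 2-5 > /sys/bus/workqueue/devices/my_tuned/cpumask.
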
6262 * workqueues. in workqueue_sysfs_register()
6561 * up. It sets up all the data structures and system workqueues and allows early
6562 * boot code to create workqueues and queue/cancel work items. Actual work item
6699 * and invoked as soon as kthreads can be created and scheduled. Workqueues have
6716 * up. Also, create a rescuer for workqueues that requested it. in workqueue_init()
6724 list_for_each_entry(wq, &workqueues, list) { in workqueue_init()
6810 * workqueue_init_topology - initialize CPU pods for unbound workqueues
6829 * Workqueues allocated earlier would have all CPUs sharing the default in workqueue_init_topology()
6833 list_for_each_entry(wq, &workqueues, list) { in workqueue_init_topology()
6844 pr_warn("WARNING: Flushing system-wide workqueues will be prohibited in near future.\n"); in __warn_flushing_systemwide_wq()
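
Given that warning, a hedged sketch of the migration it asks for, names invented: queue onto a private workqueue so a flush waits only for your own work items rather than for everything ever scheduled system-wide.

#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;  /* alloc_workqueue("my_wq", 0, 0) */
static struct work_struct my_work;

static void sync_sketch(void)
{
        /* before: schedule_work(&my_work); flush_scheduled_work(); */
        queue_work(my_wq, &my_work);
        flush_workqueue(my_wq);         /* waits only for my_wq's items */
}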