
Lines Matching +full:per +full:- +full:cpu

9 - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
10 - Modified by Paul Jackson <pj@sgi.com>
11 - Modified by Christoph Lameter <cl@linux.com>
12 - Modified by Paul Menage <menage@google.com>
13 - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
39 ----------------------
43 an on-line node that contains memory.
45 Cpusets constrain the CPU and Memory placement of tasks to only
52 Documentation/admin-guide/cgroup-v1/cgroups.rst.
55 include CPUs in its CPU affinity mask, and using the mbind(2) and
59 schedule a task on a CPU that is not allowed in its cpus_allowed
71 ----------------------------
75 non-uniform access times (NUMA) presents additional challenges for
80 the available CPU and Memory resources amongst the requesting tasks.
103 leverages existing CPU and Memory Placement facilities in the Linux
109 ---------------------------------
120 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
122 - Each task in the system is attached to a cpuset, via a pointer
124 - Calls to sched_setaffinity are filtered to just those CPUs
126 - Calls to mbind and set_mempolicy are filtered to just
128 - The root cpuset contains all the system's CPUs and Memory
130 - For any cpuset, one can define child cpusets containing a subset
131 of the parent's CPU and Memory Node resources.
132 - The hierarchy of cpusets can be mounted at /dev/cpuset, for
134 - A cpuset may be marked exclusive, which ensures that no other
137 - You can list all the tasks (by pid) attached to any cpuset.
142 - in init/main.c, to initialize the root cpuset at system boot.
143 - in fork and exit, to attach and detach a task from its cpuset.
144 - in sched_setaffinity, to mask the requested CPUs by what's
146 - in sched.c migrate_live_tasks(), to keep migrating tasks within
148 - in the mbind and set_mempolicy system calls, to mask the requested
150 - in page_alloc.c, to restrict memory to allowed nodes.
151 - in vmscan.c, to restrict page recovery to the current cpuset.
155 new system calls are added for cpusets - all support for querying and
164 Cpus_allowed_list: 0-127
166 Mems_allowed_list: 0-63
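For a quick look at a given task's current placement, these two fields can be pulled straight out of /proc; a minimal sketch, substituting the pid of interest for "self", which prints the two lines shown above::

  # grep allowed_list /proc/self/status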
172 - cpuset.cpus: list of CPUs in that cpuset
173 - cpuset.mems: list of Memory Nodes in that cpuset
174 - cpuset.memory_migrate flag: if set, move pages to cpuset's nodes
175 - cpuset.cpu_exclusive flag: is cpu placement exclusive?
176 - cpuset.mem_exclusive flag: is memory placement exclusive?
177 - cpuset.mem_hardwall flag: is memory allocation hardwalled
178 - cpuset.memory_pressure: measure of how much paging pressure in cpuset
179 - cpuset.memory_spread_page flag: if set, spread page cache evenly on allowed nodes
180 - cpuset.memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
181 - cpuset.sched_load_balance flag: if set, load balance within CPUs on that cpuset
182 - cpuset.sched_relax_domain_level: the searching range when migrating tasks
186 - cpuset.memory_pressure_enabled flag: compute memory_pressure?
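Assuming the cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset, as in the examples later in this document, the control files of any cpuset can be listed directly (the exact set of files may vary by kernel version)::

  # ls /sys/fs/cgroup/cpuset/cpuset.*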
194 a large system into nested, dynamically changeable, "soft-partitions".
200 may be re-attached to any other cpuset, if allowed by the permissions
209 - Its CPUs and Memory Nodes must be a subset of its parent's.
210 - It can't be marked exclusive unless its parent is.
211 - If its cpus or mems are exclusive, they may not overlap those of any sibling.
221 read-only. The cpus file automatically tracks the value of
222 cpu_online_mask using a CPU hotplug notifier, and the mems file
223 automatically tracks the value of node_states[N_MEMORY]--i.e.,
224 nodes with memory--using the cpuset_track_online_nodes() hook.
228 --------------------------------
230 If a cpuset is cpu or mem exclusive, no other cpuset, other than
242 construct child, non-mem_exclusive cpusets for each individual job.
249 -----------------------------
250 The memory_pressure of a cpuset provides a simple per-cpuset metric
260 submitted jobs, which may choose to terminate or re-prioritize jobs that
278 Why a per-cpuset, running average:
280 Because this meter is per-cpuset, rather than per-task or mm,
290 Because this meter is per-cpuset rather than per-task or mm,
296 A per-cpuset simple digital filter (requires a spinlock and 3 words
297 of data per-cpuset) is kept, and updated by any task attached to that
300 A per-cpuset file provides an integer number representing the recent
301 (half-life of 10 seconds) rate of direct page reclaims caused by
302 the tasks in the cpuset, in units of reclaims attempted per second,
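To make use of this metric, a batch manager would first enable it via the root cpuset and then poll the per-cpuset file; a sketch, assuming the usual /sys/fs/cgroup/cpuset mount point and an illustrative child cpuset named "jobA"::

  # /bin/echo 1 > /sys/fs/cgroup/cpuset/cpuset.memory_pressure_enabled
  # cat /sys/fs/cgroup/cpuset/jobA/cpuset.memory_pressure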
307 ---------------------------
308 There are two boolean flag files per cpuset that control where the
313 If the per-cpuset boolean flag file 'cpuset.memory_spread_page' is set, then
318 If the per-cpuset boolean flag file 'cpuset.memory_spread_slab' is set,
350 Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag
362 value of a per-task rotor cpuset_mem_spread_rotor to select the next
366 round-robin or interleave.
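Both spread flags are simple booleans, written like any other cpuset flag file; for example, from inside a cpuset's directory::

  # /bin/echo 1 > cpuset.memory_spread_page
  # /bin/echo 1 > cpuset.memory_spread_slab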
377 --------------------------------
380 tasks. If one CPU is underutilized, kernel code running on that
381 CPU will look for tasks on other more overloaded CPUs and move those
413 When the per-cpuset flag "cpuset.sched_load_balance" is enabled (the default
417 from any CPU in that cpuset to any other.
419 When the per-cpuset flag "cpuset.sched_load_balance" is disabled, then the
421 --except-- in so far as is necessary because some overlapping cpuset
434 the top cpuset that might use non-trivial amounts of CPU, as such tasks
437 such a task could use spare CPU cycles in some other CPUs, the kernel
439 task to that underused CPU.
441 Of course, tasks pinned to a particular CPU can be left in a cpuset
447 overlap and each CPU is in at most one sched domain.
454 a task to a CPU outside its cpuset, but the scheduler load balancing
457 This mismatch is why there is not a simple one-to-one relation
469 don't leave tasks that might use non-trivial amounts of CPU in
479 ------------------------------------------------
481 The per-cpuset flag 'cpuset.sched_load_balance' defaults to enabled (contrary
509 - the 'cpuset.sched_load_balance' flag of a cpuset with non-empty CPUs changes,
510 - or CPUs come or go from a cpuset with this flag enabled,
511 - or 'cpuset.sched_relax_domain_level' value of a cpuset with non-empty CPUs
513 - or a cpuset with non-empty CPUs and with this flag enabled is removed,
514 - or a cpu is offlined/onlined.
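Any of the events above causes such a rebuild. For example, switching load balancing off in one cpuset (and thereby triggering a rebuild) is a single write from that cpuset's directory::

  # /bin/echo 0 > cpuset.sched_load_balance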
517 setup - one sched domain for each element (struct cpumask) in the
528 --------------------------------------
533 When a task is woken up, the scheduler tries to move the task to an idle CPU.
534 For example, if a task A running on CPU X activates another task B
535 on the same CPU X, and if CPU Y is X's sibling and is idle,
536 then the scheduler migrates task B to CPU Y so that task B can start on
537 CPU Y without waiting for task A on CPU X.
539 And if a CPU runs out of tasks in its runqueue, the CPU tries to pull
546 events are limited to the same socket or node where the CPU is located,
549 For example, assume CPU Z is relatively far from CPU X. Even if CPU Z
550 is idle while CPU X and the siblings are busy, the scheduler can't migrate
552 As a result, task B on CPU X needs to wait for task A or wait for load balance
559 otherwise the initial value -1, which indicates that the cpuset has no request.
562 -1 no request. use system default or follow request of others.
566 3 search cpus in a node [= system wide on non-NUMA system]
574 This file is per-cpuset and affects the sched domain where the cpuset
581 requests 0 and others are -1 then 0 is used.
589 - The migration costs between each cpu can be assumed considerably
591 special hardware support for CPU cache etc.
592 - The searching cost doesn't have an impact (for you) or you can make
594 - The latency is required even if it sacrifices cache hit rate etc.
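In such cases a wider searching range can be requested with a single write; for example, to search cpus in a node (value 3 above), from the cpuset's directory::

  # /bin/echo 3 > cpuset.sched_relax_domain_level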
599 --------------------------
604 task directly, the impact on a task of changing its cpuset CPU
611 in the task's cpuset, and update its per-task memory placement to
625 will have its allowed CPU placement changed immediately. Similarly,
627 allowed CPU placement is changed immediately. If such a task had been
629 the task will be allowed to run on any CPU allowed in its new cpuset,
658 with non-empty cpus. But the moving of some (or all) tasks might fail if
679 2) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
691 mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
695 /bin/echo 2-3 > cpuset.cpus
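A sketch of how such a quick start typically continues from the new cpuset's directory (the memory node number depends on the system's topology): assign a memory node and attach the current shell to the new cpuset::

  # /bin/echo 1 > cpuset.mems
  # /bin/echo $$ > tasks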
705 - via the cpuset file system directly, using the various cd, mkdir, echo,
707 - via the C library libcpuset.
708 - via the C library libcgroup.
710 - via the python application cset.
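Using the file system interface directly, a cpuset's current settings can be read back with plain file reads; a sketch, assuming the /sys/fs/cgroup/cpuset mount point and an illustrative cpuset named "my_cpuset"::

  # cat /sys/fs/cgroup/cpuset/my_cpuset/cpuset.cpus
  # cat /sys/fs/cgroup/cpuset/my_cpuset/cpuset.mems
  # cat /sys/fs/cgroup/cpuset/my_cpuset/tasks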
722 ---------------
728 # mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
766 # /bin/echo 0-7 > cpuset.cpus
770 # /bin/echo 0-7 > cpuset.mems
793 mount -t cpuset X /sys/fs/cgroup/cpuset
797 mount -t cgroup -ocpuset,noprefix X /sys/fs/cgroup/cpuset
801 ------------------------
806 # /bin/echo 1-4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
807 # /bin/echo 1,2,3,4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
809 To add a CPU to a cpuset, write the new list of CPUs including the
810 CPU to be added. To add 6 to the above cpuset::
812 # /bin/echo 1-4,6 > cpuset.cpus -> set cpus list to cpus 1,2,3,4,6
814 Similarly to remove a CPU from a cpuset, write the new list of CPUs
815 without the CPU to be removed.
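For instance, to remove CPU 6 again from the above cpuset::

  # /bin/echo 1-4 > cpuset.cpus          -> set cpus list back to cpus 1,2,3,4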
819 # /bin/echo "" > cpuset.cpus -> clear cpus list
822 -----------------
826 # /bin/echo 1 > cpuset.cpu_exclusive -> set flag 'cpuset.cpu_exclusive'
827 # /bin/echo 0 > cpuset.cpu_exclusive -> unset flag 'cpuset.cpu_exclusive'
830 -----------------------
860 We can only return one error code per call to write(). So you should also