Searched +full:per +full:- +full:cpu (Results 1 – 25 of 318) sorted by relevance
/Documentation/core-api/
D | this_cpu_ops.rst |
    8: this_cpu operations are a way of optimizing access to per cpu
    11: the cpu permanently stored the beginning of the per cpu area for a
    14: this_cpu operations add a per cpu variable offset to the processor
    15: specific per cpu base and encode that operation in the instruction
    16: operating on the per cpu variable.
    24: Read-modify-write operations are of particular interest. Frequently
    32: synchronization is not necessary since we are dealing with per cpu
    37: Please note that accesses by remote processors to a per cpu area are
    66: ------------------------------------
    69: per cpu area. It is then possible to simply use the segment override
    [all …]

D | local_ops.rst |
    29: Local atomic operations are meant to provide fast and highly reentrant per CPU
    34: Having fast per CPU atomic counters is interesting in many cases: it does not
    40: CPU which owns the data. Therefore, care must taken to make sure that only one
    41: CPU writes to the ``local_t`` data. This is done by using per cpu data and
    43: however permitted to read ``local_t`` data from any CPU: it will then appear to
    44: be written out of order wrt other memory writes by the owner CPU.
    54: ``asm-generic/local.h`` in your architecture's ``local.h`` is sufficient.
    66: * Variables touched by local ops must be per cpu variables.
    67: * *Only* the CPU owner of these variables must write to them.
    68: * This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
    [all …]

/Documentation/virt/
D | guest-halt-polling.rst |
    15: 2) The VM-exit cost can be avoided.
    25: ("per-cpu guest_halt_poll_ns"), which is adjusted by the algorithm
    42: Division factor used to shrink per-cpu guest_halt_poll_ns when
    49: Multiplication factor used to grow per-cpu guest_halt_poll_ns
    50: when event occurs after per-cpu guest_halt_poll_ns
    57: The per-cpu guest_halt_poll_ns eventually reaches zero
    59: per-cpu guest_halt_poll_ns when growing. This can
    70: to avoid it (per-cpu guest_halt_poll_ns will remain
    82: - Care should be taken when setting the guest_halt_poll_ns parameter as a
    83: large value has the potential to drive the cpu usage to 100% on a machine

/Documentation/trace/
D | events-kmem.rst |
    8: - Slab allocation of small objects of unknown type (kmalloc)
    9: - Slab allocation of small objects of known type
    10: - Page allocation
    11: - Per-CPU Allocator Activity
    12: - External Fragmentation
    40: These events are similar in usage to the kmalloc-related events except that
    50: mm_page_alloc_zone_locked page=%p pfn=%lu order=%u migratetype=%d cpu=%d percpu_refill=%d
    56: the per-CPU allocator (high performance) or the buddy allocator.
    60: amounts of activity imply high activity on the zone->lock. Taking this lock
    72: contention on the zone->lru_lock.
    [all …]

/Documentation/devicetree/bindings/arm/
D | ete.yaml |
    1: # SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
    4: ---
    6: $schema: "http://devicetree.org/meta-schemas/core.yaml#"
    11: - Suzuki K Poulose <suzuki.poulose@arm.com>
    12: - Mathieu Poirier <mathieu.poirier@linaro.org>
    15: Arm Embedded Trace Extension(ETE) is a per CPU trace component that
    16: allows tracing the CPU execution. It overlaps with the CoreSight ETMv4
    19: components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer
    21: legacy CoreSight components, a node must be listed per instance, along
    22: with any optional connection graph as per the coresight bindings.
    [all …]

/Documentation/x86/
D | topology.rst |
    1: .. SPDX-License-Identifier: GPL-2.0
    11: The architecture-agnostic topology definitions are in
    12: Documentation/admin-guide/cputopology.rst. This file holds x86-specific
    17: Needless to say, code should use the generic functions - this file is *only*
    35: - packages
    36: - cores
    37: - threads
    48: Package-related topology information in the kernel:
    50: - cpuinfo_x86.x86_max_cores:
    54: - cpuinfo_x86.x86_max_dies:
    [all …]

/Documentation/accounting/
D | delay-accounting.rst |
    7: runnable task may wait for a free CPU to run on.
    9: The per-task delay accounting functionality measures
    12: a) waiting for a CPU (while being runnable)
    20: Such delays provide feedback for setting a task's cpu priority,
    36: ---------
    40: generic data structure to userspace corresponding to per-pid and per-tgid
    48: delay seen for cpu, sync block I/O, swapin, memory reclaim etc.
    55: When a task exits, records containing the per-task statistics
    57: task of a thread group, the per-tgid statistics are also sent. More details
    65: -----
    [all …]

D | taskstats-struct.rst |
    34: 4) Per-task and per-thread context switch count statistics
    69: /* The scheduling discipline as set in task->policy field. */
    84: /* The user CPU time of a task, in [usec]. */
    85: __u64 ac_utime; /* User CPU time [usec] */
    87: /* The system CPU time of a task, in [usec]. */
    88: __u64 ac_stime; /* System CPU time [usec] */
    90: /* The minor page fault count of a task, as set in task->min_flt. */
    93: /* The major page fault count of a task, as set in task->maj_flt. */
    112: /* Delay waiting for cpu, while runnable
    118: /* Following four fields atomically updated using task->delays->lock */
    [all …]

/Documentation/cpu-freq/
D | cpu-drivers.rst |
    1: .. SPDX-License-Identifier: GPL-2.0
    10: - Dominik Brodowski <linux@brodo.de>
    11: - Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    12: - Viresh Kumar <viresh.kumar@linaro.org>
    18: 1.2 Per-CPU Initialization
    31: So, you just got a brand-new CPU / chipset with datasheets and want to
    32: add cpufreq support for this CPU / chipset? Great. Here are some hints
    37: ------------------
    40: function check whether this kernel runs on the right CPU and the right
    46: .name - The name of this driver.
    [all …]

/Documentation/devicetree/bindings/arm/marvell/
D | coherency-fabric.txt |
    2: ----------------
    7: - compatible: the possible values are:
    9: * "marvell,coherency-fabric", to be used for the coherency fabric of
    12: * "marvell,armada-375-coherency-fabric", for the Armada 375 coherency
    15: * "marvell,armada-380-coherency-fabric", for the Armada 38x coherency
    18: - reg: Should contain coherency fabric registers location and
    21: * For "marvell,coherency-fabric", the first pair for the coherency
    22: fabric registers, second pair for the per-CPU fabric registers.
    24: * For "marvell,armada-375-coherency-fabric", only one pair is needed
    25: for the per-CPU fabric registers.
    [all …]

D | mvebu-cpu-config.txt |
    1: MVEBU CPU Config registers
    2: --------------------------
    8: - compatible: one of:
    9: - "marvell,armada-370-cpu-config"
    10: - "marvell,armada-xp-cpu-config"
    12: - reg: Should contain CPU config registers location and length, in
    13: their per-CPU variant
    17: cpu-config@21000 {
    18: compatible = "marvell,armada-xp-cpu-config";

/Documentation/devicetree/bindings/timer/
D | qcom,msm-timer.txt |
    5: - compatible : Should at least contain "qcom,msm-timer". More specific
    8: "qcom,kpss-timer" - krait subsystem
    9: "qcom,scss-timer" - scorpion subsystem
    11: - interrupts : Interrupts for the debug timer, the first general purpose
    15: - reg : Specifies the base address of the timer registers.
    17: - clocks: Reference to the parent clocks, one per output clock. The parents
    20: - clock-names: The name of the clocks as free-form strings. They should be in
    23: - clock-frequency : The frequency of the debug timer and the general purpose
    28: - cpu-offset : per-cpu offset used when the timer is accessed without the
    29: CPU remapping facilities. The offset is
    [all …]

D | arm,arch_timer.yaml |
    1: # SPDX-License-Identifier: GPL-2.0
    3: ---
    5: $schema: http://devicetree.org/meta-schemas/core.yaml#
    10: - Marc Zyngier <marc.zyngier@arm.com>
    11: - Mark Rutland <mark.rutland@arm.com>
    13: ARM cores may have a per-core architected timer, which provides per-cpu timers,
    15: physical and optional virtual timer per frame.
    17: The per-core architected timer is attached to a GIC to deliver its
    18: per-processor interrupts via PPIs. The memory mapped timer is attached to a GIC
    24: - items:
    [all …]

/Documentation/powerpc/
D | dscr.rst |
    21: dscr_default /* per-CPU DSCR default value */
    29: Scheduler will write the per-CPU DSCR default which is stored in the
    30: CPU's PACA value into the register if the thread has dscr_inherit value
    35: the per-CPU default PACA based DSCR value.
    42: - Global DSCR default: /sys/devices/system/cpu/dscr_default
    43: - CPU specific DSCR default: /sys/devices/system/cpu/cpuN/dscr
    45: Changing the global DSCR default in the sysfs will change all the CPU
    48: value into every CPU's DSCR register right away and updates the current
    51: Changing the CPU specific DSCR default value in the sysfs does exactly
    53: stuff for that particular CPU instead for all the CPUs on the system.
    [all …]

/Documentation/bpf/
D | map_cgroup_storage.rst |
    1: .. SPDX-License-Identifier: GPL-2.0-only
    8: The ``BPF_MAP_TYPE_CGROUP_STORAGE`` map type represents a local fix-sized
    127: per-CPU variant will have different memory regions for each CPU for each
    128: storage. The non-per-CPU will have the same memory region for each storage.
    130: Prior to Linux 5.9, the lifetime of a storage is precisely per-attachment, and
    136: There is a one-to-one association between the map of each type (per-CPU and
    137: non-per-CPU) and the BPF program during load verification time. As a result,
    154: (per-CPU and non-per-CPU). A BPF program cannot use more than one

/Documentation/vm/
D | page_frags.rst |
    7: A page fragment is an arbitrary-length arbitrary-offset area of memory
    15: memory for use as either an sk_buff->head, or to be used in the "frags"
    24: either a per-cpu limitation, or a per-cpu limitation and forcing interrupts
    27: The network stack uses two separate caches per CPU to handle fragment
    43: avoid calling get_page per allocation.

/Documentation/admin-guide/
D | perf-security.rst |
    7: --------
    19: 1. System hardware and software configuration data, for example: a CPU
    30: faults, CPU migrations), architectural hardware performance counters
    50: -------------------------------
    66: independently enabled and disabled on per-thread basis for processes and
    100: ---------------------------------
    102: Mechanisms of capabilities, privileged capability-dumb files [6]_ and
    115: # ls -alhF
    116: -rwxr-xr-x 2 root root 11M Oct 19 15:12 perf
    118: # ls -alhF
    [all …]

D | kernel-per-CPU-kthreads.rst |
    2: Reducing OS jitter due to per-cpu kthreads
    5: This document lists per-CPU kthreads in the Linux kernel and presents
    6: options to control their OS jitter. Note that non-per-CPU kthreads are
    7: not listed here. To reduce OS jitter from non-per-CPU kthreads, bind
    8: them to a "housekeeping" CPU dedicated to such work.
    13: - Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs.
    15: - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
    17: - man taskset: Using the taskset command to bind tasks to sets
    20: - man sched_setaffinity: Using the sched_setaffinity() system
    23: - /sys/devices/system/cpu/cpuN/online: Control CPU N's hotplug state,
    [all …]

/Documentation/scheduler/
D | sched-stats.rst |
    11: 12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
    12: release). Some counters make more sense to be per-runqueue; other to be
    13: per-domain. Note that domains (and their associated information) will only
    17: statistics for each cpu listed, and there may well be more than one
    33: Note that any such script will necessarily be version-specific, as the main
    37: CPU statistics
    38: --------------
    39: cpu<N> 1 2 3 4 5 6 7 8 9
    55: 6) # of times try_to_wake_up() was called to wake up the local cpu
    62: 9) # of timeslices run on this cpu
    [all …]

D | sched-bwc.rst |
    5: [ This document only discusses CPU bandwidth control for SCHED_NORMAL.
    6: The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst ]
    9: specification of the maximum CPU bandwidth available to a group or hierarchy.
    13: microseconds of CPU time. That quota is assigned to per-cpu run queues in
    21: is transferred to cpu-local "silos" on a demand basis. The amount transferred
    25: ----------
    26: Quota and period are managed within the cpu subsystem via cgroupfs.
    28: cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
    29: cpu.cfs_period_us: the length of a period (in microseconds)
    30: cpu.stat: exports throttling statistics [explained further below]
    [all …]

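The sched-bwc knobs excerpted above compose into a short recipe: a group may run quota microseconds out of every period microseconds, so quota = 50000 against period = 100000 caps it at half a CPU. A hedged cgroup-v1 sketch; the group name ``mygroup`` and the mount point are assumptions, and root privileges are required:

```shell
# Cap "mygroup" at 0.5 CPU: 50 ms of runtime per 100 ms period.
# (Assumes the cgroup-v1 cpu controller is mounted at /sys/fs/cgroup/cpu.)
mkdir /sys/fs/cgroup/cpu/mygroup
echo 100000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us
echo 50000  > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us

# Move the current shell into the group and inspect throttling statistics.
echo $$ > /sys/fs/cgroup/cpu/mygroup/tasks
cat /sys/fs/cgroup/cpu/mygroup/cpu.stat
```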
/Documentation/timers/
D | highres.rst |
    8: https://www.kernel.org/doc/ols/2006/ols2006v1-pages-333-346.pdf
    11: http://www.cs.columbia.edu/~nahum/w6998/papers/ols2006-hrtimers-slides.pdf
    23: - hrtimer base infrastructure
    24: - timeofday and clock source management
    25: - clock event management
    26: - high resolution timer functionality
    27: - dynamic ticks
    31: ---------------------------
    40: - time ordered enqueueing into a rb-tree
    41: - independent of ticks (the processing is based on nanoseconds)
    [all …]

/Documentation/networking/device_drivers/ethernet/freescale/
D | dpaa.rst |
    1: .. SPDX-License-Identifier: GPL-2.0
    8: - Madalin Bucur <madalin.bucur@nxp.com>
    9: - Camelia Groza <camelia.groza@nxp.com>
    13: - DPAA Ethernet Overview
    14: - DPAA Ethernet Supported SoCs
    15: - Configuring DPAA Ethernet in your kernel
    16: - DPAA Ethernet Frame Processing
    17: - DPAA Ethernet Features
    18: - DPAA IRQ Affinity and Receive Side Scaling
    19: - Debugging
    [all …]

/Documentation/devicetree/bindings/hwmon/
D | pwm-fan.txt |
    4: - compatible : "pwm-fan"
    5: - pwms : the PWM that is used to control the PWM fan
    6: - cooling-levels : PWM duty cycle values in a range from 0 to 255
    10: - fan-supply : phandle to the regulator that provides power to the fan
    11: - interrupts : This contains a single interrupt specifier which
    14: defined number of interrupts per fan revolution, which
    16: See interrupt-controller/interrupts.txt for the format.
    17: - pulses-per-revolution : define the tachometer pulses per fan revolution as
    18: an integer (default is 2 interrupts per revolution).
    22: fan0: pwm-fan {
    [all …]

/Documentation/admin-guide/cgroup-v1/
D | cpusets.rst |
    11: - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
    12: - Modified by Paul Jackson <pj@sgi.com>
    13: - Modified by Christoph Lameter <cl@linux.com>
    14: - Modified by Paul Menage <menage@google.com>
    15: - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    41: ----------------------
    45: an on-line node that contains memory.
    47: Cpusets constrain the CPU and Memory placement of tasks to only
    54: Documentation/admin-guide/cgroup-v1/cgroups.rst.
    57: include CPUs in its CPU affinity mask, and using the mbind(2) and
    [all …]

/Documentation/ABI/stable/
D | sysfs-devices-system-cpu |
    1: What: /sys/devices/system/cpu/dscr_default
    2: Date: 13-May-2014
    6: /sys/devices/system/cpu/cpuN/dscr on all CPUs.
    9: all per-CPU defaults at the same time.
    12: What: /sys/devices/system/cpu/cpu[0-9]+/dscr
    13: Date: 13-May-2014
    17: a CPU.
    22: on any CPU where it executes (overriding the value described