Searched +full:per +full:- +full:cpu (Results 1 – 25 of 414) sorted by relevance
| /Documentation/core-api/ |
| D | this_cpu_ops.rst | 8 this_cpu operations are a way of optimizing access to per cpu 11 the cpu permanently stored the beginning of the per cpu area for a 14 this_cpu operations add a per cpu variable offset to the processor 15 specific per cpu base and encode that operation in the instruction 16 operating on the per cpu variable. 24 Read-modify-write operations are of particular interest. Frequently 32 synchronization is not necessary since we are dealing with per cpu 37 Please note that accesses by remote processors to a per cpu area are 65 ------------------------------------ 68 per cpu area. It is then possible to simply use the segment override [all …]
|
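The this_cpu_ops.rst excerpt describes encoding the per-cpu base offset directly into the instruction that touches the variable, so no preemption fence is needed around a read-modify-write. A minimal kernel-C sketch of that pattern (the counter name and helpers are invented for illustration, not taken from the quoted document):

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    static DEFINE_PER_CPU(unsigned long, my_event_count);

    static void note_event(void)
    {
            /* Single-instruction RMW on x86: the per-cpu offset is
             * folded into the increment, so no preempt_disable() is
             * needed around it. */
            this_cpu_inc(my_event_count);
    }

    static unsigned long total_events(void)
    {
            unsigned long sum = 0;
            int cpu;

            /* Remote reads are allowed, but are not synchronized
             * against the owning CPU's updates. */
            for_each_possible_cpu(cpu)
                    sum += per_cpu(my_event_count, cpu);
            return sum;
    }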
| D | local_ops.rst | 29 Local atomic operations are meant to provide fast and highly reentrant per CPU 34 Having fast per CPU atomic counters is interesting in many cases: it does not 40 CPU which owns the data. Therefore, care must be taken to make sure that only one 41 CPU writes to the ``local_t`` data. This is done by using per cpu data and 43 however permitted to read ``local_t`` data from any CPU: it will then appear to 44 be written out of order wrt other memory writes by the owner CPU. 54 ``asm-generic/local.h`` in your architecture's ``local.h`` is sufficient. 66 * Variables touched by local ops must be per cpu variables. 67 * *Only* the CPU owner of these variables must write to them. 68 * This CPU can use local ops from any context (process, irq, softirq, nmi, ...) [all …]
|
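The local_ops.rst excerpt states the two invariants (per-cpu placement, single writer); a short sketch of how they compose, assuming a hypothetical hit counter:

    #include <linux/percpu.h>
    #include <asm/local.h>

    static DEFINE_PER_CPU(local_t, hit_count) = LOCAL_INIT(0);

    static void record_hit(void)
    {
            /* Only the owning CPU writes; safe from any context on it. */
            local_inc(this_cpu_ptr(&hit_count));
    }

    static long read_hits(int cpu)
    {
            /* Cross-CPU reads are permitted but may appear out of
             * order with respect to the owner's other stores. */
            return local_read(&per_cpu(hit_count, cpu));
    }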
| /Documentation/virt/ |
| D | guest-halt-polling.rst | 15 2) The VM-exit cost can be avoided. 25 ("per-cpu guest_halt_poll_ns"), which is adjusted by the algorithm 42 Division factor used to shrink per-cpu guest_halt_poll_ns when 49 Multiplication factor used to grow per-cpu guest_halt_poll_ns 50 when event occurs after per-cpu guest_halt_poll_ns 57 The per-cpu guest_halt_poll_ns eventually reaches zero 59 per-cpu guest_halt_poll_ns when growing. This can 70 to avoid it (per-cpu guest_halt_poll_ns will remain 82 - Care should be taken when setting the guest_halt_poll_ns parameter as a 83 large value has the potential to drive the cpu usage to 100% on a machine
|
| /Documentation/trace/ |
| D | events-kmem.rst | 8 - Slab allocation of small objects of unknown type (kmalloc) 9 - Slab allocation of small objects of known type 10 - Page allocation 11 - Per-CPU Allocator Activity 12 - External Fragmentation 40 These events are similar in usage to the kmalloc-related events except that 50 mm_page_alloc_zone_locked page=%p pfn=%lu order=%u migratetype=%d cpu=%d percpu_refill=%d 56 the per-CPU allocator (high performance) or the buddy allocator. 60 amounts of activity imply high activity on the zone->lock. Taking this lock 72 contention on the lruvec->lru_lock. [all …]
|
| /Documentation/devicetree/bindings/arm/ |
| D | arm,embedded-trace-extension.yaml | 1 # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 4 --- 5 $id: http://devicetree.org/schemas/arm/arm,embedded-trace-extension.yaml# 6 $schema: http://devicetree.org/meta-schemas/core.yaml# 11 - Suzuki K Poulose <suzuki.poulose@arm.com> 12 - Mathieu Poirier <mathieu.poirier@linaro.org> 15 Arm Embedded Trace Extension(ETE) is a per CPU trace component that 16 allows tracing the CPU execution. It overlaps with the CoreSight ETMv4 19 components (e.g, TMC-ETR) or other means (e.g, using a per CPU buffer 21 legacy CoreSight components, a node must be listed per instance, along [all …]
|
| /Documentation/devicetree/bindings/interrupt-controller/ |
| D | apple,aic.yaml | 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 3 --- 4 $id: http://devicetree.org/schemas/interrupt-controller/apple,aic.yaml# 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 10 - Hector Martin <marcan@marcan.st> 19 - Level-triggered hardware IRQs wired to SoC blocks 20 - Single mask bit per IRQ 21 - Per-IRQ affinity setting 22 - Automatic masking on event delivery (auto-ack) 23 - Software triggering (ORed with hw line) [all …]
|
| /Documentation/translations/it_IT/locking/ |
| D | locktypes.rst | 1 .. SPDX-License-Identifier: GPL-2.0 3 .. include:: ../disclaimer-ita.rst 17 - sleeping locks 18 - per-CPU local locks 19 - spinning locks 28 --------------------------------- 41 - mutex 42 - rt_mutex 43 - semaphore 44 - rw_semaphore [all …]
|
| /Documentation/arch/x86/ |
| D | topology.rst | 1 .. SPDX-License-Identifier: GPL-2.0 11 The architecture-agnostic topology definitions are in 12 Documentation/admin-guide/cputopology.rst. This file holds x86-specific 17 Needless to say, code should use the generic functions - this file is *only* 35 - packages 36 - cores 37 - threads 48 Package-related topology information in the kernel: 50 - topology_num_threads_per_package() 54 - topology_num_cores_per_package() [all …]
|
| /Documentation/devicetree/bindings/cpufreq/ |
| D | apple,cluster-cpufreq.yaml | 1 # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 3 --- 4 $id: http://devicetree.org/schemas/cpufreq/apple,cluster-cpufreq.yaml# 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 10 - Hector Martin <marcan@marcan.st> 13 Apple SoCs (e.g. M1) have a per-cpu-cluster DVFS controller that is part of 15 operating-points-v2 table to define the CPU performance states, with the 16 opp-level property specifying the hardware p-state index for that level. 21 - items: 22 - enum: [all …]
|
| /Documentation/devicetree/bindings/arm/marvell/ |
| D | coherency-fabric.txt | 2 ---------------- 7 - compatible: the possible values are: 9 * "marvell,coherency-fabric", to be used for the coherency fabric of 12 * "marvell,armada-375-coherency-fabric", for the Armada 375 coherency 15 * "marvell,armada-380-coherency-fabric", for the Armada 38x coherency 18 - reg: Should contain coherency fabric registers location and 21 * For "marvell,coherency-fabric", the first pair for the coherency 22 fabric registers, second pair for the per-CPU fabric registers. 24 * For "marvell,armada-375-coherency-fabric", only one pair is needed 25 for the per-CPU fabric registers. [all …]
|
| D | mvebu-cpu-config.txt | 1 MVEBU CPU Config registers 2 -------------------------- 8 - compatible: one of: 9 - "marvell,armada-370-cpu-config" 10 - "marvell,armada-xp-cpu-config" 12 - reg: Should contain CPU config registers location and length, in 13 their per-CPU variant 17 cpu-config@21000 { 18 compatible = "marvell,armada-xp-cpu-config";
|
| /Documentation/bpf/ |
| D | map_hash.rst | 1 .. SPDX-License-Identifier: GPL-2.0-only 3 .. Copyright (C) 2022-2023 Isovalent, Inc. 10 - ``BPF_MAP_TYPE_HASH`` was introduced in kernel version 3.19 11 - ``BPF_MAP_TYPE_PERCPU_HASH`` was introduced in version 4.6 12 - Both ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` 20 to the max_entries limit that you specify. Hash maps use pre-allocation 22 used to disable pre-allocation when it is too memory-expensive. 24 ``BPF_MAP_TYPE_PERCPU_HASH`` provides a separate value slot per 25 CPU. The per-CPU values are stored internally in an array. 32 shared across CPUs but it is possible to request a per-CPU LRU list with [all …]
|
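The map_hash.rst excerpt notes that ``BPF_MAP_TYPE_PERCPU_HASH`` gives each CPU its own value slot; a hypothetical BPF-C fragment in libbpf's BTF map style (the map name, sizes, and the XDP hook are illustrative assumptions):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
            __uint(max_entries, 1024);
            __type(key, __u32);
            __type(value, __u64);
    } pkt_bytes SEC(".maps");

    SEC("xdp")
    int count_bytes(struct xdp_md *ctx)
    {
            __u32 key = 0;
            __u64 len = ctx->data_end - ctx->data;
            __u64 *val = bpf_map_lookup_elem(&pkt_bytes, &key);

            if (val)
                    *val += len;    /* slot is per-CPU: no lock needed */
            else
                    bpf_map_update_elem(&pkt_bytes, &key, &len, BPF_ANY);
            return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";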
| D | map_cgroup_storage.rst | 1 .. SPDX-License-Identifier: GPL-2.0-only 8 The ``BPF_MAP_TYPE_CGROUP_STORAGE`` map type represents a local fixed-size 127 per-CPU variant will have different memory regions for each CPU for each 128 storage. The non-per-CPU will have the same memory region for each storage. 130 Prior to Linux 5.9, the lifetime of a storage is precisely per-attachment, and 136 There is a one-to-one association between the map of each type (per-CPU and 137 non-per-CPU) and the BPF program during load verification time. As a result, 154 (per-CPU and non-per-CPU). A BPF program cannot use more than one
|
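The per-CPU versus shared-region distinction in map_cgroup_storage.rst can be made concrete; a sketch of the per-CPU variant, with the program and counter names invented (the ``bpf_get_local_storage()`` helper and key type are from the UAPI):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
            __type(key, struct bpf_cgroup_storage_key);
            __type(value, __u64);
    } cg_bytes SEC(".maps");

    SEC("cgroup_skb/egress")
    int count_egress(struct __sk_buff *skb)
    {
            /* Each CPU gets its own region, so a plain add suffices. */
            __u64 *bytes = bpf_get_local_storage(&cg_bytes, 0);

            *bytes += skb->len;
            return 1;       /* allow the packet */
    }

    char LICENSE[] SEC("license") = "GPL";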
| /Documentation/arch/powerpc/ |
| D | dscr.rst | 21 dscr_default /* per-CPU DSCR default value */ 29 The scheduler will write the per-CPU DSCR default which is stored in the 30 CPU's PACA value into the register if the thread has dscr_inherit value 35 the per-CPU default PACA based DSCR value. 42 - Global DSCR default: /sys/devices/system/cpu/dscr_default 43 - CPU specific DSCR default: /sys/devices/system/cpu/cpuN/dscr 45 Changing the global DSCR default in the sysfs will change all the CPU 48 value into every CPU's DSCR register right away and updates the current 51 Changing the CPU specific DSCR default value in the sysfs does exactly 53 stuff for that particular CPU instead of for all the CPUs on the system. [all …]
|
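The dscr.rst excerpt names the two sysfs files; a minimal userspace sketch reading the global default (the hexadecimal format is an assumption about the sysfs show routine):

    #include <stdio.h>

    int main(void)
    {
            unsigned long dscr;
            FILE *f = fopen("/sys/devices/system/cpu/dscr_default", "r");

            if (!f) {
                    perror("dscr_default");
                    return 1;
            }
            if (fscanf(f, "%lx", &dscr) == 1)
                    printf("global DSCR default: 0x%lx\n", dscr);
            fclose(f);
            return 0;
    }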
| /Documentation/cpu-freq/ |
| D | cpu-drivers.rst | 1 .. SPDX-License-Identifier: GPL-2.0 10 - Dominik Brodowski <linux@brodo.de> 11 - Rafael J. Wysocki <rafael.j.wysocki@intel.com> 12 - Viresh Kumar <viresh.kumar@linaro.org> 18 1.2 Per-CPU Initialization 31 So, you just got a brand-new CPU / chipset with datasheets and want to 32 add cpufreq support for this CPU / chipset? Great. Here are some hints 37 ------------------ 40 function check whether this kernel runs on the right CPU and the right 46 .name - The name of this driver. [all …]
|
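The cpu-drivers.rst excerpt introduces the ``.name`` field of the driver structure; a skeleton of the registration flow it goes on to describe, with every ``my_*`` identifier and frequency invented for illustration:

    #include <linux/cpufreq.h>
    #include <linux/module.h>

    static struct cpufreq_frequency_table my_freq_table[] = {
            { .frequency = 800000 },                /* kHz */
            { .frequency = 1200000 },
            { .frequency = CPUFREQ_TABLE_END },
    };

    static int my_cpu_init(struct cpufreq_policy *policy)
    {
            /* Per-CPU init: hand the core our frequency table. */
            cpufreq_generic_init(policy, my_freq_table, 100000);
            return 0;
    }

    static int my_target_index(struct cpufreq_policy *policy,
                               unsigned int index)
    {
            /* Program the hardware for my_freq_table[index] here. */
            return 0;
    }

    static struct cpufreq_driver my_cpufreq_driver = {
            .name           = "my-cpufreq",
            .init           = my_cpu_init,
            .verify         = cpufreq_generic_frequency_table_verify,
            .target_index   = my_target_index,
    };

    static int __init my_cpufreq_init(void)
    {
            return cpufreq_register_driver(&my_cpufreq_driver);
    }
    module_init(my_cpufreq_init);
    MODULE_LICENSE("GPL");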
| /Documentation/mm/ |
| D | page_frags.rst | 5 A page fragment is an arbitrary-length arbitrary-offset area of memory 13 memory for use as either an sk_buff->head, or to be used in the "frags" 22 either a per-cpu limitation, or a per-cpu limitation and forcing interrupts 25 The network stack uses two separate caches per CPU to handle fragment 41 avoid calling get_page per allocation.
|
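The page_frags.rst excerpt describes per-CPU fragment caches feeding sk_buff heads; a hypothetical receive-path fragment using the public helpers (the driver context and copy-break policy are assumed):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>

    static struct sk_buff *rx_build_skb(const void *hw_data, unsigned int len)
    {
            struct sk_buff *skb;
            unsigned int truesize = SKB_DATA_ALIGN(len) +
                    SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
            /* Carve from the per-CPU page-fragment cache instead of
             * allocating (and refcounting) a whole page per buffer. */
            void *buf = netdev_alloc_frag(truesize);

            if (!buf)
                    return NULL;
            memcpy(buf, hw_data, len);
            skb = build_skb(buf, truesize);
            if (!skb)
                    skb_free_frag(buf);
            return skb;
    }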
| /Documentation/netlink/specs/ |
| D | ovs_datapath.yaml | 1 # SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 5 protocol: genetlink-legacy 6 uapi-header: linux/openvswitch.h 12 - 13 name: ovs-header 16 - 17 name: dp-ifindex 19 - 20 name: user-features 22 name-prefix: ovs-dp-f- [all …]
|
| /Documentation/admin-guide/ |
| D | kernel-per-CPU-kthreads.rst | 2 Reducing OS jitter due to per-cpu kthreads 5 This document lists per-CPU kthreads in the Linux kernel and presents 6 options to control their OS jitter. Note that non-per-CPU kthreads are 7 not listed here. To reduce OS jitter from non-per-CPU kthreads, bind 8 them to a "housekeeping" CPU dedicated to such work. 13 - Documentation/core-api/irq/irq-affinity.rst: Binding interrupts to sets of CPUs. 15 - Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs. 17 - man taskset: Using the taskset command to bind tasks to sets 20 - man sched_setaffinity: Using the sched_setaffinity() system 23 - /sys/devices/system/cpu/cpuN/online: Control CPU N's hotplug state, [all …]
|
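The kernel-per-CPU-kthreads.rst excerpt points at sched_setaffinity(2) for binding work to a housekeeping CPU; a minimal userspace sketch (CPU 0 is an arbitrary choice):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(0, &set);       /* the designated housekeeping CPU */
            if (sched_setaffinity(0 /* self */, sizeof(set), &set)) {
                    perror("sched_setaffinity");
                    return 1;
            }
            /* From here on the scheduler keeps this task on CPU 0,
             * leaving the isolated CPUs free of its jitter. */
            return 0;
    }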
| /Documentation/timers/ |
| D | highres.rst | 8 https://www.kernel.org/doc/ols/2006/ols2006v1-pages-333-346.pdf 11 http://www.cs.columbia.edu/~nahum/w6998/papers/ols2006-hrtimers-slides.pdf 23 - hrtimer base infrastructure 24 - timeofday and clock source management 25 - clock event management 26 - high resolution timer functionality 27 - dynamic ticks 31 --------------------------- 40 - time ordered enqueueing into a rb-tree 41 - independent of ticks (the processing is based on nanoseconds) [all …]
|
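The highres.rst excerpt lists the hrtimer base's properties (rb-tree ordering, nanosecond basis, tick independence); a small kernel-C sketch of arming a recurring 100 ms timer, with names invented:

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer my_timer;

    static enum hrtimer_restart my_timer_fn(struct hrtimer *t)
    {
            /* Expiry runs time-ordered from the rb-tree, independent
             * of the periodic tick. */
            hrtimer_forward_now(t, ms_to_ktime(100));
            return HRTIMER_RESTART;
    }

    static void my_timer_arm(void)
    {
            hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
            my_timer.function = my_timer_fn;
            hrtimer_start(&my_timer, ms_to_ktime(100), HRTIMER_MODE_REL);
    }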
| /Documentation/scheduler/ |
| D | sched-stats.rst | 16 12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel 17 release). Some counters make more sense to be per-runqueue; others to be 18 per-domain. Note that domains (and their associated information) will only 22 statistics for each cpu listed, and there may well be more than one 38 Note that any such script will necessarily be version-specific, as the main 42 CPU statistics 43 -------------- 44 cpu<N> 1 2 3 4 5 6 7 8 9 60 6) # of times try_to_wake_up() was called to wake up the local cpu 67 9) # of timeslices run on this cpu [all …]
|
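sched-stats.rst warns that any parser is version-specific; a deliberately simple userspace sketch that only extracts the cpu<N> lines and leaves field interpretation to the reader:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[512];
            FILE *f = fopen("/proc/schedstat", "r");

            if (!f) {
                    perror("/proc/schedstat");
                    return 1;
            }
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "cpu", 3))
                            fputs(line, stdout);    /* per-CPU counters */
            fclose(f);
            return 0;
    }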
| /Documentation/accounting/ |
| D | delay-accounting.rst | 7 runnable task may wait for a free CPU to run on. 9 The per-task delay accounting functionality measures 12 a) waiting for a CPU (while being runnable) 18 g) write-protect copy 24 Such delays provide feedback for setting a task's cpu priority, 40 --------- 44 generic data structure to userspace corresponding to per-pid and per-tgid 52 delay seen for cpu, sync block I/O, swapin, memory reclaim, thrash page 53 cache, direct compact, write-protect copy, IRQ/SOFTIRQ etc. 60 When a task exits, records containing the per-task statistics [all …]
|
| D | taskstats-struct.rst | 34 4) Per-task and per-thread context switch count statistics 69 /* The scheduling discipline as set in task->policy field. */ 84 /* The user CPU time of a task, in [usec]. */ 85 __u64 ac_utime; /* User CPU time [usec] */ 87 /* The system CPU time of a task, in [usec]. */ 88 __u64 ac_stime; /* System CPU time [usec] */ 90 /* The minor page fault count of a task, as set in task->min_flt. */ 93 /* The major page fault count of a task, as set in task->maj_flt. */ 112 /* Delay waiting for cpu, while runnable 118 /* Following four fields atomically updated using task->delays->lock */ [all …]
|
| /Documentation/networking/device_drivers/ethernet/freescale/ |
| D | dpaa.rst | 1 .. SPDX-License-Identifier: GPL-2.0 8 - Madalin Bucur <madalin.bucur@nxp.com> 9 - Camelia Groza <camelia.groza@nxp.com> 13 - DPAA Ethernet Overview 14 - DPAA Ethernet Supported SoCs 15 - Configuring DPAA Ethernet in your kernel 16 - DPAA Ethernet Frame Processing 17 - DPAA Ethernet Features 18 - DPAA IRQ Affinity and Receive Side Scaling 19 - Debugging [all …]
|
| /Documentation/admin-guide/pm/ |
| D | intel_uncore_frequency_scaling.rst | 1 .. SPDX-License-Identifier: GPL-2.0 8 :Copyright: |copy| 2022-2023 Intel Corporation 13 ------------ 22 the scaling min/max frequencies via cpufreq sysfs to improve CPU performance. 30 --------------- 33 `/sys/devices/system/cpu/intel_uncore_frequency/`. 36 uncore scaling control is per die in multiple die/package SoCs or per 37 package for single die per package SoCs. The name represents the 45 This is a read-only attribute. If users adjust max_freq_khz, 50 This is a read-only attribute. If users adjust min_freq_khz, [all …]
|
| /Documentation/admin-guide/cgroup-v1/ |
| D | cpusets.rst | 11 - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc. 12 - Modified by Paul Jackson <pj@sgi.com> 13 - Modified by Christoph Lameter <cl@linux.com> 14 - Modified by Paul Menage <menage@google.com> 15 - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> 41 ---------------------- 45 an on-line node that contains memory. 47 Cpusets constrain the CPU and Memory placement of tasks to only 54 Documentation/admin-guide/cgroup-v1/cgroups.rst. 57 include CPUs in its CPU affinity mask, and using the mbind(2) and [all …]
|