
Searched +full:8 +full:- +full:cpu (Results 1 – 25 of 382) sorted by relevance


/Documentation/ABI/stable/
sysfs-devices-system-cpu
1 What: /sys/devices/system/cpu/dscr_default
2 Date: 13-May-2014
6 /sys/devices/system/cpu/cpuN/dscr on all CPUs.
9 all per-CPU defaults at the same time.
12 What: /sys/devices/system/cpu/cpu[0-9]+/dscr
13 Date: 13-May-2014
17 a CPU.
22 on any CPU where it executes (overriding the value described
27 What: /sys/devices/system/cpu/cpuX/topology/physical_package_id
33 What: /sys/devices/system/cpu/cpuX/topology/die_id
[all …]
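The ABI snippet above lists per-CPU sysfs attributes such as /sys/devices/system/cpu/cpuX/topology/physical_package_id. Purely as an illustration (not taken from the searched file), a userspace C sketch reading one of these attributes for cpu0 could look like::

  #include <stdio.h>

  int main(void)
  {
          /* Attribute (physical_package_id) and cpu0 chosen for illustration only. */
          FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/physical_package_id", "r");
          int id;

          if (!f)
                  return 1;
          if (fscanf(f, "%d", &id) == 1)
                  printf("cpu0 physical_package_id = %d\n", id);
          fclose(f);
          return 0;
  }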
/Documentation/devicetree/bindings/interrupt-controller/
mti,cpu-interrupt-controller.yaml
1 # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
3 ---
4 $id: http://devicetree.org/schemas/interrupt-controller/mti,cpu-interrupt-controller.yaml#
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: MIPS CPU Interrupt Controller
10 On MIPS the mips_cpu_irq_of_init() helper can be used to initialize the 8 CPU
13 With the irq_domain in place we can describe how the 8 IRQs are wired to the
17 - Thomas Bogendoerfer <tsbogend@alpha.franken.de>
21 const: mti,cpu-interrupt-controller
23 '#interrupt-cells':
[all …]
arm,gic.yaml
1 # SPDX-License-Identifier: GPL-2.0
3 ---
4 $id: http://devicetree.org/schemas/interrupt-controller/arm,gic.yaml#
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
10 - Marc Zyngier <marc.zyngier@arm.com>
17 Primary GIC is attached directly to the CPU and typically has PPIs and SGIs.
22 - $ref: /schemas/interrupt-controller.yaml#
27 - items:
28 - enum:
29 - arm,arm11mp-gic
[all …]
jcore,aic.txt
1 J-Core Advanced Interrupt Controller
5 - compatible: Should be "jcore,aic1" for the (obsolete) first-generation aic
6 with 8 interrupt lines with programmable priorities, or "jcore,aic2" for
9 - reg: Memory region(s) for configuration. For SMP, there should be one
10 region per cpu, indexed by the sequential, zero-based hardware cpu
13 - interrupt-controller: Identifies the node as an interrupt controller
15 - #interrupt-cells: Specifies the number of cells needed to encode an
21 aic: interrupt-controller@200 {
24 interrupt-controller;
25 #interrupt-cells = <1>;
/Documentation/translations/zh_CN/core-api/
workqueue.rst
1 .. SPDX-License-Identifier: GPL-2.0
2 .. include:: ../disclaimer-zh_CN.rst
4 :Original: Documentation/core-api/workqueue.rst
109 Each worker-pool bound to an actual CPU implements concurrency management by hooking into the scheduler. Whenever
139 parameters - ``@name``, ``@flags`` and ``@max_active``.
148 ---------
202 --------------
234 0 w0 starts and burns CPU
236 15 w0 wakes up and burns CPU
238 20 w1 starts and burns CPU
[all …]
/Documentation/driver-api/
edac.rst
5 ----------------------------------------
8 *sockets, *socket sets*, *banks*, *rows*, *chip-select rows*, *channels*,
19 output 4 and 8 bits each (x4, x8). Grouping several of these in parallel
21 typically 72 bits, in order to provide 64 bits + 8 bits of ECC data.
43 It is typically the highest hierarchy on a Fully-Buffered DIMM memory
52 * Single-channel
55 only. E. g. if the data is 64 bits-wide, the data flows to the CPU using
57 memories. FB-DIMM and RAMBUS use a different concept for channel, so
60 * Double-channel
63 dimms, accessed at the same time. E. g. if the DIMM is 64 bits-wide (72
[all …]
/Documentation/scheduler/
sched-stats.rst
16 12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
17 release). Some counters make more sense to be per-runqueue; other to be
18 per-domain. Note that domains (and their associated information) will only
22 statistics for each cpu listed, and there may well be more than one
38 Note that any such script will necessarily be version-specific, as the main
42 CPU statistics
43 --------------
44 cpu<N> 1 2 3 4 5 6 7 8 9
60 6) # of times try_to_wake_up() was called to wake up the local cpu
65 8) sum of all time spent waiting to run by tasks on this processor (in
[all …]
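The sched-stats.rst match describes nine per-CPU counters on each cpu<N> line of /proc/schedstat, field 8 being the total time tasks spent waiting to run on that processor. A hedged userspace sketch, version-specific just as the file warns any such script will be, that prints that field per CPU::

  #include <stdio.h>

  int main(void)
  {
          char line[512], name[32];
          unsigned long long f[9];
          FILE *fp = fopen("/proc/schedstat", "r");

          if (!fp)
                  return 1;
          while (fgets(line, sizeof(line), fp)) {
                  /* Only cpu<N> lines carry the nine counters described above. */
                  if (sscanf(line, "cpu%31s %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                             name, &f[0], &f[1], &f[2], &f[3], &f[4],
                             &f[5], &f[6], &f[7], &f[8]) == 10)
                          printf("cpu%s waited %llu\n", name, f[7]); /* field 8 */
          }
          fclose(fp);
          return 0;
  }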
/Documentation/tools/rtla/
rtla-timerlat-top.rst
2 rtla-timerlat-top
4 -------------------------------------------
6 -------------------------------------------
22 seen with the option **-T**.
35 **--aa-only** *us*
38 Print the auto-analysis if the system hits the stop tracing condition. This option
39 is useful to reduce rtla timerlat CPU, enabling the debug without the overhead of
45 In the example below, the timerlat tracer is dispatched in cpus *1-23* in the
49 # timerlat -a 40 -c 1-23 -q
52 CPU COUNT | cur min avg max | cur min avg max
[all …]
rtla-osnoise-hist.rst
2 rtla-osnoise-hist
4 ------------------------------------------------------
6 ------------------------------------------------------
19 occurrence in a histogram, displaying the results in a user-friendly way.
33 In the example below, *osnoise* tracer threads are set to run with real-time
34 priority *FIFO:1*, on CPUs *0-11*, for *900ms* at each period (*1s* by
39 [root@f34 ~/]# rtla osnoise hist -P F:1 -c 0-11 -r 900000 -d 1M -b 10 -E 25
43 …Index CPU-000 CPU-001 CPU-002 CPU-003 CPU-004 CPU-005 CPU-006 CPU-007 CPU-008
46 …20 8 5 12 2 13 24 20 41 29 …
48 … 0 0 0 4 2 7 2 3 8 11
[all …]
/Documentation/core-api/
packing.rst
6 -----------------
10 One can memory-map a pointer to a carefully crafted struct over the hardware
13 due to potential endianness mismatches between the CPU and the hardware device.
23 were performed byte-by-byte. Also the code can easily get cluttered, and the
24 high-level idea might get lost among the many bit shifts required.
25 Many drivers take the bit-shifting approach and then attempt to reduce the
30 ------------
34 - Packing a CPU-usable number into a memory buffer (with hardware
36 - Unpacking a memory buffer (which has hardware constraints/quirks)
37 into a CPU-usable number.
[all …]
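The packing.rst match contrasts the kernel's packing helpers with the open-coded bit-shifting many drivers use. Purely as an illustration of that hand-rolled, byte-by-byte style (none of this is from the searched file), packing a CPU-native integer into a big-endian buffer looks like::

  #include <stddef.h>
  #include <stdint.h>

  /* Pack the low @nbytes bytes of @val into @buf, most significant byte first. */
  static void pack_be(uint8_t *buf, uint64_t val, size_t nbytes)
  {
          size_t i;

          for (i = 0; i < nbytes; i++)
                  buf[i] = val >> (8 * (nbytes - 1 - i));
  }

Helpers like this multiply as soon as fields stop being byte-aligned or the device has endianness quirks, which is the clutter the file describes.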
workqueue.rst
32 worker thread per CPU and a single threaded (ST) wq had one worker
33 thread system-wide. A single MT wq needed to keep around the same
35 wq users over the years and with the number of CPU cores continuously
42 worker pool. An MT wq could provide only one execution context per CPU
60 * Use per-CPU unified worker pools shared by all wq to provide
85 worker-pools.
87 The cmwq design differentiates between the user-facing workqueues that
89 which manages worker-pools and processes the queued work items.
91 There are two worker-pools, one for normal work items and the other
92 for high priority ones, for each possible CPU and some extra
[all …]
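Both workqueue.rst matches (the zh_CN translation and Documentation/core-api/workqueue.rst) revolve around alloc_workqueue() with its ``@name``, ``@flags`` and ``@max_active`` arguments, backed by per-CPU worker pools. A minimal kernel-module sketch of that API; the module and work-function names are made up, not from the searched files::

  #include <linux/module.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *example_wq;

  static void example_work_fn(struct work_struct *work)
  {
          pr_info("example work item ran\n");
  }

  static DECLARE_WORK(example_work, example_work_fn);

  static int __init example_init(void)
  {
          /* @name = "example", @flags = WQ_UNBOUND, @max_active = 1 */
          example_wq = alloc_workqueue("example", WQ_UNBOUND, 1);
          if (!example_wq)
                  return -ENOMEM;
          queue_work(example_wq, &example_work);
          return 0;
  }

  static void __exit example_exit(void)
  {
          /* Drains pending work items before freeing the workqueue. */
          destroy_workqueue(example_wq);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");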
/Documentation/RCU/
rcubarrier.rst
10 struct placed within the RCU-protected data structure and another pointer
16 call_rcu(&p->rcu, p_callback);
30 -------------------------------------
37 http://lwn.net/images/ns/kernel/rcu-drop.jpg.
39 We could try placing a synchronize_rcu() in the module-exit code path,
43 One might be tempted to try several back-to-back synchronize_rcu()
45 heavy RCU-callback load, then some of the callbacks might be deferred in
52 -------------
61 Pseudo-code using rcu_barrier() is as follows:
92 8 VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
[all …]
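The rcubarrier.rst match quotes call_rcu(&p->rcu, p_callback) and explains why a module that posts RCU callbacks must wait for them in its exit path. A hedged sketch of that pattern with illustrative names (the file's own pseudo-code differs)::

  #include <linux/module.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  struct example_entry {
          struct rcu_head rcu;
          int data;
  };

  static struct example_entry *example_entry;

  static void example_free_cb(struct rcu_head *head)
  {
          kfree(container_of(head, struct example_entry, rcu));
  }

  static int __init example_init(void)
  {
          example_entry = kzalloc(sizeof(*example_entry), GFP_KERNEL);
          return example_entry ? 0 : -ENOMEM;
  }

  static void __exit example_exit(void)
  {
          /* Defer the free until pre-existing readers are done... */
          call_rcu(&example_entry->rcu, example_free_cb);
          /*
           * ...then wait for every outstanding callback, so example_free_cb()
           * cannot run after this module's text has been unloaded.
           */
          rcu_barrier();
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");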
/Documentation/devicetree/bindings/mips/img/
xilfpga.txt
14 the ARTIX-7 FPGA by Xilinx.
18 - microAptiv UP core m14Kc
19 - 50MHz clock speed
20 - 128Mbyte DDR RAM at 0x0000_0000
21 - 8Kbyte RAM at 0x1000_0000
22 - axi_intc at 0x1020_0000
23 - axi_uart16550 at 0x1040_0000
24 - axi_gpio at 0x1060_0000
25 - axi_i2c at 0x10A0_0000
26 - custom_gpio at 0x10C0_0000
[all …]
/Documentation/admin-guide/thermal/
intel_powerclamp.rst
6 - Arjan van de Ven <arjan@linux.intel.com>
7 - Jacob Pan <jacob.jun.pan@linux.intel.com>
12 - Goals and Objectives
15 - Idle Injection
16 - Calibration
19 - Effectiveness and Limitations
20 - Power vs Performance
21 - Scalability
22 - Calibration
23 - Comparison with Alternative Techniques
[all …]
/Documentation/devicetree/bindings/arm/marvell/
ap80x-system-controller.txt
5 7K/8K/931x SoCs. It contains system controllers, which provide several
6 registers giving access to numerous features: clocks, pin-muxing and
11 - compatible: must be: "syscon", "simple-mfd";
12 - reg: register area of the AP80x system controller
18 -------
24 - 0: reference clock of CPU cluster 0
25 - 1: reference clock of CPU cluster 1
26 - 2: fixed PLL at 1200 Mhz
27 - 3: MSS clock, derived from the fixed PLL
31 - compatible: must be one of:
[all …]
armada-7k-8k.yaml
1 # SPDX-License-Identifier: (GPL-2.0+ OR X11)
3 ---
4 $id: http://devicetree.org/schemas/arm/marvell/armada-7k-8k.yaml#
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
7 title: Marvell Armada 7K/8K Platforms
10 - Gregory CLEMENT <gregory.clement@bootlin.com>
18 - description: Armada 7020 SoC
20 - const: marvell,armada7020
21 - const: marvell,armada-ap806-dual
22 - const: marvell,armada-ap806
[all …]
/Documentation/virt/
ne_overview.rst
1 .. SPDX-License-Identifier: GPL-2.0
29 1. An enclave abstraction process - a user space process running in the primary
42 2. The enclave itself - a VM running on the same host as the primary VM that
48 this size e.g. 8 MiB). The memory can be allocated e.g. by using hugetlbfs from
52 An enclave runs on dedicated cores. CPU 0 and its CPU siblings need to remain
53 available for the primary VM. A CPU pool has to be set for NE purposes by an
54 user with admin capability. See the cpu list section from the kernel
55 documentation [4] for how a CPU pool format looks.
58 using virtio-vsock [5]. The primary VM has virtio-pci vsock emulated device,
59 while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
[all …]
/Documentation/admin-guide/pm/
intel-speed-select.rst
1 .. SPDX-License-Identifier: GPL-2.0
8 collection of features that give more granular control over CPU performance.
14 - https://www.intel.com/content/www/us/en/architecture-and-technology/speed-select-technology-artic…
15 - https://builders.intel.com/docs/networkbuilders/intel-speed-select-technology-base-frequency-enha…
19 dynamically without pre-configuring via BIOS setup options. This dynamic
29 intel-speed-select configuration tool
32 Most Linux distribution packages may include the "intel-speed-select" tool. If not,
38 # cd tools/power/x86/intel-speed-select/
43 ------------
47 # intel-speed-select --help
[all …]
/Documentation/devicetree/bindings/net/
lantiq,pef2256.yaml
1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
3 ---
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
10 - Herve Codina <herve.codina@bootlin.com>
20 - const: lantiq,pef2256
27 - description: Master Clock
28 - description: System Clock Receive
29 - description: System Clock Transmit
31 clock-names:
33 - const: mclk
[all …]
/Documentation/trace/
osnoise-tracer.rst
5 In the context of high-performance computing (HPC), the Operating System
9 system. Moreover, hardware-related jobs can also cause noise, for example,
32 source of interferences, increasing a per-cpu interference counter. The
38 hardware-related noise. In this way, osnoise can account for any
40 prints the sum of all noise, the max single noise, the percentage of CPU
44 -----
59 # _-----=> irqs-off
60 # / _----=> need-resched
61 # | / _---=> hardirq/softirq
62 # || / _--=> preempt-depth MAX
[all …]
/Documentation/hwmon/
smsc47m192.rst
10 Addresses scanned: I2C 0x2c - 0x2d
23 - Hartmut Rick <linux@rick.claranet.de>
25 - Special thanks to Jean Delvare for careful checking
30 -----------
33 of the SMSC LPC47M192 and compatible Super-I/O chips.
35 These chips support 3 temperature channels and 8 voltage inputs
36 as well as CPU voltage VID input.
42 Voltages and temperatures are measured by an 8-bit ADC, the resolution
52 bit 4 of the encoded CPU voltage. This means that you either get
53 a +12V voltage measurement or a 5 bit CPU VID, but not both.
[all …]
/Documentation/devicetree/bindings/nios2/
nios2.txt
11 - compatible: Compatible property value should be "altr,nios2-1.0".
12 - reg: Contains CPU index.
13 - interrupt-controller: Specifies that the node is an interrupt controller
14 - #interrupt-cells: Specifies the number of cells needed to encode an
16 - clock-frequency: Contains the clock frequency for CPU, in Hz.
17 - dcache-line-size: Contains data cache line size.
18 - icache-line-size: Contains instruction line size.
19 - dcache-size: Contains data cache size.
20 - icache-size: Contains instruction cache size.
21 - altr,pid-num-bits: Specifies the number of bits to use to represent the process
[all …]
/Documentation/translations/zh_CN/admin-guide/
cputopology.rst
1 .. SPDX-License-Identifier: GPL-2.0
2 .. include:: ../disclaimer-zh_CN.rst
4 :Original: Documentation/admin-guide/cputopology.rst
15 /sys/devices/system/cpu/cpuX/topology/. Please read the ABI file:
16 Documentation/ABI/stable/sysfs-devices-system-cpu
21 For architectures supporting this feature, some of these macros must be defined in include/asm-XXX/topology.h::
23 #define topology_physical_package_id(cpu)
24 #define topology_die_id(cpu)
25 #define topology_cluster_id(cpu)
26 #define topology_core_id(cpu)
[all …]
/Documentation/translations/zh_TW/admin-guide/
cputopology.rst
1 .. SPDX-License-Identifier: GPL-2.0
2 .. include:: ../disclaimer-zh_TW.rst
4 :Original: Documentation/admin-guide/cputopology.rst
15 /sys/devices/system/cpu/cpuX/topology/. Please read the ABI file:
16 Documentation/ABI/stable/sysfs-devices-system-cpu
21 For architectures supporting this feature, some of these macros must be defined in include/asm-XXX/topology.h::
23 #define topology_physical_package_id(cpu)
24 #define topology_die_id(cpu)
25 #define topology_cluster_id(cpu)
26 #define topology_core_id(cpu)
[all …]
/Documentation/virt/kvm/x86/
nested-vmx.rst
1 .. SPDX-License-Identifier: GPL-2.0
8 ---------
10 On Intel processors, KVM uses Intel's VMX (Virtual-Machine eXtensions)
15 The "Nested VMX" feature adds this missing capability - of running guest
25 https://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf
29 -----------
31 Single-level virtualization has two levels - the host (KVM) and the guests.
38 ------------------
42 kvm-intel module.
46 emulated CPU type (qemu64) does not list the "VMX" CPU feature, so it must be
[all …]
