Searched full:thread (Results 1 – 25 of 274) sorted by relevance
| /Documentation/translations/zh_CN/mm/ |
| D | mmu_notifier.rst |
    43 CPU-thread-0 {try to write to addrA}
    44 CPU-thread-1 {try to write to addrB}
    45 CPU-thread-2 {}
    46 CPU-thread-3 {}
    47 DEV-thread-0 {read addrA and populate device TLB}
    48 DEV-thread-2 {read addrB and populate device TLB}
    50 CPU-thread-0 {COW_step0: {mmu_notifier_invalidate_range_start(addrA)}}
    51 CPU-thread-1 {COW_step0: {mmu_notifier_invalidate_range_start(addrB)}}
    52 CPU-thread-2 {}
    53 CPU-thread-3 {}
    [all …]
|
| /Documentation/mm/ |
| D | mmu_notifier.rst |
    39 CPU-thread-0 {try to write to addrA}
    40 CPU-thread-1 {try to write to addrB}
    41 CPU-thread-2 {}
    42 CPU-thread-3 {}
    43 DEV-thread-0 {read addrA and populate device TLB}
    44 DEV-thread-2 {read addrB and populate device TLB}
    46 CPU-thread-0 {COW_step0: {mmu_notifier_invalidate_range_start(addrA)}}
    47 CPU-thread-1 {COW_step0: {mmu_notifier_invalidate_range_start(addrB)}}
    48 CPU-thread-2 {}
    49 CPU-thread-3 {}
    [all …]
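The interleaving above shows why a CPU page-table change must be bracketed by notifier calls before the old page can be reused: the device TLB may still hold the stale translation. A minimal in-kernel sketch of that bracket, assuming the mmu_notifier_range_init() signature of recent kernels (the vma argument was dropped around v6.3), might look like::

    #include <linux/mmu_notifier.h>

    /* Hypothetical CPU page-table update, bracketed so that registered
     * notifiers (e.g. a device driver) flush their TLBs before the old
     * page can be freed or reused. */
    static void cow_fixup(struct mm_struct *mm, unsigned long start,
                          unsigned long end)
    {
            struct mmu_notifier_range range;

            mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
                                    start, end);
            /* Notifiers must stop using translations for [start, end). */
            mmu_notifier_invalidate_range_start(&range);

            /* ... clear/replace the PTEs covering [start, end) ... */

            mmu_notifier_invalidate_range_end(&range);
    }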
|
| /Documentation/arch/x86/ |
| D | topology.rst |
    105 A thread is a single scheduling unit. It's the equivalent to a logical Linux
    109 uses "thread".
    111 Thread-related topology information in the kernel:
    115 The cpumask contains all online threads in the package to which a thread
    122 The cpumask contains all online threads in the core to which a thread
    127 The logical package ID to which a thread belongs.
    131 The physical package ID to which a thread belongs.
    135 The ID of the core to which a thread belongs. It is also printed in /proc/cpuinfo
    152 [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
    156 a) One thread per core::
    [all …]
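The IDs and cpumasks listed above are exported per logical CPU under /sys/devices/system/cpu/cpuN/topology/. A minimal userspace sketch reading two of them (error handling trimmed to the essentials)::

    #include <stdio.h>

    int main(void)
    {
            char path[128];
            int cpu = 0, core_id, pkg_id;
            FILE *f;

            /* The ID of the core this thread belongs to. */
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
            f = fopen(path, "r");
            if (!f || fscanf(f, "%d", &core_id) != 1)
                    return 1;
            fclose(f);

            /* The physical package ID this thread belongs to. */
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
                     cpu);
            f = fopen(path, "r");
            if (!f || fscanf(f, "%d", &pkg_id) != 1)
                    return 1;
            fclose(f);

            printf("cpu%d: package %d, core %d\n", cpu, pkg_id, core_id);
            return 0;
    }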
|
| D | kernel-stacks.rst |
    15 active thread. These thread stacks are THREAD_SIZE (4*PAGE_SIZE) big.
    16 These stacks contain useful data as long as a thread is alive or a
    17 zombie. While the thread is in user space the kernel stack is empty
    20 In addition to the per thread stacks, there are specialized stacks
    30 the split thread and interrupt stacks on i386, this gives more room
    32 of every per thread stack.
    53 will switch back to the per-thread stack. If software wants to allow
|
| D | mds.rst |
    21 buffers are partitioned between Hyper-Threads so cross thread forwarding is
    22 not possible. But if a thread enters or exits a sleep state the store
    23 buffer is repartitioned which can expose data from one thread to the other.
    32 Hyper-Threads so cross thread leakage is possible.
    40 thread leakage is possible.
    74 thread case (SMT off): Force the CPU to clear the affected buffers.
    90 This does not protect against cross Hyper-Thread attacks except for MSBDS
    91 which is only exploitable cross Hyper-thread when one of the Hyper-Threads
    183 protected against cross Hyper-Thread attacks because the Fill Buffer and
    193 that stale data from the idling CPU from spilling to the Hyper-Thread
|
| /Documentation/trace/ |
| D | timerlat-tracer.rst |
    7 the tracer sets a periodic timer that wakes up a thread. The thread then
    37 <...>-867 [000] .... 54.029339: #1 context thread timer_latency 11700 ns
    39 <...>-868 [001] .... 54.029353: #1 context thread timer_latency 9820 ns
    41 <...>-867 [000] .... 54.030330: #2 context thread timer_latency 3070 ns
    43 <...>-868 [001] .... 54.030347: #2 context thread timer_latency 4351 ns
    46 The tracer creates a per-cpu kernel thread with real-time priority that
    48 observed at the *hardirq* context before the activation of the thread.
    49 The second is the *timer latency* observed by the thread. The ACTIVATION
    50 ID field serves to relate the *irq* execution to its respective *thread*
    53 The *irq*/*thread* splitting is important to clarify in which context
    [all …]
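Output like the trace lines above only requires selecting the tracer in tracefs. A minimal sketch, assuming tracefs is mounted at /sys/kernel/tracing, the kernel was built with the timerlat tracer, and the program runs as root::

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            ssize_t n;
            int fd;

            /* Select the timerlat tracer. */
            fd = open("/sys/kernel/tracing/current_tracer", O_WRONLY);
            if (fd < 0 || write(fd, "timerlat", 8) != 8)
                    return 1;
            close(fd);

            /* Stream the per-cpu irq/thread latency lines. */
            fd = open("/sys/kernel/tracing/trace_pipe", O_RDONLY);
            if (fd < 0)
                    return 1;
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    write(STDOUT_FILENO, buf, n);
            close(fd);
            return 0;
    }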
|
| D | osnoise-tracer.rst |
    8 NMIs, IRQs, SoftIRQs, and any other system thread can cause noise to the
    15 In a nutshell, the hwlat_detector creates a thread that runs
    16 periodically for a given period. At the beginning of a period, the thread
    18 thread reads the time in a loop. As interrupts are disabled, threads,
    19 IRQs, and SoftIRQs cannot interfere with the hwlatd thread. Hence, the
    41 available for the thread, and the counters for the noise sources.
    65 … CPU# |||| TIMESTAMP IN US IN US AVAILABLE IN US HW NMI IRQ SIRQ THREAD
    78 running an osnoise/ thread. The osnoise specific fields report:
    81 the osnoise thread kept looping reading the time.
    85 the osnoise thread during the runtime window.
    [all …]
|
| D | hwlat_detector.rst |
    76 - tracing_cpumask - the CPUs to move the hwlat thread across
    79 - hwlat_detector/mode - the thread mode
    81 By default, one hwlat detector's kernel thread will migrate across each CPU
    83 fashion. This behavior can be changed by changing the thread mode,
    88 - per-cpu: create one thread for each cpu in tracing_cpumask
|
| /Documentation/locking/ |
| D | robust-futex-ABI.rst |
    11 The interesting data as to what futexes a thread is holding is kept on a
    17 1) a one time call, per thread, to tell the kernel where its list of
    20 by the exiting thread.
    36 A thread that anticipates possibly using robust_futexes should first
    44 bits on 64 bit arch's, and local byte order. Each thread should have
    45 its own thread private 'head'.
    47 If a thread is running in 32 bit compatibility mode on a 64 native arch
    64 word' holds 2 flag bits in the upper 2 bits, and the thread id (TID)
    65 of the thread holding the lock in the bottom 30 bits. See further
    70 and is needed to correctly resolve races should a thread exit while
    [all …]
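The one-time, per-thread registration at line 17 above is a single system call. A minimal sketch using the raw syscall follows; note that glibc normally performs this registration itself for every thread it creates, so doing it by hand as below replaces glibc's own list::

    #include <linux/futex.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Each thread registers its own private 'head', once, early on. */
    static struct robust_list_head head = {
            .list            = { .next = &head.list }, /* empty circular list */
            .futex_offset    = 0,    /* offset of lock word within an entry */
            .list_op_pending = NULL,
    };

    int main(void)
    {
            if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0) {
                    perror("set_robust_list");
                    return 1;
            }
            /* ... acquire/release robust futexes, linking them into
             *     head.list while held ... */
            return 0;
    }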
|
| D | robust-futexes.rst |
    22 When the owner thread releases the futex, it notices (via the variable
    66 - they have to scan _every_ vma at sys_exit() time, per thread!
    89 At the heart of this new approach there is a per-thread private list of
    92 registration happens at most once per thread lifetime]. At do_exit()
    98 comparison. If the thread has registered a list, then normally the list
    99 is empty. If the thread/process crashed or terminated in some incorrect
    102 this thread with the FUTEX_OWNER_DIED bit, and wakes up one waiter (if
    105 The list is guaranteed to be private and per-thread at do_exit() time,
    110 instructions window for the thread (or process) to die there, leaving
    112 also maintains a simple per-thread 'list_op_pending' field, to allow the
    [all …]
|
| /Documentation/admin-guide/hw-vuln/ |
| D | cross-thread-rsb.rst |
    4 Cross-Thread Return Address Predictions
    7 Certain AMD and Hygon processors are subject to a cross-thread return address
    8 predictions vulnerability. When running in SMT mode and one sibling thread
    9 transitions out of C0 state, the other sibling thread could use return target
    10 predictions from the sibling thread that transitioned out of C0.
    14 thread. However, KVM does allow a VMM to prevent exiting guest mode when
    16 being consumed by the sibling thread.
    32 CVE-2022-27672 Cross-Thread Return Address Predictions
    43 When the thread re-enters the C0 state, the processor transitions back
    44 to 2T mode, assuming the other thread is also still in C0 state.
    [all …]
|
| /Documentation/filesystems/nfs/ |
| D | knfsd-stats.rst |
    31 for each NFS thread pool.
    38 The id number of the NFS thread pool to which this line applies.
    41 Thread pool ids are a contiguous set of small integers starting
    42 at zero. The maximum value depends on the thread pool mode, but
    44 Note that in the default case there will be a single thread pool
    64 an nfsd thread to service it, i.e. no nfsd thread was considered
    73 This can happen because there are too few nfsd threads in the thread
    74 pool for the NFS workload (the workload is thread-limited), in which
    79 Counts how many times an idle nfsd thread is woken to try to
    88 Counts how many times an nfsd thread triggered an idle timeout,
    [all …]
|
| /Documentation/arch/powerpc/ |
| D | dscr.rst |
    16 dscr /* Thread DSCR value */
    17 dscr_inherit /* Thread has changed default DSCR */
    30 CPU's PACA value into the register if the thread has dscr_inherit value
    34 now be contained in thread struct's dscr into the register instead of
    49 thread's DSCR value as well.
    76 The thread struct element 'dscr_inherit' represents whether the thread
    78 following methods. This element signifies whether the thread wants to
|
| /Documentation/arch/riscv/ |
| D | vector.rst |
    26 Sets the Vector enablement status of the calling thread, where the control
    38 thread.
    41 instructions under such condition will trap and cause the termination of the thread.
    49 enablement status of current thread, and the setting at bit[3:2] takes place
    54 Vector enablement status for the calling thread. The calling thread is
    62 Vector enablement setting for the calling thread at the next execve()
    78 was enabled for the calling thread.
    87 thread.
    91 Gets the same Vector enablement status for the calling thread. Setting for
    132 status of any existing process or thread that does not make an execve() call.
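The enablement control described above is driven through prctl(). A minimal sketch, assuming kernel/libc headers that expose the PR_RISCV_V_* constants (merged for the RISC-V Vector uapi)::

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* Enable Vector for the calling thread; without this (or with
             * CTRL_OFF), vector instructions would trap. */
            if (prctl(PR_RISCV_V_SET_CONTROL, PR_RISCV_V_VSTATE_CTRL_ON) != 0) {
                    perror("PR_RISCV_V_SET_CONTROL");
                    return 1;
            }

            /* Read the current + next-execve() setting back (bit[1:0] and
             * bit[3:2] as described in the document above). */
            printf("vstate control: %d\n", prctl(PR_RISCV_V_GET_CONTROL));
            return 0;
    }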
|
| /Documentation/virt/kvm/ |
| D | vcpu-requests.rst |
    10 KVM supports an internal API enabling threads to request a VCPU thread to
    11 perform some activity. For example, a thread may request a VCPU to flush
    48 The goal of a VCPU kick is to bring a VCPU thread out of guest mode in
    50 a guest mode exit. However, a VCPU thread may not be in guest mode at the
    52 thread, there are two other actions a kick may take. All three actions
    60 3) Nothing. When the VCPU is not in guest mode and the VCPU thread is not
    76 The VCPU thread is outside guest mode.
    80 The VCPU thread is in guest mode.
    84 The VCPU thread is transitioning from IN_GUEST_MODE to
    89 The VCPU thread is outside guest mode, but it wants the sender of
    [all …]
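The request/kick pattern described above boils down to two calls on the requesting side and one check on the VCPU side. A hypothetical in-kernel sketch (KVM_REQ_EXAMPLE is a made-up request bit; real ones are defined per architecture)::

    #include <linux/kvm_host.h>

    #define KVM_REQ_EXAMPLE 8   /* hypothetical request bit */

    /* Requester: record the request, then kick the VCPU so it leaves
     * guest mode (or is woken) and notices it. */
    static void ask_vcpu(struct kvm_vcpu *vcpu)
    {
            kvm_make_request(KVM_REQ_EXAMPLE, vcpu);
            kvm_vcpu_kick(vcpu);
    }

    /* VCPU thread, on the path back into guest mode: */
    static void handle_requests(struct kvm_vcpu *vcpu)
    {
            if (kvm_check_request(KVM_REQ_EXAMPLE, vcpu)) {
                    /* ... perform the requested activity ... */
            }
    }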
|
| /Documentation/arch/arm64/ |
| D | sme.rst |
    22 present) ZTn register state and TPIDR2_EL0 are tracked per thread.
    72 * On thread creation PSTATE.ZA and TPIDR2_EL0 are preserved unless CLONE_VM
    109 * All other SME state of a thread, including the currently configured vector
    127 the thread's vector length (in za_context.vl).
    183 Sets the vector length of the calling thread and related flags, where
    199 performed by this thread.
    202 call immediately after the next execve() (if any) by the thread:
    220 * Either the calling thread's vector length or the deferred vector length
    221 to be applied at the next execve() by the thread (dependent on whether
    228 thread is cancelled.
    [all …]
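The per-thread vector length configuration at line 183 above is a prctl(). A minimal sketch, assuming PR_SME_* constants from a recent <sys/prctl.h>/<linux/prctl.h>; the kernel may clamp the request to a supported value, so the result is read back::

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* Request a 32-byte streaming vector length for this thread. */
            if (prctl(PR_SME_SET_VL, 32) < 0) {
                    perror("PR_SME_SET_VL");
                    return 1;
            }

            /* Return value encodes flags in the high bits and the vector
             * length actually configured in the low bits. */
            printf("SME vl: %d\n", prctl(PR_SME_GET_VL) & PR_SME_VL_LEN_MASK);
            return 0;
    }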
|
| D | sve.rst |
    26 tracked per-thread.
    120 * All other SVE state of a thread, including the currently configured vector
    126 process or thread share identical SVE configuration, matching that of the
    141 if set indicates that the thread is in streaming mode and the vector length
    146 the thread's vector length (in sve_context.vl).
    149 whether the registers are live for the thread. The registers are present if
    203 Sets the vector length of the calling thread and related flags, where
    219 performed by this thread.
    222 call immediately after the next execve() (if any) by the thread:
    241 * Either the calling thread's vector length or the deferred vector length
    [all …]
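The SVE interface at line 203 above mirrors the SME one: the prctl() argument packs the requested vector length in bytes together with flag bits. A minimal sketch::

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* Set a 64-byte vector length for this thread and let child
             * threads/processes inherit it. */
            if (prctl(PR_SVE_SET_VL, 64 | PR_SVE_VL_INHERIT) < 0) {
                    perror("PR_SVE_SET_VL");
                    return 1;
            }

            /* Low bits of the return value hold the configured VL. */
            printf("SVE vl: %d\n", prctl(PR_SVE_GET_VL) & PR_SVE_VL_LEN_MASK);
            return 0;
    }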
|
| /Documentation/usb/ |
| D | dwc3.rst |
    11 - Convert interrupt handler to per-ep-thread-irq
    39 sleeping is handed over to the Thread. The event is saved in an
    42 handed something to thread so we don't process event X prio Y
    51 There should be no increase in latency since the interrupt-thread has a
|
| /Documentation/devicetree/bindings/cpu/ |
| D | cpu-topology.txt |
    15 - thread
    17 The bottom hierarchy level sits at core or thread level depending on whether
    21 threads existing in the system and map to the hierarchy level "thread" above.
    72 - thread node
    76 The nodes describing the CPU topology (socket/cluster/core/thread) can
    77 only be defined within the cpu-map node and every core/thread in the
    87 (ie socket/cluster/core/thread) (where N = {0, 1, ...} is the node number; nodes
    95 3 - socket/cluster/core/thread node bindings
    98 Bindings for socket/cluster/cpu/thread nodes are defined as follows:
    140 thread nodes.
    [all …]
|
| /Documentation/core-api/ |
| D | protection-keys.rst |
    26 Being a CPU register, PKRU is inherently thread-local, potentially giving each
    27 thread a different set of protections from every other thread.
    44 Being a CPU register, POR_EL0 is inherently thread-local, potentially giving
    45 each thread a different set of protections from every other thread.
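A minimal sketch of the userspace side of this API (glibc >= 2.27): allocate a key that denies writes and tag a mapping with it; because the rights register is thread-local, each thread could later flip its own access with pkey_set() without touching the page tables::

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
            if (pkey < 0) {
                    perror("pkey_alloc");
                    return 1;
            }

            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED ||
                pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey) != 0) {
                    perror("pkey_mprotect");
                    return 1;
            }

            /* Writes to p now fault in any thread whose rights register
             * denies write access for this key. */
            return 0;
    }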
|
| D | padata.rst |
    159 A multithreaded job has a main thread and zero or more helper threads, with the
    160 main thread participating in the job and then waiting until all helpers have
    162 piece of the job that one thread completes in one call to the thread function.
    166 section. This includes a pointer to the thread function, which padata will
    167 call each time it assigns a job chunk to a thread. Then, define the thread
    169 the first two delimit the range that the thread operates on and the last is a
    171 typically allocated on the main thread's stack. Last, call
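The snippet above outlines padata's multithreaded-job API. A hypothetical in-kernel sketch, assuming the struct padata_mt_job layout it describes (my_state and init_one_item are made-up names for the shared state and per-item work)::

    #include <linux/padata.h>

    struct my_state { /* shared, read-mostly job state (hypothetical) */ };

    static void init_one_item(struct my_state *state, unsigned long i)
    {
            /* ... hypothetical per-item initialization ... */
    }

    /* Thread function: start/end delimit this thread's chunk, arg is the
     * shared state pointer from fn_arg. */
    static void init_chunk(unsigned long start, unsigned long end, void *arg)
    {
            struct my_state *state = arg;
            unsigned long i;

            for (i = start; i < end; i++)
                    init_one_item(state, i);
    }

    static void init_all_items(struct my_state *state, unsigned long nr)
    {
            struct padata_mt_job job = {
                    .thread_fn   = init_chunk,
                    .fn_arg      = state,
                    .start       = 0,
                    .size        = nr,
                    .align       = 1,
                    .min_chunk   = 1024, /* smallest worthwhile piece */
                    .max_threads = 4,
            };

            /* Main thread participates, returns when all helpers finish. */
            padata_do_multithreaded(&job);
    }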
|
| /Documentation/tools/rtla/ |
| D | rtla-timerlat-top.rst |
    51 0 00:00:12 | IRQ Timer Latency (us) | Thread Timer Latency (us)
    81 Blocking thread: 3.79 us (9.03 %)
    83 Blocking thread stacktrace
    104 Thread latency: 41.96 us (100%)
    112 current thread masking interrupts, which can be seen in the blocking
    113 thread stacktrace: the current thread (*objtool:49256*) disabled interrupts
|
| D | common_timerlat_description.rst |
    2 *timerlat* tracer dispatches a kernel thread per-cpu. These threads
    8 prints the timer latency at the timer *IRQ* handler and the *Thread*
|
| /Documentation/userspace-api/ |
| D | perf_ring_buffer.rst |
    15 2.2.2 Per-thread mode
    116 The perf profiles programs with different modes: default mode, per thread
    129 This command doesn't specify any options for CPU and thread modes, the
    142 than for all threads in the system. The *T1* thread represents the
    143 thread context of the 'test_program', whereas *T2* and *T3* are irrelevant
    145 the *T1* thread and stored in the ring buffer associated with the CPU on
    146 which the *T1* thread is running.
    190 T1: Thread 1; T2: Thread 2; T3: Thread 3
    191 x: Thread is in running state
    195 2.2.2 Per-thread mode
    [all …]
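Per-thread mode as described above corresponds to opening an event with a specific pid and cpu = -1 ("this thread, on whatever CPU it runs"). A minimal counting sketch via the raw perf_event_open() syscall::

    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            struct perf_event_attr attr;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_INSTRUCTIONS;
            attr.disabled = 1;

            /* pid = 0 (calling thread), cpu = -1 (any CPU): the event
             * follows this one thread, like perf's per-thread mode. */
            fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }
            /* ... ioctl(fd, PERF_EVENT_IOC_ENABLE, 0), run the workload,
             *     then read(fd, ...) the count ... */
            close(fd);
            return 0;
    }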
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo.rst |
    88 If the logical thread count is non-zero, the cache size
    89 must be at least 4096 blocks per logical thread.
    103 Thread related parameters:
    105 Different categories of work are assigned to separate thread groups, and
    109 all three thread types will be handled by a single thread. If any of these
    126 The number of bios to enqueue on each bio thread before
    127 switching to the next thread. The value must be greater
    148 enough to have at least 1 slab per physical thread. The
    268 queues: Basic information about each vdo thread
    388 The logical and physical thread counts should also be adjusted. A logical
    [all …]
|