| /Documentation/driver-api/dmaengine/ |
| D | dmatest.rst |
      16: test multiple channels at the same time, and it can start multiple threads
      73: (shared) parameters used for all threads will use the new values.
      74: After the channels are specified, each thread is set as pending. All threads
      82: Once started, a message like "dmatest: Added 1 threads using dma0chan0" is
      171: dmatest: Added 1 threads using dma0chan2
      179: dmatest: Added 1 threads using dma0chan1
      181: dmatest: Added 1 threads using dma0chan2
      191: dmatest: Added 1 threads using dma0chan0
      192: dmatest: Added 1 threads using dma0chan3
      193: dmatest: Added 1 threads using dma0chan4
      [all …]
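
A minimal sketch of kicking off a run from user space through the module's parameter files; the channel name dma0chan0 is only an example, and dmatest must already be loaded::

    /* Sketch: configure and start dmatest via /sys/module/dmatest/parameters/. */
    #include <stdio.h>

    static void set_param(const char *name, const char *val)
    {
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/module/dmatest/parameters/%s", name);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return;
        }
        fputs(val, f);
        fclose(f);
    }

    int main(void)
    {
        set_param("channel", "dma0chan0"); /* channel to test (assumed name) */
        set_param("iterations", "1");      /* one pass per thread */
        set_param("run", "1");             /* start the test threads */
        return 0;
    }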
|
| /Documentation/arch/x86/ |
| D | topology.rst |
      24: threads, cores, packages, etc.
      37: - threads
      52: The number of threads in a package.
      97: A core consists of 1 or more threads. It does not matter whether the threads
      98: are SMT- or CMT-type threads.
      103: Threads chapter
      108: AMD's nomenclature for CMT threads is "Compute Unit Core". The kernel always
      115: The cpumask contains all online threads in the package to which a thread
      118: The number of online threads is also printed in /proc/cpuinfo "siblings."
      122: The cpumask contains all online threads in the core to which a thread
      [all …]
|
| D | sva.rst |
      104: thread can interact with a device. Threads that belong to the same
      118: it is not active in any of the threads of that process. It's loaded to the
      136: init optimization. Since #GP faults have to be handled on any threads that
      138: created threads might as well be treated in a consistent way.
      141: all threads in unbind, free the PASID lazily only on mm exit.
      145: PASID is still marked VALID in the PASID_MSR for any threads in the
      152: * Each process has many threads, but only one PASID.
      159: * Many threads within a process can share a single portal to access
      163: * The single process-wide PASID is used by all threads to interact
      191: MMIO. This doesn't scale as the number of threads becomes quite large. The
|
| D | mds.rst |
      21: buffers are partitioned between Hyper-Threads so cross thread forwarding is
      32: Hyper-Threads so cross thread leakage is possible.
      39: exploited eventually. Load ports are shared between Hyper-Threads so cross
      91: which is only exploitable cross Hyper-Thread when one of the Hyper-Threads
      175: repartitioning of the store buffer when one of the Hyper-Threads enters
      179: sibling threads are offline CPU buffer clearing is not required.
|
| /Documentation/power/ |
| D | freezing-of-tasks.rst |
      11: kernel threads are controlled during hibernation or system-wide suspend (on some
      20: threads) are regarded as 'freezable' and treated in a special way before the
      31: wakes up all the kernel threads. All freezable tasks must react to that by
      38: tasks are generally frozen before kernel threads.
      45: signal-handling code, but the freezable kernel threads need to call it
      74: kernel threads must call try_to_freeze() somewhere or use one of the
      90: - freezes all tasks (including kernel threads) because we can't freeze
      91: kernel threads without freezing userspace tasks
      94: - thaws only kernel threads; this is particularly useful if we need to do
      95: anything special in between thawing of kernel threads and thawing of
      [all …]
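
A minimal sketch of the freezable kernel-thread pattern these snippets describe: the thread opts in with set_freezable() and polls try_to_freeze() in its main loop::

    /* Sketch of a freezable kernel thread. */
    #include <linux/delay.h>
    #include <linux/freezer.h>
    #include <linux/kthread.h>

    static int worker_fn(void *data)
    {
        set_freezable();                /* opt in to the freezer */

        while (!kthread_should_stop()) {
            try_to_freeze();            /* park here while tasks are frozen */
            /* ... do one unit of work ... */
            msleep_interruptible(1000);
        }
        return 0;
    }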
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo.rst |
      106: the number of threads in each group can be configured separately.
      113: The number of threads used to complete bios. Since
      115: outside the vdo volume, threads of this type allow the vdo
      120: The number of threads used to issue bios to the underlying
      121: storage. Threads of this type allow the vdo volume to
      131: The number of threads used to do CPU-intensive work, such
      135: The number of threads used to manage data comparisons for
      140: The number of threads used to manage caching and locking
      145: The number of threads used to manage administration of the
      269: threads: A synonym of 'queues'
      [all …]
|
| /Documentation/netlink/specs/ |
| D | nfsd.yaml |
      69: name: threads
      151: name: threads-set
      152: doc: set the number of running threads
      158: - threads
      163: name: threads-get
      164: doc: get the number of running threads
      169: - threads
|
| /Documentation/filesystems/nfs/ |
| D | knfsd-stats.rst |
      45: which contains all the nfsd threads and all the CPUs in the system,
      73: This can happen because there are too few nfsd threads in the thread
      75: case configuring more nfsd threads will probably improve the
      78: threads-woken
      87: threads-timedout
      93: threads configured than can be used by the NFS workload. This is
      94: a clue that the number of nfsd threads can be reduced without
      109: one of three ways. An nfsd thread can be woken (threads-woken counts
      116: packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )
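
The last match is a closed-form relation between the pool counters; a trivial worked example, with counter values invented for illustration::

    #include <stdio.h>

    int main(void)
    {
        /* Invented sample counters, not real measurements. */
        unsigned long arrived = 100000, enqueued = 60000, woken = 39500;

        /* packets-deferred = packets-arrived - (sockets-enqueued + threads-woken) */
        printf("packets-deferred = %lu\n", arrived - (enqueued + woken));
        return 0;
    }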
|
| /Documentation/tools/rtla/ |
| D | common_timerlat_options.rst |
      5: **-t**. By default, the *timerlat* tracer uses FIFO:95 for *timerlat* threads,
      34: **-k**, **--kernel-threads**
      36: Use timerlat kernel-space threads, in contrast to **-u**.
      38: **-u**, **--user-threads**
      43: output. **--user-threads** will be used unless the user specifies **-k**.
|
| D | common_options.rst |
      3: Set the osnoise tracer to run the sample threads in the cpu-list.
      7: Run rtla control threads only on the given cpu-list.
      38: Set scheduling parameters for the osnoise tracer threads; the format to set the priority is:
      47: …tracer's threads. If the **-C** option is passed without arguments, the tracer's thread will inher…
|
| D | common_osnoise_description.rst |
      2: The *osnoise* tracer dispatches a kernel thread per CPU. These threads read the
      5: The *osnoise* tracer's threads take note of the delta between each time
|
| D | common_timerlat_description.rst |
      2: The *timerlat* tracer dispatches a kernel thread per CPU. These threads
|
| /Documentation/admin-guide/nfs/ |
| D | nfsd-admin-interfaces.rst |
      12: nfsd/threads.
      26: 0 to nfsd/threads. All locks and state are thrown away at that point.
      28: Between startup and shutdown, the number of threads may be adjusted up
      29: or down by additional writes to nfsd/threads or by writes to
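
A minimal sketch of such a write, assuming the nfsd filesystem is mounted at the usual /proc/fs/nfsd location::

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/fs/nfsd/threads", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        write(fd, "16", 2);     /* grow or shrink the pool to 16 threads */
        close(fd);
        return 0;
    }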
|
| /Documentation/locking/ |
| D | robust-futex-ABI.rst |
      25: threads in the kernel. Options on the sys_futex(2) system call support
      33: probably causing deadlock or other such failure of the other threads
      42: The pointer 'head' points to a structure in the thread's address space
      82: waiting for a lock on a thread's exit if that next thread used the futex
      102: It is anticipated that threads will use robust_futexes embedded in
      117: entirely by user level code in the contending threads, and by the
      123: There may exist thousands of futex lock structures in a thread's shared
      129: at different times by any of the threads with access to that region. The
      130: thread currently holding such a lock, if any, is marked with the thread's
      160: exiting thread's TID, then the kernel will do two things:
      [all …]
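
A minimal sketch of the per-thread registration step, using the UAPI robust_list_head layout. Note that glibc normally performs this registration for every thread it creates, so doing it by hand is illustrative only::

    #define _GNU_SOURCE
    #include <linux/futex.h>        /* struct robust_list_head */
    #include <sys/syscall.h>
    #include <stdio.h>
    #include <unistd.h>

    static struct robust_list_head head;

    int main(void)
    {
        head.list.next = &head.list;    /* empty list points back at itself */
        head.futex_offset = 0;          /* offset from list entry to futex word */
        head.list_op_pending = NULL;

        if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0) {
            perror("set_robust_list");
            return 1;
        }
        /* Lock words in shared memory would now hold gettid() while owned. */
        return 0;
    }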
|
| D | locktorture.rst |
      17: This torture test consists of creating a number of kernel threads which
      34: Number of kernel threads that will stress exclusive lock
      39: Number of kernel threads that will stress shared lock
      116: The number of seconds to keep the test threads affinitized
      142: (D): Min and max number of times threads failed to acquire the lock.
|
| /Documentation/userspace-api/ |
| D | unshare.rst |
      28: Most legacy operating system kernels support an abstraction of threads
      30: special resources and mechanisms to maintain these "threads". The Linux
      32: between processes and "threads". The kernel allows processes to share
      33: resources and thus they can achieve legacy "threads" behavior without
      35: power of implementing threads in this manner comes not only from
      38: threads. On Linux, at the time of thread creation using the clone system
      40: between threads.
      43: allows threads to selectively 'unshare' any resources that were being
      46: of the discussion on POSIX threads on Linux. unshare() augments the
      47: usefulness of Linux threads for applications that would like to control
      [all …]
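
A minimal sketch of a task unsharing its file-descriptor table; CLONE_FILES is just one of the flags unshare(2) accepts::

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        if (unshare(CLONE_FILES) == -1) {   /* unshare the fd table */
            perror("unshare");
            return 1;
        }
        /* From here on, new file descriptors are private to this task. */
        return 0;
    }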
|
| /Documentation/admin-guide/thermal/ |
| D | intel_powerclamp.rst |
      46: idle injection across all online CPU threads was introduced. The goal
      78: Injection is controlled by high priority kernel threads, spawned for
      81: These kernel threads, with SCHED_FIFO class, are created to perform
      85: effect. Threads are also bound to the CPU such that they cannot be
      86: migrated, unless the CPU is taken offline. In this case, threads
      223: Per-CPU kernel threads are started/stopped upon receiving
      225: keeps track of clamping kernel threads, even after they are migrated
      241: case, little can be done from the idle injection threads. In most
      265: counter summed over per CPU counting threads spawned for all running
      295: will not show idle injection kernel threads.
      [all …]
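
A plausible user-space sketch, assuming intel_powerclamp is exposed as a thermal cooling device and that cooling_device0 happens to be the right instance on the target (in practice, discover the index by reading each device's type attribute)::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/class/thermal/cooling_device0/cur_state", "w");

        if (!f) {
            perror("cur_state");
            return 1;
        }
        fputs("25", f);     /* request roughly 25% idle time injection */
        fclose(f);
        return 0;
    }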
|
| /Documentation/ABI/stable/ |
| D | sysfs-devices-system-cpu |
      108: Description: internal kernel map of cpuX's hardware threads within the same
      113: Description: human-readable list of cpuX's hardware threads within the same
      119: Description: internal kernel map of cpuX's hardware threads within the same
      124: Description: human-readable list of cpuX's hardware threads within the same
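
A minimal sketch reading the human-readable form for cpu0, assuming the stable thread_siblings_list attribute under topology/::

    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        FILE *f = fopen(
            "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list",
            "r");

        if (f && fgets(buf, sizeof(buf), f))
            printf("cpu0 shares its core with: %s", buf);  /* e.g. "0,4" */
        if (f)
            fclose(f);
        return 0;
    }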
|
| /Documentation/virt/kvm/ |
| D | vcpu-requests.rst |
      10: KVM supports an internal API enabling threads to request a VCPU thread to
      56: 2) Waking a sleeping VCPU. Sleeping VCPUs are VCPU threads outside guest
      57: mode that wait on waitqueues. Waking them removes the threads from
      58: the waitqueues, allowing the threads to run again. This behavior
      195: IPIs will only trigger guest mode exits for VCPU threads that are in guest
      225: As stated above, the IPI is only useful for VCPU threads in guest mode or
      246: VCPU threads are in modes other than IN_GUEST_MODE. For example, one case
      282: VCPU threads may need to consider requests before and/or after calling
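
A kernel-side sketch of the request/acknowledge pattern these snippets describe; KVM_REQ_EXAMPLE is a made-up request bit standing in for a real one::

    #include <linux/kvm_host.h>

    static void requester(struct kvm_vcpu *vcpu)
    {
        kvm_make_request(KVM_REQ_EXAMPLE, vcpu);  /* set the request bit */
        kvm_vcpu_kick(vcpu);    /* IPI or wake so the VCPU notices it */
    }

    static void vcpu_run_side(struct kvm_vcpu *vcpu)
    {
        if (kvm_check_request(KVM_REQ_EXAMPLE, vcpu)) {
            /* ... service the request before entering guest mode ... */
        }
    }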
|
| /Documentation/scheduler/ |
| D | completion.rst |
      8: If you have one or more threads that must wait for some kernel activity
      21: also result in more efficient code as all threads can continue execution
      26: the Linux scheduler. The event the threads on the waitqueue are waiting for
      123: threads) have ceased and the completion object is completely unused.
      135: exceeds the lifetime of any helper threads using the completion object,
      258: (decrementing) the done field of 'struct completion'. Waiting threads
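
A minimal sketch of the wait/signal pairing with the completion API: one thread sleeps until setup has finished, another announces it::

    #include <linux/completion.h>
    #include <linux/kthread.h>

    static DECLARE_COMPLETION(setup_done);

    static int waiter_fn(void *data)
    {
        wait_for_completion(&setup_done);   /* sleep until complete() */
        /* ... proceed; setup is guaranteed to have finished ... */
        return 0;
    }

    static int setup_fn(void *data)
    {
        /* ... perform the setup work ... */
        complete(&setup_done);              /* wake one waiter */
        return 0;
    }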
|
| D | sched-bwc.rst |
      15: slices as threads in the cgroup become runnable. Once all quota has been
      16: assigned any additional requests for quota will result in those threads being
      17: throttled. Throttled threads will not be able to run again until the next
      21: cfs_quota units at each period boundary. As threads consume this bandwidth it
      164: the slice may be returned to the global pool if all threads on that cpu become
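
A minimal sketch capping a group at half a CPU, assuming cgroup v2 and a hypothetical group path /sys/fs/cgroup/mygroup::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/fs/cgroup/mygroup/cpu.max", "w");

        if (!f) {
            perror("cpu.max");
            return 1;
        }
        fputs("50000 100000", f);   /* 50ms of quota per 100ms period */
        fclose(f);
        return 0;
    }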
|
| /Documentation/admin-guide/ |
| D | kernel-per-CPU-kthreads.rst |
      64: 1. Use irq affinity to force the irq threads to execute on
      97: both kernel threads and interrupts to execute elsewhere.
      171: forcing both kernel threads and interrupts to execute elsewhere.
      182: kernel threads and interrupts to execute elsewhere.
      205: calls and by forcing both kernel threads and interrupts
      219: calls and by forcing both kernel threads and interrupts
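
A minimal sketch of such an irq-affinity write; the IRQ number 24 is a placeholder for whatever interrupt the target device uses::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/irq/24/smp_affinity", "w");

        if (!f) {
            perror("smp_affinity");
            return 1;
        }
        fputs("4", f);  /* hex CPU mask: bit 2 set => CPU 2 only */
        fclose(f);
        return 0;
    }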
|
| /Documentation/trace/ |
| D | timerlat-tracer.rst |
      6: find sources of wakeup latencies of real-time threads. Like cyclictest,
      57: can also be influenced by blocking caused by threads. For example, by
      59: execution, or masking interrupts. Threads can also be delayed by the
      60: interference from other threads and IRQs.
      187: Timerlat allows user-space threads to use timerlat infrastructure to
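
A hedged sketch of a user-space timerlat thread, assuming the per-CPU timerlat_fd interface this document describes; the exact read semantics may differ by kernel version::

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t set;
        long long data;
        int fd;

        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* must run on the CPU whose fd we open */
        sched_setaffinity(0, sizeof(set), &set);

        fd = open("/sys/kernel/tracing/osnoise/per_cpu/cpu0/timerlat_fd",
                  O_RDONLY);
        if (fd < 0) {
            perror("timerlat_fd");
            return 1;
        }
        while (read(fd, &data, sizeof(data)) > 0)
            ;   /* each read returns once per timer activation */
        close(fd);
        return 0;
    }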
|
| /Documentation/networking/ |
| D | pktgen.rst |
      57: Kernel threads
      79: To support adding the same device to multiple threads, which is useful
      255: -t : ($THREADS) threads to start
      278: to the running thread's CPU (directly from smp_processor_id()).
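
A minimal sketch of scripting the /proc/net/pktgen interface from C; the device name eth0 and the packet count are examples only::

    #include <stdio.h>

    static void pg_write(const char *path, const char *cmd)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return;
        }
        fputs(cmd, f);
        fclose(f);
    }

    int main(void)
    {
        pg_write("/proc/net/pktgen/kpktgend_0", "add_device eth0");
        pg_write("/proc/net/pktgen/eth0", "count 10000");
        pg_write("/proc/net/pktgen/pgctrl", "start"); /* run all threads */
        return 0;
    }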
|
| /Documentation/mm/ |
| D | active_mm.rst |
      37: doesn't need any user mappings - all kernel threads basically fall into
      38: this category, but even "real" threads can temporarily say that for
|