Searched refs:preemption (Results 1 – 25 of 59) sorted by relevance
/kernel/linux/linux-5.10/Documentation/locking/ |
D | preempt-locking.rst |
  35 protect these situations by disabling preemption around them.
  37 You can also use put_cpu() and get_cpu(), which will disable preemption.
  44 Under preemption, the state of the CPU must be protected. This is arch-
  47 section that must occur while preemption is disabled. Think what would happen
  50 upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
  51 preemption must be disabled around such regions.
  54 kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
  72 Data protection under preemption is achieved by disabling preemption for the
  84 n-times in a code path, and preemption will not be reenabled until the n-th
  86 preemption is not enabled.
  [all …]
|
D | locktypes.rst |
  59 preemption and interrupt disabling primitives. Contrary to other locking
  60 mechanisms, disabling preemption or interrupts are pure CPU local
  76 Spinning locks implicitly disable preemption and the lock / unlock functions
  103 PI has limitations on non-PREEMPT_RT kernels due to preemption and
  106 PI clearly cannot preempt preemption-disabled or interrupt-disabled
  162 by disabling preemption or interrupts.
  164 On non-PREEMPT_RT kernels local_lock operations map to the preemption and
  200 local_lock should be used in situations where disabling preemption or
  204 local_lock is not suitable to protect against preemption or interrupts on a
  220 preemption or interrupts is required, for example, to safely access
  [all …]
|
D | seqlock.rst |
  47 preemption, preemption must be explicitly disabled before entering the
  72 /* Serialized context with disabled preemption */
  107 For lock types which do not implicitly disable preemption, preemption
|
D | hwspinlock.rst |
  95 Upon a successful return from this function, preemption is disabled so
  111 Upon a successful return from this function, preemption and the local
  127 Upon a successful return from this function, preemption is disabled,
  178 Upon a successful return from this function, preemption is disabled so
  195 Upon a successful return from this function, preemption and the local
  211 Upon a successful return from this function, preemption is disabled,
  268 Upon a successful return from this function, preemption and local
  280 Upon a successful return from this function, preemption is reenabled,
|
D | ww-mutex-design.rst |
  53 running transaction. Note that this is not the same as process preemption. A
  350 The Wound-Wait preemption is implemented with a lazy-preemption scheme:
  354 wounded status and retries. A great benefit of implementing preemption in
|
/kernel/linux/linux-5.10/kernel/ |
D | Kconfig.preempt |
  10 This is the traditional Linux preemption model, geared towards
  25 "explicit preemption points" to the kernel code. These new
  26 preemption points have been selected to reduce the maximum
  49 otherwise not be about to reach a natural preemption point.
|
/kernel/linux/linux-5.10/Documentation/core-api/ |
D | local_ops.rst |
  42 making sure that we modify it from within a preemption safe context. It is
  76 preemption already disabled. I suggest, however, to explicitly
  77 disable preemption anyway to make sure it will still work correctly on
  104 local atomic operations: it makes sure that preemption is disabled around write
  110 If you are already in a preemption-safe context, you can use
  161 * preemptible context (it disables preemption) :
|
D | this_cpu_ops.rst |
  20 necessary to disable preemption or interrupts to ensure that the
  44 The following this_cpu() operations with implied preemption protection
  46 preemption and interrupts::
  111 reserved for a specific processor. Without disabling preemption in the
  143 preemption has been disabled. The pointer is then used to
  144 access local per cpu data in a critical section. When preemption
  231 preemption. If a per cpu variable is not used in an interrupt context
|
/kernel/linux/linux-5.10/Documentation/RCU/ |
D | NMI-RCU.rst |
  46 The do_nmi() function processes each NMI. It first disables preemption
  51 preemption is restored.
  96 CPUs complete any preemption-disabled segments of code that they were
  98 Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
|
/kernel/linux/linux-5.10/tools/lib/traceevent/Documentation/ |
D | libtraceevent-record_parse.txt |
  41 The _tep_data_preempt_count()_ function gets the preemption count from the
  64 preemption count.
  95 /* Got the preemption count */
|
D | libtraceevent-event_print.txt |
  33 current context, and preemption count.
  53 Field 4 is the preemption count.
|
/kernel/linux/linux-5.10/Documentation/virt/kvm/devices/ |
D | arm-vgic.rst |
  99 maximum possible 128 preemption levels. The semantics of the register
  100 indicate if any interrupts in a given preemption level are in the active
  103 Thus, preemption level X has one or more active interrupts if and only if:
  107 Bits for undefined preemption levels are RAZ/WI.
|
/kernel/linux/linux-5.10/arch/arc/kernel/ |
D | entry-compact.S |
  152 ; if L2 IRQ interrupted a L1 ISR, disable preemption
  157 ; -preemption off IRQ, user task in syscall picked to run
  172 ; bump thread_info->preempt_count (Disable preemption)
  367 ; decrement thread_info->preempt_count (re-enable preemption)
|
D | entry.S | 298 ; --- (Slow Path #1) task preemption ---
|
/kernel/liteos_a/arch/ |
D | Kconfig | 26 This option will support high priority interrupt preemption.
|
/kernel/liteos_m/arch/ |
D | Kconfig | 39 This option will support high priority interrupt preemption.
|
/kernel/linux/linux-5.10/Documentation/arm/ |
D | kernel_mode_neon.rst |
  14 preemption disabled
  58 * NEON/VFP code is executed with preemption disabled.
|
/kernel/linux/linux-5.10/include/rdma/ |
D | opa_port_info.h | 321 } preemption; member
|
/kernel/linux/linux-5.10/kernel/rcu/ |
D | Kconfig |
  84 only voluntary context switch (not preemption!), idle, and
  91 only context switch (including preemption) and user-mode
|
/kernel/linux/linux-5.10/Documentation/kernel-hacking/ |
D | locking.rst |
  16 With the wide availability of HyperThreading, and preemption in the
  136 is set, then spinlocks simply disable preemption, which is sufficient to
  137 prevent any races. For most purposes, we can think of preemption as
  1138 these simply disable preemption so the reader won't go to sleep while
  1241 Now, because the 'read lock' in RCU is simply disabling preemption, a
  1242 caller which always has preemption disabled between calling
  1304 preemption disabled. This also means you need to be in user context:
  1393 preemption
  1399 preemption, even on UP.
|
/kernel/linux/linux-5.10/drivers/gpu/drm/i915/ |
D | Kconfig.profile | 45 How long to wait (in milliseconds) for a preemption event to occur
|
/kernel/linux/linux-5.10/Documentation/devicetree/bindings/net/dsa/ |
D | ocelot.txt | 23 TSN frame preemption.
|
/kernel/linux/linux-5.10/Documentation/trace/ |
D | tracepoints.rst | 100 the probe. This, and the fact that preemption is disabled around the
|
D | ftrace.rst |
  29 disabled and enabled, as well as for preemption and from a time
  757 time for which preemption is disabled.
  762 records the largest time for which irqs and/or preemption
  1538 When preemption is disabled, we may be able to receive
  1540 priority task must wait for preemption to be enabled again
  1543 The preemptoff tracer traces the places that disable preemption.
  1545 which preemption was disabled. The control of preemptoff tracer
  1670 preemption disabled for the longest times is helpful. But
  1671 sometimes we would like to know when either preemption and/or
  1693 preemption is disabled. This total time is the time that we can
  [all …]
|
/kernel/linux/linux-5.10/arch/m68k/ifpsp060/ |
D | iskeleton.S | 261 | Linux/m68k: perhaps reenable preemption here...
|