/kernel/linux/linux-6.6/arch/csky/

Kconfig:
    13  select ARCH_INLINE_READ_LOCK if !PREEMPTION
    14  select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
    15  select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
    16  select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
    17  select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
    18  select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
    19  select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
    20  select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
    21  select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
    22  select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
    [all …]

/kernel/linux/linux-5.10/Documentation/locking/

preempt-locking.rst:
    35  protect these situations by disabling preemption around them.
    37  You can also use put_cpu() and get_cpu(), which will disable preemption.
    44  Under preemption, the state of the CPU must be protected. This is arch-
    47  section that must occur while preemption is disabled. Think what would happen
    50  upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
    51  preemption must be disabled around such regions.
    54  kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72  Data protection under preemption is achieved by disabling preemption for the
    84  n-times in a code path, and preemption will not be reenabled until the n-th
    86  preemption is not enabled.
    [all …]
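
The hits above describe two patterns: get_cpu()/put_cpu(), which bracket a
preemption-disabled region, and the fact that preempt_disable() nests n deep
and only the n-th preempt_enable() re-enables preemption. A minimal sketch of
both, assuming a hypothetical per-CPU counter my_percpu_counter that is not
from the document:

    #include <linux/percpu.h>
    #include <linux/preempt.h>
    #include <linux/smp.h>

    /* Hypothetical per-CPU variable, for illustration only. */
    static DEFINE_PER_CPU(unsigned long, my_percpu_counter);

    static void touch_this_cpu_counter(void)
    {
            int cpu = get_cpu();    /* disables preemption, returns this CPU's id */

            per_cpu(my_percpu_counter, cpu)++;

            /*
             * preempt_disable() nests: the count goes up once more here
             * and preemption stays off until the matching
             * preempt_enable() below AND the final put_cpu() have both
             * dropped it back to zero.
             */
            preempt_disable();
            per_cpu(my_percpu_counter, cpu)++;
            preempt_enable();

            put_cpu();              /* re-enables preemption */
    }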

locktypes.rst:
    59  preemption and interrupt disabling primitives. Contrary to other locking
    60  mechanisms, disabling preemption or interrupts are pure CPU local
    76  Spinning locks implicitly disable preemption and the lock / unlock functions
    103  PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106  PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162  by disabling preemption or interrupts.
    164  On non-PREEMPT_RT kernels local_lock operations map to the preemption and
    200  local_lock should be used in situations where disabling preemption or
    204  local_lock is not suitable to protect against preemption or interrupts on a
    220  preemption or interrupts is required, for example, to safely access
    [all …]
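
These locktypes.rst hits say that on non-PREEMPT_RT kernels local_lock maps to
the preemption and interrupt disabling primitives while naming the protected
scope. A sketch of the usual pattern, with a hypothetical per-CPU stats
structure whose names are not from the document:

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    struct cpu_stats {
            local_lock_t lock;
            unsigned long events;
    };

    static DEFINE_PER_CPU(struct cpu_stats, cpu_stats) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    static void count_event(void)
    {
            /*
             * On !PREEMPT_RT this is essentially preempt_disable(), but
             * unlike a bare preempt_disable() the protected scope is
             * visible to lockdep, and on PREEMPT_RT it becomes a
             * per-CPU spinlock instead.
             */
            local_lock(&cpu_stats.lock);
            this_cpu_inc(cpu_stats.events);
            local_unlock(&cpu_stats.lock);
    }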

/kernel/linux/linux-6.6/Documentation/locking/

preempt-locking.rst:
    35  protect these situations by disabling preemption around them.
    37  You can also use put_cpu() and get_cpu(), which will disable preemption.
    44  Under preemption, the state of the CPU must be protected. This is arch-
    47  section that must occur while preemption is disabled. Think what would happen
    50  upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
    51  preemption must be disabled around such regions.
    54  kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72  Data protection under preemption is achieved by disabling preemption for the
    84  n-times in a code path, and preemption will not be reenabled until the n-th
    86  preemption is not enabled.
    [all …]

locktypes.rst:
    59  preemption and interrupt disabling primitives. Contrary to other locking
    60  mechanisms, disabling preemption or interrupts are pure CPU local
    76  Spinning locks implicitly disable preemption and the lock / unlock functions
    103  PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106  PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162  by disabling preemption or interrupts.
    164  On non-PREEMPT_RT kernels local_lock operations map to the preemption and
    200  local_lock should be used in situations where disabling preemption or
    204  local_lock is not suitable to protect against preemption or interrupts on a
    217  preemption or interrupts is required, for example, to safely access
    [all …]

/kernel/linux/linux-6.6/kernel/

Kconfig.preempt:
    11  select PREEMPTION
    15  prompt "Preemption Model"
    19  bool "No Forced Preemption (Server)"
    22  This is the traditional Linux preemption model, geared towards
    33  bool "Voluntary Kernel Preemption (Desktop)"
    38  "explicit preemption points" to the kernel code. These new
    39  preemption points have been selected to reduce the maximum
    61  otherwise not be about to reach a natural preemption point.
    73  select PREEMPTION
    92  config PREEMPTION
    [all …]
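
The "Voluntary Kernel Preemption" model in these hits works by adding explicit
preemption points to the kernel rather than allowing preemption at arbitrary
instructions. A hedged sketch of what such a point looks like in a
long-running loop; struct item and process_one() are hypothetical stand-ins
for whatever per-item work a driver does:

    #include <linux/sched.h>

    struct item;                            /* hypothetical */
    void process_one(struct item *item);    /* hypothetical */

    static void process_many_items(struct item *items, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    process_one(&items[i]);

                    /*
                     * Explicit preemption point: under PREEMPT_VOLUNTARY
                     * a higher-priority task can get the CPU here instead
                     * of waiting for the whole batch to finish.
                     */
                    cond_resched();
            }
    }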

Kconfig.locks:
    104  # - DEBUG_SPINLOCK=n and PREEMPTION=n
    142  depends on !PREEMPTION || ARCH_INLINE_SPIN_UNLOCK_IRQ
    171  depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK
    179  depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK_IRQ
    208  depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK
    216  depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK_IRQ

/kernel/linux/linux-5.10/drivers/gpu/drm/msm/adreno/

a5xx_gpu.h:
    56  * In order to do lockless preemption we use a simple state machine to progress
    59  * PREEMPT_NONE - no preemption in progress. Next state START.
    60  * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
    64  * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
    66  * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
    68  * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
    83  * CPU to store the state for preemption. The record itself is much larger
    86  * There is a preemption record assigned per ringbuffer. When the CPU triggers a
    87  * preemption, it fills out the record with the useful information (wptr, ring
    89  * the preemption. When a ring is switched out, the CP will save the ringbuffer
    [all …]

a5xx_preempt.c:
    9  * Try to transition the preemption state from old to new. Return
    22  * Force the preemption state to the specified state. This is used in cases
    30  * preemption or in the interrupt handler so barriers are needed in set_preempt_state()
    86  DRM_DEV_ERROR(dev->dev, "%s: preemption timed out\n", gpu->name); in a5xx_preempt_timer()
    90  /* Try to trigger a preemption switch */
    102  * Try to start preemption by moving from NONE to START. If in a5xx_preempt_trigger()
    103  * unsuccessful, a preemption is already in flight in a5xx_preempt_trigger()
    117  * It's possible that while a preemption request is in progress in a5xx_preempt_trigger()
    139  /* Set the address of the incoming preemption record */ in a5xx_preempt_trigger()
    146  /* Start a timer to catch a stuck preemption */ in a5xx_preempt_trigger()
    [all …]
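
The first hit describes an atomic old-to-new transition through the state
machine documented in a5xx_gpu.h above. A sketch of that compare-and-swap
pattern, abbreviated and not the driver's exact code; the enum lists only the
states named in the comments:

    #include <linux/atomic.h>
    #include <linux/types.h>

    enum preempt_state {
            PREEMPT_NONE = 0,
            PREEMPT_START,
            PREEMPT_TRIGGERED,
            PREEMPT_FAULTED,
            PREEMPT_PENDING,
    };

    static atomic_t preempt_state = ATOMIC_INIT(PREEMPT_NONE);

    /*
     * Move old -> new only if nobody changed the state underneath us;
     * atomic_cmpxchg() keeps the trigger path and the interrupt handler
     * from racing on the same transition.
     */
    static bool try_preempt_state(enum preempt_state old,
                                  enum preempt_state new)
    {
            return atomic_cmpxchg(&preempt_state, old, new) == old;
    }

    /*
     * Trigger path: if NONE -> START fails, a preemption is already in
     * flight and this request is simply dropped.
     */
    static void preempt_trigger(void)
    {
            if (!try_preempt_state(PREEMPT_NONE, PREEMPT_START))
                    return;
            /* ... evaluate and fire the preemption on the hardware ... */
    }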

/kernel/linux/linux-6.6/drivers/gpu/drm/msm/adreno/

a5xx_gpu.h:
    58  * In order to do lockless preemption we use a simple state machine to progress
    61  * PREEMPT_NONE - no preemption in progress. Next state START.
    62  * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
    66  * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
    68  * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
    70  * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
    85  * CPU to store the state for preemption. The record itself is much larger
    88  * There is a preemption record assigned per ringbuffer. When the CPU triggers a
    89  * preemption, it fills out the record with the useful information (wptr, ring
    91  * the preemption. When a ring is switched out, the CP will save the ringbuffer
    [all …]

a5xx_preempt.c:
    9  * Try to transition the preemption state from old to new. Return
    22  * Force the preemption state to the specified state. This is used in cases
    30  * preemption or in the interrupt handler so barriers are needed in set_preempt_state()
    89  DRM_DEV_ERROR(dev->dev, "%s: preemption timed out\n", gpu->name); in a5xx_preempt_timer()
    93  /* Try to trigger a preemption switch */
    105  * Serialize preemption start to ensure that we always make in a5xx_preempt_trigger()
    112  * Try to start preemption by moving from NONE to START. If in a5xx_preempt_trigger()
    113  * unsuccessful, a preemption is already in flight in a5xx_preempt_trigger()
    127  * It's possible that while a preemption request is in progress in a5xx_preempt_trigger()
    151  /* Set the address of the incoming preemption record */ in a5xx_preempt_trigger()
    [all …]

/kernel/linux/linux-6.6/arch/loongarch/

Kconfig:
    23  select ARCH_INLINE_READ_LOCK if !PREEMPTION
    24  select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
    25  select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
    26  select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
    27  select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
    28  select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
    29  select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
    30  select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
    31  select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
    32  select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
    [all …]

/kernel/linux/linux-5.10/kernel/

Kconfig.preempt:
    4  prompt "Preemption Model"
    8  bool "No Forced Preemption (Server)"
    10  This is the traditional Linux preemption model, geared towards
    21  bool "Voluntary Kernel Preemption (Desktop)"
    25  "explicit preemption points" to the kernel code. These new
    26  preemption points have been selected to reduce the maximum
    41  select PREEMPTION
    49  otherwise not be about to reach a natural preemption point.
    61  select PREEMPTION
    80  config PREEMPTION

Kconfig.locks:
    104  # - DEBUG_SPINLOCK=n and PREEMPTION=n
    142  depends on !PREEMPTION || ARCH_INLINE_SPIN_UNLOCK_IRQ
    171  depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK
    179  depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK_IRQ
    208  depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK
    216  depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK_IRQ

/kernel/linux/linux-5.10/arch/loongarch/

Kconfig:
    15  select ARCH_INLINE_READ_LOCK if !PREEMPTION
    16  select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
    17  select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
    18  select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
    19  select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
    20  select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
    21  select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
    22  select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
    23  select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
    24  select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
    [all …]

/kernel/linux/linux-6.6/Documentation/core-api/

entry.rst:
    10  * Preemption counter
    167  irq_enter_rcu() updates the preemption count which makes in_hardirq()
    172  irq_exit_rcu() handles interrupt time accounting, undoes the preemption
    175  In theory, the preemption count could be updated in irqentry_enter(). In
    176  practice, deferring this update to irq_enter_rcu() allows the preemption-count
    180  preemption count has not yet been updated with the HARDIRQ_OFFSET state.
    182  Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
    185  also requires that HARDIRQ_OFFSET has been removed from the preemption count.
    215  * Preemption counter
    223  Note that the update of the preemption counter has to be the first
    [all …]
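
These entry.rst hits describe the HARDIRQ_OFFSET bookkeeping: irq_enter_rcu()
adds it so in_hardirq() becomes true, and irq_exit_rcu() must remove it before
softirqs are handled. A simplified sketch of just that accounting; the real
functions also do interrupt time accounting, tracing and more, and
invoke_softirq() here is a stand-in for the kernel-internal softirq dispatch:

    #include <linux/interrupt.h>
    #include <linux/preempt.h>

    void invoke_softirq(void);      /* stand-in, kernel-internal in reality */

    static void irq_enter_sketch(void)
    {
            /* in_hardirq() is true from this point on */
            preempt_count_add(HARDIRQ_OFFSET);
    }

    static void irq_exit_sketch(void)
    {
            /* hardirq state must be gone before softirqs run */
            preempt_count_sub(HARDIRQ_OFFSET);
            if (!in_interrupt() && local_softirq_pending())
                    invoke_softirq();
    }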

local_ops.rst:
    42  making sure that we modify it from within a preemption safe context. It is
    70  * Preemption (or interrupts) must be disabled when using local ops in
    76  preemption already disabled. I suggest, however, to explicitly
    77  disable preemption anyway to make sure it will still work correctly on
    104  local atomic operations: it makes sure that preemption is disabled around write
    110  If you are already in a preemption-safe context, you can use
    161  * preemptible context (it disables preemption) :
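
Following the document's own advice to disable preemption explicitly around
local ops, a minimal sketch with a per-CPU local_t counter; this condenses the
sample usage local_ops.rst itself builds up:

    #include <linux/percpu.h>
    #include <linux/preempt.h>
    #include <asm/local.h>

    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

    static void increment_counter(void)
    {
            /*
             * Explicit preempt_disable() guarantees the read-modify-write
             * lands on this CPU's counter, even on architectures where
             * local_t operations are not a single instruction.
             */
            preempt_disable();
            local_inc(this_cpu_ptr(&counters));
            preempt_enable();
    }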

/kernel/linux/linux-5.10/include/linux/

preempt.h:
    7  * preempt_count (used for kernel preemption, interrupt count, etc.)
    14  * We put the hardirq and softirq counter into the preemption
    17  * - bits 0-7 are the preemption count (max preemption depth: 256)
    59  * Disable preemption until the scheduler is running -- use an unconditional
    125  * Which need to disable both preemption (CONFIG_PREEMPT_COUNT) and
    237  * Even if we don't have any preemption, we need preempt disable/enable
    257  * Modules have no business playing preemption tricks.
    300  * preempt_notifier - key for installing preemption notifiers
    328  * Maps to preempt_disable() which also disables preemption. Use
    330  * but not necessarily preemption.
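
The preempt_count these hits refer to packs several fields into one per-task
integer, so a single read can classify the current context. A sketch using
the masks from this header; the values match the layout documented in the
5.10 header comment (PREEMPT_MASK 0x000000ff, SOFTIRQ_MASK 0x0000ff00,
HARDIRQ_MASK 0x000f0000):

    #include <linux/preempt.h>
    #include <linux/printk.h>

    static void classify_context(void)
    {
            if (preempt_count() & HARDIRQ_MASK)
                    pr_info("in hardirq\n");
            else if (preempt_count() & SOFTIRQ_MASK)
                    pr_info("in softirq, or running with BH disabled\n");
            else if (preempt_count() & PREEMPT_MASK)
                    pr_info("task context, preemption disabled\n");
            else
                    pr_info("preemptible task context\n");
    }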

/kernel/linux/linux-6.6/include/linux/

preempt.h:
    7  * preempt_count (used for kernel preemption, interrupt count, etc.)
    15  * We put the hardirq and softirq counter into the preemption
    18  * - bits 0-7 are the preemption count (max preemption depth: 256)
    60  * Disable preemption until the scheduler is running -- use an unconditional
    160  /* Locks on RT do not disable preemption */
    169  * Which need to disable both preemption (CONFIG_PREEMPT_COUNT) and
    281  * Even if we don't have any preemption, we need preempt disable/enable
    301  * Modules have no business playing preemption tricks.
    344  * preempt_notifier - key for installing preemption notifiers
    437  * preempt_disable_nested - Disable preemption inside a normally preempt disabled section
    [all …]

/kernel/linux/linux-5.10/tools/testing/selftests/kvm/x86_64/

vmx_preemption_timer_test.c:
    3  * VMX-preemption timer test
    62  * Now wait for the preemption timer to fire and in l2_guest_code()
    86  * Check for Preemption timer support in l1_guest_code()
    129  * Ensure the exit from L2 is due to preemption timer expiry in l1_guest_code()
    183  pr_info("will skip vmx preemption timer checks\n"); in main()
    214  * From L1's perspective verify Preemption timer hasn't in main()
    216  * From L2's perspective verify Preemption timer hasn't in main()

/kernel/linux/linux-5.10/Documentation/core-api/

local_ops.rst:
    42  making sure that we modify it from within a preemption safe context. It is
    70  * Preemption (or interrupts) must be disabled when using local ops in
    76  preemption already disabled. I suggest, however, to explicitly
    77  disable preemption anyway to make sure it will still work correctly on
    104  local atomic operations: it makes sure that preemption is disabled around write
    110  If you are already in a preemption-safe context, you can use
    161  * preemptible context (it disables preemption) :

/kernel/linux/linux-5.10/kernel/rcu/

Kconfig:
    19  default y if PREEMPTION
    32  default y if !PREEMPTION && !SMP
    81  def_bool PREEMPTION
    84  only voluntary context switch (not preemption!), idle, and
    92  only context switch (including preemption) and user-mode
    231  the "p" for RCU-preempt (PREEMPTION kernels) and "s" for RCU-sched
    232  (!PREEMPTION kernels). Nothing prevents this kthread from running

/kernel/linux/linux-6.6/tools/testing/selftests/kvm/x86_64/

vmx_preemption_timer_test.c:
    3  * VMX-preemption timer test
    61  * Now wait for the preemption timer to fire and in l2_guest_code()
    85  * Check for Preemption timer support in l1_guest_code()
    128  * Ensure the exit from L2 is due to preemption timer expiry in l1_guest_code()
    204  * From L1's perspective verify Preemption timer hasn't in main()
    206  * From L2's perspective verify Preemption timer hasn't in main()

/kernel/linux/linux-6.6/arch/ia64/kernel/

smp.c:
    141  * Called with preemption disabled.
    151  * Called with preemption disabled.
    165  * Called with preemption disabled.
    178  * Called with preemption disabled.
    191  * Called with preemption disabled.
    220  * Called with preemption disabled.
    230  * Called with preemption disabled.

/kernel/linux/linux-5.10/arch/ia64/kernel/

smp.c:
    141  * Called with preemption disabled.
    151  * Called with preemption disabled.
    165  * Called with preemption disabled.
    178  * Called with preemption disabled.
    191  * Called with preemption disabled.
    220  * Called with preemption disabled.
    230  * Called with preemption disabled.