Proper Locking Under a Preemptible Kernel: Keeping Kernel Code Preempt-Safe

INTRODUCTION


A preemptible kernel creates new locking issues. The issues are the same as
those under SMP: concurrency and reentrancy. Thankfully, the Linux preemptible
kernel model leverages existing SMP locking mechanisms, so the kernel requires
explicit additional locking in very few additional situations.

This document is for all kernel hackers. Developing code in the kernel
requires protecting these situations.

RULE #1: Per-CPU data structures need explicit protection

Two similar problems arise. An example code snippet:

	struct this_needs_locking tux[NR_CPUS];
	tux[smp_processor_id()] = some_value;
	/* task is preempted here... */
	something = tux[smp_processor_id()];

First, since the data is per-CPU, it may not have explicit SMP locking, but
may require it otherwise. Second, when a preempted task is finally rescheduled,
the previous value of smp_processor_id() may not equal the current one. You must
protect these situations by disabling preemption around them.
You can also use get_cpu() and put_cpu(): get_cpu() disables preemption and
returns the current processor id, and put_cpu() re-enables preemption.
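For illustration, here is a minimal sketch of the snippet above rewritten with
get_cpu()/put_cpu(); this_needs_locking and the values are the hypothetical
placeholders from the earlier example:

	struct this_needs_locking tux[NR_CPUS];
	int cpu;

	cpu = get_cpu();	/* disables preemption, returns CPU id */
	tux[cpu] = some_value;
	/* cannot be preempted here, so the CPU id stays valid */
	something = tux[cpu];
	put_cpu();		/* re-enables preemption */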

RULE #2: CPU state must be protected


Under preemption, the state of the CPU must be protected. This is arch-
dependent, but includes CPU structures and state not preserved over a context
switch. For example, on x86, entering and exiting FPU mode is now a critical
section that must occur while preemption is disabled. Think what would happen
if the kernel is executing a floating-point instruction and is then preempted.
Remember, the kernel does not save FPU state except for user tasks. Therefore,
upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
preemption must be disabled around such regions.
Note, some FPU functions are already explicitly preempt-safe. For example,
kernel_fpu_begin() and kernel_fpu_end() will disable and enable preemption.
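As a sketch of the rule, kernel FPU use is bracketed like this (assuming the
modern <asm/fpu/api.h> header; do_vector_math() is a hypothetical stand-in for
real SIMD work):

	#include <asm/fpu/api.h>

	void sum_with_simd(void)
	{
		kernel_fpu_begin();	/* disables preemption, saves FPU state */
		do_vector_math();	/* FPU/SIMD instructions are safe here */
		kernel_fpu_end();	/* restores state, re-enables preemption */
	}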

RULE #3: Lock acquire and release must be performed by same task

A lock acquired in one task must be released by the same task. This
means you can't do oddball things like acquire a lock and go off to
play while another task releases it. If you want to do something
like this, acquire and release the lock in the same code path and
have the caller wait on an event signaled by the other task.
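A sketch of the sanctioned pattern: instead of handing a held lock to another
task, each task takes and drops the lock itself and a completion carries the
hand-off. The names here are illustrative:

	#include <linux/spinlock.h>
	#include <linux/completion.h>

	static DEFINE_SPINLOCK(data_lock);
	static DECLARE_COMPLETION(data_ready);

	void producer(void)
	{
		spin_lock(&data_lock);
		/* ... update shared data ... */
		spin_unlock(&data_lock);	/* same task releases it */
		complete(&data_ready);		/* signal the waiter instead */
	}

	void consumer(void)
	{
		wait_for_completion(&data_ready);
		spin_lock(&data_lock);
		/* ... read shared data ... */
		spin_unlock(&data_lock);
	}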

SOLUTION


Data protection under preemption is achieved by disabling preemption for the
duration of the critical region.

	preempt_enable()		decrement the preempt counter
	preempt_disable()		increment the preempt counter
	preempt_enable_no_resched()	decrement, but do not immediately preempt
	preempt_check_resched()		if needed, reschedule
	preempt_count()			return the preempt counter

The functions are nestable. In other words, you can call preempt_disable
n times in a code path, and preemption will not be reenabled until the n-th
call to preempt_enable. The preempt statements define to nothing if
preemption is not enabled.
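A small sketch of the nesting behavior, with illustrative helper names:

	static void inner(void)
	{
		preempt_disable();	/* count: 1 -> 2 */
		/* ... touch per-CPU data ... */
		preempt_enable();	/* count: 2 -> 1, caller is still preempt-safe */
	}

	static void outer(void)
	{
		preempt_disable();	/* count: 0 -> 1 */
		inner();		/* nesting is fine */
		preempt_enable();	/* count: 1 -> 0, preemption may occur again */
	}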
Note that you do not need to explicitly prevent preemption if you are holding
any locks or interrupts are disabled, since preemption is implicitly disabled
in those cases.
But keep in mind that 'irqs disabled' is a fundamentally unsafe way of
disabling preemption - any cond_resched() or cond_resched_lock() might trigger
a reschedule if the preempt count is 0. A simple printk() might trigger a
reschedule. So use this implicit preemption-disabling property only if you
know that the affected codepath does not do any of this. Best policy is to use
this only for small, atomic code that you wrote and which calls no complex
functions.
Example:

	cpucache_t *cc; /* this is per-CPU */
	preempt_disable();
	cc = cc_data(searchp);
	if (cc && cc->avail) {
		__free_block(searchp, cc_entry(cc), cc->avail);
		cc->avail = 0;
	}
	preempt_enable();
	return 0;
Notice how the preemption statements must encompass every reference of the
critical variables. Another example:

	int buf[NR_CPUS];
	set_cpu_val(buf);
	if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
	spin_lock(&buf_lock);
	/* ... */
This code is not preempt-safe, but see how easily we can fix it by simply
moving the spin_lock up two lines.
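For clarity, here is the same snippet with the lock moved up as suggested;
holding the spinlock implicitly disables preemption, so smp_processor_id()
stays stable across the accesses (buf_lock and set_cpu_val() are the
placeholders from the example above):

	int buf[NR_CPUS];
	spin_lock(&buf_lock);	/* taken before any per-CPU access */
	set_cpu_val(buf);
	if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
	/* ... */
	spin_unlock(&buf_lock);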

PREVENTING PREEMPTION USING INTERRUPT DISABLING


It is possible to prevent a preemption event using local_irq_disable and
local_irq_save. Note, when doing so, you must be very careful to not cause
an event that would set need_resched and result in a preemption check. When
in doubt, rely on locking or explicit preemption disabling.
Note that in 2.5, interrupt disabling is now only per-CPU (i.e. local).
An additional concern is proper usage of local_irq_disable and local_irq_save.
These may be used to protect from preemption; however, on exit, if preemption
may be enabled, a test to see if preemption is required should be made. If
these are called from the spin_lock and read/write lock macros, the right thing
is done. They may also be called within a spin-lock protected region; however,
if they are ever called outside of this context, a test for preemption should
be made. Do note that calls from interrupt context or bottom half/tasklet
context are also protected by preemption locks and so may use the versions
which do not check preemption.
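A minimal sketch of the safe pattern, assuming the interrupt-disabled region
sits inside a spin-lock protected section so no explicit preemption check is
needed on exit (some_lock and per_cpu_counter are illustrative names):

	unsigned long flags;

	spin_lock(&some_lock);		/* implicitly disables preemption */
	local_irq_save(flags);		/* now also safe from local interrupts */
	per_cpu_counter[smp_processor_id()]++;
	local_irq_restore(flags);
	spin_unlock(&some_lock);	/* preemption check happens here */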