Review Checklist for RCU Patches


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.	Is RCU being applied to a read-mostly situation?  If the data
	structure is updated more than about 10% of the time, then you
	should strongly consider some other approach, unless detailed
	performance measurements show that RCU is nonetheless the right
	tool for the job.  Yes, RCU does reduce read-side overhead by
	increasing write-side overhead, which is exactly why normal uses
	of RCU will do much more reading than updating.

	Another exception is where performance is not an issue, and RCU
	provides a simpler implementation.  An example of this situation
	is the dynamic NMI code in the Linux 2.6 kernel, at least on
	architectures where NMIs are rare.

	Yet another exception is where the low real-time latency of RCU's
	read-side primitives is critically important.

1.	Does the update code have proper mutual exclusion?

	RCU does allow -readers- to run (almost) naked, but -writers- must
	still use some sort of mutual exclusion, such as:

	a.	locking,
	b.	atomic operations, or
	c.	restricting updates to a single task.

	If you choose #b, be prepared to describe how you have handled
	memory barriers on weakly ordered machines (pretty much all of
	them -- even x86 allows later loads to be reordered to precede
	earlier stores), and be prepared to explain why this added
	complexity is worthwhile.  If you choose #c, be prepared to
	explain how this single task does not become a major bottleneck on
	big multiprocessor machines (for example, if the task is updating
	information relating to itself that other tasks can read, there
	by definition can be no bottleneck).

2.	Do the RCU read-side critical sections make proper use of
	rcu_read_lock() and friends?  These primitives are needed
	to prevent grace periods from ending prematurely, which
	could result in data being unceremoniously freed out from
	under your read-side code, which can greatly increase the
	actuarial risk of your kernel.

	As a rough rule of thumb, any dereference of an RCU-protected
	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
	rcu_read_lock_sched(), or by the appropriate update-side lock.
	Disabling of preemption can serve as rcu_read_lock_sched(), but
	is less readable.
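
	As a minimal sketch (the global pointer "gp", its spinlock
	"gp_lock", and "struct foo" are illustrative names, not taken
	from any particular subsystem), a reader might look like this:

	struct foo {
		int a;
	};
	struct foo __rcu *gp;		/* Updated only under gp_lock. */
	DEFINE_SPINLOCK(gp_lock);

	int read_foo_a(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();		/* Begin read-side critical section. */
		p = rcu_dereference(gp);	/* Fetch the RCU-protected pointer. */
		if (p)
			ret = p->a;
		rcu_read_unlock();		/* End critical section; p may now go stale. */
		return ret;
	}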

3.	Does the update code tolerate concurrent accesses?

	The whole point of RCU is to permit readers to run without
	any locks or atomic operations.  This means that readers will
	be running while updates are in progress.  There are a number
	of ways to handle this concurrency, depending on the situation:

	a.	Use the RCU variants of the list and hlist update
		primitives to add, remove, and replace elements on
		an RCU-protected list.  Alternatively, use the other
		RCU-protected data structures that have been added to
		the Linux kernel.

		This is almost always the best approach.

	b.	Proceed as in (a) above, but also maintain per-element
		locks (that are acquired by both readers and writers)
		that guard per-element state.  Of course, fields that
		the readers refrain from accessing can be guarded by
		some other lock acquired only by updaters, if desired.

		This works quite well, also.

	c.	Make updates appear atomic to readers.  For example,
		pointer updates to properly aligned fields will
		appear atomic, as will individual atomic primitives.
		Sequences of operations performed under a lock will -not-
		appear to be atomic to RCU readers, nor will sequences
		of multiple atomic primitives.

		This can work, but is starting to get a bit tricky.

	d.	Carefully order the updates and the reads so that
		readers see valid data at all phases of the update.
		This is often more difficult than it sounds, especially
		given modern CPUs' tendency to reorder memory references.
		One must usually liberally sprinkle memory barriers
		(smp_wmb(), smp_rmb(), smp_mb()) through the code,
		making it difficult to understand and to test.

		It is usually better to group the changing data into
		a separate structure, so that the change may be made
		to appear atomic by updating a pointer to reference
		a new structure containing updated values.
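
	A hedged sketch of this pointer-swap pattern, reusing the
	illustrative "struct foo", "gp", and "gp_lock" from rule 2 and
	assuming gp is known to be non-NULL:

	void update_foo_a(int new_a)
	{
		struct foo *new_fp, *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		if (!new_fp)
			return;				/* Or report the allocation failure. */
		spin_lock(&gp_lock);			/* Update-side mutual exclusion. */
		old_fp = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
		*new_fp = *old_fp;			/* Copy the old version ... */
		new_fp->a = new_a;			/* ... modify the copy ... */
		rcu_assign_pointer(gp, new_fp);		/* ... and publish it atomically. */
		spin_unlock(&gp_lock);
		synchronize_rcu();			/* Wait for pre-existing readers. */
		kfree(old_fp);				/* Now the old version is unreachable. */
	}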

4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
	are weakly ordered -- even x86 CPUs allow later loads to be
	reordered to precede earlier stores.  RCU code must take all of
	the following measures to prevent memory-corruption problems:

	a.	Readers must maintain proper ordering of their memory
		accesses.  The rcu_dereference() primitive ensures that
		the CPU picks up the pointer before it picks up the data
		that the pointer points to.  This really is necessary
		on Alpha CPUs.  If you don't believe me, see:

			http://www.openvms.compaq.com/wizard/wiz_2637.html

		The rcu_dereference() primitive is also an excellent
		documentation aid, letting the person reading the code
		know exactly which pointers are protected by RCU.
		Please note that compilers can also reorder code, and
		they are becoming increasingly aggressive about doing
		just that.  The rcu_dereference() primitive therefore
		also prevents destructive compiler optimizations.

		The rcu_dereference() primitive is used by the
		various "_rcu()" list-traversal primitives, such
		as the list_for_each_entry_rcu().  Note that it is
		perfectly legal (if redundant) for update-side code to
		use rcu_dereference() and the "_rcu()" list-traversal
		primitives.  This is particularly useful in code that
		is common to readers and updaters.  However, lockdep
		will complain if you invoke rcu_dereference() outside
		of an RCU read-side critical section.  See lockdep.txt
		to learn what to do about this.

		Of course, neither rcu_dereference() nor the "_rcu()"
		list-traversal primitives can substitute for a good
		concurrency design coordinating among multiple updaters.

	b.	If the list macros are being used, the list_add_tail_rcu()
		and list_add_rcu() primitives must be used in order
		to prevent weakly ordered machines from misordering
		structure initialization and pointer planting.
		Similarly, if the hlist macros are being used, the
		hlist_add_head_rcu() primitive is required.

	c.	If the list macros are being used, the list_del_rcu()
		primitive must be used to keep list_del()'s pointer
		poisoning from inflicting toxic effects on concurrent
		readers.  Similarly, if the hlist macros are being used,
		the hlist_del_rcu() primitive is required.

		The list_replace_rcu() and hlist_replace_rcu() primitives
		may be used to replace an old structure with a new one
		in their respective types of RCU-protected lists.

	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
		type of RCU-protected linked lists.

	e.	Updates must ensure that initialization of a given
		structure happens before pointers to that structure are
		publicized.  Use the rcu_assign_pointer() primitive
		when publicizing a pointer to a structure that can
		be traversed by an RCU read-side critical section.
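
	Pulling (4a), (4b), and (4e) together, a minimal sketch of an
	RCU-protected list (the names "struct item", "item_list", and
	"item_lock" are illustrative only):

	struct item {
		struct list_head list;
		struct rcu_head rcu;		/* Used by later sketches. */
		int key;
	};
	LIST_HEAD(item_list);
	DEFINE_SPINLOCK(item_lock);

	void add_item(int key)			/* Updater. */
	{
		struct item *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return;
		p->key = key;			/* Fully initialize the element ... */
		spin_lock(&item_lock);
		list_add_rcu(&p->list, &item_list);	/* ... before publishing it. */
		spin_unlock(&item_lock);
	}

	bool item_present(int key)		/* Reader. */
	{
		struct item *p;
		bool found = false;

		rcu_read_lock();
		list_for_each_entry_rcu(p, &item_list, list)
			if (p->key == key) {
				found = true;
				break;
			}
		rcu_read_unlock();
		return found;
	}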

5.	If call_rcu(), or a related primitive such as call_rcu_bh(),
	call_rcu_sched(), or call_srcu() is used, the callback function
	must be written to be called from softirq context.  In particular,
	it cannot block.
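
	For example, a typical callback does nothing that could sleep;
	it simply frees the enclosing structure (a sketch reusing the
	illustrative "struct item" and "item_lock" from rule 4):

	static void item_free_rcu(struct rcu_head *head)
	{
		struct item *p = container_of(head, struct item, rcu);

		kfree(p);	/* kfree() is legal in softirq context; sleeping is not. */
	}

	void remove_item(struct item *p)
	{
		spin_lock(&item_lock);
		list_del_rcu(&p->list);			/* Unlink from readers' view. */
		spin_unlock(&item_lock);
		call_rcu(&p->rcu, item_free_rcu);	/* Free after a grace period. */
	}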

6.	Since synchronize_rcu() can block, it cannot be called from
	any sort of irq context.  The same rule applies for
	synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
	synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
	synchronize_sched_expedited(), and synchronize_srcu_expedited().

	The expedited forms of these primitives have the same semantics
	as the non-expedited forms, but expediting is both expensive
	and unfriendly to real-time workloads.  Use of the expedited
	primitives should be restricted to rare configuration-change
	operations that would not normally be undertaken while a real-time
	workload is running.

	In particular, if you find yourself invoking one of the expedited
	primitives repeatedly in a loop, please do everyone a favor:
	Restructure your code so that it batches the updates, allowing
	a single non-expedited primitive to cover the entire batch.
	This will very likely be faster than the loop containing the
	expedited primitive, and will be much easier on the rest of
	the system, especially on any real-time workloads running there.
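
	For instance, rather than expediting a grace period per removal,
	remove the whole batch under the update-side lock and wait once
	(a sketch built on the illustrative list from rule 4; the
	fixed-size victims[] array is purely for brevity):

	void remove_negative_items(void)
	{
		struct item *victims[16];
		struct item *p, *next;
		int i, n = 0;

		spin_lock(&item_lock);
		list_for_each_entry_safe(p, next, &item_list, list) {
			if (p->key < 0 && n < ARRAY_SIZE(victims)) {
				list_del_rcu(&p->list);	/* Unlink each victim ... */
				victims[n++] = p;
			}
		}
		spin_unlock(&item_lock);

		synchronize_rcu();			/* ... wait once for all of them ... */

		for (i = 0; i < n; i++)
			kfree(victims[i]);		/* ... then free the entire batch. */
	}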

	In addition, it is illegal to call the expedited forms from
	a CPU-hotplug notifier, or while holding a lock that is acquired
	by a CPU-hotplug notifier.  Failing to observe this restriction
	will result in deadlock.

7.	If the updater uses call_rcu() or synchronize_rcu(), then the
	corresponding readers must use rcu_read_lock() and
	rcu_read_unlock().  If the updater uses call_rcu_bh() or
	synchronize_rcu_bh(), then the corresponding readers must
	use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
	updater uses call_rcu_sched() or synchronize_sched(), then
	the corresponding readers must disable preemption, possibly
	by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
	If the updater uses synchronize_srcu() or call_srcu(),
	then the corresponding readers must use srcu_read_lock() and
	srcu_read_unlock(), and with the same srcu_struct.  The rules for
	the expedited primitives are the same as for their non-expedited
	counterparts.  Mixing things up will result in confusion and
	broken kernels.

	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
	in cases where local bottom halves are already known to be
	disabled, for example, in irq or softirq context.  Commenting
	such cases is a must, of course!  And the jury is still out on
	whether the increased speed is worth it.

8.	Although synchronize_rcu() is slower than is call_rcu(), it
	usually results in simpler code.  So, unless update performance is
	critically important, the updaters cannot block, or the latency of
	synchronize_rcu() is visible from userspace, synchronize_rcu()
	should be used in preference to call_rcu().  Furthermore,
	kfree_rcu() usually results in even simpler code than does
	synchronize_rcu() without synchronize_rcu()'s multi-millisecond
	latency.  So please take advantage of kfree_rcu()'s "fire and
	forget" memory-freeing capabilities where it applies.
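
	For example, when the callback would do nothing but kfree() the
	enclosing structure, kfree_rcu() removes the need for a separate
	callback entirely (a sketch reusing the illustrative "struct item",
	whose rcu_head field is named "rcu"):

	void remove_item_simple(struct item *p)
	{
		spin_lock(&item_lock);
		list_del_rcu(&p->list);
		spin_unlock(&item_lock);
		kfree_rcu(p, rcu);	/* Frees p after a grace period, no callback needed. */
	}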

	An especially important property of the synchronize_rcu()
	primitive is that it automatically self-limits: if grace periods
	are delayed for whatever reason, then the synchronize_rcu()
	primitive will correspondingly delay updates.  In contrast,
	code using call_rcu() should explicitly limit update rate in
	cases where grace periods are delayed, as failing to do so can
	result in excessive realtime latencies or even OOM conditions.

	Ways of gaining this self-limiting property when using call_rcu()
	include:

	a.	Keeping a count of the number of data-structure elements
		used by the RCU-protected data structure, including
		those waiting for a grace period to elapse.  Enforce a
		limit on this number, stalling updates as needed to allow
		previously deferred frees to complete.  Alternatively,
		limit only the number awaiting deferred free rather than
		the total number of elements.

		One way to stall the updates is to acquire the update-side
		mutex.  (Don't try this with a spinlock -- other CPUs
		spinning on the lock could prevent the grace period
		from ever ending.)  Another way to stall the updates
		is for the updates to use a wrapper function around
		the memory allocator, so that this wrapper function
		simulates OOM when there is too much memory awaiting an
		RCU grace period.  There are of course many other
		variations on this theme.

	b.	Limiting update rate.  For example, if updates occur only
		once per hour, then no explicit rate limiting is required,
		unless your system is already badly broken.  The dcache
		subsystem takes this approach -- updates are guarded
		by a global lock, limiting their rate.

	c.	Trusted update -- if updates can only be done manually by
		superuser or some other trusted user, then it might not
		be necessary to automatically limit them.  The theory
		here is that superuser already has lots of ways to crash
		the machine.

	d.	Use call_rcu_bh() rather than call_rcu(), in order to take
		advantage of call_rcu_bh()'s faster grace periods.

	e.	Periodically invoke synchronize_rcu(), permitting a limited
		number of updates per grace period.

	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
	call_srcu(), and kfree_rcu().

9.	All RCU list-traversal primitives, which include
	rcu_dereference(), list_for_each_entry_rcu(), and
	list_for_each_safe_rcu(), must be either within an RCU read-side
	critical section or must be protected by appropriate update-side
	locks.  RCU read-side critical sections are delimited by
	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
	case the matching flavor of rcu_dereference() must be used in
	order to keep lockdep happy, in this case rcu_dereference_bh().

	The reason that it is permissible to use RCU list-traversal
	primitives when the update-side lock is held is that doing so
	can be quite helpful in reducing code bloat when common code is
	shared between readers and updaters.  Additional primitives
	are provided for this case, as discussed in lockdep.txt.
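
	For example, rcu_dereference_protected() tells lockdep that the
	access is safe because the update-side lock is held, and
	rcu_dereference_check() accepts either condition (a sketch using
	the illustrative "gp" and "gp_lock" from rule 2):

	/* Update side only: caller must hold gp_lock. */
	struct foo *get_foo_locked(void)
	{
		return rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
	}

	/* Common code: legal under rcu_read_lock() or under gp_lock. */
	struct foo *get_foo(void)
	{
		return rcu_dereference_check(gp, lockdep_is_held(&gp_lock));
	}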

10.	Conversely, if you are in an RCU read-side critical section,
	and you don't hold the appropriate update-side lock, you -must-
	use the "_rcu()" variants of the list macros.  Failing to do so
	will break Alpha, cause aggressive compilers to generate bad code,
	and confuse people trying to read your code.

11.	Note that synchronize_rcu() -only- guarantees to wait until
	all currently executing rcu_read_lock()-protected RCU read-side
	critical sections complete.  It does -not- necessarily guarantee
	that all currently running interrupts, NMIs, preempt_disable()
	code, or idle loops will complete.  Therefore, if your
	read-side critical sections are protected by something other
	than rcu_read_lock(), do -not- use synchronize_rcu().

	Similarly, disabling preemption is not an acceptable substitute
	for rcu_read_lock().  Code that attempts to use preemption
	disabling where it should be using rcu_read_lock() will break
	in real-time kernel builds.

	If you want to wait for interrupt handlers, NMI handlers, and
	code under the influence of preempt_disable(), you instead
	need to use synchronize_irq() or synchronize_sched().

	This same limitation also applies to synchronize_rcu_bh()
	and synchronize_srcu(), as well as to the asynchronous and
	expedited forms of the three primitives, namely call_rcu(),
	call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
	synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().

12.	Any lock acquired by an RCU callback must be acquired elsewhere
	with softirq disabled, e.g., via spin_lock_irqsave(),
	spin_lock_bh(), etc.  Failing to disable softirq on a given
	acquisition of that lock will result in deadlock as soon as
	the RCU softirq handler happens to run your RCU callback while
	interrupting that acquisition's critical section.
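
	For example, if a lock is shared between an RCU callback and
	process-context code (the names below are illustrative), the
	process-context side must block softirq while holding it:

	static DEFINE_SPINLOCK(stats_lock);	/* Also taken by the RCU callback. */
	static unsigned long items_freed;

	static void item_free_counted_rcu(struct rcu_head *head)
	{
		struct item *p = container_of(head, struct item, rcu);

		spin_lock(&stats_lock);		/* Softirq context: plain spin_lock(). */
		items_freed++;
		spin_unlock(&stats_lock);
		kfree(p);
	}

	unsigned long read_items_freed(void)	/* Process context. */
	{
		unsigned long ret;

		spin_lock_bh(&stats_lock);	/* Block softirq, or the callback could */
		ret = items_freed;		/* interrupt us and deadlock on stats_lock. */
		spin_unlock_bh(&stats_lock);
		return ret;
	}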

13.	RCU callbacks can be and are executed in parallel.  In many cases,
	the callback code is simply a wrapper around kfree(), so that this
	is not an issue (or, more accurately, to the extent that it is
	an issue, the memory-allocator locking handles it).  However,
	if the callbacks do manipulate a shared data structure, they
	must use whatever locking or other synchronization is required
	to safely access and/or modify that data structure.

	RCU callbacks are -usually- executed on the same CPU that executed
	the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
	but are by -no- means guaranteed to be.  For example, if a given
	CPU goes offline while having an RCU callback pending, then that
	RCU callback will execute on some surviving CPU.  (If this was
	not the case, a self-spawning RCU callback would prevent the
	victim CPU from ever going offline.)

14.	SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
	synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu())
	may only be invoked from process context.  Unlike other forms of
	RCU, it -is- permissible to block in an SRCU read-side critical
	section (demarked by srcu_read_lock() and srcu_read_unlock()),
	hence the "SRCU": "sleepable RCU".  Please note that if you
	don't need to sleep in read-side critical sections, you should be
	using RCU rather than SRCU, because RCU is almost always faster
	and easier to use than is SRCU.

	If you need to enter your read-side critical section in a
	hardirq or exception handler, and then exit that same read-side
	critical section in the task that was interrupted, then you need
	to use srcu_read_lock_raw() and srcu_read_unlock_raw(), which avoid
	the lockdep checking that would otherwise make this practice illegal.

	Also unlike other forms of RCU, explicit initialization
	and cleanup is required via init_srcu_struct() and
	cleanup_srcu_struct().  These are passed a "struct srcu_struct"
	that defines the scope of a given SRCU domain.  Once initialized,
	the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock(),
	synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu().
	A given synchronize_srcu() waits only for SRCU read-side critical
	sections governed by srcu_read_lock() and srcu_read_unlock()
	calls that have been passed the same srcu_struct.  This property
	is what makes sleeping read-side critical sections tolerable --
	a given subsystem delays only its own updates, not those of other
	subsystems using SRCU.  Therefore, SRCU is less prone to OOM the
	system than RCU would be if RCU's read-side critical sections
	were permitted to sleep.
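
	A minimal sketch of an SRCU domain (the "my_srcu" names are
	illustrative only):

	static struct srcu_struct my_srcu;

	int my_subsys_init(void)
	{
		return init_srcu_struct(&my_srcu);	/* Explicit initialization. */
	}

	void my_reader(void)
	{
		int idx;

		idx = srcu_read_lock(&my_srcu);		/* Returns an index ... */
		/* ... read SRCU-protected data; sleeping is permitted here ... */
		srcu_read_unlock(&my_srcu, idx);	/* ... that must be passed back. */
	}

	void my_updater(void)
	{
		/* ... unlink the old data ... */
		synchronize_srcu(&my_srcu);	/* Waits only for my_srcu's readers. */
		/* ... free the old data ... */
	}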

	The ability to sleep in read-side critical sections does not
	come for free.  First, corresponding srcu_read_lock() and
	srcu_read_unlock() calls must be passed the same srcu_struct.
	Second, grace-period-detection overhead is amortized only
	over those updates sharing a given srcu_struct, rather than
	being globally amortized as they are for other forms of RCU.
	Therefore, SRCU should be used in preference to rw_semaphore
	only in extremely read-intensive situations, or in situations
	requiring SRCU's read-side deadlock immunity or low read-side
	realtime latency.

	Note that rcu_assign_pointer() relates to SRCU just as it does
	to other forms of RCU.

15.	The whole point of call_rcu(), synchronize_rcu(), and friends
	is to wait until all pre-existing readers have finished before
	carrying out some otherwise-destructive operation.  It is
	therefore critically important to -first- remove any path
	that readers can follow that could be affected by the
	destructive operation, and -only- -then- invoke call_rcu(),
	synchronize_rcu(), or friends.

	Because these primitives only wait for pre-existing readers, it
	is the caller's responsibility to guarantee that any subsequent
	readers will execute safely.
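
	In other words, unpublish first, wait second, destroy last
	(a sketch using the illustrative "gp" and "gp_lock" from rule 2):

	void retire_foo(void)
	{
		struct foo *old_fp;

		spin_lock(&gp_lock);
		old_fp = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
		rcu_assign_pointer(gp, NULL);	/* First remove readers' path to it. */
		spin_unlock(&gp_lock);
		synchronize_rcu();		/* Then wait for pre-existing readers. */
		kfree(old_fp);			/* Only now is destruction safe. */
	}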

16.	The various RCU read-side primitives do -not- necessarily contain
	memory barriers.  You should therefore plan for the CPU
	and the compiler to freely reorder code into and out of RCU
	read-side critical sections.  It is the responsibility of the
	RCU update-side primitives to deal with this.

17.	Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
	__rcu sparse checks (enabled by CONFIG_SPARSE_RCU_POINTER) to
	validate your RCU code.  These can help find problems as follows:

	CONFIG_PROVE_RCU: check that accesses to RCU-protected data
		structures are carried out under the proper RCU
		read-side critical section, while holding the right
		combination of locks, or whatever other conditions
		are appropriate.

	CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
		same object to call_rcu() (or friends) before an RCU
		grace period has elapsed since the last time that you
		passed that same object to call_rcu() (or friends).

	__rcu sparse checks: tag the pointer to the RCU-protected data
		structure with __rcu, and sparse will warn you if you
		access that pointer without the services of one of the
		variants of rcu_dereference().
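
	For example, the illustrative "gp" pointer from rule 2 carries
	exactly this annotation, which lets sparse distinguish raw from
	checked accesses (the warning described below is paraphrased):

	struct foo __rcu *gp;		/* The __rcu tag makes sparse track gp. */

	/* Within an RCU read-side critical section: */
	p = rcu_dereference(gp);	/* OK: rcu_dereference() handles __rcu. */
	p = gp;				/* sparse warns about the address-space mismatch. */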

	These debugging aids can help you find problems that are
	otherwise extremely difficult to spot.