<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
<head><title>A Tour Through TREE_RCU's Grace-Period Memory Ordering</title>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
grace-period memory ordering guarantee is provided.
<p>RCU grace periods provide extremely strong memory-ordering guarantees
for non-idle non-offline code.
period that are within RCU read-side critical sections.
of that grace period that are within RCU read-side critical sections.
<p>Note well that RCU-sched read-side critical sections include any region
an extremely small region of preemption-disabled code, one can think of
a linked RCU-protected data structure, and phase two frees that element.
phase-one update (in the common case, removal) must not witness state
following the phase-two update (in the common case, freeing).
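<p>The two-phase pattern above can be sketched in ordinary user-space C.
This is a toy, single-threaded model with made-up names
(<tt>remove_node()</tt>, <tt>fake_grace_period_end()</tt>), not the kernel
API: phase one unlinks the element but defers the free, and phase two frees
it only once the (simulated) grace period has ended.</p>

```c
/* Toy model of RCU's two-phase update pattern.  Names are illustrative,
 * not the kernel's.  Phase one unlinks an element from the list, so new
 * readers cannot find it; the element is freed only in phase two, after
 * a "grace period", modeled here by draining a deferred-free queue. */
#include <stdlib.h>

struct node {
	int key;
	struct node *next;
};

static struct node *head;	/* the RCU-protected list (toy) */
static struct node *deferred;	/* elements awaiting the "grace period" */

static struct node *add_node(int key)
{
	struct node *n = malloc(sizeof(*n));
	n->key = key;
	n->next = head;
	head = n;
	return n;
}

/* Phase one: unlink @n; pre-existing readers may still reference it. */
static void remove_node(struct node *n)
{
	struct node **pp = &head;
	while (*pp && *pp != n)
		pp = &(*pp)->next;
	if (*pp)
		*pp = n->next;
	n->next = deferred;	/* defer the free (cf. call_rcu()) */
	deferred = n;
}

/* Phase two: once no reader can hold a reference, free everything. */
static void fake_grace_period_end(void)
{
	while (deferred) {
		struct node *n = deferred;
		deferred = n->next;
		free(n);
	}
}

static int list_contains(int key)
{
	for (struct node *n = head; n; n = n->next)
		if (n->key == key)
			return 1;
	return 0;
}
```

<p>In the kernel, the deferral step is <tt>call_rcu()</tt> (or
<tt>synchronize_rcu()</tt> for the blocking form), and the decision that the
grace period has ended is made by the machinery described in the rest of
this document.</p>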
of lock-based critical sections, memory barriers, and per-CPU
<p>The workhorse for RCU's grace-period memory ordering is the
<tt>->lock</tt>.
Their lock-release counterparts are
The key point is that the lock-acquisition functions, including
happening before one of the above lock-release functions will be seen
one of the above lock-acquisition functions.
above lock-release function on any given CPU will be seen by all
of the above lock-acquisition functions executing on that same CPU,
even if the lock-release and lock-acquisition functions are operating
lock-acquisition and lock-release functions:
RCU's grace-period memory ordering guarantee to extend to any
RCU read-side critical sections preceding and following the current
<tt>atomic_add_return()</tt> read-modify-write atomic operation that
is invoked within <tt>rcu_dynticks_eqs_enter()</tt> at idle-entry
time and within <tt>rcu_dynticks_eqs_exit()</tt> at idle-exit time.
The grace-period kthread invokes <tt>rcu_dynticks_snap()</tt> and
Races between grace-period start and CPU-hotplug operations
<tt>->lock</tt> as described above.
<p>Tree RCU's grace-period memory-ordering guarantees rely most
heavily on the <tt>rcu_node</tt> structure's <tt>->lock</tt>
14 if (tne != rdtp->tick_nohz_enabled_snap) {
17 rdtp->tick_nohz_enabled_snap = tne;
22 if (rdtp->all_lazy &&
23 rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
24 rdtp->all_lazy = false;
25 rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
29 if (rdtp->last_accelerate == jiffies)
31 rdtp->last_accelerate = jiffies;
33 rdp = this_cpu_ptr(rsp->rda);
34 if (rcu_segcblist_pend_cbs(&rdp->cblist))
36 rnp = rdp->mynode;
</p><p><img src="rcu_node-lock.svg" alt="rcu_node-lock.svg">
<p>The box represents the <tt>rcu_node</tt> structure's <tt>->lock</tt>
<p>Tree RCU's grace-period memory-ordering guarantee is provided by
<li> <a href="#Grace-Period Initialization">Grace-Period Initialization</a>
<li> <a href="#Self-Reported Quiescent States">
Self-Reported Quiescent States</a>
<li> <a href="#CPU-Hotplug Interface">CPU-Hotplug Interface</a>
<li> <a href="#Grace-Period Cleanup">Grace-Period Cleanup</a>
<p>If RCU's grace-period guarantee is to mean anything at all, any
</p><p><img src="TreeRCU-callback-registry.svg" alt="TreeRCU-callback-registry.svg">
<p>Because <tt>call_rcu()</tt> normally acts only on CPU-local state,
an element from an RCU-protected data structure).
It simply enqueues the <tt>rcu_head</tt> structure on a per-CPU list,
<p>There are a few other code paths within grace-period processing
structures are associated with a future grace-period number under
<tt>->lock</tt>.
for that same <tt>rcu_node</tt> structure's <tt>->lock</tt>, and
sections for any <tt>rcu_node</tt> structure's <tt>->lock</tt>.
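<p>The callback-registry step can be modeled as a simple linked list of
<tt>rcu_head</tt>-like structures.
All names below (<tt>toy_call_rcu()</tt> and so on) are illustrative; the
kernel's real per-CPU lists are segmented <tt>rcu_segcblist</tt> structures
with per-segment grace-period-number bookkeeping:</p>

```c
/* Simplified model of RCU callback registration and invocation.
 * toy_call_rcu() only touches "CPU-local" state: it enqueues a node on
 * a list; toy_invoke_callbacks() runs only after the grace period. */
#include <stddef.h>

struct toy_rcu_head {
	struct toy_rcu_head *next;
	void (*func)(struct toy_rcu_head *);
};

static struct toy_rcu_head *cblist;	/* per-CPU in the kernel */

static void toy_call_rcu(struct toy_rcu_head *rhp,
			 void (*func)(struct toy_rcu_head *))
{
	rhp->func = func;
	rhp->next = cblist;		/* CPU-local enqueue, no locks */
	cblist = rhp;
}

/* Invoked only once the relevant grace period has ended. */
static int toy_invoke_callbacks(void)
{
	int n = 0;

	while (cblist) {
		struct toy_rcu_head *rhp = cblist;

		cblist = rhp->next;
		rhp->func(rhp);
		n++;
	}
	return n;
}

/* Sample callback for demonstration purposes. */
static int invoked;
static void count_cb(struct toy_rcu_head *rhp)
{
	(void)rhp;
	invoked++;
}
```

<p>In the kernel, the callback's <tt>func</tt> typically frees the enclosing
structure (the phase-two update), which is why ordering between registration
and invocation matters so much.</p>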
<h4><a name="Grace-Period Initialization">Grace-Period Initialization</a></h4>
<p>Grace-period initialization is carried out by
the grace-period kernel thread, which makes several passes over the
grace-period computation will require duplicating this tree.
grace-period kernel thread's traversals are presented in multiple
grace-period initialization.
<p>The first ordering-related grace-period initialization action is to
advance the <tt>rcu_state</tt> structure's <tt>->gp_seq</tt>
grace-period-number counter, as shown below:
</p><p><img src="TreeRCU-gp-init-1.svg" alt="TreeRCU-gp-init-1.svg" width="75%">
which helps reject false-positive RCU CPU stall detection.
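<p>The <tt>->gp_seq</tt> counter packs grace-period state into its low-order
bits.
The following is a simplified user-space model of this scheme (patterned
after, but not identical to, the kernel's <tt>rcu_seq_*()</tt> helpers): the
low two bits hold state (zero meaning idle), the upper bits count grace
periods, and a snapshot names the earliest counter value at which a full
grace period is guaranteed to have elapsed:</p>

```c
/* Toy grace-period sequence counter.  Low two bits: state (0 = idle,
 * nonzero = grace period in progress); upper bits: grace-period count. */
#define SEQ_STATE_MASK 0x3UL

static unsigned long seq_start(unsigned long s)	/* GP begins */
{
	return s + 1;			/* idle (state 0) -> in progress */
}

static unsigned long seq_end(unsigned long s)	/* GP completes */
{
	return (s | SEQ_STATE_MASK) + 1; /* round up to next idle value */
}

/* Earliest future counter value guaranteeing a full GP has elapsed
 * (must cover a GP already in progress plus one more full GP). */
static unsigned long seq_snap(unsigned long s)
{
	return (s + 2 * SEQ_STATE_MASK + 1) & ~SEQ_STATE_MASK;
}

/* Wrap-safe "has the counter reached the snapshot?" comparison. */
static int seq_done(unsigned long current_seq, unsigned long snap)
{
	return (long)(current_seq - snap) >= 0;
}
```

<p>Note how <tt>seq_snap()</tt> taken while a grace period is in progress
demands a later value than one taken while idle: an in-progress grace period
might already have missed this caller's updates, so a full additional grace
period is required.</p>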
</p><p><img src="TreeRCU-gp-init-2.svg" alt="TreeRCU-gp-init-2.svg" width="75%">
tree traverses breadth-first, setting each <tt>rcu_node</tt> structure's
<tt>->gp_seq</tt> field to the newly advanced value from the
</p><p><img src="TreeRCU-gp-init-3.svg" alt="TreeRCU-gp-init-3.svg" width="75%">
But because the grace-period kthread started the grace period at the
<tt>->gp_seq</tt> field) before setting each leaf <tt>rcu_node</tt>
structure's <tt>->gp_seq</tt> field, each CPU's observation of
However, if we instead assume that RCU is not self-aware,
<h4><a name="Self-Reported Quiescent States">
Self-Reported Quiescent States</a></h4>
Online non-idle CPUs report their own quiescent states, as shown
</p><p><img src="TreeRCU-qs.svg" alt="TreeRCU-qs.svg" width="75%">
state will acquire that <tt>rcu_node</tt> structure's <tt>->lock</tt>.
In addition, this CPU will consider any RCU read-side critical
But an RCU read-side critical section might have started
(the advancing of <tt>->gp_seq</tt> from earlier), so why should
all-at-once grace-period start could possibly be.
On the other hand, if the CPU takes a scheduler-clock interrupt
in a per-CPU variable.
this CPU (for example, after the next scheduler-clock
each <tt>rcu_node</tt> structure's <tt>->qsmask</tt> field,
from that point, and the <tt>rcu_node</tt> <tt>->lock</tt>
structure's <tt>->qsmask</tt> field.
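<p>The upward propagation of quiescent-state reports through the
<tt>->qsmask</tt> fields can be sketched as follows.
This is a toy model with simplified structure and function names: each node
records which children (or CPUs, at a leaf) still owe a quiescent state, and
clearing the last bit at one level propagates the report one level up, under
that level's lock in the real kernel:</p>

```c
/* Toy rcu_node tree for quiescent-state propagation.  The grace period
 * can end once the root's qsmask reaches zero. */
#include <stddef.h>

struct toy_rcu_node {
	unsigned long qsmask;		/* children still to report */
	unsigned long grpmask;		/* our bit in parent's qsmask */
	struct toy_rcu_node *parent;	/* NULL at the root */
};

/* CPU (or child node) @mask reports a quiescent state to @rnp. */
static void report_qs(struct toy_rcu_node *rnp, unsigned long mask)
{
	while (rnp) {
		rnp->qsmask &= ~mask;	/* under rnp->lock in the kernel */
		if (rnp->qsmask)
			return;		/* others still to report: stop */
		mask = rnp->grpmask;	/* last reporter: go one level up */
		rnp = rnp->parent;
	}
	/* Root mask now zero: all CPUs reported; the GP may end. */
}
```

<p>The early return when <tt>qsmask</tt> remains nonzero is what keeps
contention low: most reports touch only a leaf, and only the last CPU in
each subtree climbs toward the root.</p>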
<p>Due to energy-efficiency considerations, RCU is forbidden from
state, which they do via fully ordered value-returning atomic operations
on a per-CPU variable.
</p><p><img src="TreeRCU-dyntick.svg" alt="TreeRCU-dyntick.svg" width="50%">
<p>The RCU grace-period kernel thread samples the per-CPU idleness
structure's <tt>->lock</tt>.
This means that any RCU read-side critical sections that precede the
any RCU read-side critical sections that follow the
<p>Plumbing this into the full grace-period execution is described
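<p>The dyntick-idle bookkeeping can be modeled with a single counter.
This sketch uses one classic scheme, simplified and with illustrative names:
the counter is incremented at every idle entry and exit (via a fully ordered
value-returning atomic in the kernel, a plain increment here), so an even
value means "idle".
The grace-period kthread snapshots the counter; if the CPU was idle at the
snapshot, or the counter has since changed, the CPU has passed through a
quiescent state:</p>

```c
/* Toy dyntick-idle counter: even = idle (extended quiescent state). */
static unsigned long dynticks = 1;	/* odd: CPU starts non-idle */

static void eqs_enter(void) { dynticks++; }	/* now even: idle */
static void eqs_exit(void)  { dynticks++; }	/* now odd: non-idle */

static unsigned long dynticks_snap(void) { return dynticks; }

static int in_eqs(unsigned long snap) { return !(snap & 0x1); }

/* Has this CPU been through an EQS since @snap was taken? */
static int eqs_since(unsigned long snap)
{
	return in_eqs(snap) || dynticks != snap;
}
```

<p>The full ordering on the real counter updates is what lets the
grace-period kthread trust this test without interrupting the idle CPU.</p>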
<h4><a name="CPU-Hotplug Interface">CPU-Hotplug Interface</a></h4>
</p><p><img src="TreeRCU-hotplug.svg" alt="TreeRCU-hotplug.svg" width="50%">
structure's <tt>->lock</tt> and update this structure's
<tt>->qsmaskinitnext</tt>.
The RCU grace-period kernel thread samples this mask to detect CPUs
<p>Plumbing this into the full grace-period execution is described
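<p>The hotplug interface amounts to mask maintenance.
In this simplified sketch (field names follow the kernel, logic does not),
incoming and outgoing CPUs set and clear their bit in the leaf's
<tt>->qsmaskinitnext</tt>, and grace-period initialization samples that mask
so each new grace period waits only on CPUs that were online when it
started:</p>

```c
/* Toy model of the CPU-hotplug masks on a leaf rcu_node. */
struct toy_leaf {
	unsigned long qsmaskinitnext;	/* CPUs currently online */
	unsigned long qsmask;		/* CPUs this GP still waits on */
};

static void cpu_online(struct toy_leaf *rnp, unsigned long bit)
{
	rnp->qsmaskinitnext |= bit;	/* under rnp->lock in the kernel */
}

static void cpu_offline(struct toy_leaf *rnp, unsigned long bit)
{
	rnp->qsmaskinitnext &= ~bit;
}

/* Grace-period initialization: sample the online mask for this GP. */
static void gp_init_leaf(struct toy_leaf *rnp)
{
	rnp->qsmask = rnp->qsmaskinitnext;
}
```

<p>Sampling under the leaf's lock is what resolves races between
grace-period start and CPU-hotplug operations: a given grace period either
sees the CPU and waits for it, or does not see it and need not.</p>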
quiescent states, and therefore the grace-period kernel thread
</p><p><img src="TreeRCU-gp-fqs.svg" alt="TreeRCU-gp-fqs.svg" width="100%">
As with self-reported quiescent states, the upwards driving stops
overrode accuracy.
<a href="#Putting It All Together">stitched-together diagram</a>.
<h4><a name="Grace-Period Cleanup">Grace-Period Cleanup</a></h4>
<p>Grace-period cleanup first scans the <tt>rcu_node</tt> tree
breadth-first advancing all the <tt>->gp_seq</tt> fields, then it
advances the <tt>rcu_state</tt> structure's <tt>->gp_seq</tt> field.
</p><p><img src="TreeRCU-gp-cleanup.svg" alt="TreeRCU-gp-cleanup.svg" width="75%">
grace-period cleanup is complete, the next grace period can begin.
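<p>The cleanup ordering above can be sketched in a few lines.
This toy (with made-up variable names) captures only the ordering claim:
every <tt>rcu_node</tt> structure's <tt>->gp_seq</tt> is advanced before the
<tt>rcu_state</tt> structure's, so no CPU can observe the new global value
while its own leaf still holds the old one:</p>

```c
/* Toy grace-period cleanup: nodes first, rcu_state last. */
#define NNODES 4

static unsigned long state_gp_seq;
static unsigned long node_gp_seq[NNODES];

static void gp_cleanup(unsigned long new_seq)
{
	int i;

	for (i = 0; i < NNODES; i++)	/* breadth-first in the kernel */
		node_gp_seq[i] = new_seq; /* under each node's lock */
	state_gp_seq = new_seq;		/* only after every node */
}
```
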
structure's <tt>->gp_seq</tt> field has been updated,
<tt>->gp_seq</tt> field has been updated, that CPU can begin
the scheduling-clock interrupt (<tt>rcu_sched_clock_irq()</tt> on
via wakeup) the needed phase-two processing for each update.
</p><p><img src="TreeRCU-callback-invocation.svg" alt="TreeRCU-callback-invocation.svg" width="60%">
number of corner-case code paths, for example, when a CPU notes that
<tt>->lock</tt> before invoking callbacks, which preserves the
<a href="#Grace-Period Cleanup">grace-period cleanup</a> diagram.
and the grace-period kernel thread might not yet have reached the
<p>A stitched-together diagram is
<a href="Tree-RCU-Diagram.html">here</a>.