RCU updaters sometimes use call_rcu() to initiate an asynchronous wait for
a grace period to elapse. This primitive takes a pointer to an rcu_head
struct placed within the RCU-protected data structure and another pointer
to a function that may be invoked later to free that structure, for example:

	call_rcu(&p->rcu, p_callback);
Unloading Modules That Use call_rcu()
-------------------------------------
http://lwn.net/images/ns/kernel/rcu-drop.jpg.
We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.
One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred in
order to allow other processing to proceed.
rcu_barrier()
-------------
Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.
For example, if a module uses call_rcu(), call_srcu() on srcu_struct_1, and
call_srcu() on srcu_struct_2, then the following three lines of code
will be required when unloading:

	rcu_barrier();
	srcu_barrier(&srcu_struct_1);
	srcu_barrier(&srcu_struct_2);
52 /* Wait for all RCU callbacks to fire. */
53 rcu_barrier();

55 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */

57 if (cur_ops->cleanup != NULL)
58 	cur_ops->cleanup();
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu().
Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.
from posting new timers, cancel (or wait for) all the already-posted
timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
and on the same srcu_struct structure. If your module uses call_rcu()
as well as call_srcu(), then it will need to invoke rcu_barrier() in
addition to srcu_barrier().
Implementing rcu_barrier()
--------------------------
Dipankar Sarma's rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.
global completion and counters at a time, which are initialized on lines
The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:
 1 static void rcu_barrier_func(void *notused)
 2 {
 3 	int cpu = smp_processor_id();
 4 	struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5 	struct rcu_head *head;
 6
 7 	head = &rdp->barrier;
 8 	atomic_inc(&rcu_barrier_cpu_count);
 9 	call_rcu(head, rcu_barrier_callback);
10 }
Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and
line 8 increments the global counter, which will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
rcu_barrier() Summary
---------------------
Answers to Quick Quizzes
------------------------
filesystem-unmount time. Dipankar Sarma coded up rcu_barrier() in
response, so that Nikita could invoke it during the
filesystem-unmount process.
Much later, yours truly hit the RCU module-unload problem when
implementing rcutorture, and found that rcu_barrier() solves this
problem as well.
Suppose that the on_each_cpu() function shown on line 8 was
delayed, so that CPU 0's rcu_barrier_func() executed and the
corresponding grace period elapsed, all before CPU 1's
rcu_barrier_func() started executing. This would cause
rcu_barrier_cpu_count to be decremented to zero, so that the
wait_for_completion() call would return immediately, failing to
wait for CPU 1's callbacks to be invoked.
disables preemption, which acted as an RCU read-side critical
section. Once RCU became preemptible, however, it no longer
waited on nonpreemptible regions of code in preemptible kernels,
although in kernels built without preemption,
RCU once again waits on nonpreemptible regions of code.
Relying on this sort of accident of implementation can result
in subtle bugs should the implementation change again.
argument, the wait flag, set to "1". This flag is passed through
to smp_call_function() and further to smp_call_function_on_cpu(),
causing this latter to spin until the cross-CPU invocation of
rcu_barrier_func() has completed. This by itself would prevent
a grace period from completing on non-CONFIG_PREEMPTION kernels,
since each CPU must undergo a context switch (or other quiescent
state) before the grace period can complete.
preemption-disabled regions of code as RCU read-side critical
sections.
as might well happen due to real-time latency considerations,