/* SPDX-License-Identifier: GPL-2.0+ */
/*
 * Read-Copy Update mechanism for mutual exclusion
 *
 * For detailed explanation of Read-Copy Update mechanism see -
 */

#include <asm/processor.h>
#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
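/*
 * A minimal sketch (not part of the original header) of why these
 * wraparound-safe comparisons exist: grace-period sequence counters can
 * wrap past ULONG_MAX, where a plain "<" gives the wrong answer.  The
 * macros stay correct as long as the two counters are within
 * ULONG_MAX / 2 of each other.
 */
static void ulong_cmp_demo(void)
{
	unsigned long old = ULONG_MAX - 1;	/* just before the wrap */
	unsigned long new = old + 10;		/* wraps around to 8 */

	/* (new - old) == 10 < ULONG_MAX / 2, so "old" still precedes "new", */
	/* even though the plain comparison "old < new" is false here.       */
	WARN_ON_ONCE(!ULONG_CMP_LT(old, new));
}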
/*
 * This gives the rcu_read_lock() nesting depth, but makes sense only if
 * CONFIG_PREEMPT_RCU -- in other configurations there is no way to tell
 * how many (or whether any) RCU read-side critical sections are in effect.
 */
#define rcu_preempt_depth() (current->rcu_read_lock_nesting)
/**
 * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
 *
 * RCU read-side critical sections are forbidden in the inner idle loop,
 * that is, between the rcu_idle_enter() and the rcu_idle_exit() -- RCU
 * will happily ignore any such read-side critical sections. However,
 * RCU_NONIDLE() tells RCU to pay attention for the duration of its
 * argument. Nesting RCU_NONIDLE() is permitted, though not indefinitely
 * (the limit is on the order of a million or so, even on 32-bit systems). It is
 * not legal to block within RCU_NONIDLE(), nor to transfer control
 * into or out of its statement.
 */
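/*
 * Sketch of typical RCU_NONIDLE() usage from idle-loop code; the traced
 * event is illustrative. The wrapped statement runs with RCU watching,
 * even though the surrounding code sits inside the inner idle loop.
 */
static void idle_loop_trace_example(unsigned int state)
{
	RCU_NONIDLE(trace_cpu_idle(state, smp_processor_id()));
}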
/*
 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
 */
#define rcu_tasks_qs(t, preempt)					\
	do {								\
		if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout))	\
			WRITE_ONCE((t)->rcu_tasks_holdout, false);	\
	} while (0)

#define rcu_tasks_trace_qs(t)						\
	do {								\
		if (!likely(READ_ONCE((t)->trc_reader_checked)) &&	\
		    !unlikely(READ_ONCE((t)->trc_reader_nesting))) {	\
			smp_store_release(&(t)->trc_reader_checked, true); \
		}							\
	} while (0)
/**
 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
 *
 * This macro resembles cond_resched(), except that it is defined to
 * report potential quiescent states to RCU-tasks even if the cond_resched()
 * machinery were to be shut off, as some advocate for PREEMPTION kernels.
 */
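/*
 * Sketch: a long-running kthread loop (do_unit_of_work() is hypothetical)
 * that reports a quiescent state to RCU-tasks on each pass, even in
 * kernels where cond_resched() itself is compiled away.
 */
static int worker_thread(void *unused)
{
	while (!kthread_should_stop()) {
		do_unit_of_work();
		cond_resched_tasks_rcu_qs();
	}
	return 0;
}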
/**
 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
 * @c: condition to check
 * @s: informative message
 *
 * This checks debug_lockdep_rcu_enabled() before checking (c) to
 * prevent early boot splats, and rechecks it after checking (c) to
 * prevent false-positive splats due to races.
 */
332 "Illegal context switch in RCU read-side critical section"); in rcu_preempt_sleep_check()
342 "Illegal context switch in RCU-bh read-side critical section"); \
344 "Illegal context switch in RCU-sched read-side critical section"); \
/**
 * unrcu_pointer - mark a pointer as not being RCU protected
 */
/**
 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
 */
/**
 * rcu_assign_pointer() - assign to RCU-protected pointer
 * @p: pointer to assign to
 * @v: value to assign (publish)
 *
 * Assigns the specified value to the specified RCU-protected
 * pointer, ensuring that any concurrent RCU readers will see
 * any prior initialization. It also documents which pointers
 * will be dereferenced by RCU read-side code.
 *
 * Using RCU_INIT_POINTER() where rcu_assign_pointer() is required
 * results in impossible-to-diagnose memory corruption. So please be careful.
 *
 * Note that rcu_assign_pointer() evaluates each of its arguments only
 * once, appearances notwithstanding. As with most cpp
 * macros, this execute-arguments-only-once property is important, so
 * please be careful when making changes to rcu_assign_pointer() and the
 * other macros that it invokes.
 */
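/*
 * Minimal publish-side sketch; struct foo, gp, and foo_lock are
 * hypothetical names used in the examples throughout this file. All
 * initialization happens before the assignment, so concurrent readers
 * can never observe a half-initialized structure.
 */
struct foo {
	int a;
	struct rcu_head rh;
};
static struct foo __rcu *gp;
static DEFINE_SPINLOCK(foo_lock);

static void publish_foo(struct foo *newp)
{
	newp->a = 42;			/* initialize first... */
	spin_lock(&foo_lock);
	rcu_assign_pointer(gp, newp);	/* ...then publish */
	spin_unlock(&foo_lock);
}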
/**
 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
 * @rcu_ptr: RCU pointer, whose old value is returned
 * @ptr: regular pointer
 * @c: the lockdep conditions under which the dereference will take place
 *
 * Perform a replacement, where @rcu_ptr is an RCU-annotated
 * pointer and @c is the lockdep argument that is passed to the
 * rcu_dereference_protected() call used to read that pointer. The old
 * value of @rcu_ptr is returned, and @rcu_ptr is set to @ptr.
 */
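/*
 * Sketch of an update via rcu_replace_pointer(), reusing the
 * hypothetical gp/foo_lock from the previous example: the old version
 * is unlinked under the update-side lock and freed only after all
 * pre-existing readers are done with it.
 */
static void replace_foo(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&foo_lock);
	oldp = rcu_replace_pointer(gp, newp, lockdep_is_held(&foo_lock));
	spin_unlock(&foo_lock);
	if (oldp) {
		synchronize_rcu();	/* wait for pre-existing readers */
		kfree(oldp);
	}
}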
/**
 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
 * @p: The pointer to read
 *
 * Return the value of the specified RCU-protected pointer, but omit the
 * lockdep checks for being in an RCU read-side critical section. This is
 * useful when the value of this pointer is accessed, but the pointer is
 * not dereferenced, for example, when testing an RCU-protected pointer
 * against NULL. Although rcu_access_pointer() may also be used in cases
 * where update-side locks prevent the value of the pointer from changing,
 * you should instead use rcu_dereference_protected() for this use case.
 *
 * It is also permissible to use rcu_access_pointer() when read-side
 * access to the pointer was removed at least one grace period ago, as is
 * the case in the context of the RCU callback that is freeing up the
 * data. This technique can be useful
 * when tearing down multi-linked structures after a grace period
 * has elapsed.
 */
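/*
 * Sketch: test the hypothetical gp for NULL without dereferencing it,
 * so no RCU read-side critical section is required.
 */
static bool foo_is_published(void)
{
	return rcu_access_pointer(gp) != NULL;
}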
/**
 * rcu_dereference_check() - rcu_dereference with debug checking
 * @p: The pointer to read, prior to dereferencing
 * @c: The conditions under which the dereference will take place
 *
 * Do an rcu_dereference(), but check that the conditions under which the
 * dereference will take place are correct. Typically the conditions
 * indicate the various locking conditions that should be held at that
 * point. An implicit check for being in an RCU read-side critical section
 * (rcu_read_lock()) is included. For example:
 *
 *	bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
 *
 * could be used to indicate to lockdep that foo->bar may only be dereferenced
 * if either rcu_read_lock() is held, or the lock required to replace
 * the bar struct at foo->bar is held. It could also be written as:
 *
 *	bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
 *				    atomic_read(&foo->usage) == 0);
 *
 * to cover the additional case where no reader can reach the structure.
 */
/**
 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
 *
 * This is the RCU-bh counterpart to rcu_dereference_check().
 */

/**
 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
 *
 * This is the RCU-sched counterpart to rcu_dereference_check().
 */
/*
 * The no-tracing version of rcu_dereference_raw() must not call
 * rcu_read_lock_held().
 */
/**
 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
 * @p: The pointer to read, prior to dereferencing
 * @c: The conditions under which the dereference will take place
 *
 * Return the value of the specified RCU-protected pointer, but omit
 * the READ_ONCE(). This is useful in cases where update-side locks
 * prevent the value of the pointer from changing.
 *
 * This function is only for update-side use. Using this function
 * when protected only by rcu_read_lock() will result in infrequent
 * but very ugly failures.
 */
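/*
 * Update-side sketch, again using the hypothetical gp/foo_lock: because
 * foo_lock prevents the pointer from changing, no READ_ONCE() or memory
 * barrier is needed, and lockdep verifies the lock is actually held.
 */
static void bump_foo(void)
{
	struct foo *p;

	spin_lock(&foo_lock);
	p = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));
	if (p)
		p->a++;
	spin_unlock(&foo_lock);
}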
/**
 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
 */

/**
 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
 */

/**
 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
 */
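/*
 * Canonical reader sketch for the hypothetical gp: the pointer may only
 * be dereferenced inside the read-side critical section, and must not
 * be used after rcu_read_unlock().
 */
static int read_foo(void)
{
	struct foo *p;
	int val = -1;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		val = p->a;
	rcu_read_unlock();
	return val;
}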
/**
 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
 * @p: The pointer to hand off
 *
 * This is simply an identity function, but it documents where a pointer
 * is handed off from RCU to some other synchronization mechanism, for
 * example, reference counting or locking. It could be used as follows:
 *
 *	rcu_read_lock();
 *	p = rcu_dereference(gp);
 *	long_lived = is_long_lived(p);
 *	if (long_lived) {
 *		if (!atomic_inc_not_zero(p->refcnt))
 *			long_lived = false;
 *		else
 *			p = rcu_pointer_handoff(p);
 *	}
 *	rcu_read_unlock();
 */
/**
 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
 *
 * When synchronize_rcu() is invoked on one CPU while other CPUs
 * are within RCU read-side critical sections, then the
 * synchronize_rcu() is guaranteed to block until after all the other
 * CPUs exit their critical sections. Similarly, if call_rcu() is invoked
 * on one CPU while other CPUs are within RCU read-side critical
 * sections, invocation of the corresponding RCU callback is deferred
 * until after all the other CPUs exit their critical sections.
 *
 * Note, however, that RCU callbacks are permitted to run concurrently
 * with new RCU read-side critical sections. One way that this can happen
 * is via the following sequence of events: (1) CPU 0 enters an RCU
 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
 * (4) CPU 2 enters a RCU read-side critical section, (5) the RCU
 * callback is invoked. This is legal, because the RCU read-side critical
 * section that was running concurrently with the call_rcu() (and which
 * might therefore have been referencing something the callback would
 * free up) has completed before the callback is invoked.
 *
 * RCU read-side critical sections may be nested. Any deferred actions
 * will be deferred until the outermost RCU read-side critical section
 * completes.
 *
 * You can avoid reading the next paragraph by following this rule:
 * don't put anything in an rcu_read_lock() RCU
 * read-side critical section that would block in a !PREEMPTION kernel.
 *
 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
 * it is illegal to block while in an RCU read-side critical section.
 * In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPTION
 * kernel builds, RCU read-side critical sections may be preempted,
 * but explicit blocking is illegal. Finally, in preemptible RCU
 * implementations in real-time (with -rt patchset) kernel builds, RCU
 * read-side critical sections may be preempted and they may also block, but
 * only when acquiring spinlocks that are subject to priority inheritance.
 * In no configuration does rcu_read_lock() spin or block waiting for
 * updaters; treating it as anything but a cheap marker would be
 * a bug -- this property is what provides RCU's performance benefits.
 */
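/*
 * Sketch of the update side of the scenario above: call_rcu() defers
 * free_foo_cb() until all readers that might still hold a reference
 * (like CPU 0 in the sequence above) have left their critical sections.
 * These names reuse the hypothetical struct foo from earlier sketches.
 */
static void free_foo_cb(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct foo, rh));
}

static void retire_foo(struct foo *oldp)
{
	call_rcu(&oldp->rh, free_foo_cb);
}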
/**
 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
 *
 * In most situations, rcu_read_unlock() is immune from deadlock.
 * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
 * is responsible for deboosting, which it does via rt_mutex_unlock(),
 * and that function acquires the scheduler's runqueue and
 * priority-inheritance spinlocks. This means that deadlock could result
 * if the caller of rcu_read_unlock() already holds one of these locks or
 * any lock that is ever acquired while holding them.
 *
 * That said, RCU readers are never priority boosted unless they were
 * preempted. Therefore, one way to avoid deadlock is to make sure
 * that preemption never happens within any RCU read-side critical
 * section whose outermost rcu_read_unlock() is called with one of
 * rt_mutex_unlock()'s locks held.
 *
 * Given that the set of locks acquired by rt_mutex_unlock() might change
 * at any time, a somewhat more future-proofed approach is to make sure
 * that preemption never happens within any RCU read-side critical
 * section whose outermost rcu_read_unlock() is called with irqs disabled.
 * This approach relies on the fact that rt_mutex_unlock() currently only
 * acquires irq-disabled locks.
 */
/**
 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
 *
 * This is equivalent to rcu_read_lock(), but also disables softirqs.
 * Note that anything else that disables softirqs can also serve as
 * an RCU read-side critical section.
 */

/**
 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
 */

/**
 * rcu_read_lock_sched() - mark the beginning of a RCU-sched critical section
 *
 * This is equivalent to rcu_read_lock(), but also disables preemption.
 * Read-side critical sections can also be introduced by anything else
 * that disables preemption, including local_irq_disable() and friends.
 */

/**
 * rcu_read_unlock_sched() - marks the end of a RCU-classic critical section
 */
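/*
 * Sketches of the flavored readers, using the hypothetical gp from the
 * earlier examples; do_something_with() is likewise hypothetical. Each
 * flavor pairs its lock/unlock with the matching dereference helper.
 */
static void bh_reader(void)
{
	struct foo *p;

	rcu_read_lock_bh();		/* also disables softirqs */
	p = rcu_dereference_bh(gp);
	if (p)
		do_something_with(p);
	rcu_read_unlock_bh();
}

static void sched_reader(void)
{
	struct foo *p;

	rcu_read_lock_sched();		/* also disables preemption */
	p = rcu_dereference_sched(gp);
	if (p)
		do_something_with(p);
	rcu_read_unlock_sched();
}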
/**
 * RCU_INIT_POINTER() - initialize an RCU protected pointer
 * @p: The pointer to be initialized.
 * @v: The value to initialize the pointer to.
 *
 * Initialize an RCU-protected pointer in special cases where readers
 * do not need ordering constraints on the CPU or the compiler. These
 * special cases are:
 *
 * 1.	This use of RCU_INIT_POINTER() is NULLing out the pointer *or*
 * 2.	The referenced data structure is not yet visible to any reader *or*
 * 3.	The referenced data structure has already been exposed to
 *	readers either at compile time or via rcu_assign_pointer() *and*
 *
 *	a.	You have not made *any* reader-visible changes to
 *		this structure since then *or*
 *	b.	It is OK for readers accessing this structure from its
 *		new location to see its old state. (For example, the
 *		changes were to statistical counters or other state
 *		where exact synchronization is not required.)
 *
 * Failure to follow these rules governing use of RCU_INIT_POINTER() will
 * result in impossible-to-diagnose memory corruption. As in the structures
 * will look OK in crash dumps, but any concurrent RCU readers might
 * see pre-initialized values of the referenced data structure. So
 * please be very careful how you use RCU_INIT_POINTER()!!!
 *
 * If you are creating an RCU-protected linked structure that is accessed
 * by a single external-to-structure RCU-protected pointer, then you may
 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
 * pointers, but you must use rcu_assign_pointer() to initialize the
 * external-to-structure pointer *after* you have completely initialized
 * the reader-accessible portions of the linked structure.
 */
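/*
 * Sketch of legitimate RCU_INIT_POINTER() use with the hypothetical
 * struct foo: internal pointers of a not-yet-published structure need
 * no ordering, because the final rcu_assign_pointer() orders everything
 * that came before it.
 */
struct foo_pair {
	struct foo __rcu *left;
	struct rcu_head rh;
};
static struct foo_pair __rcu *pair_gp;

static void publish_pair(struct foo_pair *pp, struct foo *l)
{
	RCU_INIT_POINTER(pp->left, l);	/* pp is not yet reader-visible */
	rcu_assign_pointer(pair_gp, pp);	/* now publish */
}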
/**
 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
 *
 * GCC-style initialization for an RCU-protected pointer in a structure field.
 */
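/*
 * Sketch: compile-time initialization of a structure containing an RCU
 * pointer; default_foo and foo_holder are hypothetical. The macro
 * expands to a designated initializer, so no ".field =" prefix is
 * written at the use site.
 */
static struct foo default_foo;

static struct foo_holder {
	struct foo __rcu *ptr;
} holder = {
	RCU_POINTER_INITIALIZER(ptr, &default_foo),
};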
/*
 * Helper macro for kfree_rcu() to prevent argument-expansion eyestrain.
 */
/**
 * kfree_rcu() - kfree an object after a grace period.
 * @ptr: pointer to kfree
 * @rhf: the name of the struct rcu_head within the type of @ptr
 *
 * Many RCU callbacks just call kfree() on the base structure. These
 * functions are trivial, but their size adds up, and furthermore, when
 * they are used in a kernel module, that module must invoke the
 * high-latency rcu_barrier() function at module-unload time. The
 * kfree_rcu() macro avoids both costs by encoding the rcu_head offset
 * directly in the callback pointer.
 *
 * Because the functions are not allowed in the low-order 4096 bytes of
 * kernel virtual memory, offsets up to 4095 bytes can be accommodated.
 * If the offset is larger than 4095 bytes, a compile-time error will
 * be generated in __kvfree_rcu().
 */
#define kfree_rcu(ptr, rhf)						\
do {									\
	typeof(ptr) ___p = (ptr);					\
									\
	if (___p)							\
		__kvfree_rcu(&((___p)->rhf), offsetof(typeof(*(ptr)), rhf)); \
} while (0)
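/*
 * Sketch of kfree_rcu() usage with the hypothetical struct foo: no
 * callback function needs to be written, and module unload needs no
 * rcu_barrier() for this call site.
 */
static void retire_foo_easily(struct foo *p)
{
	kfree_rcu(p, rh);	/* "rh" names the rcu_head inside struct foo */
}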
/**
 * kvfree_rcu() - kvfree an object after a grace period.
 *
 * This macro takes one or two arguments, depending on whether the
 * object is head-less or not. If the object has a struct rcu_head,
 * the two-argument form behaves like kfree_rcu(). The head-less
 * variant takes only the pointer to be freed after a grace period.
 *
 * Please note, the head-less way of freeing is permitted only from
 * contexts that can sleep (it must follow might_sleep() annotation).
 * Otherwise, embed an rcu_head structure within the type of @ptr.
 */
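/*
 * Sketch of the head-less, single-argument form: no rcu_head is needed,
 * but the call may sleep, so it is legal only in sleepable context.
 */
static void retire_buffer(void *buf)
{
	might_sleep();
	kvfree_rcu(buf);	/* single-argument (head-less) variant */
}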
/*
 * Place this after a lock-acquisition primitive to guarantee that
 * an UNLOCK+LOCK pair acts as a full barrier.
 */
/**
 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
 * @rhp: The rcu_head structure to initialize.
 *
 * If you intend to invoke rcu_head_after_call_rcu() to test whether a
 * given rcu_head structure has already been passed to call_rcu(), then
 * you must also invoke this rcu_head_init() function on it just after
 * allocating that structure.
 */
static inline void rcu_head_init(struct rcu_head *rhp)
{
	rhp->func = (rcu_callback_t)~0L;
}
/**
 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
 * @rhp: The rcu_head structure to test.
 * @f: The function passed to call_rcu() along with @rhp.
 *
 * Returns @true if the @rhp has been passed to call_rcu() with @f, and
 * @false otherwise. Calls to this function must not race with callback
 * invocation. One way to avoid such races is to enclose the call in an
 * RCU read-side critical section that includes a read-side fetch
 * of the pointer to the structure containing @rhp.
 */
static inline bool
rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
{
	rcu_callback_t func = READ_ONCE(rhp->func);

	if (func == f)
		return true;
	WARN_ON_ONCE(func != (rcu_callback_t)~0L);
	return false;
}
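/*
 * Debug-only sketch, reusing the hypothetical struct foo and the
 * free_foo_cb() callback from the earlier sketches. rcu_head_init()
 * just after allocation enables the later query. The read-side
 * critical section entered before call_rcu() prevents the callback from
 * being invoked (and the object freed) until after rcu_read_unlock(),
 * so the query does not race with callback invocation.
 */
static struct foo *alloc_foo(void)
{
	struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (p)
		rcu_head_init(&p->rh);	/* mark as not yet passed to call_rcu() */
	return p;
}

static void debug_retire_foo(struct foo *p)
{
	rcu_read_lock();	/* holds off callback invocation */
	call_rcu(&p->rh, free_foo_cb);
	WARN_ON_ONCE(!rcu_head_after_call_rcu(&p->rh, free_foo_cb));
	rcu_read_unlock();
}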