
Lines Matching +full:processor +full:- +full:b +full:- +full:side

1 /* SPDX-License-Identifier: GPL-2.0+ */
3 * Read-Copy Update mechanism for mutual exclusion
15 * For detailed explanation of Read-Copy Update mechanism see -
31 #include <asm/processor.h>
35 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
36 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
38 #define USHORT_CMP_GE(a, b) (USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
39 #define USHORT_CMP_LT(a, b) (USHRT_MAX / 2 < (unsigned short)((a) - (b)))
52 // not-yet-completed RCU grace periods.
56 * same_state_synchronize_rcu - Are two old-state values identical?
57 * @oldstate1: First old-state value.
58 * @oldstate2: Second old-state value.
60 * The two old-state values must have been obtained from either
64 * are tracked by old-state values to push these values to a list header,
80 * nesting depth, but makes sense only if CONFIG_PREEMPT_RCU -- in other
83 #define rcu_preempt_depth() READ_ONCE(current->rcu_read_lock_nesting)
155 static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
161 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
169 if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
170 WRITE_ONCE((t)->rcu_tasks_holdout, false); \
181 // Bits for ->trc_reader_special.b.need_qs field.
190 int ___rttq_nesting = READ_ONCE((t)->trc_reader_nesting); \
192 if (unlikely(READ_ONCE((t)->trc_reader_special.b.need_qs) == TRC_NEED_QS) && \
196 !READ_ONCE((t)->trc_reader_special.b.blocked)) { \
231 * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
243 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
246 * report potential quiescent states to RCU-tasks even if the cond_resched()
256 * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
259 * This helper is for long-running softirq handlers, such as NAPI threads in
263 * provide both RCU and RCU-Tasks quiescent states. Note that this macro
266 * Because regions of code that have disabled softirq act as RCU read-side
273 * effect because cond_resched() does not provide RCU-Tasks quiescent states.
389 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
395 * and rechecks it after checking (c) to prevent false-positive splats
413 "Illegal context switch in RCU read-side critical section");
424 "Illegal context switch in RCU-bh read-side critical section"); \
426 "Illegal context switch in RCU-sched read-side critical section"); \
458 * unrcu_pointer - mark a pointer as not being RCU protected
495 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
501 * rcu_assign_pointer() - assign to RCU-protected pointer
505 * Assigns the specified value to the specified RCU-protected
513 * will be dereferenced by RCU read-side code.
520 * impossible-to-diagnose memory corruption. So please be careful.
527 * macros, this execute-arguments-only-once property is important, so
543 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
548 * Perform a replacement, where @rcu_ptr is an RCU-annotated
561 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
564 * Return the value of the specified RCU-protected pointer, but omit the
565 * lockdep checks for being in an RCU read-side critical section. This is
567 * not dereferenced, for example, when testing an RCU-protected pointer
569 * where update-side locks prevent the value of the pointer from changing,
571 * Within an RCU read-side critical section, there is little reason to
580 * It is also permissible to use rcu_access_pointer() when read-side
584 * down multi-linked structures after a grace period has elapsed. However,
590 * rcu_dereference_check() - rcu_dereference with debug checking
598 * An implicit check for being in an RCU read-side critical section
603 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
605 * could be used to indicate to lockdep that foo->bar may only be dereferenced
607 * the bar struct at foo->bar is held.
613 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
614 * atomic_read(&foo->usage) == 0);
627 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
631 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
643 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
647 * This is the RCU-sched counterpart to rcu_dereference_check().
663 * The no-tracing version of rcu_dereference_raw() must not call
670 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
674 * Return the value of the specified RCU-protected pointer, but omit
675 * the READ_ONCE(). This is useful in cases where update-side locks
681 * This function is only for update-side use. Using this function
690 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
698 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
706 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
714 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
726 * if (!atomic_inc_not_zero(p->refcnt))
736 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
739 * are within RCU read-side critical sections, then the
742 * on one CPU while other CPUs are within RCU read-side critical
748 * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
753 * with new RCU read-side critical sections. One way that this can happen
755 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
756 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
757 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
758 * callback is invoked. This is legal, because the RCU read-side critical
764 * RCU read-side critical sections may be nested. Any deferred actions
765 * will be deferred until the outermost RCU read-side critical section
770 * read-side critical section that would block in a !PREEMPTION kernel.
773 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
774 * it is illegal to block while in an RCU read-side critical section.
776 * kernel builds, RCU read-side critical sections may be preempted,
778 * implementations in real-time (with -rt patchset) kernel builds, RCU
779 * read-side critical sections may be preempted and they may also block, but
794 * a bug -- this property is what provides RCU's performance benefits.
802 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
807 * also extends to the scheduler's runqueue and priority-inheritance
808 * spinlocks, courtesy of the quiescent-state deferral that is carried
823 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
827 * read-side critical section. However, please note that this equivalence
846 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
860 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
863 * Read-side critical sections can also be introduced by anything else that
891 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
912 * RCU_INIT_POINTER() - initialize an RCU protected pointer
916 * Initialize an RCU-protected pointer in special cases where readers
926 * a. You have not made *any* reader-visible changes to
928 * b. It is OK for readers accessing this structure from its
934 * result in impossible-to-diagnose memory corruption. As in the structures
936 * see pre-initialized values of the referenced data structure. So
939 * If you are creating an RCU-protected linked structure that is accessed
940 * by a single external-to-structure RCU-protected pointer, then you may
941 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
943 * external-to-structure pointer *after* you have completely initialized
944 * the reader-accessible portions of the linked structure.
956 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
960 * GCC-style initialization for an RCU-protected pointer in a structure field.
972 * kfree_rcu() - kfree an object after a grace period.
973 * @ptr: pointer to kfree for double-argument invocations.
979 * high-latency rcu_barrier() function at module-unload time.
984 * Because the functions are not allowed in the low-order 4096 bytes of
986 * If the offset is larger than 4095 bytes, a compile-time error will
1003 * kfree_rcu_mightsleep() - kfree an object after a grace period.
1004 * @ptr: pointer to kfree for single-argument invocations.
1006 * When it comes to head-less variant, only one argument
1014 * Please note, head-less way of freeing is permitted to
1028 kvfree_call_rcu(&((___p)->rhf), (void *) (___p)); \
1041 * Place this after a lock-acquisition primitive to guarantee that
1056 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
1067 rhp->func = (rcu_callback_t)~0L;
1071 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
1080 * in an RCU read-side critical section that includes a read-side fetch
1086 rcu_callback_t func = READ_ONCE(rhp->func);