1 //! Atomic types
2 //!
3 //! Atomic types provide primitive shared-memory communication between
4 //! threads, and are the building blocks of other concurrent
5 //! types.
6 //!
7 //! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically `atomic_ref`.
8 //! Basically, creating a *shared reference* to one of the Rust atomic types corresponds to creating
9 //! an `atomic_ref` in C++; the `atomic_ref` is destroyed when the lifetime of the shared reference
10 //! ends. (A Rust atomic type that is exclusively owned or behind a mutable reference does *not*
11 //! correspond to an "atomic object" in C++, since it can be accessed via non-atomic operations.)
12 //!
13 //! This module defines atomic versions of a select number of primitive
14 //! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
15 //! [`AtomicI8`], [`AtomicU16`], etc.
16 //! Atomic types present operations that, when used correctly, synchronize
17 //! updates between threads.
18 //!
19 //! Each method takes an [`Ordering`] which represents the strength of
20 //! the memory barrier for that operation. These orderings are the
21 //! same as the [C++20 atomic orderings][1]. For more information see the [nomicon][2].
22 //!
23 //! [cpp]: https://en.cppreference.com/w/cpp/atomic
24 //! [1]: https://en.cppreference.com/w/cpp/atomic/memory_order
25 //! [2]: ../../../nomicon/atomics.html
26 //!
27 //! Atomic variables are safe to share between threads (they implement [`Sync`])
28 //! but they do not themselves provide the mechanism for sharing and follow the
29 //! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
30 //! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
31 //! atomically-reference-counted shared pointer).
32 //!
33 //! [arc]: ../../../std/sync/struct.Arc.html
34 //!
35 //! Atomic types may be stored in static variables, initialized using
36 //! the constant initializers like [`AtomicBool::new`]. Atomic statics
37 //! are often used for lazy global initialization.
38 //!
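//! For example, a process-wide "initialized" flag can live in an atomic static
//! (a minimal sketch; the names and the one-time setup are illustrative only):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! fn init_once() {
//!     // Only the first caller to flip the flag runs the one-time setup.
//!     if INITIALIZED
//!         .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
//!         .is_ok()
//!     {
//!         // ... perform the one-time initialization here ...
//!     }
//! }
//!
//! init_once();
//! assert!(INITIALIZED.load(Ordering::Acquire));
//! ```
//!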
39 //! # Portability
40 //!
41 //! All atomic types in this module are guaranteed to be [lock-free] if they're
42 //! available. This means they don't internally acquire a global mutex. Atomic
43 //! types and operations are not guaranteed to be wait-free. This means that
44 //! operations like `fetch_or` may be implemented with a compare-and-swap loop.
45 //!
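//! For instance, on a target without a native atomic OR instruction, `fetch_or` could be
//! emulated with a compare-exchange loop along these lines (an illustrative sketch, not
//! the actual implementation):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn emulated_fetch_or(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         // Retry until the stored value is still the one we computed the OR from.
//!         match a.compare_exchange_weak(old, old | val, order, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let x = AtomicUsize::new(0b01);
//! assert_eq!(emulated_fetch_or(&x, 0b10, Ordering::SeqCst), 0b01);
//! assert_eq!(x.load(Ordering::Relaxed), 0b11);
//! ```
//!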
46 //! Atomic operations may be implemented at the instruction layer with
47 //! larger-size atomics. For example some platforms use 4-byte atomic
48 //! instructions to implement `AtomicI8`. Note that this emulation should not
49 //! have an impact on the correctness of code; it's just something to be aware of.
50 //!
51 //! The atomic types in this module might not be available on all platforms. The
52 //! atomic types here are all widely available, however, and can generally be
53 //! relied upon to exist. Some notable exceptions are:
54 //!
55 //! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
56 //!   `AtomicI64` types.
57 //! * Non-Linux ARM platforms like `armv5te` only provide `load`
58 //!   and `store` operations, and do not support Compare and Swap (CAS)
59 //!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
60 //!   these CAS operations are implemented via [operating system support], which
61 //!   may come with a performance penalty.
62 //! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
63 //!   and do not support Compare and Swap (CAS) operations, such as `swap`,
64 //!   `fetch_add`, etc.
65 //!
66 //! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
67 //!
68 //! Note that future platforms may be added that also do not have support for
69 //! some atomic operations. Maximally portable code will want to be careful
70 //! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
71 //! generally the most portable, but even then they're not available everywhere.
72 //! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
73 //! `core` does not.
74 //!
75 //! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
76 //! compile based on the target's supported bit widths. It is a key-value
77 //! option set for each supported size, with values "8", "16", "32", "64",
78 //! "128", and "ptr" for pointer-sized atomics.
79 //!
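//! For example, a crate can gate a 64-bit counter on the availability of 64-bit atomic
//! operations (a minimal sketch; the item name is illustrative only):
//!
//! ```
//! #[cfg(target_has_atomic = "64")]
//! static BYTES_SENT: std::sync::atomic::AtomicU64 =
//!     std::sync::atomic::AtomicU64::new(0);
//! ```
//!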
80 //! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
81 //!
82 //! # Examples
83 //!
84 //! A simple spinlock:
85 //!
86 //! ```
87 //! use std::sync::Arc;
88 //! use std::sync::atomic::{AtomicUsize, Ordering};
89 //! use std::{hint, thread};
90 //!
91 //! fn main() {
92 //!     let spinlock = Arc::new(AtomicUsize::new(1));
93 //!
94 //!     let spinlock_clone = Arc::clone(&spinlock);
95 //!     let thread = thread::spawn(move || {
96 //!         spinlock_clone.store(0, Ordering::SeqCst);
97 //!     });
98 //!
99 //!     // Wait for the other thread to release the lock
100 //!     while spinlock.load(Ordering::SeqCst) != 0 {
101 //!         hint::spin_loop();
102 //!     }
103 //!
104 //!     if let Err(panic) = thread.join() {
105 //!         println!("Thread had an error: {panic:?}");
106 //!     }
107 //! }
108 //! ```
109 //!
110 //! Keep a global count of live threads:
111 //!
112 //! ```
113 //! use std::sync::atomic::{AtomicUsize, Ordering};
114 //!
115 //! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
116 //!
117 //! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::SeqCst);
118 //! println!("live threads: {}", old_thread_count + 1);
119 //! ```
120 
121 #![stable(feature = "rust1", since = "1.0.0")]
122 #![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
123 #![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
124 #![rustc_diagnostic_item = "atomic_mod"]
125 
126 use self::Ordering::*;
127 
128 use crate::cell::UnsafeCell;
129 use crate::fmt;
130 use crate::intrinsics;
131 
132 use crate::hint::spin_loop;
133 
134 /// A boolean type which can be safely shared between threads.
135 ///
136 /// This type has the same in-memory representation as a [`bool`].
137 ///
138 /// **Note**: This type is only available on platforms that support atomic
139 /// loads and stores of `u8`.
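///
/// A minimal sketch of sharing a stop flag between threads (the names are illustrative only):
///
/// ```
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// static STOP: AtomicBool = AtomicBool::new(false);
///
/// let worker = std::thread::spawn(|| {
///     while !STOP.load(Ordering::Relaxed) {
///         std::hint::spin_loop();
///     }
/// });
///
/// // Ask the worker to stop, then wait for it.
/// STOP.store(true, Ordering::Relaxed);
/// worker.join().unwrap();
/// ```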
140 #[cfg(target_has_atomic_load_store = "8")]
141 #[stable(feature = "rust1", since = "1.0.0")]
142 #[rustc_diagnostic_item = "AtomicBool"]
143 #[repr(C, align(1))]
144 pub struct AtomicBool {
145     v: UnsafeCell<u8>,
146 }
147 
148 #[cfg(target_has_atomic_load_store = "8")]
149 #[stable(feature = "rust1", since = "1.0.0")]
150 impl Default for AtomicBool {
151     /// Creates an `AtomicBool` initialized to `false`.
152     #[inline]
153     fn default() -> Self {
154         Self::new(false)
155     }
156 }
157 
158 // Send is implicitly implemented for AtomicBool.
159 #[cfg(target_has_atomic_load_store = "8")]
160 #[stable(feature = "rust1", since = "1.0.0")]
161 unsafe impl Sync for AtomicBool {}
162 
163 /// A raw pointer type which can be safely shared between threads.
164 ///
165 /// This type has the same in-memory representation as a `*mut T`.
166 ///
167 /// **Note**: This type is only available on platforms that support atomic
168 /// loads and stores of pointers. Its size depends on the target pointer's size.
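///
/// A minimal sketch of atomically swapping one pointer for another (the variable names are
/// illustrative only):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let mut value = 10;
/// let atomic = AtomicPtr::new(&mut value);
///
/// let mut other = 20;
/// // `swap` installs the new pointer and returns the previously stored one.
/// let old = atomic.swap(&mut other, Ordering::AcqRel);
///
/// assert_eq!(unsafe { *old }, 10);
/// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 20);
/// ```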
169 #[cfg(target_has_atomic_load_store = "ptr")]
170 #[stable(feature = "rust1", since = "1.0.0")]
171 #[cfg_attr(not(test), rustc_diagnostic_item = "AtomicPtr")]
172 #[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
173 #[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
174 #[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
175 pub struct AtomicPtr<T> {
176     p: UnsafeCell<*mut T>,
177 }
178 
179 #[cfg(target_has_atomic_load_store = "ptr")]
180 #[stable(feature = "rust1", since = "1.0.0")]
181 impl<T> Default for AtomicPtr<T> {
182     /// Creates a null `AtomicPtr<T>`.
183     fn default() -> AtomicPtr<T> {
184         AtomicPtr::new(crate::ptr::null_mut())
185     }
186 }
187 
188 #[cfg(target_has_atomic_load_store = "ptr")]
189 #[stable(feature = "rust1", since = "1.0.0")]
190 unsafe impl<T> Send for AtomicPtr<T> {}
191 #[cfg(target_has_atomic_load_store = "ptr")]
192 #[stable(feature = "rust1", since = "1.0.0")]
193 unsafe impl<T> Sync for AtomicPtr<T> {}
194 
195 /// Atomic memory orderings
196 ///
197 /// Memory orderings specify the way atomic operations synchronize memory.
198 /// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
199 /// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
200 /// operations synchronize other memory while additionally preserving a total order of such
201 /// operations across all threads.
202 ///
203 /// Rust's memory orderings are [the same as those of
204 /// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
205 ///
206 /// For more information see the [nomicon].
207 ///
208 /// [nomicon]: ../../../nomicon/atomics.html
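///
/// A common pattern is to publish data with an [`Ordering::Release`] store and observe it
/// with an [`Ordering::Acquire`] load. A minimal single-threaded sketch of the pairing
/// (the `DATA`/`READY` names are illustrative only; in real code the two halves run on
/// different threads):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// // Producer: write the data, then publish it.
/// DATA.store(42, Ordering::Relaxed);
/// READY.store(true, Ordering::Release);
///
/// // Consumer: if the `Acquire` load sees `true`, the write to `DATA` is visible too.
/// if READY.load(Ordering::Acquire) {
///     assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// }
/// ```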
209 #[stable(feature = "rust1", since = "1.0.0")]
210 #[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
211 #[non_exhaustive]
212 #[rustc_diagnostic_item = "Ordering"]
213 pub enum Ordering {
214     /// No ordering constraints, only atomic operations.
215     ///
216     /// Corresponds to [`memory_order_relaxed`] in C++20.
217     ///
218     /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
219     #[stable(feature = "rust1", since = "1.0.0")]
220     Relaxed,
221     /// When coupled with a store, all previous operations become ordered
222     /// before any load of this value with [`Acquire`] (or stronger) ordering.
223     /// In particular, all previous writes become visible to all threads
224     /// that perform an [`Acquire`] (or stronger) load of this value.
225     ///
226     /// Notice that using this ordering for an operation that combines loads
227     /// and stores leads to a [`Relaxed`] load operation!
228     ///
229     /// This ordering is only applicable for operations that can perform a store.
230     ///
231     /// Corresponds to [`memory_order_release`] in C++20.
232     ///
233     /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
234     #[stable(feature = "rust1", since = "1.0.0")]
235     Release,
236     /// When coupled with a load, if the loaded value was written by a store operation with
237     /// [`Release`] (or stronger) ordering, then all subsequent operations
238     /// become ordered after that store. In particular, all subsequent loads will see data
239     /// written before the store.
240     ///
241     /// Notice that using this ordering for an operation that combines loads
242     /// and stores leads to a [`Relaxed`] store operation!
243     ///
244     /// This ordering is only applicable for operations that can perform a load.
245     ///
246     /// Corresponds to [`memory_order_acquire`] in C++20.
247     ///
248     /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
249     #[stable(feature = "rust1", since = "1.0.0")]
250     Acquire,
251     /// Has the effects of both [`Acquire`] and [`Release`] together:
252     /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
253     ///
254     /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
255     /// not performing any store and hence it has just [`Acquire`] ordering. However,
256     /// `AcqRel` will never perform [`Relaxed`] accesses.
257     ///
258     /// This ordering is only applicable for operations that combine both loads and stores.
259     ///
260     /// Corresponds to [`memory_order_acq_rel`] in C++20.
261     ///
262     /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
263     #[stable(feature = "rust1", since = "1.0.0")]
264     AcqRel,
265     /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
266     /// operations, respectively) with the additional guarantee that all threads see all
267     /// sequentially consistent operations in the same order.
268     ///
269     /// Corresponds to [`memory_order_seq_cst`] in C++20.
270     ///
271     /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
272     #[stable(feature = "rust1", since = "1.0.0")]
273     SeqCst,
274 }
275 
276 /// An [`AtomicBool`] initialized to `false`.
277 #[cfg(target_has_atomic_load_store = "8")]
278 #[stable(feature = "rust1", since = "1.0.0")]
279 #[deprecated(
280     since = "1.34.0",
281     note = "the `new` function is now preferred",
282     suggestion = "AtomicBool::new(false)"
283 )]
284 pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
285 
286 #[cfg(target_has_atomic_load_store = "8")]
287 impl AtomicBool {
288     /// Creates a new `AtomicBool`.
289     ///
290     /// # Examples
291     ///
292     /// ```
293     /// use std::sync::atomic::AtomicBool;
294     ///
295     /// let atomic_true = AtomicBool::new(true);
296     /// let atomic_false = AtomicBool::new(false);
297     /// ```
298     #[inline]
299     #[stable(feature = "rust1", since = "1.0.0")]
300     #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
301     #[must_use]
302     pub const fn new(v: bool) -> AtomicBool {
303         AtomicBool { v: UnsafeCell::new(v as u8) }
304     }
305 
306     /// Creates a new `AtomicBool` from a pointer.
307     ///
308     /// # Examples
309     ///
310     /// ```
311     /// #![feature(atomic_from_ptr, pointer_is_aligned)]
312     /// use std::sync::atomic::{self, AtomicBool};
313     /// use std::mem::align_of;
314     ///
315     /// // Get a pointer to an allocated value
316     /// let ptr: *mut bool = Box::into_raw(Box::new(false));
317     ///
318     /// assert!(ptr.is_aligned_to(align_of::<AtomicBool>()));
319     ///
320     /// {
321     ///     // Create an atomic view of the allocated value
322     ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
323     ///
324     ///     // Use `atomic` for atomic operations, possibly share it with other threads
325     ///     atomic.store(true, atomic::Ordering::Relaxed);
326     /// }
327     ///
328     /// // It's ok to non-atomically access the value behind `ptr`,
329     /// // since the reference to the atomic ended its lifetime in the block above
330     /// assert_eq!(unsafe { *ptr }, true);
331     ///
332     /// // Deallocate the value
333     /// unsafe { drop(Box::from_raw(ptr)) }
334     /// ```
335     ///
336     /// # Safety
337     ///
338     /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can be bigger than `align_of::<bool>()`).
339     /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
340     /// * The value behind `ptr` must not be accessed through non-atomic operations for the whole lifetime `'a`.
341     ///
342     /// [valid]: crate::ptr#safety
343     #[unstable(feature = "atomic_from_ptr", issue = "108652")]
344     #[rustc_const_unstable(feature = "atomic_from_ptr", issue = "108652")]
345     pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
346         // SAFETY: guaranteed by the caller
347         unsafe { &*ptr.cast() }
348     }
349 
350     /// Returns a mutable reference to the underlying [`bool`].
351     ///
352     /// This is safe because the mutable reference guarantees that no other threads are
353     /// concurrently accessing the atomic data.
354     ///
355     /// # Examples
356     ///
357     /// ```
358     /// use std::sync::atomic::{AtomicBool, Ordering};
359     ///
360     /// let mut some_bool = AtomicBool::new(true);
361     /// assert_eq!(*some_bool.get_mut(), true);
362     /// *some_bool.get_mut() = false;
363     /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
364     /// ```
365     #[inline]
366     #[stable(feature = "atomic_access", since = "1.15.0")]
367     pub fn get_mut(&mut self) -> &mut bool {
368         // SAFETY: the mutable reference guarantees unique ownership.
369         unsafe { &mut *(self.v.get() as *mut bool) }
370     }
371 
372     /// Get atomic access to a `&mut bool`.
373     ///
374     /// # Examples
375     ///
376     /// ```
377     /// #![feature(atomic_from_mut)]
378     /// use std::sync::atomic::{AtomicBool, Ordering};
379     ///
380     /// let mut some_bool = true;
381     /// let a = AtomicBool::from_mut(&mut some_bool);
382     /// a.store(false, Ordering::Relaxed);
383     /// assert_eq!(some_bool, false);
384     /// ```
385     #[inline]
386     #[cfg(target_has_atomic_equal_alignment = "8")]
387     #[unstable(feature = "atomic_from_mut", issue = "76314")]
388     pub fn from_mut(v: &mut bool) -> &mut Self {
389         // SAFETY: the mutable reference guarantees unique ownership, and
390         // alignment of both `bool` and `Self` is 1.
391         unsafe { &mut *(v as *mut bool as *mut Self) }
392     }
393 
394     /// Get non-atomic access to a `&mut [AtomicBool]` slice.
395     ///
396     /// This is safe because the mutable reference guarantees that no other threads are
397     /// concurrently accessing the atomic data.
398     ///
399     /// # Examples
400     ///
401     /// ```
402     /// #![feature(atomic_from_mut, inline_const)]
403     /// use std::sync::atomic::{AtomicBool, Ordering};
404     ///
405     /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
406     ///
407     /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
408     /// assert_eq!(view, [false; 10]);
409     /// view[..5].copy_from_slice(&[true; 5]);
410     ///
411     /// std::thread::scope(|s| {
412     ///     for t in &some_bools[..5] {
413     ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
414     ///     }
415     ///
416     ///     for f in &some_bools[5..] {
417     ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
418     ///     }
419     /// });
420     /// ```
421     #[inline]
422     #[unstable(feature = "atomic_from_mut", issue = "76314")]
423     pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
424         // SAFETY: the mutable reference guarantees unique ownership.
425         unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
426     }
427 
428     /// Get atomic access to a `&mut [bool]` slice.
429     ///
430     /// # Examples
431     ///
432     /// ```
433     /// #![feature(atomic_from_mut)]
434     /// use std::sync::atomic::{AtomicBool, Ordering};
435     ///
436     /// let mut some_bools = [false; 10];
437     /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
438     /// std::thread::scope(|s| {
439     ///     for i in 0..a.len() {
440     ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
441     ///     }
442     /// });
443     /// assert_eq!(some_bools, [true; 10]);
444     /// ```
445     #[inline]
446     #[cfg(target_has_atomic_equal_alignment = "8")]
447     #[unstable(feature = "atomic_from_mut", issue = "76314")]
448     pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
449         // SAFETY: the mutable reference guarantees unique ownership, and
450         // alignment of both `bool` and `Self` is 1.
451         unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
452     }
453 
454     /// Consumes the atomic and returns the contained value.
455     ///
456     /// This is safe because passing `self` by value guarantees that no other threads are
457     /// concurrently accessing the atomic data.
458     ///
459     /// # Examples
460     ///
461     /// ```
462     /// use std::sync::atomic::AtomicBool;
463     ///
464     /// let some_bool = AtomicBool::new(true);
465     /// assert_eq!(some_bool.into_inner(), true);
466     /// ```
467     #[inline]
468     #[stable(feature = "atomic_access", since = "1.15.0")]
469     #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
470     pub const fn into_inner(self) -> bool {
471         self.v.into_inner() != 0
472     }
473 
474     /// Loads a value from the bool.
475     ///
476     /// `load` takes an [`Ordering`] argument which describes the memory ordering
477     /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
478     ///
479     /// # Panics
480     ///
481     /// Panics if `order` is [`Release`] or [`AcqRel`].
482     ///
483     /// # Examples
484     ///
485     /// ```
486     /// use std::sync::atomic::{AtomicBool, Ordering};
487     ///
488     /// let some_bool = AtomicBool::new(true);
489     ///
490     /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
491     /// ```
492     #[inline]
493     #[stable(feature = "rust1", since = "1.0.0")]
494     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
495     pub fn load(&self, order: Ordering) -> bool {
496         // SAFETY: any data races are prevented by atomic intrinsics and the raw
497         // pointer passed in is valid because we got it from a reference.
498         unsafe { atomic_load(self.v.get(), order) != 0 }
499     }
500 
501     /// Stores a value into the bool.
502     ///
503     /// `store` takes an [`Ordering`] argument which describes the memory ordering
504     /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
505     ///
506     /// # Panics
507     ///
508     /// Panics if `order` is [`Acquire`] or [`AcqRel`].
509     ///
510     /// # Examples
511     ///
512     /// ```
513     /// use std::sync::atomic::{AtomicBool, Ordering};
514     ///
515     /// let some_bool = AtomicBool::new(true);
516     ///
517     /// some_bool.store(false, Ordering::Relaxed);
518     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
519     /// ```
520     #[inline]
521     #[stable(feature = "rust1", since = "1.0.0")]
522     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
523     pub fn store(&self, val: bool, order: Ordering) {
524         // SAFETY: any data races are prevented by atomic intrinsics and the raw
525         // pointer passed in is valid because we got it from a reference.
526         unsafe {
527             atomic_store(self.v.get(), val as u8, order);
528         }
529     }
530 
531     /// Stores a value into the bool, returning the previous value.
532     ///
533     /// `swap` takes an [`Ordering`] argument which describes the memory ordering
534     /// of this operation. All ordering modes are possible. Note that using
535     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
536     /// using [`Release`] makes the load part [`Relaxed`].
537     ///
538     /// **Note:** This method is only available on platforms that support atomic
539     /// operations on `u8`.
540     ///
541     /// # Examples
542     ///
543     /// ```
544     /// use std::sync::atomic::{AtomicBool, Ordering};
545     ///
546     /// let some_bool = AtomicBool::new(true);
547     ///
548     /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
549     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
550     /// ```
551     #[inline]
552     #[stable(feature = "rust1", since = "1.0.0")]
553     #[cfg(target_has_atomic = "8")]
554     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
555     pub fn swap(&self, val: bool, order: Ordering) -> bool {
556         // SAFETY: data races are prevented by atomic intrinsics.
557         unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
558     }
559 
560     /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
561     ///
562     /// The return value is always the previous value. If it is equal to `current`, then the value
563     /// was updated.
564     ///
565     /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
566     /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
567     /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
568     /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
569     /// happens, and using [`Release`] makes the load part [`Relaxed`].
570     ///
571     /// **Note:** This method is only available on platforms that support atomic
572     /// operations on `u8`.
573     ///
574     /// # Migrating to `compare_exchange` and `compare_exchange_weak`
575     ///
576     /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
577     /// memory orderings:
578     ///
579     /// Original | Success | Failure
580     /// -------- | ------- | -------
581     /// Relaxed  | Relaxed | Relaxed
582     /// Acquire  | Acquire | Acquire
583     /// Release  | Release | Relaxed
584     /// AcqRel   | AcqRel  | Acquire
585     /// SeqCst   | SeqCst  | SeqCst
586     ///
587     /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
588     /// which allows the compiler to generate better assembly code when the compare and swap
589     /// is used in a loop.
590     ///
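    /// For example, a call such as `compare_and_swap(false, true, Ordering::AcqRel)` maps to
    /// `compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)` per the table above
    /// (an illustrative sketch of the migration):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(false);
    ///
    /// // Before (deprecated): flag.compare_and_swap(false, true, Ordering::AcqRel);
    /// // After: the success ordering stays `AcqRel`, the failure ordering becomes `Acquire`,
    /// // and the previous value is recovered from the `Result`.
    /// let prev = flag
    ///     .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(prev, false);
    /// ```
    ///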
591     /// # Examples
592     ///
593     /// ```
594     /// use std::sync::atomic::{AtomicBool, Ordering};
595     ///
596     /// let some_bool = AtomicBool::new(true);
597     ///
598     /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
599     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
600     ///
601     /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
602     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
603     /// ```
604     #[inline]
605     #[stable(feature = "rust1", since = "1.0.0")]
606     #[deprecated(
607         since = "1.50.0",
608         note = "Use `compare_exchange` or `compare_exchange_weak` instead"
609     )]
610     #[cfg(target_has_atomic = "8")]
611     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
612     pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
613         match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
614             Ok(x) => x,
615             Err(x) => x,
616         }
617     }
618 
619     /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
620     ///
621     /// The return value is a result indicating whether the new value was written and containing
622     /// the previous value. On success this value is guaranteed to be equal to `current`.
623     ///
624     /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
625     /// ordering of this operation. `success` describes the required ordering for the
626     /// read-modify-write operation that takes place if the comparison with `current` succeeds.
627     /// `failure` describes the required ordering for the load operation that takes place when
628     /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
629     /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
630     /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
631     ///
632     /// **Note:** This method is only available on platforms that support atomic
633     /// operations on `u8`.
634     ///
635     /// # Examples
636     ///
637     /// ```
638     /// use std::sync::atomic::{AtomicBool, Ordering};
639     ///
640     /// let some_bool = AtomicBool::new(true);
641     ///
642     /// assert_eq!(some_bool.compare_exchange(true,
643     ///                                       false,
644     ///                                       Ordering::Acquire,
645     ///                                       Ordering::Relaxed),
646     ///            Ok(true));
647     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
648     ///
649     /// assert_eq!(some_bool.compare_exchange(true, true,
650     ///                                       Ordering::SeqCst,
651     ///                                       Ordering::Acquire),
652     ///            Err(false));
653     /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
654     /// ```
655     #[inline]
656     #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
657     #[doc(alias = "compare_and_swap")]
658     #[cfg(target_has_atomic = "8")]
659     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
660     pub fn compare_exchange(
661         &self,
662         current: bool,
663         new: bool,
664         success: Ordering,
665         failure: Ordering,
666     ) -> Result<bool, bool> {
667         // SAFETY: data races are prevented by atomic intrinsics.
668         match unsafe {
669             atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
670         } {
671             Ok(x) => Ok(x != 0),
672             Err(x) => Err(x != 0),
673         }
674     }
675 
676     /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
677     ///
678     /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
679     /// comparison succeeds, which can result in more efficient code on some platforms. The
680     /// return value is a result indicating whether the new value was written and containing the
681     /// previous value.
682     ///
683     /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
684     /// ordering of this operation. `success` describes the required ordering for the
685     /// read-modify-write operation that takes place if the comparison with `current` succeeds.
686     /// `failure` describes the required ordering for the load operation that takes place when
687     /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
688     /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
689     /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
690     ///
691     /// **Note:** This method is only available on platforms that support atomic
692     /// operations on `u8`.
693     ///
694     /// # Examples
695     ///
696     /// ```
697     /// use std::sync::atomic::{AtomicBool, Ordering};
698     ///
699     /// let val = AtomicBool::new(false);
700     ///
701     /// let new = true;
702     /// let mut old = val.load(Ordering::Relaxed);
703     /// loop {
704     ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
705     ///         Ok(_) => break,
706     ///         Err(x) => old = x,
707     ///     }
708     /// }
709     /// ```
710     #[inline]
711     #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
712     #[doc(alias = "compare_and_swap")]
713     #[cfg(target_has_atomic = "8")]
714     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
715     pub fn compare_exchange_weak(
716         &self,
717         current: bool,
718         new: bool,
719         success: Ordering,
720         failure: Ordering,
721     ) -> Result<bool, bool> {
722         // SAFETY: data races are prevented by atomic intrinsics.
723         match unsafe {
724             atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
725         } {
726             Ok(x) => Ok(x != 0),
727             Err(x) => Err(x != 0),
728         }
729     }
730 
731     /// Logical "and" with a boolean value.
732     ///
733     /// Performs a logical "and" operation on the current value and the argument `val`, and sets
734     /// the new value to the result.
735     ///
736     /// Returns the previous value.
737     ///
738     /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
739     /// of this operation. All ordering modes are possible. Note that using
740     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
741     /// using [`Release`] makes the load part [`Relaxed`].
742     ///
743     /// **Note:** This method is only available on platforms that support atomic
744     /// operations on `u8`.
745     ///
746     /// # Examples
747     ///
748     /// ```
749     /// use std::sync::atomic::{AtomicBool, Ordering};
750     ///
751     /// let foo = AtomicBool::new(true);
752     /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
753     /// assert_eq!(foo.load(Ordering::SeqCst), false);
754     ///
755     /// let foo = AtomicBool::new(true);
756     /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
757     /// assert_eq!(foo.load(Ordering::SeqCst), true);
758     ///
759     /// let foo = AtomicBool::new(false);
760     /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
761     /// assert_eq!(foo.load(Ordering::SeqCst), false);
762     /// ```
763     #[inline]
764     #[stable(feature = "rust1", since = "1.0.0")]
765     #[cfg(target_has_atomic = "8")]
766     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
767     pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
768         // SAFETY: data races are prevented by atomic intrinsics.
769         unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
770     }
771 
772     /// Logical "nand" with a boolean value.
773     ///
774     /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
775     /// the new value to the result.
776     ///
777     /// Returns the previous value.
778     ///
779     /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
780     /// of this operation. All ordering modes are possible. Note that using
781     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
782     /// using [`Release`] makes the load part [`Relaxed`].
783     ///
784     /// **Note:** This method is only available on platforms that support atomic
785     /// operations on `u8`.
786     ///
787     /// # Examples
788     ///
789     /// ```
790     /// use std::sync::atomic::{AtomicBool, Ordering};
791     ///
792     /// let foo = AtomicBool::new(true);
793     /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
794     /// assert_eq!(foo.load(Ordering::SeqCst), true);
795     ///
796     /// let foo = AtomicBool::new(true);
797     /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
798     /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
799     /// assert_eq!(foo.load(Ordering::SeqCst), false);
800     ///
801     /// let foo = AtomicBool::new(false);
802     /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
803     /// assert_eq!(foo.load(Ordering::SeqCst), true);
804     /// ```
805     #[inline]
806     #[stable(feature = "rust1", since = "1.0.0")]
807     #[cfg(target_has_atomic = "8")]
808     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
809     pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
810         // We can't use atomic_nand here because it can result in a bool with
811         // an invalid value. This happens because the atomic operation is done
812         // with an 8-bit integer internally, which would set the upper 7 bits.
813         // So we just use fetch_xor or swap instead.
814         if val {
815             // !(x & true) == !x
816             // We must invert the bool.
817             self.fetch_xor(true, order)
818         } else {
819             // !(x & false) == true
820             // We must set the bool to true.
821             self.swap(true, order)
822         }
823     }
824 
825     /// Logical "or" with a boolean value.
826     ///
827     /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
828     /// new value to the result.
829     ///
830     /// Returns the previous value.
831     ///
832     /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
833     /// of this operation. All ordering modes are possible. Note that using
834     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
835     /// using [`Release`] makes the load part [`Relaxed`].
836     ///
837     /// **Note:** This method is only available on platforms that support atomic
838     /// operations on `u8`.
839     ///
840     /// # Examples
841     ///
842     /// ```
843     /// use std::sync::atomic::{AtomicBool, Ordering};
844     ///
845     /// let foo = AtomicBool::new(true);
846     /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
847     /// assert_eq!(foo.load(Ordering::SeqCst), true);
848     ///
849     /// let foo = AtomicBool::new(true);
850     /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
851     /// assert_eq!(foo.load(Ordering::SeqCst), true);
852     ///
853     /// let foo = AtomicBool::new(false);
854     /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
855     /// assert_eq!(foo.load(Ordering::SeqCst), false);
856     /// ```
857     #[inline]
858     #[stable(feature = "rust1", since = "1.0.0")]
859     #[cfg(target_has_atomic = "8")]
860     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
861     pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
862         // SAFETY: data races are prevented by atomic intrinsics.
863         unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
864     }
865 
866     /// Logical "xor" with a boolean value.
867     ///
868     /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
869     /// the new value to the result.
870     ///
871     /// Returns the previous value.
872     ///
873     /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
874     /// of this operation. All ordering modes are possible. Note that using
875     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
876     /// using [`Release`] makes the load part [`Relaxed`].
877     ///
878     /// **Note:** This method is only available on platforms that support atomic
879     /// operations on `u8`.
880     ///
881     /// # Examples
882     ///
883     /// ```
884     /// use std::sync::atomic::{AtomicBool, Ordering};
885     ///
886     /// let foo = AtomicBool::new(true);
887     /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
888     /// assert_eq!(foo.load(Ordering::SeqCst), true);
889     ///
890     /// let foo = AtomicBool::new(true);
891     /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
892     /// assert_eq!(foo.load(Ordering::SeqCst), false);
893     ///
894     /// let foo = AtomicBool::new(false);
895     /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
896     /// assert_eq!(foo.load(Ordering::SeqCst), false);
897     /// ```
898     #[inline]
899     #[stable(feature = "rust1", since = "1.0.0")]
900     #[cfg(target_has_atomic = "8")]
901     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
902     pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
903         // SAFETY: data races are prevented by atomic intrinsics.
904         unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
905     }
906 
907     /// Logical "not" with a boolean value.
908     ///
909     /// Performs a logical "not" operation on the current value, and sets
910     /// the new value to the result.
911     ///
912     /// Returns the previous value.
913     ///
914     /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
915     /// of this operation. All ordering modes are possible. Note that using
916     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
917     /// using [`Release`] makes the load part [`Relaxed`].
918     ///
919     /// **Note:** This method is only available on platforms that support atomic
920     /// operations on `u8`.
921     ///
922     /// # Examples
923     ///
924     /// ```
925     /// #![feature(atomic_bool_fetch_not)]
926     /// use std::sync::atomic::{AtomicBool, Ordering};
927     ///
928     /// let foo = AtomicBool::new(true);
929     /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
930     /// assert_eq!(foo.load(Ordering::SeqCst), false);
931     ///
932     /// let foo = AtomicBool::new(false);
933     /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
934     /// assert_eq!(foo.load(Ordering::SeqCst), true);
935     /// ```
936     #[inline]
937     #[unstable(feature = "atomic_bool_fetch_not", issue = "98485")]
938     #[cfg(target_has_atomic = "8")]
939     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
940     pub fn fetch_not(&self, order: Ordering) -> bool {
941         self.fetch_xor(true, order)
942     }
943 
944     /// Returns a mutable pointer to the underlying [`bool`].
945     ///
946     /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
947     /// This method is mostly useful for FFI, where the function signature may use
948     /// `*mut bool` instead of `&AtomicBool`.
949     ///
950     /// Returning a `*mut` pointer from a shared reference to this atomic is safe because the
951     /// atomic types work with interior mutability. All modifications of an atomic change the value
952     /// through a shared reference, and can do so safely as long as they use atomic operations. Any
953     /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
954     /// restriction: operations on it must be atomic.
955     ///
956     /// # Examples
957     ///
958     /// ```ignore (extern-declaration)
959     /// # fn main() {
960     /// use std::sync::atomic::AtomicBool;
961     ///
962     /// extern "C" {
963     ///     fn my_atomic_op(arg: *mut bool);
964     /// }
965     ///
966     /// let mut atomic = AtomicBool::new(true);
967     /// unsafe {
968     ///     my_atomic_op(atomic.as_ptr());
969     /// }
970     /// # }
971     /// ```
972     #[inline]
973     #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
974     #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
975     pub const fn as_ptr(&self) -> *mut bool {
976         self.v.get().cast()
977     }
978 
979     /// Fetches the value, and applies a function to it that returns an optional
980     /// new value. Returns a `Result` of `Ok(previous_value)` if the function
981     /// returned `Some(_)`, else `Err(previous_value)`.
982     ///
983     /// Note: This may call the function multiple times if the value has been
984     /// changed from other threads in the meantime, as long as the function
985     /// returns `Some(_)`, but the function will have been applied only once to
986     /// the stored value.
987     ///
988     /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
989     /// ordering of this operation. The first describes the required ordering for
990     /// when the operation finally succeeds while the second describes the
991     /// required ordering for loads. These correspond to the success and failure
992     /// orderings of [`AtomicBool::compare_exchange`] respectively.
993     ///
994     /// Using [`Acquire`] as success ordering makes the store part of this
995     /// operation [`Relaxed`], and using [`Release`] makes the final successful
996     /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
997     /// [`Acquire`] or [`Relaxed`].
998     ///
999     /// **Note:** This method is only available on platforms that support atomic
1000     /// operations on `u8`.
1001     ///
1002     /// # Considerations
1003     ///
1004     /// This method is not magic; it is not provided by the hardware.
1005     /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1006     /// In particular, this method will not circumvent the [ABA Problem].
1007     ///
1008     /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1009     ///
1010     /// # Examples
1011     ///
1012     /// ```rust
1013     /// use std::sync::atomic::{AtomicBool, Ordering};
1014     ///
1015     /// let x = AtomicBool::new(false);
1016     /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1017     /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1018     /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1019     /// assert_eq!(x.load(Ordering::SeqCst), false);
1020     /// ```
1021     #[inline]
1022     #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1023     #[cfg(target_has_atomic = "8")]
1024     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1025     pub fn fetch_update<F>(
1026         &self,
1027         set_order: Ordering,
1028         fetch_order: Ordering,
1029         mut f: F,
1030     ) -> Result<bool, bool>
1031     where
1032         F: FnMut(bool) -> Option<bool>,
1033     {
1034         let mut prev = self.load(fetch_order);
1035         while let Some(next) = f(prev) {
1036             match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1037                 x @ Ok(_) => return x,
1038                 Err(next_prev) => prev = next_prev,
1039             }
1040         }
1041         Err(prev)
1042     }
1043 }
1044 
1045 #[cfg(target_has_atomic_load_store = "ptr")]
1046 impl<T> AtomicPtr<T> {
1047     /// Creates a new `AtomicPtr`.
1048     ///
1049     /// # Examples
1050     ///
1051     /// ```
1052     /// use std::sync::atomic::AtomicPtr;
1053     ///
1054     /// let ptr = &mut 5;
1055     /// let atomic_ptr = AtomicPtr::new(ptr);
1056     /// ```
1057     #[inline]
1058     #[stable(feature = "rust1", since = "1.0.0")]
1059     #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1060     pub const fn new(p: *mut T) -> AtomicPtr<T> {
1061         AtomicPtr { p: UnsafeCell::new(p) }
1062     }
1063 
1064     /// Creates a new `AtomicPtr` from a pointer.
1065     ///
1066     /// # Examples
1067     ///
1068     /// ```
1069     /// #![feature(atomic_from_ptr, pointer_is_aligned)]
1070     /// use std::sync::atomic::{self, AtomicPtr};
1071     /// use std::mem::align_of;
1072     ///
1073     /// // Get a pointer to an allocated value
1074     /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1075     ///
1076     /// assert!(ptr.is_aligned_to(align_of::<AtomicPtr<u8>>()));
1077     ///
1078     /// {
1079     ///     // Create an atomic view of the allocated value
1080     ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1081     ///
1082     ///     // Use `atomic` for atomic operations, possibly share it with other threads
1083     ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1084     /// }
1085     ///
1086     /// // It's ok to non-atomically access the value behind `ptr`,
1087     /// // since the reference to the atomic ended its lifetime in the block above
1088     /// assert!(!unsafe { *ptr }.is_null());
1089     ///
1090     /// // Deallocate the value
1091     /// unsafe { drop(Box::from_raw(ptr)) }
1092     /// ```
1093     ///
1094     /// # Safety
1095     ///
1096     /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this can be bigger than `align_of::<*mut T>()`).
1097     /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1098     /// * The value behind `ptr` must not be accessed through non-atomic operations for the whole lifetime `'a`.
1099     ///
1100     /// [valid]: crate::ptr#safety
1101     #[unstable(feature = "atomic_from_ptr", issue = "108652")]
1102     #[rustc_const_unstable(feature = "atomic_from_ptr", issue = "108652")]
1103     pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1104         // SAFETY: guaranteed by the caller
1105         unsafe { &*ptr.cast() }
1106     }
1107 
1108     /// Returns a mutable reference to the underlying pointer.
1109     ///
1110     /// This is safe because the mutable reference guarantees that no other threads are
1111     /// concurrently accessing the atomic data.
1112     ///
1113     /// # Examples
1114     ///
1115     /// ```
1116     /// use std::sync::atomic::{AtomicPtr, Ordering};
1117     ///
1118     /// let mut data = 10;
1119     /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1120     /// let mut other_data = 5;
1121     /// *atomic_ptr.get_mut() = &mut other_data;
1122     /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1123     /// ```
1124     #[inline]
1125     #[stable(feature = "atomic_access", since = "1.15.0")]
1126     pub fn get_mut(&mut self) -> &mut *mut T {
1127         self.p.get_mut()
1128     }
1129 
1130     /// Get atomic access to a pointer.
1131     ///
1132     /// # Examples
1133     ///
1134     /// ```
1135     /// #![feature(atomic_from_mut)]
1136     /// use std::sync::atomic::{AtomicPtr, Ordering};
1137     ///
1138     /// let mut data = 123;
1139     /// let mut some_ptr = &mut data as *mut i32;
1140     /// let a = AtomicPtr::from_mut(&mut some_ptr);
1141     /// let mut other_data = 456;
1142     /// a.store(&mut other_data, Ordering::Relaxed);
1143     /// assert_eq!(unsafe { *some_ptr }, 456);
1144     /// ```
1145     #[inline]
1146     #[cfg(target_has_atomic_equal_alignment = "ptr")]
1147     #[unstable(feature = "atomic_from_mut", issue = "76314")]
1148     pub fn from_mut(v: &mut *mut T) -> &mut Self {
1149         use crate::mem::align_of;
1150         let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1151         // SAFETY:
1152         //  - the mutable reference guarantees unique ownership.
1153         //  - the alignment of `*mut T` and `Self` is the same on all platforms
1154         //    supported by rust, as verified above.
1155         unsafe { &mut *(v as *mut *mut T as *mut Self) }
1156     }
1157 
1158     /// Get non-atomic access to a `&mut [AtomicPtr]` slice.
1159     ///
1160     /// This is safe because the mutable reference guarantees that no other threads are
1161     /// concurrently accessing the atomic data.
1162     ///
1163     /// # Examples
1164     ///
1165     /// ```
1166     /// #![feature(atomic_from_mut, inline_const)]
1167     /// use std::ptr::null_mut;
1168     /// use std::sync::atomic::{AtomicPtr, Ordering};
1169     ///
1170     /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1171     ///
1172     /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1173     /// assert_eq!(view, [null_mut::<String>(); 10]);
1174     /// view
1175     ///     .iter_mut()
1176     ///     .enumerate()
1177     ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1178     ///
1179     /// std::thread::scope(|s| {
1180     ///     for ptr in &some_ptrs {
1181     ///         s.spawn(move || {
1182     ///             let ptr = ptr.load(Ordering::Relaxed);
1183     ///             assert!(!ptr.is_null());
1184     ///
1185     ///             let name = unsafe { Box::from_raw(ptr) };
1186     ///             println!("Hello, {name}!");
1187     ///         });
1188     ///     }
1189     /// });
1190     /// ```
1191     #[inline]
1192     #[unstable(feature = "atomic_from_mut", issue = "76314")]
1193     pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1194         // SAFETY: the mutable reference guarantees unique ownership.
1195         unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1196     }
1197 
1198     /// Get atomic access to a slice of pointers.
1199     ///
1200     /// # Examples
1201     ///
1202     /// ```
1203     /// #![feature(atomic_from_mut)]
1204     /// use std::ptr::null_mut;
1205     /// use std::sync::atomic::{AtomicPtr, Ordering};
1206     ///
1207     /// let mut some_ptrs = [null_mut::<String>(); 10];
1208     /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1209     /// std::thread::scope(|s| {
1210     ///     for i in 0..a.len() {
1211     ///         s.spawn(move || {
1212     ///             let name = Box::new(format!("thread{i}"));
1213     ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
1214     ///         });
1215     ///     }
1216     /// });
1217     /// for p in some_ptrs {
1218     ///     assert!(!p.is_null());
1219     ///     let name = unsafe { Box::from_raw(p) };
1220     ///     println!("Hello, {name}!");
1221     /// }
1222     /// ```
1223     #[inline]
1224     #[cfg(target_has_atomic_equal_alignment = "ptr")]
1225     #[unstable(feature = "atomic_from_mut", issue = "76314")]
1226     pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1227         // SAFETY:
1228         //  - the mutable reference guarantees unique ownership.
1229         //  - the alignment of `*mut T` and `Self` is the same on all platforms
1230         //    supported by rust, as verified above.
1231         unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1232     }
1233 
1234     /// Consumes the atomic and returns the contained value.
1235     ///
1236     /// This is safe because passing `self` by value guarantees that no other threads are
1237     /// concurrently accessing the atomic data.
1238     ///
1239     /// # Examples
1240     ///
1241     /// ```
1242     /// use std::sync::atomic::AtomicPtr;
1243     ///
1244     /// let mut data = 5;
1245     /// let atomic_ptr = AtomicPtr::new(&mut data);
1246     /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1247     /// ```
1248     #[inline]
1249     #[stable(feature = "atomic_access", since = "1.15.0")]
1250     #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
1251     pub const fn into_inner(self) -> *mut T {
1252         self.p.into_inner()
1253     }
1254 
1255     /// Loads a value from the pointer.
1256     ///
1257     /// `load` takes an [`Ordering`] argument which describes the memory ordering
1258     /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1259     ///
1260     /// # Panics
1261     ///
1262     /// Panics if `order` is [`Release`] or [`AcqRel`].
1263     ///
1264     /// # Examples
1265     ///
1266     /// ```
1267     /// use std::sync::atomic::{AtomicPtr, Ordering};
1268     ///
1269     /// let ptr = &mut 5;
1270     /// let some_ptr = AtomicPtr::new(ptr);
1271     ///
1272     /// let value = some_ptr.load(Ordering::Relaxed);
1273     /// ```
1274     #[inline]
1275     #[stable(feature = "rust1", since = "1.0.0")]
1276     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1277     pub fn load(&self, order: Ordering) -> *mut T {
1278         // SAFETY: data races are prevented by atomic intrinsics.
1279         unsafe { atomic_load(self.p.get(), order) }
1280     }
1281 
1282     /// Stores a value into the pointer.
1283     ///
1284     /// `store` takes an [`Ordering`] argument which describes the memory ordering
1285     /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1286     ///
1287     /// # Panics
1288     ///
1289     /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1290     ///
1291     /// # Examples
1292     ///
1293     /// ```
1294     /// use std::sync::atomic::{AtomicPtr, Ordering};
1295     ///
1296     /// let ptr = &mut 5;
1297     /// let some_ptr = AtomicPtr::new(ptr);
1298     ///
1299     /// let other_ptr = &mut 10;
1300     ///
1301     /// some_ptr.store(other_ptr, Ordering::Relaxed);
1302     /// ```
1303     #[inline]
1304     #[stable(feature = "rust1", since = "1.0.0")]
1305     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1306     pub fn store(&self, ptr: *mut T, order: Ordering) {
1307         // SAFETY: data races are prevented by atomic intrinsics.
1308         unsafe {
1309             atomic_store(self.p.get(), ptr, order);
1310         }
1311     }
1312 
1313     /// Stores a value into the pointer, returning the previous value.
1314     ///
1315     /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1316     /// of this operation. All ordering modes are possible. Note that using
1317     /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1318     /// using [`Release`] makes the load part [`Relaxed`].
1319     ///
1320     /// **Note:** This method is only available on platforms that support atomic
1321     /// operations on pointers.
1322     ///
1323     /// # Examples
1324     ///
1325     /// ```
1326     /// use std::sync::atomic::{AtomicPtr, Ordering};
1327     ///
1328     /// let ptr = &mut 5;
1329     /// let some_ptr = AtomicPtr::new(ptr);
1330     ///
1331     /// let other_ptr = &mut 10;
1332     ///
1333     /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1334     /// ```
1335     #[inline]
1336     #[stable(feature = "rust1", since = "1.0.0")]
1337     #[cfg(target_has_atomic = "ptr")]
1338     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1339     pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1340         // SAFETY: data races are prevented by atomic intrinsics.
1341         unsafe { atomic_swap(self.p.get(), ptr, order) }
1342     }
1343 
1344     /// Stores a value into the pointer if the current value is the same as the `current` value.
1345     ///
1346     /// The return value is always the previous value. If it is equal to `current`, then the value
1347     /// was updated.
1348     ///
1349     /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1350     /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1351     /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1352     /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1353     /// happens, and using [`Release`] makes the load part [`Relaxed`].
1354     ///
1355     /// **Note:** This method is only available on platforms that support atomic
1356     /// operations on pointers.
1357     ///
1358     /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1359     ///
1360     /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1361     /// memory orderings:
1362     ///
1363     /// Original | Success | Failure
1364     /// -------- | ------- | -------
1365     /// Relaxed  | Relaxed | Relaxed
1366     /// Acquire  | Acquire | Acquire
1367     /// Release  | Release | Relaxed
1368     /// AcqRel   | AcqRel  | Acquire
1369     /// SeqCst   | SeqCst  | SeqCst
1370     ///
1371     /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1372     /// which allows the compiler to generate better assembly code when the compare and swap
1373     /// is used in a loop.
1374     ///
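    /// For example, a call that used [`AcqRel`] could be migrated as in the
    /// following sketch (illustrative only; `some_ptr` is assumed to be an
    /// `AtomicPtr`, and `current`/`new` suitable raw pointers):
    ///
    /// ```ignore (illustrative migration sketch)
    /// // Before:
    /// // let prev = some_ptr.compare_and_swap(current, new, Ordering::AcqRel);
    /// // After: success ordering `AcqRel`, failure ordering `Acquire` (see table).
    /// let prev = some_ptr
    ///     .compare_exchange(current, new, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// ```
    ///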
1375     /// # Examples
1376     ///
1377     /// ```
1378     /// use std::sync::atomic::{AtomicPtr, Ordering};
1379     ///
1380     /// let ptr = &mut 5;
1381     /// let some_ptr = AtomicPtr::new(ptr);
1382     ///
1383     /// let other_ptr = &mut 10;
1384     ///
1385     /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1386     /// ```
1387     #[inline]
1388     #[stable(feature = "rust1", since = "1.0.0")]
1389     #[deprecated(
1390         since = "1.50.0",
1391         note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1392     )]
1393     #[cfg(target_has_atomic = "ptr")]
1394     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1395     pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
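        // Derive the failure ordering from `order`, matching the migration
        // table in the docs above (e.g. `AcqRel` falls back to an `Acquire` load).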
1396         match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1397             Ok(x) => x,
1398             Err(x) => x,
1399         }
1400     }
1401 
1402     /// Stores a value into the pointer if the current value is the same as the `current` value.
1403     ///
1404     /// The return value is a result indicating whether the new value was written and containing
1405     /// the previous value. On success this value is guaranteed to be equal to `current`.
1406     ///
1407     /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1408     /// ordering of this operation. `success` describes the required ordering for the
1409     /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1410     /// `failure` describes the required ordering for the load operation that takes place when
1411     /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1412     /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1413     /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1414     ///
1415     /// **Note:** This method is only available on platforms that support atomic
1416     /// operations on pointers.
1417     ///
1418     /// # Examples
1419     ///
1420     /// ```
1421     /// use std::sync::atomic::{AtomicPtr, Ordering};
1422     ///
1423     /// let ptr = &mut 5;
1424     /// let some_ptr = AtomicPtr::new(ptr);
1425     ///
1426     /// let other_ptr = &mut 10;
1427     ///
1428     /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1429     ///                                       Ordering::SeqCst, Ordering::Relaxed);
1430     /// ```
1431     #[inline]
1432     #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1433     #[cfg(target_has_atomic = "ptr")]
1434     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1435     pub fn compare_exchange(
1436         &self,
1437         current: *mut T,
1438         new: *mut T,
1439         success: Ordering,
1440         failure: Ordering,
1441     ) -> Result<*mut T, *mut T> {
1442         // SAFETY: data races are prevented by atomic intrinsics.
1443         unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1444     }
1445 
1446     /// Stores a value into the pointer if the current value is the same as the `current` value.
1447     ///
1448     /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1449     /// comparison succeeds, which can result in more efficient code on some platforms. The
1450     /// return value is a result indicating whether the new value was written and containing the
1451     /// previous value.
1452     ///
1453     /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1454     /// ordering of this operation. `success` describes the required ordering for the
1455     /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1456     /// `failure` describes the required ordering for the load operation that takes place when
1457     /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1458     /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1459     /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1460     ///
1461     /// **Note:** This method is only available on platforms that support atomic
1462     /// operations on pointers.
1463     ///
1464     /// # Examples
1465     ///
1466     /// ```
1467     /// use std::sync::atomic::{AtomicPtr, Ordering};
1468     ///
1469     /// let some_ptr = AtomicPtr::new(&mut 5);
1470     ///
1471     /// let new = &mut 10;
1472     /// let mut old = some_ptr.load(Ordering::Relaxed);
1473     /// loop {
1474     ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1475     ///         Ok(_) => break,
1476     ///         Err(x) => old = x,
1477     ///     }
1478     /// }
1479     /// ```
1480     #[inline]
1481     #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1482     #[cfg(target_has_atomic = "ptr")]
1483     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1484     pub fn compare_exchange_weak(
1485         &self,
1486         current: *mut T,
1487         new: *mut T,
1488         success: Ordering,
1489         failure: Ordering,
1490     ) -> Result<*mut T, *mut T> {
1491         // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1492         // but we know for sure that the pointer is valid (we just got it from
1493         // an `UnsafeCell` that we have by reference) and the atomic operation
1494         // itself allows us to safely mutate the `UnsafeCell` contents.
1495         unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1496     }
1497 
1498     /// Fetches the value, and applies a function to it that returns an optional
1499     /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1500     /// returned `Some(_)`, else `Err(previous_value)`.
1501     ///
1502     /// Note: This may call the function multiple times if the value has been
1503     /// changed from other threads in the meantime, as long as the function
1504     /// returns `Some(_)`, but the function will have been applied only once to
1505     /// the stored value.
1506     ///
1507     /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1508     /// ordering of this operation. The first describes the required ordering for
1509     /// when the operation finally succeeds while the second describes the
1510     /// required ordering for loads. These correspond to the success and failure
1511     /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1512     ///
1513     /// Using [`Acquire`] as success ordering makes the store part of this
1514     /// operation [`Relaxed`], and using [`Release`] makes the final successful
1515     /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1516     /// [`Acquire`] or [`Relaxed`].
1517     ///
1518     /// **Note:** This method is only available on platforms that support atomic
1519     /// operations on pointers.
1520     ///
1521     /// # Considerations
1522     ///
1523     /// This method is not magic; it is not provided by the hardware.
1524     /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1525     /// In particular, this method will not circumvent the [ABA Problem].
1526     ///
1527     /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1528     ///
1529     /// # Examples
1530     ///
1531     /// ```rust
1532     /// use std::sync::atomic::{AtomicPtr, Ordering};
1533     ///
1534     /// let ptr: *mut _ = &mut 5;
1535     /// let some_ptr = AtomicPtr::new(ptr);
1536     ///
1537     /// let new: *mut _ = &mut 10;
1538     /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1539     /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1540     ///     if x == ptr {
1541     ///         Some(new)
1542     ///     } else {
1543     ///         None
1544     ///     }
1545     /// });
1546     /// assert_eq!(result, Ok(ptr));
1547     /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1548     /// ```
1549     #[inline]
1550     #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1551     #[cfg(target_has_atomic = "ptr")]
1552     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1553     pub fn fetch_update<F>(
1554         &self,
1555         set_order: Ordering,
1556         fetch_order: Ordering,
1557         mut f: F,
1558     ) -> Result<*mut T, *mut T>
1559     where
1560         F: FnMut(*mut T) -> Option<*mut T>,
1561     {
1562         let mut prev = self.load(fetch_order);
1563         while let Some(next) = f(prev) {
1564             match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1565                 x @ Ok(_) => return x,
1566                 Err(next_prev) => prev = next_prev,
1567             }
1568         }
1569         Err(prev)
1570     }
1571 
1572     /// Offsets the pointer's address by adding `val` (in units of `T`),
1573     /// returning the previous pointer.
1574     ///
1575     /// This is equivalent to using [`wrapping_add`] to atomically perform the
1576     /// equivalent of `ptr = ptr.wrapping_add(val);`.
1577     ///
1578     /// This method operates in units of `T`, which means that it cannot be used
1579     /// to offset the pointer by an amount which is not a multiple of
1580     /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1581     /// work with a deliberately misaligned pointer. In such cases, you may use
1582     /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
1583     ///
1584     /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
1585     /// memory ordering of this operation. All ordering modes are possible. Note
1586     /// that using [`Acquire`] makes the store part of this operation
1587     /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1588     ///
1589     /// **Note**: This method is only available on platforms that support atomic
1590     /// operations on [`AtomicPtr`].
1591     ///
1592     /// [`wrapping_add`]: pointer::wrapping_add
1593     ///
1594     /// # Examples
1595     ///
1596     /// ```
1597     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1598     /// use core::sync::atomic::{AtomicPtr, Ordering};
1599     ///
1600     /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1601     /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
1602     /// // Note: units of `size_of::<i64>()`.
1603     /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
1604     /// ```
1605     #[inline]
1606     #[cfg(target_has_atomic = "ptr")]
1607     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1608     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1609     pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
1610         self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
1611     }
1612 
1613     /// Offsets the pointer's address by subtracting `val` (in units of `T`),
1614     /// returning the previous pointer.
1615     ///
1616     /// This is equivalent to using [`wrapping_sub`] to atomically perform the
1617     /// equivalent of `ptr = ptr.wrapping_sub(val);`.
1618     ///
1619     /// This method operates in units of `T`, which means that it cannot be used
1620     /// to offset the pointer by an amount which is not a multiple of
1621     /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1622     /// work with a deliberately misaligned pointer. In such cases, you may use
1623     /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
1624     ///
1625     /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
1626     /// ordering of this operation. All ordering modes are possible. Note that
1627     /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1628     /// and using [`Release`] makes the load part [`Relaxed`].
1629     ///
1630     /// **Note**: This method is only available on platforms that support atomic
1631     /// operations on [`AtomicPtr`].
1632     ///
1633     /// [`wrapping_sub`]: pointer::wrapping_sub
1634     ///
1635     /// # Examples
1636     ///
1637     /// ```
1638     /// #![feature(strict_provenance_atomic_ptr)]
1639     /// use core::sync::atomic::{AtomicPtr, Ordering};
1640     ///
1641     /// let array = [1i32, 2i32];
1642     /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
1643     ///
1644     /// assert!(core::ptr::eq(
1645     ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
1646     ///     &array[1],
1647     /// ));
1648     /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
1649     /// ```
1650     #[inline]
1651     #[cfg(target_has_atomic = "ptr")]
1652     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1653     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1654     pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
1655         self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
1656     }
1657 
1658     /// Offsets the pointer's address by adding `val` *bytes*, returning the
1659     /// previous pointer.
1660     ///
1661     /// This is equivalent to using [`wrapping_byte_add`] to atomically
1662     /// perform `ptr = ptr.wrapping_byte_add(val)`.
1663     ///
1664     /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
1665     /// memory ordering of this operation. All ordering modes are possible. Note
1666     /// that using [`Acquire`] makes the store part of this operation
1667     /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1668     ///
1669     /// **Note**: This method is only available on platforms that support atomic
1670     /// operations on [`AtomicPtr`].
1671     ///
1672     /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
1673     ///
1674     /// # Examples
1675     ///
1676     /// ```
1677     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1678     /// use core::sync::atomic::{AtomicPtr, Ordering};
1679     ///
1680     /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1681     /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
1682     /// // Note: in units of bytes, not `size_of::<i64>()`.
1683     /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
1684     /// ```
1685     #[inline]
1686     #[cfg(target_has_atomic = "ptr")]
1687     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1688     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1689     pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
1690         // SAFETY: data races are prevented by atomic intrinsics.
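        // `invalid_mut(val)` encodes the byte offset as a pointer-typed operand
        // for the intrinsic; the addition only affects the address bits.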
1691         unsafe { atomic_add(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1692     }
1693 
1694     /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
1695     /// previous pointer.
1696     ///
1697     /// This is equivalent to using [`wrapping_byte_sub`] to atomically
1698     /// perform `ptr = ptr.wrapping_byte_sub(val)`.
1699     ///
1700     /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
1701     /// memory ordering of this operation. All ordering modes are possible. Note
1702     /// that using [`Acquire`] makes the store part of this operation
1703     /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1704     ///
1705     /// **Note**: This method is only available on platforms that support atomic
1706     /// operations on [`AtomicPtr`].
1707     ///
1708     /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
1709     ///
1710     /// # Examples
1711     ///
1712     /// ```
1713     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1714     /// use core::sync::atomic::{AtomicPtr, Ordering};
1715     ///
1716     /// let atom = AtomicPtr::<i64>::new(core::ptr::invalid_mut(1));
1717     /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
1718     /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
1719     /// ```
1720     #[inline]
1721     #[cfg(target_has_atomic = "ptr")]
1722     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1723     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1724     pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
1725         // SAFETY: data races are prevented by atomic intrinsics.
1726         unsafe { atomic_sub(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1727     }
1728 
1729     /// Performs a bitwise "or" operation on the address of the current pointer,
1730     /// and the argument `val`, and stores a pointer with provenance of the
1731     /// current pointer and the resulting address.
1732     ///
1733     /// This is equivalent to using [`map_addr`] to atomically perform
1734     /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
1735     /// pointer schemes to atomically set tag bits.
1736     ///
1737     /// **Caveat**: This operation returns the previous value. To compute the
1738     /// stored value without losing provenance, you may use [`map_addr`]. For
1739     /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
1740     ///
1741     /// `fetch_or` takes an [`Ordering`] argument which describes the memory
1742     /// ordering of this operation. All ordering modes are possible. Note that
1743     /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1744     /// and using [`Release`] makes the load part [`Relaxed`].
1745     ///
1746     /// **Note**: This method is only available on platforms that support atomic
1747     /// operations on [`AtomicPtr`].
1748     ///
1749     /// This API and its claimed semantics are part of the Strict Provenance
1750     /// experiment, see the [module documentation for `ptr`][crate::ptr] for
1751     /// details.
1752     ///
1753     /// [`map_addr`]: pointer::map_addr
1754     ///
1755     /// # Examples
1756     ///
1757     /// ```
1758     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1759     /// use core::sync::atomic::{AtomicPtr, Ordering};
1760     ///
1761     /// let pointer = &mut 3i64 as *mut i64;
1762     ///
1763     /// let atom = AtomicPtr::<i64>::new(pointer);
1764     /// // Tag the bottom bit of the pointer.
1765     /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
1766     /// // Extract and untag.
1767     /// let tagged = atom.load(Ordering::Relaxed);
1768     /// assert_eq!(tagged.addr() & 1, 1);
1769     /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
1770     /// ```
1771     #[inline]
1772     #[cfg(target_has_atomic = "ptr")]
1773     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1774     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1775     pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
1776         // SAFETY: data races are prevented by atomic intrinsics.
1777         unsafe { atomic_or(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1778     }
1779 
1780     /// Performs a bitwise "and" operation on the address of the current
1781     /// pointer, and the argument `val`, and stores a pointer with provenance of
1782     /// the current pointer and the resulting address.
1783     ///
1784     /// This is equivalent to using [`map_addr`] to atomically perform
1785     /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
1786     /// pointer schemes to atomically unset tag bits.
1787     ///
1788     /// **Caveat**: This operation returns the previous value. To compute the
1789     /// stored value without losing provenance, you may use [`map_addr`]. For
1790     /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
1791     ///
1792     /// `fetch_and` takes an [`Ordering`] argument which describes the memory
1793     /// ordering of this operation. All ordering modes are possible. Note that
1794     /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1795     /// and using [`Release`] makes the load part [`Relaxed`].
1796     ///
1797     /// **Note**: This method is only available on platforms that support atomic
1798     /// operations on [`AtomicPtr`].
1799     ///
1800     /// This API and its claimed semantics are part of the Strict Provenance
1801     /// experiment, see the [module documentation for `ptr`][crate::ptr] for
1802     /// details.
1803     ///
1804     /// [`map_addr`]: pointer::map_addr
1805     ///
1806     /// # Examples
1807     ///
1808     /// ```
1809     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1810     /// use core::sync::atomic::{AtomicPtr, Ordering};
1811     ///
1812     /// let pointer = &mut 3i64 as *mut i64;
1813     /// // A tagged pointer
1814     /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
1815     /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
1816     /// // Untag, and extract the previously tagged pointer.
1817     /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
1818     ///     .map_addr(|a| a & !1);
1819     /// assert_eq!(untagged, pointer);
1820     /// ```
1821     #[inline]
1822     #[cfg(target_has_atomic = "ptr")]
1823     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1824     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1825     pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
1826         // SAFETY: data races are prevented by atomic intrinsics.
1827         unsafe { atomic_and(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1828     }
1829 
1830     /// Performs a bitwise "xor" operation on the address of the current
1831     /// pointer, and the argument `val`, and stores a pointer with provenance of
1832     /// the current pointer and the resulting address.
1833     ///
1834     /// This is equivalent to using [`map_addr`] to atomically perform
1835     /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
1836     /// pointer schemes to atomically toggle tag bits.
1837     ///
1838     /// **Caveat**: This operation returns the previous value. To compute the
1839     /// stored value without losing provenance, you may use [`map_addr`]. For
1840     /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
1841     ///
1842     /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
1843     /// ordering of this operation. All ordering modes are possible. Note that
1844     /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1845     /// and using [`Release`] makes the load part [`Relaxed`].
1846     ///
1847     /// **Note**: This method is only available on platforms that support atomic
1848     /// operations on [`AtomicPtr`].
1849     ///
1850     /// This API and its claimed semantics are part of the Strict Provenance
1851     /// experiment, see the [module documentation for `ptr`][crate::ptr] for
1852     /// details.
1853     ///
1854     /// [`map_addr`]: pointer::map_addr
1855     ///
1856     /// # Examples
1857     ///
1858     /// ```
1859     /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1860     /// use core::sync::atomic::{AtomicPtr, Ordering};
1861     ///
1862     /// let pointer = &mut 3i64 as *mut i64;
1863     /// let atom = AtomicPtr::<i64>::new(pointer);
1864     ///
1865     /// // Toggle a tag bit on the pointer.
1866     /// atom.fetch_xor(1, Ordering::Relaxed);
1867     /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
1868     /// ```
1869     #[inline]
1870     #[cfg(target_has_atomic = "ptr")]
1871     #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1872     #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1873     pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
1874         // SAFETY: data races are prevented by atomic intrinsics.
1875         unsafe { atomic_xor(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1876     }
1877 
1878     /// Returns a mutable pointer to the underlying pointer.
1879     ///
1880     /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
1881     /// This method is mostly useful for FFI, where the function signature may use
1882     /// `*mut *mut T` instead of `&AtomicPtr<T>`.
1883     ///
1884     /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1885     /// atomic types work with interior mutability. All modifications of an atomic change the value
1886     /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1887     /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
1888     /// restriction: operations on it must be atomic.
1889     ///
1890     /// # Examples
1891     ///
1892     /// ```ignore (extern-declaration)
1893     /// use std::sync::atomic::AtomicPtr;
1894     ///
1895     /// extern "C" {
1896     ///     fn my_atomic_op(arg: *mut *mut u32);
1897     /// }
1898     ///
1899     /// let mut value = 17;
1900     /// let atomic = AtomicPtr::new(&mut value);
1901     ///
1902     /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
1903     /// unsafe {
1904     ///     my_atomic_op(atomic.as_ptr());
1905     /// }
1906     /// ```
1907     #[inline]
1908     #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1909     #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1910     pub const fn as_ptr(&self) -> *mut *mut T {
1911         self.p.get()
1912     }
1913 }
1914 
1915 #[cfg(target_has_atomic_load_store = "8")]
1916 #[stable(feature = "atomic_bool_from", since = "1.24.0")]
1917 impl From<bool> for AtomicBool {
1918     /// Converts a `bool` into an `AtomicBool`.
1919     ///
1920     /// # Examples
1921     ///
1922     /// ```
1923     /// use std::sync::atomic::AtomicBool;
1924     /// let atomic_bool = AtomicBool::from(true);
1925     /// assert_eq!(format!("{atomic_bool:?}"), "true")
1926     /// ```
1927     #[inline]
1928     fn from(b: bool) -> Self {
1929         Self::new(b)
1930     }
1931 }
1932 
1933 #[cfg(target_has_atomic_load_store = "ptr")]
1934 #[stable(feature = "atomic_from", since = "1.23.0")]
1935 impl<T> From<*mut T> for AtomicPtr<T> {
1936     /// Converts a `*mut T` into an `AtomicPtr<T>`.
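    ///
    /// # Examples
    ///
    /// A minimal illustrative example:
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let mut value = 5;
    /// let atomic_ptr = AtomicPtr::from(&mut value as *mut i32);
    /// assert!(!atomic_ptr.into_inner().is_null());
    /// ```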
1937     #[inline]
1938     fn from(p: *mut T) -> Self {
1939         Self::new(p)
1940     }
1941 }
1942 
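// Helper: expands to an empty doc string for `u8`/`i8`, and to the given doc
// tokens for every other integer type, so the 8-bit atomics can skip notes
// that do not apply to them.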
1943 #[allow(unused_macros)] // This macro ends up being unused on some architectures.
1944 macro_rules! if_not_8_bit {
1945     (u8, $($tt:tt)*) => { "" };
1946     (i8, $($tt:tt)*) => { "" };
1947     ($_:ident, $($tt:tt)*) => { $($tt)* };
1948 }
1949 
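// Generates an atomic integer type plus its impls and docs. A rough guide to
// the arguments, as inferred from the macro body (not an authoritative spec):
// `$cfg_cas`/`$cfg_align` gate the CAS and equal-alignment APIs, the
// `$stable*` metas carry stability attributes, `$s_int_type` is the integer's
// name for use in docs, `$extra_feature` is extra feature text injected into
// doc examples, `$min_fn`/`$max_fn` name the min/max intrinsics, `$align` is
// the type's alignment, `$atomic_new` is the deprecation suggestion for the
// init constant, and the trailing idents are the integer type, the atomic
// type being defined, and the deprecated initializer constant.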
1950 #[cfg(target_has_atomic_load_store)]
1951 macro_rules! atomic_int {
1952     ($cfg_cas:meta,
1953      $cfg_align:meta,
1954      $stable:meta,
1955      $stable_cxchg:meta,
1956      $stable_debug:meta,
1957      $stable_access:meta,
1958      $stable_from:meta,
1959      $stable_nand:meta,
1960      $const_stable:meta,
1961      $stable_init_const:meta,
1962      $diagnostic_item:meta,
1963      $s_int_type:literal,
1964      $extra_feature:expr,
1965      $min_fn:ident, $max_fn:ident,
1966      $align:expr,
1967      $atomic_new:expr,
1968      $int_type:ident $atomic_type:ident $atomic_init:ident) => {
1969         /// An integer type which can be safely shared between threads.
1970         ///
1971         /// This type has the same in-memory representation as the underlying
1972         /// integer type, [`
1973         #[doc = $s_int_type]
1974         /// `]. For more about the differences between atomic types and
1975         /// non-atomic types as well as information about the portability of
1976         /// this type, please see the [module-level documentation].
1977         ///
1978         /// **Note:** This type is only available on platforms that support
1979         /// atomic loads and stores of [`
1980         #[doc = $s_int_type]
1981         /// `].
1982         ///
1983         /// [module-level documentation]: crate::sync::atomic
1984         #[$stable]
1985         #[$diagnostic_item]
1986         #[repr(C, align($align))]
1987         pub struct $atomic_type {
1988             v: UnsafeCell<$int_type>,
1989         }
1990 
1991         /// An atomic integer initialized to `0`.
1992         #[$stable_init_const]
1993         #[deprecated(
1994             since = "1.34.0",
1995             note = "the `new` function is now preferred",
1996             suggestion = $atomic_new,
1997         )]
1998         pub const $atomic_init: $atomic_type = $atomic_type::new(0);
1999 
2000         #[$stable]
2001         impl Default for $atomic_type {
2002             #[inline]
2003             fn default() -> Self {
2004                 Self::new(Default::default())
2005             }
2006         }
2007 
2008         #[$stable_from]
2009         impl From<$int_type> for $atomic_type {
2010             #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2011             #[inline]
2012             fn from(v: $int_type) -> Self { Self::new(v) }
2013         }
2014 
2015         #[$stable_debug]
2016         impl fmt::Debug for $atomic_type {
2017             fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2018                 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2019             }
2020         }
2021 
2022         // Send is implicitly implemented.
2023         #[$stable]
2024         unsafe impl Sync for $atomic_type {}
2025 
2026         impl $atomic_type {
2027             /// Creates a new atomic integer.
2028             ///
2029             /// # Examples
2030             ///
2031             /// ```
2032             #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2033             ///
2034             #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2035             /// ```
2036             #[inline]
2037             #[$stable]
2038             #[$const_stable]
2039             #[must_use]
2040             pub const fn new(v: $int_type) -> Self {
2041                 Self {v: UnsafeCell::new(v)}
2042             }
2043 
2044             /// Creates a new reference to an atomic integer from a pointer.
2045             ///
2046             /// # Examples
2047             ///
2048             /// ```
2049             /// #![feature(atomic_from_ptr, pointer_is_aligned)]
2050             #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2051             /// use std::mem::align_of;
2052             ///
2053             /// // Get a pointer to an allocated value
2054             #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2055             ///
2056             #[doc = concat!("assert!(ptr.is_aligned_to(align_of::<", stringify!($atomic_type), ">()));")]
2057             ///
2058             /// {
2059             ///     // Create an atomic view of the allocated value
2060             // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2061             #[doc = concat!("    let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2062             ///
2063             ///     // Use `atomic` for atomic operations, possibly share it with other threads
2064             ///     atomic.store(1, atomic::Ordering::Relaxed);
2065             /// }
2066             ///
2067             /// // It's ok to non-atomically access the value behind `ptr`,
2068             /// // since the reference to the atomic ended its lifetime in the block above
2069             /// assert_eq!(unsafe { *ptr }, 1);
2070             ///
2071             /// // Deallocate the value
2072             /// unsafe { drop(Box::from_raw(ptr)) }
2073             /// ```
2074             ///
2075             /// # Safety
2076             ///
2078             #[doc = concat!(" * `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this can be bigger than `align_of::<", stringify!($int_type), ">()`).")]
2079             /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2080             /// * The value behind `ptr` must not be accessed through non-atomic operations for the whole lifetime `'a`.
2081             ///
2082             /// [valid]: crate::ptr#safety
2083             #[unstable(feature = "atomic_from_ptr", issue = "108652")]
2084             #[rustc_const_unstable(feature = "atomic_from_ptr", issue = "108652")]
2085             pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2086                 // SAFETY: guaranteed by the caller
2087                 unsafe { &*ptr.cast() }
2088             }
2089 
2090 
2091             /// Returns a mutable reference to the underlying integer.
2092             ///
2093             /// This is safe because the mutable reference guarantees that no other threads are
2094             /// concurrently accessing the atomic data.
2095             ///
2096             /// # Examples
2097             ///
2098             /// ```
2099             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2100             ///
2101             #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2102             /// assert_eq!(*some_var.get_mut(), 10);
2103             /// *some_var.get_mut() = 5;
2104             /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2105             /// ```
2106             #[inline]
2107             #[$stable_access]
2108             pub fn get_mut(&mut self) -> &mut $int_type {
2109                 self.v.get_mut()
2110             }
2111 
2112             #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2113             ///
2114             #[doc = if_not_8_bit! {
2115                 $int_type,
2116                 concat!(
2117                     "**Note:** This function is only available on targets where `",
2118                     stringify!($int_type), "` has an alignment of ", $align, " bytes."
2119                 )
2120             }]
2121             ///
2122             /// # Examples
2123             ///
2124             /// ```
2125             /// #![feature(atomic_from_mut)]
2126             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2127             ///
2128             /// let mut some_int = 123;
2129             #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2130             /// a.store(100, Ordering::Relaxed);
2131             /// assert_eq!(some_int, 100);
2132             /// ```
2133             ///
2134             #[inline]
2135             #[$cfg_align]
2136             #[unstable(feature = "atomic_from_mut", issue = "76314")]
2137             pub fn from_mut(v: &mut $int_type) -> &mut Self {
2138                 use crate::mem::align_of;
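                // Compile-time alignment check: the empty-array pattern only
                // matches a zero-length array, so this fails to compile unless
                // the two alignments are equal.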
2139                 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2140                 // SAFETY:
2141                 //  - the mutable reference guarantees unique ownership.
2142                 //  - the alignment of `$int_type` and `Self` is the
2143                 //    same, as promised by $cfg_align and verified above.
2144                 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2145             }
2146 
2147             #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2148             ///
2149             /// This is safe because the mutable reference guarantees that no other threads are
2150             /// concurrently accessing the atomic data.
2151             ///
2152             /// # Examples
2153             ///
2154             /// ```
2155             /// #![feature(atomic_from_mut, inline_const)]
2156             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2157             ///
2158             #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2159             ///
2160             #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2161             /// assert_eq!(view, [0; 10]);
2162             /// view
2163             ///     .iter_mut()
2164             ///     .enumerate()
2165             ///     .for_each(|(idx, int)| *int = idx as _);
2166             ///
2167             /// std::thread::scope(|s| {
2168             ///     some_ints
2169             ///         .iter()
2170             ///         .enumerate()
2171             ///         .for_each(|(idx, int)| {
2172             ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2173             ///         })
2174             /// });
2175             /// ```
2176             #[inline]
2177             #[unstable(feature = "atomic_from_mut", issue = "76314")]
2178             pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2179                 // SAFETY: the mutable reference guarantees unique ownership.
2180                 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2181             }
2182 
2183             #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2184             ///
2185             /// # Examples
2186             ///
2187             /// ```
2188             /// #![feature(atomic_from_mut)]
2189             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2190             ///
2191             /// let mut some_ints = [0; 10];
2192             #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2193             /// std::thread::scope(|s| {
2194             ///     for i in 0..a.len() {
2195             ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2196             ///     }
2197             /// });
2198             /// for (i, n) in some_ints.into_iter().enumerate() {
2199             ///     assert_eq!(i, n as usize);
2200             /// }
2201             /// ```
2202             #[inline]
2203             #[$cfg_align]
2204             #[unstable(feature = "atomic_from_mut", issue = "76314")]
2205             pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2206                 use crate::mem::align_of;
2207                 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2208                 // SAFETY:
2209                 //  - the mutable reference guarantees unique ownership.
2210                 //  - the alignment of `$int_type` and `Self` is the
2211                 //    same, as promised by $cfg_align and verified above.
2212                 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2213             }
2214 
2215             /// Consumes the atomic and returns the contained value.
2216             ///
2217             /// This is safe because passing `self` by value guarantees that no other threads are
2218             /// concurrently accessing the atomic data.
2219             ///
2220             /// # Examples
2221             ///
2222             /// ```
2223             #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2224             ///
2225             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2226             /// assert_eq!(some_var.into_inner(), 5);
2227             /// ```
2228             #[inline]
2229             #[$stable_access]
2230             #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
2231             pub const fn into_inner(self) -> $int_type {
2232                 self.v.into_inner()
2233             }
2234 
2235             /// Loads a value from the atomic integer.
2236             ///
2237             /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2238             /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2239             ///
2240             /// # Panics
2241             ///
2242             /// Panics if `order` is [`Release`] or [`AcqRel`].
2243             ///
2244             /// # Examples
2245             ///
2246             /// ```
2247             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2248             ///
2249             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2250             ///
2251             /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2252             /// ```
2253             #[inline]
2254             #[$stable]
2255             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2256             pub fn load(&self, order: Ordering) -> $int_type {
2257                 // SAFETY: data races are prevented by atomic intrinsics.
2258                 unsafe { atomic_load(self.v.get(), order) }
2259             }
2260 
2261             /// Stores a value into the atomic integer.
2262             ///
2263             /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2264             /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2265             ///
2266             /// # Panics
2267             ///
2268             /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2269             ///
2270             /// # Examples
2271             ///
2272             /// ```
2273             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2274             ///
2275             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2276             ///
2277             /// some_var.store(10, Ordering::Relaxed);
2278             /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2279             /// ```
2280             #[inline]
2281             #[$stable]
2282             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2283             pub fn store(&self, val: $int_type, order: Ordering) {
2284                 // SAFETY: data races are prevented by atomic intrinsics.
2285                 unsafe { atomic_store(self.v.get(), val, order); }
2286             }
2287 
2288             /// Stores a value into the atomic integer, returning the previous value.
2289             ///
2290             /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2291             /// of this operation. All ordering modes are possible. Note that using
2292             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2293             /// using [`Release`] makes the load part [`Relaxed`].
2294             ///
2295             /// **Note**: This method is only available on platforms that support atomic operations on
2296             #[doc = concat!("[`", $s_int_type, "`].")]
2297             ///
2298             /// # Examples
2299             ///
2300             /// ```
2301             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2302             ///
2303             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2304             ///
2305             /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2306             /// ```
2307             #[inline]
2308             #[$stable]
2309             #[$cfg_cas]
2310             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2311             pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2312                 // SAFETY: data races are prevented by atomic intrinsics.
2313                 unsafe { atomic_swap(self.v.get(), val, order) }
2314             }
2315 
2316             /// Stores a value into the atomic integer if the current value is the same as
2317             /// the `current` value.
2318             ///
2319             /// The return value is always the previous value. If it is equal to `current`, then the
2320             /// value was updated.
2321             ///
2322             /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2323             /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2324             /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2325             /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2326             /// happens, and using [`Release`] makes the load part [`Relaxed`].
2327             ///
2328             /// **Note**: This method is only available on platforms that support atomic operations on
2329             #[doc = concat!("[`", $s_int_type, "`].")]
2330             ///
2331             /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2332             ///
2333             /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2334             /// memory orderings:
2335             ///
2336             /// Original | Success | Failure
2337             /// -------- | ------- | -------
2338             /// Relaxed  | Relaxed | Relaxed
2339             /// Acquire  | Acquire | Acquire
2340             /// Release  | Release | Relaxed
2341             /// AcqRel   | AcqRel  | Acquire
2342             /// SeqCst   | SeqCst  | SeqCst
2343             ///
2344             /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2345             /// which allows the compiler to generate better assembly code when the compare and swap
2346             /// is used in a loop.
2347             ///
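            /// For example, a call written as `val.compare_and_swap(old, new, Ordering::AcqRel)`
            /// can, per the table above, be replaced with
            /// `val.compare_exchange(old, new, Ordering::AcqRel, Ordering::Acquire).unwrap_or_else(|x| x)`.
            ///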
2348             /// # Examples
2349             ///
2350             /// ```
2351             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2352             ///
2353             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2354             ///
2355             /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2356             /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2357             ///
2358             /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2359             /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2360             /// ```
2361             #[inline]
2362             #[$stable]
2363             #[deprecated(
2364                 since = "1.50.0",
2365                 note = "Use `compare_exchange` or `compare_exchange_weak` instead")
2366             ]
2367             #[$cfg_cas]
2368             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2369             pub fn compare_and_swap(&self,
2370                                     current: $int_type,
2371                                     new: $int_type,
2372                                     order: Ordering) -> $int_type {
2373                 match self.compare_exchange(current,
2374                                             new,
2375                                             order,
2376                                             strongest_failure_ordering(order)) {
2377                     Ok(x) => x,
2378                     Err(x) => x,
2379                 }
2380             }
2381 
2382             /// Stores a value into the atomic integer if the current value is the same as
2383             /// the `current` value.
2384             ///
2385             /// The return value is a result indicating whether the new value was written and
2386             /// containing the previous value. On success this value is guaranteed to be equal to
2387             /// `current`.
2388             ///
2389             /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2390             /// ordering of this operation. `success` describes the required ordering for the
2391             /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2392             /// `failure` describes the required ordering for the load operation that takes place when
2393             /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2394             /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2395             /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2396             ///
2397             /// **Note**: This method is only available on platforms that support atomic operations on
2398             #[doc = concat!("[`", $s_int_type, "`].")]
2399             ///
2400             /// # Examples
2401             ///
2402             /// ```
2403             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2404             ///
2405             #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2406             ///
2407             /// assert_eq!(some_var.compare_exchange(5, 10,
2408             ///                                      Ordering::Acquire,
2409             ///                                      Ordering::Relaxed),
2410             ///            Ok(5));
2411             /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2412             ///
2413             /// assert_eq!(some_var.compare_exchange(6, 12,
2414             ///                                      Ordering::SeqCst,
2415             ///                                      Ordering::Acquire),
2416             ///            Err(10));
2417             /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2418             /// ```
2419             #[inline]
2420             #[$stable_cxchg]
2421             #[$cfg_cas]
2422             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2423             pub fn compare_exchange(&self,
2424                                     current: $int_type,
2425                                     new: $int_type,
2426                                     success: Ordering,
2427                                     failure: Ordering) -> Result<$int_type, $int_type> {
2428                 // SAFETY: data races are prevented by atomic intrinsics.
2429                 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
2430             }
2431 
2432             /// Stores a value into the atomic integer if the current value is the same as
2433             /// the `current` value.
2434             ///
2435             #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
2436             /// this function is allowed to spuriously fail even
2437             /// when the comparison succeeds, which can result in more efficient code on some
2438             /// platforms. The return value is a result indicating whether the new value was
2439             /// written and containing the previous value.
2440             ///
2441             /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2442             /// ordering of this operation. `success` describes the required ordering for the
2443             /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2444             /// `failure` describes the required ordering for the load operation that takes place when
2445             /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2446             /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2447             /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2448             ///
2449             /// **Note**: This method is only available on platforms that support atomic operations on
2450             #[doc = concat!("[`", $s_int_type, "`].")]
2451             ///
2452             /// # Examples
2453             ///
2454             /// ```
2455             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2456             ///
2457             #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
2458             ///
2459             /// let mut old = val.load(Ordering::Relaxed);
2460             /// loop {
2461             ///     let new = old * 2;
2462             ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2463             ///         Ok(_) => break,
2464             ///         Err(x) => old = x,
2465             ///     }
2466             /// }
2467             /// ```
2468             #[inline]
2469             #[$stable_cxchg]
2470             #[$cfg_cas]
2471             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2472             pub fn compare_exchange_weak(&self,
2473                                          current: $int_type,
2474                                          new: $int_type,
2475                                          success: Ordering,
2476                                          failure: Ordering) -> Result<$int_type, $int_type> {
2477                 // SAFETY: data races are prevented by atomic intrinsics.
2478                 unsafe {
2479                     atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
2480                 }
2481             }
2482 
2483             /// Adds to the current value, returning the previous value.
2484             ///
2485             /// This operation wraps around on overflow.
2486             ///
2487             /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
2488             /// of this operation. All ordering modes are possible. Note that using
2489             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2490             /// using [`Release`] makes the load part [`Relaxed`].
2491             ///
2492             /// **Note**: This method is only available on platforms that support atomic operations on
2493             #[doc = concat!("[`", $s_int_type, "`].")]
2494             ///
2495             /// # Examples
2496             ///
2497             /// ```
2498             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2499             ///
2500             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
2501             /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
2502             /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2503             /// ```
2504             #[inline]
2505             #[$stable]
2506             #[$cfg_cas]
2507             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2508             pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
2509                 // SAFETY: data races are prevented by atomic intrinsics.
2510                 unsafe { atomic_add(self.v.get(), val, order) }
2511             }
2512 
2513             /// Subtracts from the current value, returning the previous value.
2514             ///
2515             /// This operation wraps around on overflow.
2516             ///
2517             /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
2518             /// of this operation. All ordering modes are possible. Note that using
2519             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2520             /// using [`Release`] makes the load part [`Relaxed`].
2521             ///
2522             /// **Note**: This method is only available on platforms that support atomic operations on
2523             #[doc = concat!("[`", $s_int_type, "`].")]
2524             ///
2525             /// # Examples
2526             ///
2527             /// ```
2528             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2529             ///
2530             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
2531             /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
2532             /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2533             /// ```
2534             #[inline]
2535             #[$stable]
2536             #[$cfg_cas]
2537             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2538             pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
2539                 // SAFETY: data races are prevented by atomic intrinsics.
2540                 unsafe { atomic_sub(self.v.get(), val, order) }
2541             }
2542 
2543             /// Bitwise "and" with the current value.
2544             ///
2545             /// Performs a bitwise "and" operation on the current value and the argument `val`, and
2546             /// sets the new value to the result.
2547             ///
2548             /// Returns the previous value.
2549             ///
2550             /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
2551             /// of this operation. All ordering modes are possible. Note that using
2552             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2553             /// using [`Release`] makes the load part [`Relaxed`].
2554             ///
2555             /// **Note**: This method is only available on platforms that support atomic operations on
2556             #[doc = concat!("[`", $s_int_type, "`].")]
2557             ///
2558             /// # Examples
2559             ///
2560             /// ```
2561             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2562             ///
2563             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2564             /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
2565             /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
2566             /// ```
2567             #[inline]
2568             #[$stable]
2569             #[$cfg_cas]
2570             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2571             pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
2572                 // SAFETY: data races are prevented by atomic intrinsics.
2573                 unsafe { atomic_and(self.v.get(), val, order) }
2574             }
2575 
2576             /// Bitwise "nand" with the current value.
2577             ///
2578             /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
2579             /// sets the new value to the result.
2580             ///
2581             /// Returns the previous value.
2582             ///
2583             /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
2584             /// of this operation. All ordering modes are possible. Note that using
2585             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2586             /// using [`Release`] makes the load part [`Relaxed`].
2587             ///
2588             /// **Note**: This method is only available on platforms that support atomic operations on
2589             #[doc = concat!("[`", $s_int_type, "`].")]
2590             ///
2591             /// # Examples
2592             ///
2593             /// ```
2594             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2595             ///
2596             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
2597             /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
2598             /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
2599             /// ```
2600             #[inline]
2601             #[$stable_nand]
2602             #[$cfg_cas]
2603             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2604             pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
2605                 // SAFETY: data races are prevented by atomic intrinsics.
2606                 unsafe { atomic_nand(self.v.get(), val, order) }
2607             }
2608 
2609             /// Bitwise "or" with the current value.
2610             ///
2611             /// Performs a bitwise "or" operation on the current value and the argument `val`, and
2612             /// sets the new value to the result.
2613             ///
2614             /// Returns the previous value.
2615             ///
2616             /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
2617             /// of this operation. All ordering modes are possible. Note that using
2618             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2619             /// using [`Release`] makes the load part [`Relaxed`].
2620             ///
2621             /// **Note**: This method is only available on platforms that support atomic operations on
2622             #[doc = concat!("[`", $s_int_type, "`].")]
2623             ///
2624             /// # Examples
2625             ///
2626             /// ```
2627             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2628             ///
2629             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2630             /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
2631             /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
2632             /// ```
2633             #[inline]
2634             #[$stable]
2635             #[$cfg_cas]
2636             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2637             pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
2638                 // SAFETY: data races are prevented by atomic intrinsics.
2639                 unsafe { atomic_or(self.v.get(), val, order) }
2640             }
2641 
2642             /// Bitwise "xor" with the current value.
2643             ///
2644             /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
2645             /// sets the new value to the result.
2646             ///
2647             /// Returns the previous value.
2648             ///
2649             /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
2650             /// of this operation. All ordering modes are possible. Note that using
2651             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2652             /// using [`Release`] makes the load part [`Relaxed`].
2653             ///
2654             /// **Note**: This method is only available on platforms that support atomic operations on
2655             #[doc = concat!("[`", $s_int_type, "`].")]
2656             ///
2657             /// # Examples
2658             ///
2659             /// ```
2660             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2661             ///
2662             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2663             /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
2664             /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
2665             /// ```
2666             #[inline]
2667             #[$stable]
2668             #[$cfg_cas]
2669             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2670             pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
2671                 // SAFETY: data races are prevented by atomic intrinsics.
2672                 unsafe { atomic_xor(self.v.get(), val, order) }
2673             }
2674 
2675             /// Fetches the value, and applies a function to it that returns an optional
2676             /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
2677             /// `Err(previous_value)`.
2678             ///
2679             /// Note: This may call the function multiple times if the value has been changed by other threads in
2680             /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
2681             /// only once to the stored value.
2682             ///
2683             /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
2684             /// The first describes the required ordering for when the operation finally succeeds, while the second
2685             /// describes the required ordering for loads. These correspond to the success and failure orderings of
2686             #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
2687             /// respectively.
2688             ///
2689             /// Using [`Acquire`] as success ordering makes the store part
2690             /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2691             /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2692             ///
2693             /// **Note**: This method is only available on platforms that support atomic operations on
2694             #[doc = concat!("[`", $s_int_type, "`].")]
2695             ///
2696             /// # Considerations
2697             ///
2698             /// This method is not magic; it is not provided by the hardware.
2699             /// It is implemented in terms of
2700             #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
2701             /// and suffers from the same drawbacks.
2702             /// In particular, this method will not circumvent the [ABA Problem].
2703             ///
2704             /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2705             ///
2706             /// # Examples
2707             ///
2708             /// ```rust
2709             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2710             ///
2711             #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
2712             /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
2713             /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
2714             /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
2715             /// assert_eq!(x.load(Ordering::SeqCst), 9);
2716             /// ```
2717             #[inline]
2718             #[stable(feature = "no_more_cas", since = "1.45.0")]
2719             #[$cfg_cas]
2720             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2721             pub fn fetch_update<F>(&self,
2722                                    set_order: Ordering,
2723                                    fetch_order: Ordering,
2724                                    mut f: F) -> Result<$int_type, $int_type>
2725             where F: FnMut($int_type) -> Option<$int_type> {
2726                 let mut prev = self.load(fetch_order);
2727                 while let Some(next) = f(prev) {
2728                     match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2729                         x @ Ok(_) => return x,
2730                         Err(next_prev) => prev = next_prev
2731                     }
2732                 }
2733                 Err(prev)
2734             }
2735 
2736             /// Maximum with the current value.
2737             ///
2738             /// Finds the maximum of the current value and the argument `val`, and
2739             /// sets the new value to the result.
2740             ///
2741             /// Returns the previous value.
2742             ///
2743             /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
2744             /// of this operation. All ordering modes are possible. Note that using
2745             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2746             /// using [`Release`] makes the load part [`Relaxed`].
2747             ///
2748             /// **Note**: This method is only available on platforms that support atomic operations on
2749             #[doc = concat!("[`", $s_int_type, "`].")]
2750             ///
2751             /// # Examples
2752             ///
2753             /// ```
2754             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2755             ///
2756             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2757             /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
2758             /// assert_eq!(foo.load(Ordering::SeqCst), 42);
2759             /// ```
2760             ///
2761             /// If you want to obtain the maximum value in one step, you can use the following:
2762             ///
2763             /// ```
2764             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2765             ///
2766             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2767             /// let bar = 42;
2768             /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
2769             /// assert!(max_foo == 42);
2770             /// ```
2771             #[inline]
2772             #[stable(feature = "atomic_min_max", since = "1.45.0")]
2773             #[$cfg_cas]
2774             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2775             pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
2776                 // SAFETY: data races are prevented by atomic intrinsics.
2777                 unsafe { $max_fn(self.v.get(), val, order) }
2778             }
2779 
2780             /// Minimum with the current value.
2781             ///
2782             /// Finds the minimum of the current value and the argument `val`, and
2783             /// sets the new value to the result.
2784             ///
2785             /// Returns the previous value.
2786             ///
2787             /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
2788             /// of this operation. All ordering modes are possible. Note that using
2789             /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2790             /// using [`Release`] makes the load part [`Relaxed`].
2791             ///
2792             /// **Note**: This method is only available on platforms that support atomic operations on
2793             #[doc = concat!("[`", $s_int_type, "`].")]
2794             ///
2795             /// # Examples
2796             ///
2797             /// ```
2798             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2799             ///
2800             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2801             /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
2802             /// assert_eq!(foo.load(Ordering::Relaxed), 23);
2803             /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
2804             /// assert_eq!(foo.load(Ordering::Relaxed), 22);
2805             /// ```
2806             ///
2807             /// If you want to obtain the minimum value in one step, you can use the following:
2808             ///
2809             /// ```
2810             #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2811             ///
2812             #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2813             /// let bar = 12;
2814             /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
2815             /// assert_eq!(min_foo, 12);
2816             /// ```
2817             #[inline]
2818             #[stable(feature = "atomic_min_max", since = "1.45.0")]
2819             #[$cfg_cas]
2820             #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2821             pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
2822                 // SAFETY: data races are prevented by atomic intrinsics.
2823                 unsafe { $min_fn(self.v.get(), val, order) }
2824             }
2825 
2826             /// Returns a mutable pointer to the underlying integer.
2827             ///
2828             /// Doing non-atomic reads and writes on the resulting integer can be a data race.
2829             /// This method is mostly useful for FFI, where the function signature may use
2830             #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
2831             ///
2832             /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2833             /// atomic types work with interior mutability. All modifications of an atomic change the value
2834             /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2835             /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2836             /// restriction: operations on it must be atomic.
2837             ///
2838             /// # Examples
2839             ///
2840             /// ```ignore (extern-declaration)
2841             /// # fn main() {
2842             #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2843             ///
2844             /// extern "C" {
2845             #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
2846             /// }
2847             ///
2848             #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
2849             ///
2850             /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2851             /// unsafe {
2852             ///     my_atomic_op(atomic.as_ptr());
2853             /// }
2854             /// # }
2855             /// ```
2856             #[inline]
2857             #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2858             #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2859             pub const fn as_ptr(&self) -> *mut $int_type {
2860                 self.v.get()
2861             }
2862         }
2863     }
2864 }
2865 
2866 #[cfg(target_has_atomic_load_store = "8")]
2867 atomic_int! {
2868     cfg(target_has_atomic = "8"),
2869     cfg(target_has_atomic_equal_alignment = "8"),
2870     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2871     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2872     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2873     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2874     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2875     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2876     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2877     unstable(feature = "integer_atomics", issue = "99069"),
2878     cfg_attr(not(test), rustc_diagnostic_item = "AtomicI8"),
2879     "i8",
2880     "",
2881     atomic_min, atomic_max,
2882     1,
2883     "AtomicI8::new(0)",
2884     i8 AtomicI8 ATOMIC_I8_INIT
2885 }
2886 #[cfg(target_has_atomic_load_store = "8")]
2887 atomic_int! {
2888     cfg(target_has_atomic = "8"),
2889     cfg(target_has_atomic_equal_alignment = "8"),
2890     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2891     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2892     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2893     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2894     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2895     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2896     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2897     unstable(feature = "integer_atomics", issue = "99069"),
2898     cfg_attr(not(test), rustc_diagnostic_item = "AtomicU8"),
2899     "u8",
2900     "",
2901     atomic_umin, atomic_umax,
2902     1,
2903     "AtomicU8::new(0)",
2904     u8 AtomicU8 ATOMIC_U8_INIT
2905 }
2906 #[cfg(target_has_atomic_load_store = "16")]
2907 atomic_int! {
2908     cfg(target_has_atomic = "16"),
2909     cfg(target_has_atomic_equal_alignment = "16"),
2910     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2911     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2912     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2913     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2914     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2915     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2916     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2917     unstable(feature = "integer_atomics", issue = "99069"),
2918     cfg_attr(not(test), rustc_diagnostic_item = "AtomicI16"),
2919     "i16",
2920     "",
2921     atomic_min, atomic_max,
2922     2,
2923     "AtomicI16::new(0)",
2924     i16 AtomicI16 ATOMIC_I16_INIT
2925 }
2926 #[cfg(target_has_atomic_load_store = "16")]
2927 atomic_int! {
2928     cfg(target_has_atomic = "16"),
2929     cfg(target_has_atomic_equal_alignment = "16"),
2930     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2931     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2932     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2933     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2934     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2935     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2936     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2937     unstable(feature = "integer_atomics", issue = "99069"),
2938     cfg_attr(not(test), rustc_diagnostic_item = "AtomicU16"),
2939     "u16",
2940     "",
2941     atomic_umin, atomic_umax,
2942     2,
2943     "AtomicU16::new(0)",
2944     u16 AtomicU16 ATOMIC_U16_INIT
2945 }
2946 #[cfg(target_has_atomic_load_store = "32")]
2947 atomic_int! {
2948     cfg(target_has_atomic = "32"),
2949     cfg(target_has_atomic_equal_alignment = "32"),
2950     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2951     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2952     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2953     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2954     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2955     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2956     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2957     unstable(feature = "integer_atomics", issue = "99069"),
2958     cfg_attr(not(test), rustc_diagnostic_item = "AtomicI32"),
2959     "i32",
2960     "",
2961     atomic_min, atomic_max,
2962     4,
2963     "AtomicI32::new(0)",
2964     i32 AtomicI32 ATOMIC_I32_INIT
2965 }
2966 #[cfg(target_has_atomic_load_store = "32")]
2967 atomic_int! {
2968     cfg(target_has_atomic = "32"),
2969     cfg(target_has_atomic_equal_alignment = "32"),
2970     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2971     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2972     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2973     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2974     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2975     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2976     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2977     unstable(feature = "integer_atomics", issue = "99069"),
2978     cfg_attr(not(test), rustc_diagnostic_item = "AtomicU32"),
2979     "u32",
2980     "",
2981     atomic_umin, atomic_umax,
2982     4,
2983     "AtomicU32::new(0)",
2984     u32 AtomicU32 ATOMIC_U32_INIT
2985 }
2986 #[cfg(target_has_atomic_load_store = "64")]
2987 atomic_int! {
2988     cfg(target_has_atomic = "64"),
2989     cfg(target_has_atomic_equal_alignment = "64"),
2990     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2991     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2992     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2993     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2994     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2995     stable(feature = "integer_atomics_stable", since = "1.34.0"),
2996     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
2997     unstable(feature = "integer_atomics", issue = "99069"),
2998     cfg_attr(not(test), rustc_diagnostic_item = "AtomicI64"),
2999     "i64",
3000     "",
3001     atomic_min, atomic_max,
3002     8,
3003     "AtomicI64::new(0)",
3004     i64 AtomicI64 ATOMIC_I64_INIT
3005 }
3006 #[cfg(target_has_atomic_load_store = "64")]
3007 atomic_int! {
3008     cfg(target_has_atomic = "64"),
3009     cfg(target_has_atomic_equal_alignment = "64"),
3010     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3011     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3012     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3013     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3014     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3015     stable(feature = "integer_atomics_stable", since = "1.34.0"),
3016     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3017     unstable(feature = "integer_atomics", issue = "99069"),
3018     cfg_attr(not(test), rustc_diagnostic_item = "AtomicU64"),
3019     "u64",
3020     "",
3021     atomic_umin, atomic_umax,
3022     8,
3023     "AtomicU64::new(0)",
3024     u64 AtomicU64 ATOMIC_U64_INIT
3025 }
3026 #[cfg(target_has_atomic_load_store = "128")]
3027 atomic_int! {
3028     cfg(target_has_atomic = "128"),
3029     cfg(target_has_atomic_equal_alignment = "128"),
3030     unstable(feature = "integer_atomics", issue = "99069"),
3031     unstable(feature = "integer_atomics", issue = "99069"),
3032     unstable(feature = "integer_atomics", issue = "99069"),
3033     unstable(feature = "integer_atomics", issue = "99069"),
3034     unstable(feature = "integer_atomics", issue = "99069"),
3035     unstable(feature = "integer_atomics", issue = "99069"),
3036     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3037     unstable(feature = "integer_atomics", issue = "99069"),
3038     cfg_attr(not(test), rustc_diagnostic_item = "AtomicI128"),
3039     "i128",
3040     "#![feature(integer_atomics)]\n\n",
3041     atomic_min, atomic_max,
3042     16,
3043     "AtomicI128::new(0)",
3044     i128 AtomicI128 ATOMIC_I128_INIT
3045 }
3046 #[cfg(target_has_atomic_load_store = "128")]
3047 atomic_int! {
3048     cfg(target_has_atomic = "128"),
3049     cfg(target_has_atomic_equal_alignment = "128"),
3050     unstable(feature = "integer_atomics", issue = "99069"),
3051     unstable(feature = "integer_atomics", issue = "99069"),
3052     unstable(feature = "integer_atomics", issue = "99069"),
3053     unstable(feature = "integer_atomics", issue = "99069"),
3054     unstable(feature = "integer_atomics", issue = "99069"),
3055     unstable(feature = "integer_atomics", issue = "99069"),
3056     rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3057     unstable(feature = "integer_atomics", issue = "99069"),
3058     cfg_attr(not(test), rustc_diagnostic_item = "AtomicU128"),
3059     "u128",
3060     "#![feature(integer_atomics)]\n\n",
3061     atomic_umin, atomic_umax,
3062     16,
3063     "AtomicU128::new(0)",
3064     u128 AtomicU128 ATOMIC_U128_INIT
3065 }
3066 
3067 macro_rules! atomic_int_ptr_sized {
3068     ( $($target_pointer_width:literal $align:literal)* ) => { $(
3069         #[cfg(target_has_atomic_load_store = "ptr")]
3070         #[cfg(target_pointer_width = $target_pointer_width)]
3071         atomic_int! {
3072             cfg(target_has_atomic = "ptr"),
3073             cfg(target_has_atomic_equal_alignment = "ptr"),
3074             stable(feature = "rust1", since = "1.0.0"),
3075             stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3076             stable(feature = "atomic_debug", since = "1.3.0"),
3077             stable(feature = "atomic_access", since = "1.15.0"),
3078             stable(feature = "atomic_from", since = "1.23.0"),
3079             stable(feature = "atomic_nand", since = "1.27.0"),
3080             rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3081             stable(feature = "rust1", since = "1.0.0"),
3082             cfg_attr(not(test), rustc_diagnostic_item = "AtomicIsize"),
3083             "isize",
3084             "",
3085             atomic_min, atomic_max,
3086             $align,
3087             "AtomicIsize::new(0)",
3088             isize AtomicIsize ATOMIC_ISIZE_INIT
3089         }
3090         #[cfg(target_has_atomic_load_store = "ptr")]
3091         #[cfg(target_pointer_width = $target_pointer_width)]
3092         atomic_int! {
3093             cfg(target_has_atomic = "ptr"),
3094             cfg(target_has_atomic_equal_alignment = "ptr"),
3095             stable(feature = "rust1", since = "1.0.0"),
3096             stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3097             stable(feature = "atomic_debug", since = "1.3.0"),
3098             stable(feature = "atomic_access", since = "1.15.0"),
3099             stable(feature = "atomic_from", since = "1.23.0"),
3100             stable(feature = "atomic_nand", since = "1.27.0"),
3101             rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3102             stable(feature = "rust1", since = "1.0.0"),
3103             cfg_attr(not(test), rustc_diagnostic_item = "AtomicUsize"),
3104             "usize",
3105             "",
3106             atomic_umin, atomic_umax,
3107             $align,
3108             "AtomicUsize::new(0)",
3109             usize AtomicUsize ATOMIC_USIZE_INIT
3110         }
3111     )* };
3112 }
3113 
3114 atomic_int_ptr_sized! {
3115     "16" 2
3116     "32" 4
3117     "64" 8
3118 }
3119 
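/// Returns the strongest failure ordering that is valid for a `compare_exchange` with success
/// ordering `order` (used by the deprecated `compare_and_swap`).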
3120 #[inline]
3121 #[cfg(target_has_atomic)]
3122 fn strongest_failure_ordering(order: Ordering) -> Ordering {
3123     match order {
3124         Release => Relaxed,
3125         Relaxed => Relaxed,
3126         SeqCst => SeqCst,
3127         Acquire => Acquire,
3128         AcqRel => Acquire,
3129     }
3130 }
3131 
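/// Atomically stores `val` into `dst`; panics if `order` is `Acquire` or `AcqRel`.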
3132 #[inline]
3133 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3134 unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
3135     // SAFETY: the caller must uphold the safety contract for `atomic_store`.
3136     unsafe {
3137         match order {
3138             Relaxed => intrinsics::atomic_store_relaxed(dst, val),
3139             Release => intrinsics::atomic_store_release(dst, val),
3140             SeqCst => intrinsics::atomic_store_seqcst(dst, val),
3141             Acquire => panic!("there is no such thing as an acquire store"),
3142             AcqRel => panic!("there is no such thing as an acquire-release store"),
3143         }
3144     }
3145 }
3146 
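/// Atomically loads the value from `dst`; panics if `order` is `Release` or `AcqRel`.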
3147 #[inline]
3148 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3149 unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
3150     // SAFETY: the caller must uphold the safety contract for `atomic_load`.
3151     unsafe {
3152         match order {
3153             Relaxed => intrinsics::atomic_load_relaxed(dst),
3154             Acquire => intrinsics::atomic_load_acquire(dst),
3155             SeqCst => intrinsics::atomic_load_seqcst(dst),
3156             Release => panic!("there is no such thing as a release load"),
3157             AcqRel => panic!("there is no such thing as an acquire-release load"),
3158         }
3159     }
3160 }
3161 
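/// Atomically replaces the value at `dst` with `val`, returning the previous value.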
3162 #[inline]
3163 #[cfg(target_has_atomic)]
3164 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3165 unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3166     // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
3167     unsafe {
3168         match order {
3169             Relaxed => intrinsics::atomic_xchg_relaxed(dst, val),
3170             Acquire => intrinsics::atomic_xchg_acquire(dst, val),
3171             Release => intrinsics::atomic_xchg_release(dst, val),
3172             AcqRel => intrinsics::atomic_xchg_acqrel(dst, val),
3173             SeqCst => intrinsics::atomic_xchg_seqcst(dst, val),
3174         }
3175     }
3176 }
3177 
3178 /// Returns the previous value (like __sync_fetch_and_add).
3179 #[inline]
3180 #[cfg(target_has_atomic)]
3181 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3182 unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3183     // SAFETY: the caller must uphold the safety contract for `atomic_add`.
3184     unsafe {
3185         match order {
3186             Relaxed => intrinsics::atomic_xadd_relaxed(dst, val),
3187             Acquire => intrinsics::atomic_xadd_acquire(dst, val),
3188             Release => intrinsics::atomic_xadd_release(dst, val),
3189             AcqRel => intrinsics::atomic_xadd_acqrel(dst, val),
3190             SeqCst => intrinsics::atomic_xadd_seqcst(dst, val),
3191         }
3192     }
3193 }
3194 
3195 /// Returns the previous value (like __sync_fetch_and_sub).
3196 #[inline]
3197 #[cfg(target_has_atomic)]
3198 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3199 unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3200     // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
3201     unsafe {
3202         match order {
3203             Relaxed => intrinsics::atomic_xsub_relaxed(dst, val),
3204             Acquire => intrinsics::atomic_xsub_acquire(dst, val),
3205             Release => intrinsics::atomic_xsub_release(dst, val),
3206             AcqRel => intrinsics::atomic_xsub_acqrel(dst, val),
3207             SeqCst => intrinsics::atomic_xsub_seqcst(dst, val),
3208         }
3209     }
3210 }
3211 
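/// Atomically writes `new` to `dst` if its current value equals `old`; returns `Ok(previous)`
/// on success and `Err(previous)` on failure.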
3212 #[inline]
3213 #[cfg(target_has_atomic)]
3214 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3215 unsafe fn atomic_compare_exchange<T: Copy>(
3216     dst: *mut T,
3217     old: T,
3218     new: T,
3219     success: Ordering,
3220     failure: Ordering,
3221 ) -> Result<T, T> {
3222     // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
3223     let (val, ok) = unsafe {
3224         match (success, failure) {
3225             (Relaxed, Relaxed) => intrinsics::atomic_cxchg_relaxed_relaxed(dst, old, new),
3226             (Relaxed, Acquire) => intrinsics::atomic_cxchg_relaxed_acquire(dst, old, new),
3227             (Relaxed, SeqCst) => intrinsics::atomic_cxchg_relaxed_seqcst(dst, old, new),
3228             (Acquire, Relaxed) => intrinsics::atomic_cxchg_acquire_relaxed(dst, old, new),
3229             (Acquire, Acquire) => intrinsics::atomic_cxchg_acquire_acquire(dst, old, new),
3230             (Acquire, SeqCst) => intrinsics::atomic_cxchg_acquire_seqcst(dst, old, new),
3231             (Release, Relaxed) => intrinsics::atomic_cxchg_release_relaxed(dst, old, new),
3232             (Release, Acquire) => intrinsics::atomic_cxchg_release_acquire(dst, old, new),
3233             (Release, SeqCst) => intrinsics::atomic_cxchg_release_seqcst(dst, old, new),
3234             (AcqRel, Relaxed) => intrinsics::atomic_cxchg_acqrel_relaxed(dst, old, new),
3235             (AcqRel, Acquire) => intrinsics::atomic_cxchg_acqrel_acquire(dst, old, new),
3236             (AcqRel, SeqCst) => intrinsics::atomic_cxchg_acqrel_seqcst(dst, old, new),
3237             (SeqCst, Relaxed) => intrinsics::atomic_cxchg_seqcst_relaxed(dst, old, new),
3238             (SeqCst, Acquire) => intrinsics::atomic_cxchg_seqcst_acquire(dst, old, new),
3239             (SeqCst, SeqCst) => intrinsics::atomic_cxchg_seqcst_seqcst(dst, old, new),
3240             (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
3241             (_, Release) => panic!("there is no such thing as a release failure ordering"),
3242         }
3243     };
3244     if ok { Ok(val) } else { Err(val) }
3245 }
3246 
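/// Like `atomic_compare_exchange`, but allowed to fail spuriously even when the comparison succeeds.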
3247 #[inline]
3248 #[cfg(target_has_atomic)]
3249 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3250 unsafe fn atomic_compare_exchange_weak<T: Copy>(
3251     dst: *mut T,
3252     old: T,
3253     new: T,
3254     success: Ordering,
3255     failure: Ordering,
3256 ) -> Result<T, T> {
3257     // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
3258     let (val, ok) = unsafe {
3259         match (success, failure) {
3260             (Relaxed, Relaxed) => intrinsics::atomic_cxchgweak_relaxed_relaxed(dst, old, new),
3261             (Relaxed, Acquire) => intrinsics::atomic_cxchgweak_relaxed_acquire(dst, old, new),
3262             (Relaxed, SeqCst) => intrinsics::atomic_cxchgweak_relaxed_seqcst(dst, old, new),
3263             (Acquire, Relaxed) => intrinsics::atomic_cxchgweak_acquire_relaxed(dst, old, new),
3264             (Acquire, Acquire) => intrinsics::atomic_cxchgweak_acquire_acquire(dst, old, new),
3265             (Acquire, SeqCst) => intrinsics::atomic_cxchgweak_acquire_seqcst(dst, old, new),
3266             (Release, Relaxed) => intrinsics::atomic_cxchgweak_release_relaxed(dst, old, new),
3267             (Release, Acquire) => intrinsics::atomic_cxchgweak_release_acquire(dst, old, new),
3268             (Release, SeqCst) => intrinsics::atomic_cxchgweak_release_seqcst(dst, old, new),
3269             (AcqRel, Relaxed) => intrinsics::atomic_cxchgweak_acqrel_relaxed(dst, old, new),
3270             (AcqRel, Acquire) => intrinsics::atomic_cxchgweak_acqrel_acquire(dst, old, new),
3271             (AcqRel, SeqCst) => intrinsics::atomic_cxchgweak_acqrel_seqcst(dst, old, new),
3272             (SeqCst, Relaxed) => intrinsics::atomic_cxchgweak_seqcst_relaxed(dst, old, new),
3273             (SeqCst, Acquire) => intrinsics::atomic_cxchgweak_seqcst_acquire(dst, old, new),
3274             (SeqCst, SeqCst) => intrinsics::atomic_cxchgweak_seqcst_seqcst(dst, old, new),
3275             (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
3276             (_, Release) => panic!("there is no such thing as a release failure ordering"),
3277         }
3278     };
3279     if ok { Ok(val) } else { Err(val) }
3280 }
3281 
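/// Atomically applies a bitwise "and" of `val` to the value at `dst`, returning the previous value.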
3282 #[inline]
3283 #[cfg(target_has_atomic)]
3284 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3285 unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3286     // SAFETY: the caller must uphold the safety contract for `atomic_and`
3287     unsafe {
3288         match order {
3289             Relaxed => intrinsics::atomic_and_relaxed(dst, val),
3290             Acquire => intrinsics::atomic_and_acquire(dst, val),
3291             Release => intrinsics::atomic_and_release(dst, val),
3292             AcqRel => intrinsics::atomic_and_acqrel(dst, val),
3293             SeqCst => intrinsics::atomic_and_seqcst(dst, val),
3294         }
3295     }
3296 }
3297 
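/// Atomically applies a bitwise "nand" of `val` to the value at `dst`, returning the previous value.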
3298 #[inline]
3299 #[cfg(target_has_atomic)]
3300 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3301 unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3302     // SAFETY: the caller must uphold the safety contract for `atomic_nand`
3303     unsafe {
3304         match order {
3305             Relaxed => intrinsics::atomic_nand_relaxed(dst, val),
3306             Acquire => intrinsics::atomic_nand_acquire(dst, val),
3307             Release => intrinsics::atomic_nand_release(dst, val),
3308             AcqRel => intrinsics::atomic_nand_acqrel(dst, val),
3309             SeqCst => intrinsics::atomic_nand_seqcst(dst, val),
3310         }
3311     }
3312 }
3313 
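/// Atomically applies a bitwise "or" of `val` to the value at `dst`, returning the previous value.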
3314 #[inline]
3315 #[cfg(target_has_atomic)]
3316 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3317 unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3318     // SAFETY: the caller must uphold the safety contract for `atomic_or`
3319     unsafe {
3320         match order {
3321             SeqCst => intrinsics::atomic_or_seqcst(dst, val),
3322             Acquire => intrinsics::atomic_or_acquire(dst, val),
3323             Release => intrinsics::atomic_or_release(dst, val),
3324             AcqRel => intrinsics::atomic_or_acqrel(dst, val),
3325             Relaxed => intrinsics::atomic_or_relaxed(dst, val),
3326         }
3327     }
3328 }
3329 
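/// Atomically applies a bitwise "xor" of `val` to the value at `dst`, returning the previous value.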
3330 #[inline]
3331 #[cfg(target_has_atomic)]
3332 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3333 unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3334     // SAFETY: the caller must uphold the safety contract for `atomic_xor`
3335     unsafe {
3336         match order {
3337             SeqCst => intrinsics::atomic_xor_seqcst(dst, val),
3338             Acquire => intrinsics::atomic_xor_acquire(dst, val),
3339             Release => intrinsics::atomic_xor_release(dst, val),
3340             AcqRel => intrinsics::atomic_xor_acqrel(dst, val),
3341             Relaxed => intrinsics::atomic_xor_relaxed(dst, val),
3342         }
3343     }
3344 }
3345 
3346 /// Stores the signed maximum of the current value and `val`, returning the previous value.
3347 #[inline]
3348 #[cfg(target_has_atomic)]
3349 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3350 unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3351     // SAFETY: the caller must uphold the safety contract for `atomic_max`
3352     unsafe {
3353         match order {
3354             Relaxed => intrinsics::atomic_max_relaxed(dst, val),
3355             Acquire => intrinsics::atomic_max_acquire(dst, val),
3356             Release => intrinsics::atomic_max_release(dst, val),
3357             AcqRel => intrinsics::atomic_max_acqrel(dst, val),
3358             SeqCst => intrinsics::atomic_max_seqcst(dst, val),
3359         }
3360     }
3361 }
3362 
3363 /// Stores the signed minimum of the current value and `val`, returning the previous value.
3364 #[inline]
3365 #[cfg(target_has_atomic)]
3366 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3367 unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3368     // SAFETY: the caller must uphold the safety contract for `atomic_min`
3369     unsafe {
3370         match order {
3371             Relaxed => intrinsics::atomic_min_relaxed(dst, val),
3372             Acquire => intrinsics::atomic_min_acquire(dst, val),
3373             Release => intrinsics::atomic_min_release(dst, val),
3374             AcqRel => intrinsics::atomic_min_acqrel(dst, val),
3375             SeqCst => intrinsics::atomic_min_seqcst(dst, val),
3376         }
3377     }
3378 }
3379 
3380 /// Stores the unsigned maximum of the current value and `val`, returning the previous value.
3381 #[inline]
3382 #[cfg(target_has_atomic)]
3383 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3384 unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3385     // SAFETY: the caller must uphold the safety contract for `atomic_umax`
3386     unsafe {
3387         match order {
3388             Relaxed => intrinsics::atomic_umax_relaxed(dst, val),
3389             Acquire => intrinsics::atomic_umax_acquire(dst, val),
3390             Release => intrinsics::atomic_umax_release(dst, val),
3391             AcqRel => intrinsics::atomic_umax_acqrel(dst, val),
3392             SeqCst => intrinsics::atomic_umax_seqcst(dst, val),
3393         }
3394     }
3395 }
3396 
3397 /// Stores the unsigned minimum of the current value and `val`, returning the previous value.
3398 #[inline]
3399 #[cfg(target_has_atomic)]
3400 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3401 unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3402     // SAFETY: the caller must uphold the safety contract for `atomic_umin`
3403     unsafe {
3404         match order {
3405             Relaxed => intrinsics::atomic_umin_relaxed(dst, val),
3406             Acquire => intrinsics::atomic_umin_acquire(dst, val),
3407             Release => intrinsics::atomic_umin_release(dst, val),
3408             AcqRel => intrinsics::atomic_umin_acqrel(dst, val),
3409             SeqCst => intrinsics::atomic_umin_seqcst(dst, val),
3410         }
3411     }
3412 }
3413 
3414 /// An atomic fence.
3415 ///
3416 /// Depending on the specified order, a fence prevents the compiler and CPU from
3417 /// reordering certain types of memory operations around it.
3418 /// That creates synchronizes-with relationships between it and atomic operations
3419 /// or fences in other threads.
3420 ///
3421 /// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
3422 /// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
3423 /// exist operations X and Y, both operating on some atomic object 'M' such
3424 /// that A is sequenced before X, Y is sequenced before B and Y observes
3425 /// the change to M. This provides a happens-before dependence between A and B.
3426 ///
3427 /// ```text
3428 ///     Thread 1                                          Thread 2
3429 ///
3430 /// fence(Release);      A --------------
3431 /// x.store(3, Relaxed); X ---------    |
3432 ///                                |    |
3433 ///                                |    |
3434 ///                                -------------> Y  if x.load(Relaxed) == 3 {
3435 ///                                     |-------> B      fence(Acquire);
3436 ///                                                      ...
3437 ///                                                  }
3438 /// ```
3439 ///
3440 /// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
3441 /// with a fence.
3442 ///
3443 /// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
3444 /// and [`Release`] semantics, participates in the global program order of the
3445 /// other [`SeqCst`] operations and/or fences.
3446 ///
3447 /// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
3448 ///
3449 /// # Panics
3450 ///
3451 /// Panics if `order` is [`Relaxed`].
3452 ///
3453 /// # Examples
3454 ///
3455 /// ```
3456 /// use std::sync::atomic::AtomicBool;
3457 /// use std::sync::atomic::fence;
3458 /// use std::sync::atomic::Ordering;
3459 ///
3460 /// // A mutual exclusion primitive based on a spinlock.
3461 /// pub struct Mutex {
3462 ///     flag: AtomicBool,
3463 /// }
3464 ///
3465 /// impl Mutex {
3466 ///     pub fn new() -> Mutex {
3467 ///         Mutex {
3468 ///             flag: AtomicBool::new(false),
3469 ///         }
3470 ///     }
3471 ///
3472 ///     pub fn lock(&self) {
3473 ///         // Wait until the old value is `false`.
3474 ///         while self
3475 ///             .flag
3476 ///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
3477 ///             .is_err()
3478 ///         {}
3479 ///         // This fence synchronizes-with the store in `unlock`.
3480 ///         fence(Ordering::Acquire);
3481 ///     }
3482 ///
3483 ///     pub fn unlock(&self) {
3484 ///         self.flag.store(false, Ordering::Release);
3485 ///     }
3486 /// }
3487 /// ```
3488 #[inline]
3489 #[stable(feature = "rust1", since = "1.0.0")]
3490 #[rustc_diagnostic_item = "fence"]
3491 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3492 pub fn fence(order: Ordering) {
3493     // SAFETY: using an atomic fence is safe.
3494     unsafe {
3495         match order {
3496             Acquire => intrinsics::atomic_fence_acquire(),
3497             Release => intrinsics::atomic_fence_release(),
3498             AcqRel => intrinsics::atomic_fence_acqrel(),
3499             SeqCst => intrinsics::atomic_fence_seqcst(),
3500             Relaxed => panic!("there is no such thing as a relaxed fence"),
3501         }
3502     }
3503 }
3504 
/// A compiler memory fence.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds
/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
/// or writes from before or after the call to the other side of the call to
/// `compiler_fence`. Note that it does **not** prevent the *hardware*
/// from doing such re-ordering. This is not a problem in a single-threaded
/// execution context, but when other threads may modify memory at the same
/// time, stronger synchronization primitives such as [`fence`] are required.
///
/// The re-orderings prevented by the different ordering semantics are:
///
///  - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
///  - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
///  - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
///  - with [`AcqRel`], both of the above rules are enforced.
///
/// `compiler_fence` is generally only useful for preventing a thread from
/// racing *with itself*. That is, it matters when a given thread is executing
/// one piece of code, is then interrupted, and starts executing code elsewhere
/// (while still in the same thread, and conceptually still on the same
/// core). In traditional programs, this can only occur when a signal
/// handler is registered. In lower-level code, such situations can also
/// arise when handling interrupts, when implementing green threads with
/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
/// discussion of [memory barriers].
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without `compiler_fence`, the `assert_eq!` in the following code
/// is *not* guaranteed to succeed, despite everything happening in a single thread.
/// To see why, remember that the compiler is free to swap the stores to
/// `IMPORTANT_VARIABLE` and `IS_READY` since both use
/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
/// after `IS_READY` is updated, then the signal handler will see
/// `IS_READY` set to `true` while `IMPORTANT_VARIABLE` is still `0`.
/// Using a `compiler_fence` remedies this situation.
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize};
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
///     // prevent earlier writes from being moved beyond this point
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
///     }
/// }
/// ```
///
/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence_acquire(),
            Release => intrinsics::atomic_singlethreadfence_release(),
            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
            SeqCst => intrinsics::atomic_singlethreadfence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed compiler fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::SeqCst), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
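///
/// A minimal sketch of the replacement, busy-waiting on an atomic flag with
/// [`hint::spin_loop`] (the flag here is purely illustrative; it would
/// normally be shared with, and set by, another thread):
///
/// ```
/// use std::hint;
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// let ready = AtomicBool::new(true); // illustrative flag, already set
/// while !ready.load(Ordering::Acquire) {
///     // Tell the processor we are busy-waiting, instead of calling the
///     // deprecated `spin_loop_hint()`.
///     hint::spin_loop();
/// }
/// ```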
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
pub fn spin_loop_hint() {
    spin_loop()
}