1[/
2 / Copyright (c) 2009 Helge Bahmann
3 / Copyright (c) 2014, 2017, 2018, 2020 Andrey Semashev
4 /
5 / Distributed under the Boost Software License, Version 1.0. (See accompanying
6 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
7 /]
8
9[library Boost.Atomic
10    [quickbook 1.4]
11    [authors [Bahmann, Helge][Semashev, Andrey]]
12    [copyright 2011 Helge Bahmann]
13    [copyright 2012 Tim Blechmann]
14    [copyright 2013, 2017, 2018, 2020 Andrey Semashev]
15    [id atomic]
16    [dirname atomic]
17    [purpose Atomic operations]
18    [license
19        Distributed under the Boost Software License, Version 1.0.
20        (See accompanying file LICENSE_1_0.txt or copy at
21        [@http://www.boost.org/LICENSE_1_0.txt])
22    ]
23]
24
25[section:introduction Introduction]
26
27[section:introduction_presenting Presenting Boost.Atomic]
28
[*Boost.Atomic] is a library that provides [^atomic]
data types and operations on these data types, as well as memory
ordering constraints required for coordinating multiple threads through
atomic variables. It implements the interface as defined by the C++11
standard, and makes this functionality available on platforms that lack
system/compiler support for it.
35
36Users of this library should already be familiar with concurrency
37in general, as well as elementary concepts such as "mutual exclusion".
38
39The implementation makes use of processor-specific instructions where
40possible (via inline assembler, platform libraries or compiler
41intrinsics), and falls back to "emulating" atomic operations through
42locking.
43
44[endsect]
45
46[section:introduction_purpose Purpose]
47
48Operations on "ordinary" variables are not guaranteed to be atomic.
49This means that with [^int n=0] initially, two threads concurrently
50executing
51
52[c++]
53
54  void function()
55  {
    n++;
57  }
58
59might result in [^n==1] instead of 2: Each thread will read the
60old value into a processor register, increment it and write the result
61back. Both threads may therefore write [^1], unaware that the other thread
62is doing likewise.
63
If [^n] is instead declared as [^atomic<int> n=0], the same operations on
this variable will always result in [^n==2], as each operation on this
variable is ['atomic]: each operation behaves as if it were strictly
sequenced with respect to the other.
68
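For illustration, a minimal sketch of the same function rewritten with an
atomic counter (relaxed ordering suffices when only the final count matters):

[c++]

  #include <boost/atomic.hpp>

  boost::atomic<int> n(0);

  void function()
  {
    // Atomic increment; the increments of both threads are preserved.
    n.fetch_add(1, boost::memory_order_relaxed);
  }
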
69Atomic variables are useful for two purposes:
70
71* as a means for coordinating multiple threads via custom
72  coordination protocols
73* as faster alternatives to "locked" access to simple variables
74
75Take a look at the [link atomic.usage_examples examples] section
76for common patterns.
77
78[endsect]
79
80[endsect]
81
82[section:thread_coordination Thread coordination using Boost.Atomic]
83
84The most common use of [*Boost.Atomic] is to realize custom
85thread synchronization protocols: The goal is to coordinate
86accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware that compilers, CPUs and
cache hierarchies may generally reorder memory references at will.
As a consequence, a program such as:
92
93[c++]
94
  int x = 0, y = 0;
96
97  thread1:
98    x = 1;
99    y = 1;
100
101  thread2:
102    if (y == 1) {
103      assert(x == 1);
104    }
105
106might indeed fail as there is no guarantee that the read of `x`
107by thread2 "sees" the write by thread1.
108
[*Boost.Atomic] uses a synchronization concept based on the
110['happens-before] relation to describe the guarantees under
111which situations such as the above one cannot occur.
112
113The remainder of this section will discuss ['happens-before] in
114a "hands-on" way instead of giving a fully formalized definition.
115The reader is encouraged to additionally have a
116look at the discussion of the correctness of a few of the
117[link atomic.usage_examples examples] afterwards.
118
119[section:mutex Enforcing ['happens-before] through mutual exclusion]
120
121As an introductory example to understand how arguing using
122['happens-before] works, consider two threads synchronizing
123using a common mutex:
124
125[c++]
126
127  mutex m;
128
129  thread1:
130    m.lock();
131    ... /* A */
132    m.unlock();
133
134  thread2:
135    m.lock();
136    ... /* B */
137    m.unlock();
138
139The "lockset-based intuition" would be to argue that A and B
140cannot be executed concurrently as the code paths require a
141common lock to be held.
142
143One can however also arrive at the same conclusion using
144['happens-before]: Either thread1 or thread2 will succeed first
at [^m.lock()]. If this is thread1, then as a consequence,
146thread2 cannot succeed at [^m.lock()] before thread1 has executed
147[^m.unlock()], consequently A ['happens-before] B in this case.
148By symmetry, if thread2 succeeds at [^m.lock()] first, we can
149conclude B ['happens-before] A.
150
151Since this already exhausts all options, we can conclude that
152either A ['happens-before] B or B ['happens-before] A must
always hold. Obviously, we cannot state ['which] of the two relationships
154holds, but either one is sufficient to conclude that A and B
155cannot conflict.
156
157Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
158implementation to see how the mutual exclusion concept can be
159mapped to [*Boost.Atomic].
160
161[endsect]
162
163[section:release_acquire ['happens-before] through [^release] and [^acquire]]
164
165The most basic pattern for coordinating threads via [*Boost.Atomic]
166uses [^release] and [^acquire] on an atomic variable for coordination: If ...
167
168* ... thread1 performs an operation A,
169* ... thread1 subsequently writes (or atomically
170  modifies) an atomic variable with [^release] semantic,
171* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with
173  [^acquire] semantic and
174* ... thread2 subsequently performs an operation B,
175
176... then A ['happens-before] B.
177
178Consider the following example
179
180[c++]
181
182  atomic<int> a(0);
183
184  thread1:
185    ... /* A */
186    a.fetch_add(1, memory_order_release);
187
188  thread2:
189    int tmp = a.load(memory_order_acquire);
190    if (tmp == 1) {
191      ... /* B */
192    } else {
193      ... /* C */
194    }
195
196In this example, two avenues for execution are possible:
197
198* The [^store] operation by thread1 precedes the [^load] by thread2:
199  In this case thread2 will execute B and "A ['happens-before] B"
200  holds as all of the criteria above are satisfied.
201* The [^load] operation by thread2 precedes the [^store] by thread1:
202  In this case, thread2 will execute C, but "A ['happens-before] C"
203  does ['not] hold: thread2 does not read the value written by
204  thread1 through [^a].
205
206Therefore, A and B cannot conflict, but A and C ['can] conflict.
207
208[endsect]
209
210[section:fences Fences]
211
Ordering constraints are generally specified together with an access to
an atomic variable. It is, however, also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`)
or succeeding (for `release` or `seq_cst`) atomic operations.
218
219The example from the previous section could also be written in
220the following way:
221
222[c++]
223
224  atomic<int> a(0);
225
226  thread1:
227    ... /* A */
228    atomic_thread_fence(memory_order_release);
229    a.fetch_add(1, memory_order_relaxed);
230
231  thread2:
232    int tmp = a.load(memory_order_relaxed);
233    if (tmp == 1) {
234      atomic_thread_fence(memory_order_acquire);
235      ... /* B */
236    } else {
237      ... /* C */
238    }
239
This provides the same ordering guarantees as before, but
elides a (possibly expensive) memory ordering operation in
the case that C is executed.
243
[note Atomic fences are only intended to constrain the ordering of
regular and atomic loads and stores for the purpose of thread
synchronization. `atomic_thread_fence` is not intended to be used
to order architecture-specific memory accesses, such as
non-temporal loads and stores on x86 or write combining memory
accesses. Use specialized instructions for these purposes.]
250
251[endsect]
252
253[section:release_consume ['happens-before] through [^release] and [^consume]]
254
255The second pattern for coordinating threads via [*Boost.Atomic]
256uses [^release] and [^consume] on an atomic variable for coordination: If ...
257
258* ... thread1 performs an operation A,
259* ... thread1 subsequently writes (or atomically modifies) an
260  atomic variable with [^release] semantic,
261* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with [^consume] semantic and
263* ... thread2 subsequently performs an operation B that is ['computationally
264  dependent on the value of the atomic variable],
265
266... then A ['happens-before] B.
267
268Consider the following example
269
270[c++]
271
272  atomic<int> a(0);
273  complex_data_structure data[2];
274
275  thread1:
276    data[1] = ...; /* A */
277    a.store(1, memory_order_release);
278
279  thread2:
280    int index = a.load(memory_order_consume);
281    complex_data_structure tmp = data[index]; /* B */
282
283In this example, two avenues for execution are possible:
284
285* The [^store] operation by thread1 precedes the [^load] by thread2:
286  In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
287  holds as all of the criteria above are satisfied.
288* The [^load] operation by thread2 precedes the [^store] by thread1:
289  In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
290  does ['not] hold: thread2 does not read the value written by
291  thread1 through [^a].
292
Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen
before the accesses (presumably reads) to [^data\[1\]] by thread2:
296Lacking this relationship, thread2 might see stale/inconsistent
297data.
298
Note that in this example it is essential that operation B is computationally
dependent on the value of the atomic variable; therefore the following program
would be erroneous:
302
303[c++]
304
305  atomic<int> a(0);
306  complex_data_structure data[2];
307
308  thread1:
309    data[1] = ...; /* A */
310    a.store(1, memory_order_release);
311
312  thread2:
313    int index = a.load(memory_order_consume);
314    complex_data_structure tmp;
315    if (index == 0)
316      tmp = data[0];
317    else
318      tmp = data[1];
319
320[^consume] is most commonly (and most safely! see
321[link atomic.limitations limitations]) used with
pointers; compare for example the
323[link boost_atomic.usage_examples.singleton singleton with double-checked locking].
324
325[endsect]
326
327[section:seq_cst Sequential consistency]
328
329The third pattern for coordinating threads via [*Boost.Atomic]
330uses [^seq_cst] for coordination: If ...
331
332* ... thread1 performs an operation A,
333* ... thread1 subsequently performs any operation with [^seq_cst],
334* ... thread1 subsequently performs an operation B,
335* ... thread2 performs an operation C,
336* ... thread2 subsequently performs any operation with [^seq_cst],
337* ... thread2 subsequently performs an operation D,
338
339then either "A ['happens-before] D" or "C ['happens-before] B" holds.
340
341In this case it does not matter whether thread1 and thread2 operate
342on the same or different atomic variables, or use a "stand-alone"
343[^atomic_thread_fence] operation.
344
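For illustration, a sketch of the classic "store buffering" pattern which requires
sequential consistency: each thread stores to its own flag and then loads the
other thread's flag. With [^seq_cst] operations at least one of the loads is
guaranteed to observe the other thread's store, i.e. the outcome [^r1 == 0] and
[^r2 == 0] is impossible; with weaker ordering constraints it is allowed.

[c++]

  atomic<int> x(0), y(0);

  thread1:
    x.store(1, memory_order_seq_cst);
    int r1 = y.load(memory_order_seq_cst);

  thread2:
    y.store(1, memory_order_seq_cst);
    int r2 = x.load(memory_order_seq_cst);
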
345[endsect]
346
347[endsect]
348
349[section:interface Programming interfaces]
350
351[section:configuration Configuration and building]
352
The library contains header-only and compiled parts. It is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation and the waiting and notifying operations
on some platforms. Users can detect whether linking to the compiled
part is required by checking the [link atomic.interface.feature_macros feature macros].
358
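For instance, a minimal sketch of such a check at compile time (shown here for
64-bit integers; the capability macros are described in the
[link atomic.interface.feature_macros feature macros] section):

    #include <boost/atomic/capabilities.hpp>

    #if BOOST_ATOMIC_INT64_LOCK_FREE == 2
    // 64-bit atomics are always lock-free on this target; the lock-based
    // fallback is not used for them (waiting and notifying operations may
    // still require the compiled library on some platforms).
    #else
    // The lock-based fallback may be used; link with the compiled library.
    #endif
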
359The following macros affect library behavior:
360
361[table
362    [[Macro] [Description]]
363    [[`BOOST_ATOMIC_LOCK_POOL_SIZE_LOG2`] [Binary logarithm of the number of locks in the internal
364      lock pool used by [*Boost.Atomic] to implement lock-based atomic operations and waiting and notifying
      operations on some platforms. Must be an integer in the range from 0 to 16; the default value is 8.
366      Only has effect when building [*Boost.Atomic].]]
367    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg8b` instruction used
369      to support 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
370      The library does not perform runtime detection of this instruction, so running the code
371      that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
372      Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
373      this information from the compiler-defined macros.]]
374    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg16b` instruction used
      to support 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
      all Intel CPUs and current AMD CPUs support this instruction. The library does not
378      perform runtime detection of this instruction, so running the code that uses 128-bit
379      atomics on such CPUs will result in crashes, unless this macro is defined. Note that
380      the macro does not affect GCC and compatible compilers because the library infers
381      this information from the compiler-defined macros.]]
382    [[`BOOST_ATOMIC_NO_FLOATING_POINT`] [When defined, support for floating point operations is disabled.
      Floating point types will be treated similarly to trivially copyable structs and no capability macros
384      will be defined.]]
385    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
386      This is mostly used for testing and should not be used in real world projects.]]
387    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
388      the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
389      libraries, not just [*Boost.Atomic].]]
390    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
391      When defined, disables auto-linking. The latter macro affects all Boost libraries,
392      not just [*Boost.Atomic].]]
393]
394
395Besides macros, it is important to specify the correct compiler options for the target CPU.
396With GCC and compatible compilers this affects whether particular atomic operations are
397lock-free or not.
398
The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
400For example, you can build [*Boost.Atomic] with the following command line:
401
402[pre
403    bjam --with-atomic variant=release instruction-set=core2 stage
404]
405
406[endsect]
407
408[section:interface_memory_order Memory order]
409
410    #include <boost/memory_order.hpp>
411
412The enumeration [^boost::memory_order] defines the following
413values to represent memory ordering constraints:
414
415[table
416    [[Constant] [Description]]
417    [[`memory_order_relaxed`] [No ordering constraint.
      Informally speaking, following operations may be reordered before the
      atomic operation, and preceding operations may be reordered after
      it. This constraint is suitable only when
421      either a) further operations do not depend on the outcome
422      of the atomic operation or b) ordering is enforced through
423      stand-alone `atomic_thread_fence` operations. The operation on
424      the atomic value itself is still atomic though.
425    ]]
426    [[`memory_order_release`] [
427      Perform `release` operation. Informally speaking,
      prevents all preceding memory operations from being reordered
      past this point.
430    ]]
431    [[`memory_order_acquire`] [
432      Perform `acquire` operation. Informally speaking,
      prevents succeeding memory operations from being reordered
434      before this point.
435    ]]
436    [[`memory_order_consume`] [
437      Perform `consume` operation. More relaxed (and
438      on some architectures potentially more efficient) than `memory_order_acquire`
439      as it only affects succeeding operations that are
440      computationally-dependent on the value retrieved from
441      an atomic variable. Currently equivalent to `memory_order_acquire`
442      on all supported architectures (see [link atomic.limitations Limitations] section for an explanation).
443    ]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operations]]
445    [[`memory_order_seq_cst`] [
446      Enforce sequential consistency. Implies `memory_order_acq_rel`, but
      additionally enforces a total order for all such qualified operations.
448    ]]
449]
450
451For compilers that support C++11 scoped enums, the library also defines scoped synonyms
452that are preferred in modern programs:
453
454[table
455    [[Pre-C++11 constant] [C++11 equivalent]]
456    [[`memory_order_relaxed`] [`memory_order::relaxed`]]
457    [[`memory_order_release`] [`memory_order::release`]]
458    [[`memory_order_acquire`] [`memory_order::acquire`]]
459    [[`memory_order_consume`] [`memory_order::consume`]]
460    [[`memory_order_acq_rel`] [`memory_order::acq_rel`]]
461    [[`memory_order_seq_cst`] [`memory_order::seq_cst`]]
462]
463
464See section [link atomic.thread_coordination ['happens-before]] for explanation
465of the various ordering constraints.
466
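For illustration, a brief example of passing ordering constraints to atomic
operations (a sketch; the function name is illustrative, and the scoped synonym
requires a compiler with scoped enum support):

    boost::atomic<int> a(0);

    void example()
    {
        a.store(1, boost::memory_order_release);           // pre-C++11 constant
        int value = a.load(boost::memory_order::acquire);  // scoped synonym
        (void)value; // silence unused-variable warnings in this sketch
    }
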
467[endsect]
468
469[section:interface_atomic_flag Atomic flags]
470
471    #include <boost/atomic/atomic_flag.hpp>
472
473The `boost::atomic_flag` type provides the most basic set of atomic operations
474suitable for implementing mutually exclusive access to thread-shared data. The flag
475can have one of the two possible states: set and clear. The class implements the
476following operations:
477
478[table
479    [[Syntax] [Description]]
480    [
481      [`atomic_flag()`]
482      [Initialize to the clear state. See the discussion below.]
483    ]
484    [
485      [`bool is_lock_free()`]
486      [Checks if the atomic flag is lock-free; the returned value is consistent with the `is_always_lock_free` static constant, see below.]
487    ]
488    [
489      [`bool has_native_wait_notify()`]
490      [Indicates if the target platform natively supports waiting and notifying operations for this object. Returns `true` if `always_has_native_wait_notify` is `true`.]
491    ]
492    [
493      [`bool test(memory_order order)`]
494      [Returns `true` if the flag is in the set state and `false` otherwise.]
495    ]
496    [
497      [`bool test_and_set(memory_order order)`]
498      [Sets the atomic flag to the set state; returns `true` if the flag had been set prior to the operation.]
499    ]
500    [
501      [`void clear(memory_order order)`]
502      [Sets the atomic flag to the clear state.]
503    ]
504    [
505      [`bool wait(bool old_val, memory_order order)`]
      [Potentially blocks the calling thread until unblocked by a notifying operation and `test(order)` returns a value other than `old_val`. Returns the result of `test(order)`.]
507    ]
508    [
509      [`void notify_one()`]
510      [Unblocks at least one thread blocked in a waiting operation on this atomic object.]
511    ]
512    [
513      [`void notify_all()`]
514      [Unblocks all threads blocked in waiting operations on this atomic object.]
515    ]
516    [
517      [`static constexpr bool is_always_lock_free`]
      [This static boolean constant indicates whether atomic flag objects are lock-free.]
519    ]
520    [
521      [`static constexpr bool always_has_native_wait_notify`]
522      [Indicates if the target platform always natively supports waiting and notifying operations.]
523    ]
524]
525
The `order` argument always defaults to `memory_order_seq_cst`.
527
528Waiting and notifying operations are described in detail in [link atomic.interface.interface_wait_notify_ops this] section.
529
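As an illustration of these operations, a minimal spin lock sketch (names are
illustrative; see also the [link boost_atomic.usage_examples.example_spinlock spinlock]
example):

    boost::atomic_flag lock; // default-constructed to the clear (unlocked) state

    void enter()
    {
        // Spin until the previous state was clear, i.e. this thread took the lock.
        while (lock.test_and_set(boost::memory_order_acquire))
        {
        }
    }

    void leave()
    {
        lock.clear(boost::memory_order_release);
    }
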
Note that, unlike `std::atomic_flag`, whose default constructor leaves the object
uninitialized, the default constructor of `boost::atomic_flag` initializes it to
the clear state. This potentially requires dynamic initialization during program
startup, which makes it unsafe to create global `boost::atomic_flag` objects that
are used before entering `main()`. Some compilers, though (especially those
supporting C++11 `constexpr`), may be smart enough to perform the flag
initialization statically (which is, in C++11 terms, constant initialization).
537
538This difference is deliberate and is done to support C++03 compilers. C++11 defines the
539`ATOMIC_FLAG_INIT` macro which can be used to statically initialize `std::atomic_flag`
540to a clear state like this:
541
542    std::atomic_flag flag = ATOMIC_FLAG_INIT; // constant initialization
543
544This macro cannot be implemented in C++03 because for that `atomic_flag` would have to be
545an aggregate type, which it cannot be because it has to prohibit copying and consequently
546define the default constructor. Thus the closest equivalent C++03 code using [*Boost.Atomic]
547would be:
548
549    boost::atomic_flag flag; // possibly, dynamic initialization in C++03;
550                             // constant initialization in C++11
551
552The same code is also valid in C++11, so this code can be used universally. However, for
553interface parity with `std::atomic_flag`, if possible, the library also defines the
554`BOOST_ATOMIC_FLAG_INIT` macro, which is equivalent to `ATOMIC_FLAG_INIT`:
555
556    boost::atomic_flag flag = BOOST_ATOMIC_FLAG_INIT; // constant initialization
557
558This macro will only be implemented on a C++11 compiler. When this macro is not available,
559the library defines `BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`.
560
561[endsect]
562
563[section:interface_atomic_object Atomic objects]
564
565    #include <boost/atomic/atomic.hpp>
566
567[^boost::atomic<['T]>] provides methods for atomically accessing
568variables of a suitable type [^['T]]. The type is suitable if
it is [@https://en.cppreference.com/w/cpp/named_req/TriviallyCopyable ['trivially copyable]] (3.9/9 \[basic.types\]). The following are
examples of types compatible with this requirement:
571
572* a scalar type (e.g. integer, boolean, enum or pointer type)
573* a [^class] or [^struct] that has no non-trivial copy or move
574  constructors or assignment operators, has a trivial destructor,
575  and that is comparable via [^memcmp].
576
577Note that classes with virtual functions or virtual base classes
578do not satisfy the requirements. Also be warned
579that structures with padding bits may compare
580non-equal via [^memcmp] even though all members are equal. This may also be
581the case with some floating point types, which include padding bits themselves.
582
583[note Although types with padding bits are generally not supported by the library,
584[*Boost.Atomic] attempts to support operations on floating point types on some
585platforms, where the location of the padding bits is known. In particular,
58680-bit `long double` type on x86 targets is supported.]
587
588[section:interface_atomic_generic [^boost::atomic<['T]>] template class]
589
590All atomic objects support the following operations and properties:
591
592[table
593    [[Syntax] [Description]]
594    [
595      [`atomic()`]
596      [Initialize to an unspecified value]
597    ]
598    [
599      [`atomic(T initial_value)`]
600      [Initialize to [^initial_value]]
601    ]
602    [
603      [`bool is_lock_free()`]
604      [Checks if the atomic object is lock-free; the returned value is consistent with the `is_always_lock_free` static constant, see below.]
605    ]
606    [
607      [`bool has_native_wait_notify()`]
608      [Indicates if the target platform natively supports waiting and notifying operations for this object. Returns `true` if `always_has_native_wait_notify` is `true`.]
609    ]
610    [
611      [`T& value()`]
612      [Returns a reference to the value stored in the atomic object.]
613    ]
614    [
615      [`T load(memory_order order)`]
616      [Return current value]
617    ]
618    [
619      [`void store(T value, memory_order order)`]
620      [Write new value to atomic variable]
621    ]
622    [
623      [`T exchange(T new_value, memory_order order)`]
624      [Exchange current value with `new_value`, returning current value]
625    ]
626    [
627      [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
628      [Compare current value with `expected`, change it to `desired` if matches.
629      Returns `true` if an exchange has been performed, and always writes the
630      previous value back in `expected`. May fail spuriously, so must generally be
631      retried in a loop.]
632    ]
633    [
634      [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
635      [Compare current value with `expected`, change it to `desired` if matches.
636      Returns `true` if an exchange has been performed, and always writes the
637      previous value back in `expected`. May fail spuriously, so must generally be
638      retried in a loop.]
639    ]
640    [
641      [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
642      [Compare current value with `expected`, change it to `desired` if matches.
643      Returns `true` if an exchange has been performed, and always writes the
644      previous value back in `expected`.]
645    ]
646    [
      [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
648      [Compare current value with `expected`, change it to `desired` if matches.
649      Returns `true` if an exchange has been performed, and always writes the
650      previous value back in `expected`.]
651    ]
652    [
653      [`T wait(T old_val, memory_order order)`]
      [Potentially blocks the calling thread until unblocked by a notifying operation and `load(order)` returns a value other than `old_val`. Returns the result of `load(order)`.]
655    ]
656    [
657      [`void notify_one()`]
658      [Unblocks at least one thread blocked in a waiting operation on this atomic object.]
659    ]
660    [
661      [`void notify_all()`]
662      [Unblocks all threads blocked in waiting operations on this atomic object.]
663    ]
664    [
665      [`static constexpr bool is_always_lock_free`]
      [This static boolean constant indicates whether atomic objects of this type are lock-free.]
667    ]
668    [
669      [`static constexpr bool always_has_native_wait_notify`]
670      [Indicates if the target platform always natively supports waiting and notifying operations.]
671    ]
672]
673
The `order` argument always defaults to `memory_order_seq_cst`.
675
676Waiting and notifying operations are described in detail in [link atomic.interface.interface_wait_notify_ops this] section.
677
678The `value` operation is a [*Boost.Atomic] extension. The returned reference can be used to invoke external operations
on the atomic value, which are not part of [*Boost.Atomic] but are compatible with it on the target architecture. The primary
example of such operations is `futex` and similar operations available on some systems. The returned reference must not be used for reading
or modifying the value of the atomic object in a non-atomic manner, or to construct [link atomic.interface.interface_atomic_ref
682atomic references]. Doing so does not guarantee atomicity or memory ordering.
683
684[note Even if `boost::atomic` for a given type is lock-free, an atomic reference for that type may not be. Therefore, `boost::atomic`
685and `boost::atomic_ref` operating on the same object may use different thread synchronization primitives incompatible with each other.]
686
687The `compare_exchange_weak`/`compare_exchange_strong` variants
688taking four parameters differ from the three parameter variants
689in that they allow a different memory ordering constraint to
690be specified in case the operation fails.
691
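For illustration, a typical retry loop built on `compare_exchange_weak` (a sketch
that atomically doubles the stored value; names are illustrative):

    boost::atomic<int> a(1);

    void double_value()
    {
        int expected = a.load(boost::memory_order_relaxed);
        // On failure, expected is updated with the current value, so the next
        // iteration retries with fresh data; the weak form may also fail spuriously.
        while (!a.compare_exchange_weak(expected, expected * 2,
                                        boost::memory_order_acq_rel,
                                        boost::memory_order_relaxed))
        {
        }
    }
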
692In addition to these explicit operations, each
693[^atomic<['T]>] object also supports
694implicit [^store] and [^load] through the use of "assignment"
695and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
698
699[endsect]
700
701[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]
702
703In addition to the operations listed in the previous section,
704[^boost::atomic<['I]>] for integral
705types [^['I]], except `bool`, supports the following operations,
706which correspond to [^std::atomic<['I]>]:
707
708[table
709    [[Syntax] [Description]]
710    [
711      [`I fetch_add(I v, memory_order order)`]
712      [Add `v` to variable, returning previous value]
713    ]
714    [
715      [`I fetch_sub(I v, memory_order order)`]
716      [Subtract `v` from variable, returning previous value]
717    ]
718    [
719      [`I fetch_and(I v, memory_order order)`]
720      [Apply bit-wise "and" with `v` to variable, returning previous value]
721    ]
722    [
723      [`I fetch_or(I v, memory_order order)`]
724      [Apply bit-wise "or" with `v` to variable, returning previous value]
725    ]
726    [
727      [`I fetch_xor(I v, memory_order order)`]
728      [Apply bit-wise "xor" with `v` to variable, returning previous value]
729    ]
730]
731
732Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:
733
734[table
735    [[Syntax] [Description]]
736    [
737      [`I fetch_negate(memory_order order)`]
738      [Change the sign of the value stored in the variable, returning previous value]
739    ]
740    [
741      [`I fetch_complement(memory_order order)`]
742      [Set the variable to the one\'s complement of the current value, returning previous value]
743    ]
744    [
745      [`I negate(memory_order order)`]
746      [Change the sign of the value stored in the variable, returning the result]
747    ]
748    [
749      [`I add(I v, memory_order order)`]
750      [Add `v` to variable, returning the result]
751    ]
752    [
753      [`I sub(I v, memory_order order)`]
754      [Subtract `v` from variable, returning the result]
755    ]
756    [
757      [`I bitwise_and(I v, memory_order order)`]
758      [Apply bit-wise "and" with `v` to variable, returning the result]
759    ]
760    [
761      [`I bitwise_or(I v, memory_order order)`]
762      [Apply bit-wise "or" with `v` to variable, returning the result]
763    ]
764    [
765      [`I bitwise_xor(I v, memory_order order)`]
766      [Apply bit-wise "xor" with `v` to variable, returning the result]
767    ]
768    [
769      [`I bitwise_complement(memory_order order)`]
770      [Set the variable to the one\'s complement of the current value, returning the result]
771    ]
772    [
773      [`void opaque_negate(memory_order order)`]
774      [Change the sign of the value stored in the variable, returning nothing]
775    ]
776    [
777      [`void opaque_add(I v, memory_order order)`]
778      [Add `v` to variable, returning nothing]
779    ]
780    [
781      [`void opaque_sub(I v, memory_order order)`]
782      [Subtract `v` from variable, returning nothing]
783    ]
784    [
785      [`void opaque_and(I v, memory_order order)`]
786      [Apply bit-wise "and" with `v` to variable, returning nothing]
787    ]
788    [
789      [`void opaque_or(I v, memory_order order)`]
790      [Apply bit-wise "or" with `v` to variable, returning nothing]
791    ]
792    [
793      [`void opaque_xor(I v, memory_order order)`]
794      [Apply bit-wise "xor" with `v` to variable, returning nothing]
795    ]
796    [
797      [`void opaque_complement(memory_order order)`]
798      [Set the variable to the one\'s complement of the current value, returning nothing]
799    ]
800    [
801      [`bool negate_and_test(memory_order order)`]
802      [Change the sign of the value stored in the variable, returning `true` if the result is non-zero and `false` otherwise]
803    ]
804    [
805      [`bool add_and_test(I v, memory_order order)`]
806      [Add `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
807    ]
808    [
809      [`bool sub_and_test(I v, memory_order order)`]
810      [Subtract `v` from variable, returning `true` if the result is non-zero and `false` otherwise]
811    ]
812    [
813      [`bool and_and_test(I v, memory_order order)`]
814      [Apply bit-wise "and" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
815    ]
816    [
817      [`bool or_and_test(I v, memory_order order)`]
818      [Apply bit-wise "or" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
819    ]
820    [
821      [`bool xor_and_test(I v, memory_order order)`]
822      [Apply bit-wise "xor" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
823    ]
824    [
825      [`bool complement_and_test(memory_order order)`]
826      [Set the variable to the one\'s complement of the current value, returning `true` if the result is non-zero and `false` otherwise]
827    ]
828    [
829      [`bool bit_test_and_set(unsigned int n, memory_order order)`]
830      [Set bit number `n` in the variable to 1, returning `true` if the bit was previously set to 1 and `false` otherwise]
831    ]
832    [
833      [`bool bit_test_and_reset(unsigned int n, memory_order order)`]
834      [Set bit number `n` in the variable to 0, returning `true` if the bit was previously set to 1 and `false` otherwise]
835    ]
836    [
837      [`bool bit_test_and_complement(unsigned int n, memory_order order)`]
838      [Change bit number `n` in the variable to the opposite value, returning `true` if the bit was previously set to 1 and `false` otherwise]
839    ]
840]
841
842[note In [*Boost.Atomic] 1.66 the [^['op]_and_test] operations returned the opposite value (i.e. `true` if the result is zero). This was changed
843to the current behavior in 1.67 for consistency with other operations in [*Boost.Atomic], as well as with conventions taken in the C++ standard library.
844[*Boost.Atomic] 1.66 was the only release shipped with the old behavior.]
845
The `order` argument always defaults to `memory_order_seq_cst`.
847
The [^opaque_['op]] and [^['op]_and_test] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved. In the
[^bit_test_and_['op]] operations, the bit number `n` starts from 0, which
means the least significant bit, and must not exceed
[^std::numeric_limits<['I]>::digits - 1].
854
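For illustration, a short sketch using some of these extensions (names are illustrative):

    boost::atomic<unsigned int> flags(0);

    bool update()
    {
        // Increment without needing the previous value; on some architectures
        // this may generate better code than fetch_add.
        flags.opaque_add(1, boost::memory_order_relaxed);

        // Set bit 3 and learn whether it was already set.
        return flags.bit_test_and_set(3, boost::memory_order_acq_rel);
    }
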
855In addition to these explicit operations, each
856[^boost::atomic<['I]>] object also
857supports implicit pre-/post- increment/decrement, as well
858as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators, as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
861
862[endsect]
863
864[section:interface_atomic_floating_point [^boost::atomic<['floating-point]>] template class]
865
866[note The support for floating point types is optional and can be disabled by defining `BOOST_ATOMIC_NO_FLOATING_POINT`.]
867
868In addition to the operations applicable to all atomic objects,
869[^boost::atomic<['F]>] for floating point
870types [^['F]] supports the following operations,
871which correspond to [^std::atomic<['F]>]:
872
873[table
874    [[Syntax] [Description]]
875    [
876      [`F fetch_add(F v, memory_order order)`]
877      [Add `v` to variable, returning previous value]
878    ]
879    [
880      [`F fetch_sub(F v, memory_order order)`]
881      [Subtract `v` from variable, returning previous value]
882    ]
883]
884
885Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:
886
887[table
888    [[Syntax] [Description]]
889    [
890      [`F fetch_negate(memory_order order)`]
891      [Change the sign of the value stored in the variable, returning previous value]
892    ]
893    [
894      [`F negate(memory_order order)`]
895      [Change the sign of the value stored in the variable, returning the result]
896    ]
897    [
898      [`F add(F v, memory_order order)`]
899      [Add `v` to variable, returning the result]
900    ]
901    [
902      [`F sub(F v, memory_order order)`]
903      [Subtract `v` from variable, returning the result]
904    ]
905    [
906      [`void opaque_negate(memory_order order)`]
907      [Change the sign of the value stored in the variable, returning nothing]
908    ]
909    [
910      [`void opaque_add(F v, memory_order order)`]
911      [Add `v` to variable, returning nothing]
912    ]
913    [
914      [`void opaque_sub(F v, memory_order order)`]
915      [Subtract `v` from variable, returning nothing]
916    ]
917]
918
The `order` argument always defaults to `memory_order_seq_cst`.
920
921The [^opaque_['op]] variants of the operations
may result in more efficient code on some architectures because
923the original value of the atomic variable is not preserved.
924
925In addition to these explicit operations, each
926[^boost::atomic<['F]>] object also supports operators `+=` and `-=`.
Avoid using these operators, as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
929
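For illustration, a minimal sketch of atomic accumulation into a floating point
variable (names are illustrative):

    boost::atomic<double> sum(0.0);

    void accumulate(double sample)
    {
        // Relaxed ordering is sufficient if the total is only examined
        // after all producing threads have finished.
        sum.fetch_add(sample, boost::memory_order_relaxed);
    }
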
930When using atomic operations with floating point types, bear in mind that [*Boost.Atomic]
931always performs bitwise comparison of the stored values. This means that operations like
`compare_exchange*` may fail if the stored value and the comparand have different binary representations,
even if they would normally compare equal. This is typically the case when either of the numbers
is [@https://en.wikipedia.org/wiki/Denormal_number denormalized]. This also means that the behavior
with regard to special floating point values like NaN and signed zero differs from normal C++.
936
Another source of problems is padding bits that are added to some floating point types for alignment.
One widespread example is the Intel x87 extended double format, which is typically stored as 80 bits
of value padded with 16 or 48 unused bits. These padding bits are often uninitialized and contain garbage,
which makes two equal numbers have different binary representations. The library attempts to account for
the known cases of this kind, but in general some platforms may not be covered. Note that the C++
standard makes no guarantees about the reliability of `compare_exchange*` operations in the face of padding or
trap bits.
944
945[endsect]
946
947[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]
948
949In addition to the operations applicable to all atomic objects,
950[^boost::atomic<['P]>] for pointer
types [^['P]] (other than pointers to [^void], function or member pointers) supports
the following operations, which correspond to [^std::atomic<['P]>]:
953
954[table
955    [[Syntax] [Description]]
956    [
      [`P fetch_add(ptrdiff_t v, memory_order order)`]
958      [Add `v` to variable, returning previous value]
959    ]
960    [
      [`P fetch_sub(ptrdiff_t v, memory_order order)`]
962      [Subtract `v` from variable, returning previous value]
963    ]
964]
965
966Similarly to integers, the following [*Boost.Atomic] extensions are also provided:
967
968[table
969    [[Syntax] [Description]]
970    [
      [`P add(ptrdiff_t v, memory_order order)`]
972      [Add `v` to variable, returning the result]
973    ]
974    [
      [`P sub(ptrdiff_t v, memory_order order)`]
976      [Subtract `v` from variable, returning the result]
977    ]
978    [
979      [`void opaque_add(ptrdiff_t v, memory_order order)`]
980      [Add `v` to variable, returning nothing]
981    ]
982    [
983      [`void opaque_sub(ptrdiff_t v, memory_order order)`]
984      [Subtract `v` from variable, returning nothing]
985    ]
986    [
987      [`bool add_and_test(ptrdiff_t v, memory_order order)`]
988      [Add `v` to variable, returning `true` if the result is non-null and `false` otherwise]
989    ]
990    [
991      [`bool sub_and_test(ptrdiff_t v, memory_order order)`]
992      [Subtract `v` from variable, returning `true` if the result is non-null and `false` otherwise]
993    ]
994]
995
The `order` argument always defaults to `memory_order_seq_cst`.
997
998In addition to these explicit operations, each
999[^boost::atomic<['P]>] object also
1000supports implicit pre-/post- increment/decrement, as well
1001as the operators `+=`, `-=`. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
1004
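For illustration, pointer arithmetic on atomic pointers is performed in units of
elements, just as with built-in pointers (a sketch; names are illustrative):

    int data[4] = { 1, 2, 3, 4 };
    boost::atomic<int*> pos(data);

    int* claim_two()
    {
        // Atomically claims two elements and returns the previous position.
        return pos.fetch_add(2, boost::memory_order_relaxed);
    }
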
1005[endsect]
1006
1007[section:interface_atomic_convenience_typedefs [^boost::atomic<['T]>] convenience typedefs]
1008
1009For convenience, the following shorthand typedefs of [^boost::atomic<['T]>] are provided:
1010
1011[c++]
1012
1013    typedef atomic< char > atomic_char;
1014    typedef atomic< unsigned char > atomic_uchar;
1015    typedef atomic< signed char > atomic_schar;
1016    typedef atomic< unsigned short > atomic_ushort;
1017    typedef atomic< short > atomic_short;
1018    typedef atomic< unsigned int > atomic_uint;
1019    typedef atomic< int > atomic_int;
1020    typedef atomic< unsigned long > atomic_ulong;
1021    typedef atomic< long > atomic_long;
1022    typedef atomic< unsigned long long > atomic_ullong;
1023    typedef atomic< long long > atomic_llong;
1024
1025    typedef atomic< void* > atomic_address;
1026    typedef atomic< bool > atomic_bool;
1027    typedef atomic< wchar_t > atomic_wchar_t;
1028    typedef atomic< char8_t > atomic_char8_t;
1029    typedef atomic< char16_t > atomic_char16_t;
1030    typedef atomic< char32_t > atomic_char32_t;
1031
1032    typedef atomic< uint8_t > atomic_uint8_t;
1033    typedef atomic< int8_t > atomic_int8_t;
1034    typedef atomic< uint16_t > atomic_uint16_t;
1035    typedef atomic< int16_t > atomic_int16_t;
1036    typedef atomic< uint32_t > atomic_uint32_t;
1037    typedef atomic< int32_t > atomic_int32_t;
1038    typedef atomic< uint64_t > atomic_uint64_t;
1039    typedef atomic< int64_t > atomic_int64_t;
1040
1041    typedef atomic< int_least8_t > atomic_int_least8_t;
1042    typedef atomic< uint_least8_t > atomic_uint_least8_t;
1043    typedef atomic< int_least16_t > atomic_int_least16_t;
1044    typedef atomic< uint_least16_t > atomic_uint_least16_t;
1045    typedef atomic< int_least32_t > atomic_int_least32_t;
1046    typedef atomic< uint_least32_t > atomic_uint_least32_t;
1047    typedef atomic< int_least64_t > atomic_int_least64_t;
1048    typedef atomic< uint_least64_t > atomic_uint_least64_t;
1049    typedef atomic< int_fast8_t > atomic_int_fast8_t;
1050    typedef atomic< uint_fast8_t > atomic_uint_fast8_t;
1051    typedef atomic< int_fast16_t > atomic_int_fast16_t;
1052    typedef atomic< uint_fast16_t > atomic_uint_fast16_t;
1053    typedef atomic< int_fast32_t > atomic_int_fast32_t;
1054    typedef atomic< uint_fast32_t > atomic_uint_fast32_t;
1055    typedef atomic< int_fast64_t > atomic_int_fast64_t;
1056    typedef atomic< uint_fast64_t > atomic_uint_fast64_t;
1057    typedef atomic< intmax_t > atomic_intmax_t;
1058    typedef atomic< uintmax_t > atomic_uintmax_t;
1059
1060    typedef atomic< std::size_t > atomic_size_t;
1061    typedef atomic< std::ptrdiff_t > atomic_ptrdiff_t;
1062
1063    typedef atomic< intptr_t > atomic_intptr_t;
1064    typedef atomic< uintptr_t > atomic_uintptr_t;
1065
1066    typedef atomic< unsigned integral > atomic_unsigned_lock_free;
1067    typedef atomic< signed integral > atomic_signed_lock_free;
1068
1069The typedefs are provided only if the corresponding value type is available.
1070
1071The `atomic_unsigned_lock_free` and `atomic_signed_lock_free` types, if defined, indicate
1072the atomic object type for an unsigned or signed integer, respectively, that is
1073lock-free and that preferably has native support for
1074[link atomic.interface.interface_wait_notify_ops waiting and notifying operations].
1075
1076[endsect]
1077
1078[endsect]
1079
1080[section:interface_atomic_ref Atomic references]
1081
1082    #include <boost/atomic/atomic_ref.hpp>
1083
1084[^boost::atomic_ref<['T]>] also provides methods for atomically accessing
1085external variables of type [^['T]]. The requirements on the type [^['T]]
1086are the same as those imposed by [link atomic.interface.interface_atomic_object `boost::atomic`].
1087Unlike `boost::atomic`, `boost::atomic_ref` does not store the value internally
1088and only refers to an external object of type [^['T]].
1089
1090There are certain requirements on the objects compatible with `boost::atomic_ref`:
1091
* The referenced object's lifetime must not end before the last `boost::atomic_ref`
1093  referencing the object is destroyed.
1094* The referenced object must have alignment not less than indicated by the
1095  [^boost::atomic_ref<['T]>::required_alignment] constant. That constant may be larger
1096  than the natural alignment of type [^['T]]. In [*Boost.Atomic], `required_alignment` indicates
1097  the alignment at which operations on the object are lock-free; otherwise, if lock-free
1098  operations are not possible, `required_alignment` shall not be less than the natural
1099  alignment of [^['T]].
1100* The referenced object must not be a [@https://en.cppreference.com/w/cpp/language/object#Subobjects ['potentially overlapping object]].
  It must be the ['most derived object] (that is, it must not be a base class subobject of
1102  an object of a derived class) and it must not be marked with the `[[no_unique_address]]`
1103  attribute.
1104  ```
1105  struct Base
1106  {
1107      short a;
1108      char b;
1109  };
1110
1111  struct Derived : public Base
1112  {
1113      char c;
1114  };
1115
1116  Derived x;
1117  boost::atomic_ref<Base> ref(x); // bad
1118  ```
1119  In the above example, `ref` may silently corrupt the value of `x.c` because it
1120  resides in the trailing padding of the `Base` base class subobject of `x`.
1121* The referenced object must not reside in read-only memory. Even for non-modifying
1122  operations, like `load()`, `boost::atomic_ref` may issue read-modify-write CPU instructions
1123  that require write access.
1124* While at least one `boost::atomic_ref` referencing an object exists, that object must not
1125  be accessed by any other means, other than through `boost::atomic_ref`.
1126
Multiple `boost::atomic_ref` objects referencing the same object are allowed, and operations
1128through any such reference are atomic and ordered with regard to each other, according to
1129the memory order arguments. [^boost::atomic_ref<['T]>] supports the same set of properties and
1130operations as [^boost::atomic<['T]>], depending on the type [^['T]], with the following exceptions:
1131
1132[table
1133    [[Syntax] [Description]]
1134    [
1135      [`atomic_ref() = delete`]
1136      [`atomic_ref` is not default-constructible.]
1137    ]
1138    [
1139      [`atomic_ref(T& object)`]
1140      [Creates an atomic reference, referring to `object`. May modify the object representation (see caveats below).]
1141    ]
1142    [
1143      [`atomic_ref(atomic_ref const& that) noexcept`]
1144      [Creates an atomic reference, referencing the object referred to by `that`.]
1145    ]
1146    [
1147      [`static constexpr std::size_t required_alignment`]
1148      [A constant, indicating required alignment of objects of type [^['T]] so that they are compatible with `atomic_ref`.
1149      Shall not be less than [^alignof(['T])]. In [*Boost.Atomic], indicates the alignment required by lock-free operations
1150      on the referenced object, if lock-free operations are possible.]
1151    ]
1152]
1153
Note that `boost::atomic_ref` cannot be changed to refer to a different object after construction.
Assigning to a `boost::atomic_ref` performs an atomic store of the new value to the
referenced object.
1157
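For illustration, a minimal usage sketch (names are illustrative):

    alignas(boost::atomic_ref<int>::required_alignment)
    int counter = 0;

    void increment()
    {
        // The reference only needs to exist for the duration of the atomic access.
        boost::atomic_ref<int> ref(counter);
        ref.fetch_add(1, boost::memory_order_relaxed);
    }
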
1158[section:caveats Caveats]
1159
There are several disadvantages to using `boost::atomic_ref` compared to `boost::atomic`.
1161
1162First, the user is required to maintain proper alignment of the referenced objects. This means that the user
1163has to plan beforehand which variables will require atomic access in the program. In C++11 and later,
the user can ensure the required alignment by applying the `alignas` specifier:
1165
1166    alignas(boost::atomic_ref<int>::required_alignment)
1167    int atomic_int;
1168
On compilers that don't support `alignas`, users have to use compiler-specific attributes or manual padding
1170to achieve the required alignment. [@https://www.boost.org/doc/libs/release/libs/config/doc/html/boost_config/boost_macro_reference.html#boost_config.boost_macro_reference.macros_that_allow_use_of_c__11_features_with_c__03_compilers `BOOST_ALIGNMENT`]
1171macro from [*Boost.Config] may be useful.
1172
[note Do not rely on compilers to enforce the natural alignment for fundamental types, or to ensure that the default
alignment will satisfy the `atomic_ref<T>::required_alignment` constraint. There are real world cases where the
default alignment is below the required alignment for atomic references. For example, on 32-bit x86 targets it
is common for 64-bit integers and floating point numbers to have an alignment of 4, which is not high enough for `atomic_ref`.
1177Users must always explicitly ensure the referenced objects are aligned to `atomic_ref<T>::required_alignment`.]
1178
1179Next, some types may have padding bits, which are the bits of object representation that do not contribute to
1180the object value. Typically, padding bits are used for alignment purposes. [*Boost.Atomic] does not support
types with padding bits, with the exception of floating point types on platforms where the location of the
1182padding bits is known at compile time. One notable example is `long double` on x86, where the value is represented
1183by an [@https://en.wikipedia.org/wiki/Extended_precision#x86_extended_precision_format 80-bit extended precision]
1184floating point number complemented by 2 to 6 bytes of padding, depending on the target ABI.
1185
Padding bits pose a problem for [*Boost.Atomic] because they can break binary comparison of objects (as if
1187by `memcmp`), which is used in `compare_exchange_weak`/`compare_exchange_strong` operations. `boost::atomic`
1188manages the internal object representation and in some cases, like the mentioned `long double` example,
1189it is able to initialize the padding bits so that binary comparison yields the expected result. This is not
1190possible with `boost::atomic_ref` because the referenced object is initialized by external means and any
1191particular content in the padding bits cannot be guaranteed. This requires `boost::atomic_ref` to initialize
1192padding bits on construction. Since there may be other atomic references referring to the same object, this
1193initialization must be atomic. As a result, `boost::atomic_ref` construction can be relatively expensive
1194and may potentially disrupt atomic operations that are being performed on the same object through other
1195atomic references. It is recommended to avoid constructing `boost::atomic_ref` in tight loops or hot paths.
1196
Finally, the target platform may not have the necessary means to implement atomic operations on objects of some
1198sizes. For example, on many hardware architectures atomic operations on the following structure are not possible:
1199
1200    struct rgb
1201    {
1202        unsigned char r, g, b; // 3 bytes
1203    };
1204
1205`boost::atomic<rgb>` is able to implement lock-free operations if the target CPU supports 32-bit atomic instructions
by padding the `rgb` structure internally to the size of 4 bytes. This is not possible for `boost::atomic_ref<rgb>`, as it
1207has to operate on external objects. Thus, `boost::atomic_ref<rgb>` will not provide lock-free operations and will resort
1208to locking.
1209
1210In general, it is advised to use `boost::atomic` wherever possible, as it is easier to use and is more efficient. Use
1211`boost::atomic_ref` only when you absolutely have to.
1212
1213[endsect]
1214
1215[endsect]
1216
1217[section:interface_wait_notify_ops Waiting and notifying operations]
1218
1219`boost::atomic_flag`, [^boost::atomic<['T]>] and [^boost::atomic_ref<['T]>] support ['waiting] and ['notifying] operations that were introduced in C++20. Waiting operations have the following forms:
1220
1221* [^['T] wait(['T] old_val, memory_order order)] (where ['T] is `bool` for `boost::atomic_flag`)
1222
1223Here, `order` must not be `memory_order_release` or `memory_order_acq_rel`. Note that unlike C++20, the `wait` operation returns ['T] instead of `void`. This is a [*Boost.Atomic] extension.
1224
1225The waiting operation performs the following steps repeatedly:
1226
1227* Loads the current value `new_val` of the atomic object using the memory ordering constraint `order`.
1228* If the `new_val` representation is different from `old_val` (i.e. when compared as if by `memcmp`), returns `new_val`.
1229* Blocks the calling thread until unblocked by a notifying operation or spuriously.
1230
Note that a waiting operation is allowed to return spuriously, i.e. without a corresponding notifying operation. It is also allowed to ['not] return if the atomic object value is different from `old_val` only momentarily (this is known as the [@https://en.wikipedia.org/wiki/ABA_problem ABA problem]).
1232
1233Notifying operations have the following forms:
1234
1235* `void notify_one()`
1236* `void notify_all()`
1237
The `notify_one` operation unblocks at least one thread blocked in a waiting operation on the same atomic object, and `notify_all` unblocks all such threads. Notifying operations do not enforce memory ordering and should normally be preceded by a store operation or a fence with the appropriate memory ordering constraint.
1239
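For illustration, a sketch of a simple one-shot notification built on these
operations (names are illustrative):

    boost::atomic<int> ready(0);

    void consumer()
    {
        // Blocks until ready changes from 0; wait() returns the new value
        // (returning the value is a Boost.Atomic extension).
        int observed = ready.wait(0, boost::memory_order_acquire);
        (void)observed;
        // ... data published before the releasing store is now visible ...
    }

    void producer()
    {
        // ... publish data ...
        ready.store(1, boost::memory_order_release);
        ready.notify_one();
    }
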
1240Waiting and notifying operations require special support from the operating system, which may not be universally available. Whether the operating system natively supports these operations is indicated by the `always_has_native_wait_notify` static constant and `has_native_wait_notify()` member function of a given atomic type.
1241
Even for atomic objects that support lock-free operations (as indicated by the `is_always_lock_free` property or the corresponding [link atomic.interface.feature_macros macro]), the waiting and notifying operations may involve locking and require linking with the [*Boost.Atomic] compiled library.
1243
1244Waiting and notifying operations are not address-free, meaning that the implementation may use process-local state and process-local addresses of the atomic objects to implement the operations. In particular, this means these operations cannot be used for communication between processes (when the atomic object is located in shared memory) or when the atomic object is mapped at different memory addresses in the same process.
1245
1246[endsect]

[section:interface_ipc Atomic types for inter-process communication]

    #include <boost/atomic/ipc_atomic.hpp>
    #include <boost/atomic/ipc_atomic_ref.hpp>
    #include <boost/atomic/ipc_atomic_flag.hpp>

[*Boost.Atomic] provides a dedicated set of types for inter-process communication: `boost::ipc_atomic_flag`, [^boost::ipc_atomic<['T]>] and [^boost::ipc_atomic_ref<['T]>]. Collectively, these types are called inter-process communication atomic types, or IPC atomic types, and their counterparts without the `ipc_` prefix are called non-IPC atomic types.

Each of the IPC atomic types has the same requirements on its value type and provides the same set of operations and properties as its non-IPC counterpart. All operations have the same signature, requirements and effects, with the following amendments:

* All operations, except constructors, destructors, `is_lock_free()` and `has_native_wait_notify()`, have an additional precondition that `is_lock_free()` returns `true` for this atomic object. (Implementation note: The current implementation detects availability of atomic instructions at compile time, and code that does not fulfill this requirement will fail to compile.)
* The `has_native_wait_notify()` method and `always_has_native_wait_notify` static constant indicate whether the operating system has native support for inter-process waiting and notifying operations. This may be different from non-IPC atomic types, as the OS may have different capabilities for inter-thread and inter-process communication.
* All operations on objects of IPC atomic types are address-free, which allows placing such objects (in case of [^boost::ipc_atomic_ref<['T]>], the objects referenced by `ipc_atomic_ref`) in memory regions shared between processes or mapped at different addresses in the same process.

[note Operations on lock-free non-IPC atomic objects, except [link atomic.interface.interface_wait_notify_ops waiting and notifying operations], are also address-free, so `boost::atomic_flag`, [^boost::atomic<['T]>] and [^boost::atomic_ref<['T]>] could also be used for inter-process communication. However, the user must ensure that the given atomic object indeed supports lock-free operations. Failing to do this could result in a misbehaving program. IPC atomic types enforce this requirement and add support for address-free waiting and notifying operations.]
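
For illustration only, the following sketch places an IPC atomic counter in POSIX shared memory. The `/demo_counter` name is hypothetical, error handling is omitted, and the object is assumed to be constructed by exactly one of the cooperating processes:

[c++]

  #include <sys/mman.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <new>
  #include <boost/atomic/ipc_atomic.hpp>

  typedef boost::ipc_atomic< unsigned int > counter_t;

  int main()
  {
    // Create or open a shared memory object large enough to hold the counter
    int fd = shm_open("/demo_counter", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(counter_t));
    void* mem = mmap(NULL, sizeof(counter_t), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // Exactly one process constructs the atomic in the shared region;
    // the other processes use the object constructed by that process
    counter_t* counter = new (mem) counter_t(0u);

    // Any cooperating process that maps the same object can increment the counter
    counter->fetch_add(1u, boost::memory_order_relaxed);

    munmap(mem, sizeof(counter_t));
    close(fd);
    return 0;
  }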

It should be noted that some operations on IPC atomic types may be more expensive than the non-IPC ones. This primarily concerns waiting and notifying operations, as the operating system may have to perform conversion of the process-mapped addresses of atomic objects to physical addresses. Also, when native support for inter-process waiting and notifying operations is not present (as indicated by `has_native_wait_notify()`), waiting operations are emulated with a busy loop, which can affect performance and power consumption of the system. Native support for waiting and notifying operations can also be detected using [link atomic.interface.feature_macros capability macros].

Users must not create and use IPC and non-IPC atomic references on the same referenced object at the same time. IPC and non-IPC atomic references are not required to communicate with each other. For example, a waiting operation on a non-IPC atomic reference may not be unblocked by a notifying operation on an IPC atomic reference referencing the same object.

[endsect]

[section:interface_fences Fences]

    #include <boost/atomic/fences.hpp>

[link atomic.thread_coordination.fences Fences] are implemented with the following operations:

[table
    [[Syntax] [Description]]
    [
      [`void atomic_thread_fence(memory_order order)`]
      [Issue fence for coordination with other threads.]
    ]
    [
      [`void atomic_signal_fence(memory_order order)`]
      [Issue fence for coordination with a signal handler (only in the same thread).]
    ]
]

Note that `atomic_signal_fence` does not implement thread synchronization
and only acts as a barrier that prevents code reordering by the compiler (but not by the CPU).
The `order` argument specifies the direction in which the fence prevents the
compiler from reordering code.
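
As an example, the classic release/acquire pairing can be expressed with thread fences and relaxed atomic operations. This is a sketch only; the `data` and `ready` variables are hypothetical:

[c++]

  #include <cassert>
  #include <boost/atomic/atomic.hpp>
  #include <boost/atomic/fences.hpp>

  int data = 0;
  boost::atomic<bool> ready(false);

  void producer()
  {
    data = 42;
    boost::atomic_thread_fence(boost::memory_order_release);
    ready.store(true, boost::memory_order_relaxed);
  }

  void consumer()
  {
    while (!ready.load(boost::memory_order_relaxed))
    {
    }
    boost::atomic_thread_fence(boost::memory_order_acquire);
    // The release/acquire fence pairing guarantees that the write to data
    // made before the release fence is visible after the acquire fence
    assert(data == 42);
  }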

[endsect]

[section:feature_macros Feature testing macros]

    #include <boost/atomic/capabilities.hpp>

[*Boost.Atomic] defines a number of macros to allow compile-time
detection of whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:

[table
    [[Macro] [Description]]
    [
      [`BOOST_ATOMIC_FLAG_LOCK_FREE`]
      [Indicate whether `atomic_flag` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_BOOL_LOCK_FREE`]
      [Indicate whether `atomic<bool>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
      [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR8_T_LOCK_FREE`]
      [Indicate whether `atomic<char8_t>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
      [Indicate whether `atomic<char16_t>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
      [Indicate whether `atomic<char32_t>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
      [Indicate whether `atomic<wchar_t>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
      [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_INT_LOCK_FREE`]
      [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_LONG_LOCK_FREE`]
      [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
      [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
      [Indicate whether `atomic<T *>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_THREAD_FENCE`]
      [Indicate whether the `atomic_thread_fence` function is lock-free]
    ]
    [
      [`BOOST_ATOMIC_SIGNAL_FENCE`]
      [Indicate whether the `atomic_signal_fence` function is lock-free]
    ]
]

In addition to these standard macros, [*Boost.Atomic] defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.

[table
    [[Macro] [Description]]
    [
      [`BOOST_ATOMIC_INT8_LOCK_FREE`]
      [Indicate whether `atomic<int8_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT16_LOCK_FREE`]
      [Indicate whether `atomic<int16_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT32_LOCK_FREE`]
      [Indicate whether `atomic<int32_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT64_LOCK_FREE`]
      [Indicate whether `atomic<int64_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT128_LOCK_FREE`]
      [Indicate whether `atomic<int128_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
      [Defined after including `atomic_flag.hpp`, if the implementation
      does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
      initialization of `atomic_flag`. This macro is typically defined
      for pre-C++11 compilers.]
    ]
]

In the table above, [^int['N]_type] is a type that occupies ['N] contiguous bits of storage and is suitably aligned for atomic operations.

For floating-point types the following macros are similarly defined:

[table
    [[Macro] [Description]]
    [
      [`BOOST_ATOMIC_FLOAT_LOCK_FREE`]
      [Indicate whether `atomic<float>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_DOUBLE_LOCK_FREE`]
      [Indicate whether `atomic<double>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_LONG_DOUBLE_LOCK_FREE`]
      [Indicate whether `atomic<long double>` is lock-free.]
    ]
]

These macros are not defined when support for floating point types is disabled by the user.

For any of the [^BOOST_ATOMIC_['X]_LOCK_FREE] macros described above, two additional macros named [^BOOST_ATOMIC_HAS_NATIVE_['X]_WAIT_NOTIFY] and [^BOOST_ATOMIC_HAS_NATIVE_['X]_IPC_WAIT_NOTIFY] are defined. The former indicates whether [link atomic.interface.interface_wait_notify_ops waiting and notifying operations] are supported natively for non-IPC atomic types of the given type, and the latter does the same for [link atomic.interface.interface_ipc IPC atomic types]. The macros take values of `0`, `1` or `2`, where `0` indicates that native operations are not available, `1` means the operations may be available (which is determined at run time) and `2` means they are always available. Note that the lock-free and native waiting/notifying operations macros for a given type may have different values.
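
For example, a hypothetical application could use these macros to require particular capabilities at compile time:

[c++]

  #include <boost/atomic/capabilities.hpp>

  #if BOOST_ATOMIC_INT32_LOCK_FREE != 2
  #error "This application requires always lock-free 32-bit atomics"
  #endif

  #if BOOST_ATOMIC_HAS_NATIVE_INT32_WAIT_NOTIFY == 2
  // Waiting on 32-bit atomics is always backed by the operating system here
  #endif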

[endsect]

[endsect]

[section:usage_examples Usage examples]

[include examples.qbk]

[endsect]

[/
[section:platform_support Implementing support for additional platforms]

[include platform.qbk]

[endsect]
]

[/ [xinclude autodoc.xml] ]

[section:limitations Limitations]

While [*Boost.Atomic] strives to implement the atomic operations
from C++11 and later as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:

* [*Aggregate initialization syntax is not supported]: Since [*Boost.Atomic]
  sometimes uses a storage type that is different from the value type,
  the `atomic<>` template needs an initialization constructor that
  performs the necessary conversion. This makes `atomic<>` a non-aggregate
  type and prohibits aggregate initialization syntax (`atomic<int> a = {10}`).
  [*Boost.Atomic] does support direct and unified initialization syntax, though.
  [*Advice]: Always use direct initialization (`atomic<int> a(10)`) or unified
  initialization (`atomic<int> a{10}`) syntax, as illustrated after this list.
* [*Initializing constructor is not `constexpr` for some types]: For value types
  other than integral types and `bool`, the `atomic<>` initializing constructor needs
  to perform a runtime conversion to the storage type. This limitation may be
  lifted for more categories of types in the future.
* [*Default constructor is not trivial in C++03]: Because the initializing
  constructor has to be defined in `atomic<>`, the default constructor
  must also be defined. In C++03 the constructor cannot be defined as defaulted
  and therefore it is not trivial. In C++11 the constructor is defaulted (and trivial,
  if the default constructor of the value type is). In any case, the default
  constructor of `atomic<>` performs default initialization of the atomic value,
  as required in C++11. [*Advice]: In C++03, do not use [*Boost.Atomic] in contexts
  where a trivial default constructor is important (e.g. as a global variable which
  is required to be statically initialized).
* [*C++03 compilers may transform computation dependency to control dependency]:
  Crucially, `memory_order_consume` only affects computationally-dependent
  operations, but in general there is nothing preventing a compiler
  from transforming a computation dependency into a control dependency.
  A fully compliant C++11 compiler would be forbidden from such a transformation,
  but in practice most if not all compilers have chosen to promote
  `memory_order_consume` to `memory_order_acquire` instead
  (see [@https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59448 this] gcc bug
  for example). In the current implementation [*Boost.Atomic] follows that trend,
  but this may change in the future.
  [*Advice]: In general, avoid `memory_order_consume` and use `memory_order_acquire`
  instead. Use `memory_order_consume` only in conjunction with
  pointer values, and only if you can ensure that the compiler cannot
  speculate and transform these into control dependencies.
* [*Fence operations may enforce "too strong" compiler ordering]:
  Semantically, `memory_order_acquire`/`memory_order_consume`
  and `memory_order_release` need to restrain reordering of
  memory operations only in one direction. Since in C++03 there is no
  way to express this constraint to the compiler, these act
  as "full compiler barriers" in the C++03 implementation. In corner
  cases this may result in slightly less efficient code than a C++11 compiler
  could generate. [*Boost.Atomic] will use compiler intrinsics, if possible,
  to express the proper ordering constraints.
* [*Atomic operations may enforce "too strong" memory ordering in debug mode]:
  On some compilers, disabling optimizations makes it impossible to provide
  memory ordering constraints as compile-time constants to the compiler intrinsics.
  This causes the compiler to silently ignore the provided constraints and choose
  the "strongest" memory order (`memory_order_seq_cst`) to generate code. Not only
  does this reduce performance, it may also hide bugs in the user's code (e.g. if the user
  specified a wrong memory order constraint that causes a data race).
  [*Advice]: Always test your code with optimizations enabled.
* [*No inter-process fallback]: using `atomic<T>` in shared memory only works
  correctly if `atomic<T>::is_lock_free() == true`. The same applies to `atomic_ref<T>`.
* [*Signed integers must use [@https://en.wikipedia.org/wiki/Two%27s_complement two's complement]
  representation]: [*Boost.Atomic] makes this requirement in order to implement
  conversions between signed and unsigned integers internally. C++11 requires all
  atomic arithmetic operations on integers to be well defined according to two's complement
  arithmetic, which means that [*Boost.Atomic] has to operate on unsigned integers internally
  to avoid undefined behavior that results from signed integer overflows. Platforms
  with other signed integer representations are not supported. Note that C++20 makes
  two's complement representation of signed integers mandatory.
* [*Types with padding bits are not supported]: As discussed in [link atomic.interface.interface_atomic_ref.caveats
  this section], [*Boost.Atomic] cannot support types with padding bits because their content
  is undefined, and there is no portable way to initialize them to a predefined value. This makes
  operations like `compare_exchange_strong`/`compare_exchange_weak` fail, and given that in some
  cases other operations are built upon these, potentially all operations become unreliable. [*Boost.Atomic]
  does support padding bits for floating point types on platforms where the location of the
  padding bits is known at compile time.
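
As noted in the first item above, only direct and unified (braced) initialization syntax is supported. A brief illustration:

[c++]

  #include <boost/atomic/atomic.hpp>

  boost::atomic<int> a(10);       // direct initialization: supported
  boost::atomic<int> b{10};       // unified (braced) initialization: supported
  // boost::atomic<int> c = {10}; // aggregate initialization syntax: not supported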

[endsect]

[section:porting Porting]

[section:unit_tests Unit tests]

[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:

* [*atomic_api.cpp] and [*atomic_ref_api.cpp] verify that all atomic
  operations have correct value semantics (e.g. "fetch_add" really adds
  the desired value, returning the previous value). The latter tests `atomic_ref`
  rather than `atomic` and `atomic_flag`. These are rough "smoke tests"
  to help weed out the most obvious mistakes (for example width overflow,
  signed/unsigned extension, ...). These tests are also run with the
  `BOOST_ATOMIC_FORCE_FALLBACK` macro defined to test the lock pool
  based implementation.
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCK_FREE] macros
  are set properly according to the expectations for a given
  platform, and that they match up with the [*is_always_lock_free] and
  [*is_lock_free] members of the [*atomic] object instances.
* [*atomicity.cpp] and [*atomicity_ref.cpp] let two threads race against
  each other modifying a shared variable, verifying that the operations
  behave atomically. By nature, this test is necessarily
  stochastic, and the test self-calibrates to yield 99% confidence that a
  positive result indicates absence of an error. This test is
  useful even on uni-processor systems with preemption.
* [*ordering.cpp] and [*ordering_ref.cpp] let two threads race against
  each other accessing multiple shared variables, verifying that the
  operations exhibit the expected ordering behavior. By nature, this test
  is necessarily stochastic, and the test attempts to self-calibrate to
  yield 99% confidence that a positive result indicates absence
  of an error. This only works on true multi-processor (or multi-core)
  systems. It does not yield any result on uni-processor systems
  or emulators (since there is no observable reordering even in
  the `order=relaxed` case) and will report that fact.
* [*wait_api.cpp] and [*wait_ref_api.cpp] are used to verify the behavior of
  waiting and notifying operations. Due to the possibility of spurious
  wakeups, these tests may fail if a waiting operation returns early
  a number of times. The test retries a few times in this case,
  but a failure is still possible.
* [*wait_fuzz.cpp] is a fuzzing test for waiting and notifying operations,
  which creates a number of threads that block on the same atomic object
  and then repeatedly wakes up one or all of them. This test
  is intended as a smoke test in case the implementation has long-term
  instabilities or races (primarily, in the lock pool implementation).
* [*ipc_atomic_api.cpp], [*ipc_atomic_ref_api.cpp], [*ipc_wait_api.cpp]
  and [*ipc_wait_ref_api.cpp] are similar to the tests without the [*ipc_]
  prefix, but test IPC atomic types.

[endsect]

[section:tested_compilers Tested compilers]

[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:

* gcc 4.4 and newer: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* clang 3.5 and newer: i386, x86_64
* Visual Studio Express 2008 and newer on Windows XP and later: x86, x64, ARM

[endsect]

[section:acknowledgements Acknowledgements]

* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.

[endsect]

[endsect]