==============================================
LLVM Atomic Instructions and Concurrency Guide
==============================================

.. contents::
   :local:

Introduction
============

LLVM supports instructions which are well-defined in the presence of threads
and asynchronous signals.

The atomic instructions are designed specifically to provide readable IR and
optimized code generation for the following:

* The C++11 ``<atomic>`` header. (`C++11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg14/>`_.)

* Proper semantics for Java-style memory, for both ``volatile`` and regular
  shared variables. (`Java Specification
  <http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html>`_)

* gcc-compatible ``__sync_*`` builtins. (`Description
  <https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html>`_)

* Other scenarios with atomic semantics, including ``static`` variables with
  non-trivial constructors in C++.

Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++
volatile, which ensures that every volatile load and store happens and is
performed in the stated order. A couple of examples: if a
SequentiallyConsistent store is immediately followed by another
SequentiallyConsistent store to the same address, the first store can be
erased. This transformation is not allowed for a pair of volatile stores. On
the other hand, a non-volatile non-atomic load can be moved across a volatile
load freely, but not across an Acquire load.

This document is a guide for anyone writing a frontend for LLVM or working on
optimization passes for LLVM, explaining how to deal with instructions with
special semantics in the presence of concurrency. It is not intended to be a
precise guide to the semantics; the details can get extremely complicated and
unreadable, and are not usually necessary.

.. _Optimization outside atomic:

Optimization outside atomic
===========================

The basic ``'load'`` and ``'store'`` allow a variety of optimizations, but can
lead to undefined results in a concurrent environment; see `NotAtomic`_. This
section specifically goes into the one optimizer restriction which applies in
concurrent environments, which gets a bit more of an extended description
because any optimization dealing with stores needs to be aware of it.

From the optimizer's point of view, the rule is that if there are not any
instructions with atomic ordering involved, concurrency does not matter, with
one exception: if a variable might be visible to another thread or signal
handler, a store cannot be inserted along a path where it might not execute
otherwise. Take the following example:

.. code-block:: c

  /* C code, for readability; run through clang -O2 -S -emit-llvm to get
     equivalent IR */
  int x;
  void f(int* a) {
    for (int i = 0; i < 100; i++) {
      if (a[i])
        x += 1;
    }
  }

The following is equivalent in non-concurrent situations:

.. code-block:: c

  int x;
  void f(int* a) {
    int xtemp = x;
    for (int i = 0; i < 100; i++) {
      if (a[i])
        xtemp += 1;
    }
    x = xtemp;
  }

However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
the same time. That thread would read ``undef`` instead of the value it was
expecting, which can lead to undefined behavior down the line. (This example
is particularly of interest because before the concurrency model was
implemented, LLVM would perform this transformation.)
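By contrast, a transformation that keeps the store on exactly the paths where
a store would have happened anyway remains legal. The following is a sketch of
the kind of output the optimizer could legally produce instead; the
``changed`` flag is purely illustrative:

.. code-block:: c++

  int x;
  void f(int* a) {
    int xtemp = x;        // hoisting the load is fine: speculative loads
                          // are allowed (see below)
    bool changed = false;
    for (int i = 0; i < 100; i++) {
      if (a[i]) {
        xtemp += 1;
        changed = true;
      }
    }
    if (changed)          // the store still occurs only on paths that
      x = xtemp;          // already stored to x
  }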
Note that speculative loads are allowed; a load which is part of a race
returns ``undef``, but does not have undefined behavior.

Atomic instructions
===================

For cases where simple loads and stores are not sufficient, LLVM provides
various atomic instructions. The exact guarantees provided depend on the
ordering; see `Atomic orderings`_.

``load atomic`` and ``store atomic`` provide the same basic functionality as
non-atomic loads and stores, but provide additional guarantees in situations
where threads and signals are involved.

``cmpxchg`` and ``atomicrmw`` are essentially like an atomic load followed by
an atomic store (where the store is conditional for ``cmpxchg``), but no other
memory operation can happen on any thread between the load and store.

A ``fence`` provides Acquire and/or Release ordering which is not part of
another operation; it is normally used along with Monotonic memory operations.
A Monotonic load followed by an Acquire fence is roughly equivalent to an
Acquire load, and a Monotonic store following a Release fence is roughly
equivalent to a Release store. SequentiallyConsistent fences behave as both an
Acquire and a Release fence, and offer some additional complicated guarantees;
see the C++11 standard for details.

Frontends generating atomic instructions generally need to be aware of the
target to some degree; atomic instructions are guaranteed to be lock-free, and
therefore an instruction which is wider than the target natively supports can
be impossible to generate.

.. _Atomic orderings:

Atomic orderings
================

In order to achieve a balance between performance and necessary guarantees,
there are six levels of atomicity. They are listed in order of strength; each
level includes all the guarantees of the previous level except for
Acquire/Release. (See also `LangRef Ordering <LangRef.html#ordering>`_.)

.. _NotAtomic:

NotAtomic
---------

NotAtomic is the obvious: an ordinary load or store which is not atomic. (This
isn't really a level of atomicity, but is listed here for comparison.) This is
essentially a regular load or store. If there is a race on a given memory
location, loads from that location return undef.

Relevant standard
  This is intended to match shared variables in C/C++, and to be used in any
  other context where memory access is necessary and a race is impossible.
  (The precise definition is in `LangRef Memory Model
  <LangRef.html#memmodel>`_.)

Notes for frontends
  The rule is essentially that all memory accessed with basic loads and stores
  by multiple threads should be protected by a lock or other synchronization;
  otherwise, you are likely to run into undefined behavior. If your frontend
  is for a "safe" language like Java, use Unordered to load and store any
  shared variable. Note that NotAtomic volatile loads and stores are not
  properly atomic; do not try to use them as a substitute. (Per the C/C++
  standards, volatile does provide some limited guarantees around asynchronous
  signals, but atomics are generally a better solution.)
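  For instance (an illustrative sketch, not taken from any standard), marking
  a counter ``volatile`` does not make concurrent increments safe, whereas a
  genuinely atomic operation does:

  .. code-block:: c++

    #include <atomic>

    volatile int unsafe_counter;      // NotAtomic: concurrent increments race
    void hit_unsafe() { unsafe_counter = unsafe_counter + 1; }

    std::atomic<int> safe_counter;    // a real atomic read-modify-write
    void hit_safe() { safe_counter.fetch_add(1, std::memory_order_relaxed); }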
Notes for optimizers
  Introducing loads to shared variables along a codepath where they would not
  otherwise exist is allowed; introducing stores to shared variables is
  not. See `Optimization outside atomic`_.

Notes for code generation
  The one interesting restriction here is that it is not allowed to write to
  bytes outside of the bytes relevant to a store. This is mostly relevant to
  unaligned stores: it is not allowed in general to convert an unaligned store
  into two aligned stores of the same width as the unaligned store. Backends
  are also expected to generate an i8 store as an i8 store, and not an
  instruction which writes to surrounding bytes. (If you are writing a backend
  for an architecture which cannot satisfy these restrictions and cares about
  concurrency, please send an email to llvm-dev.)

Unordered
---------

Unordered is the lowest level of atomicity. It essentially guarantees that
races produce somewhat sane results instead of having undefined behavior. It
also guarantees that the operation is lock-free, so it does not depend on the
data being part of a special atomic structure or depend on a separate
per-process global lock. Note that code generation will fail for unsupported
atomic operations; if you need such an operation, use explicit locking.

Relevant standard
  This is intended to match the Java memory model for shared variables.

Notes for frontends
  This cannot be used for synchronization, but is useful for Java and other
  "safe" languages which need to guarantee that the generated code never
  exhibits undefined behavior. Note that this guarantee is cheap on common
  platforms for loads of a native width, but can be expensive or unavailable
  for wider accesses, such as a 64-bit store on ARM. (A frontend for Java or
  other "safe" languages would normally split a 64-bit store on ARM into two
  32-bit unordered stores.)

Notes for optimizers
  In terms of the optimizer, this prohibits any transformation that transforms
  a single load into multiple loads, transforms a store into multiple stores,
  narrows a store, or stores a value which would not be stored otherwise. Some
  examples of unsafe optimizations are narrowing an assignment into a
  bitfield, rematerializing a load, and turning loads and stores into a memcpy
  call. Reordering unordered operations is safe, though, and optimizers should
  take advantage of that because unordered operations are common in languages
  that need them.

Notes for code generation
  These operations are required to be atomic in the sense that if you use
  unordered loads and unordered stores, a load cannot see a value which was
  never stored. A normal load or store instruction is usually sufficient, but
  note that an unordered load or store cannot be split into multiple
  instructions (or an instruction which does multiple memory operations, like
  ``LDRD`` on ARM without LPAE, or not naturally-aligned ``LDRD`` on LPAE
  ARM).

Monotonic
---------

Monotonic is the weakest level of atomicity that can be used in
synchronization primitives, although it does not provide any general
synchronization. It essentially guarantees that if you take all the operations
affecting a specific address, a consistent ordering exists.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_relaxed``; see those
  standards for the exact definition.
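  As a small mapping illustration (a sketch, not part of the standard text), a
  relaxed increment at the C++ level lowers to a Monotonic ``atomicrmw`` in
  the IR:

  .. code-block:: c++

    #include <atomic>

    std::atomic<long> event_count;

    void record_event() {
      // memory_order_relaxed corresponds to Monotonic ordering in LLVM IR.
      event_count.fetch_add(1, std::memory_order_relaxed);
    }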
Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  The guarantees in terms of synchronization are very weak, so make sure these
  are only used in a pattern which you know is correct. Generally, these would
  either be used for atomic operations which do not protect other memory (like
  an atomic counter), or along with a ``fence``.

Notes for optimizers
  In terms of the optimizer, this can be treated as a read+write on the
  relevant memory location (and alias analysis will take advantage of
  that). In addition, it is legal to reorder non-atomic and Unordered loads
  around Monotonic loads. CSE/DSE and a few other optimizations are allowed,
  but Monotonic operations are unlikely to be used in ways which would make
  those optimizations useful.

Notes for code generation
  Code generation for loads and stores is essentially the same as for
  Unordered. No fences are required. ``cmpxchg`` and ``atomicrmw`` are
  required to appear as a single operation.

Acquire
-------

Acquire provides a barrier of the sort necessary to acquire a lock to access
other memory with normal loads and stores.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_acquire``. It should also
  be used for C++11/C11 ``memory_order_consume``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. It is
  also possible to move stores from before an Acquire load or
  read-modify-write operation to after it, and move non-Acquire loads from
  before an Acquire operation to after it.

Notes for code generation
  Architectures with weak memory ordering (essentially everything relevant
  today except x86 and SPARC) require some sort of fence to maintain the
  Acquire semantics. The precise fences required vary widely by architecture,
  but for a simple implementation, most architectures provide a barrier which
  is strong enough for everything (``dmb`` on ARM, ``sync`` on PowerPC,
  etc.). Putting such a fence after the equivalent Monotonic operation is
  sufficient to maintain Acquire semantics for a memory operation.

Release
-------

Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_release``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Release only provides a semantic guarantee when paired with an Acquire
  operation (see the example at the end of this section).

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. It is
  also possible to move loads from after a Release store or read-modify-write
  operation to before it, and move non-Release stores from after a Release
  operation to before it.

Notes for code generation
  See the section on Acquire; a fence before the relevant operation is usually
  sufficient for Release. Note that a store-store fence is not sufficient to
  implement Release semantics; store-store fences are generally not exposed to
  IR because they are extremely difficult to use correctly.
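As an illustration of the Acquire/Release pairing described above (a C++
sketch; the Release store and Acquire load lower to Release and Acquire
operations in the IR):

.. code-block:: c++

  #include <atomic>

  int payload;                        // ordinary (NotAtomic) data
  std::atomic<bool> ready{false};

  void publisher() {
    payload = 42;                                   // plain store
    ready.store(true, std::memory_order_release);   // Release store publishes it
  }

  int consumer() {
    while (!ready.load(std::memory_order_acquire))  // Acquire load pairs with
      ;                                             // the Release store
    return payload;                                 // guaranteed to observe 42
  }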
AcquireRelease
--------------

AcquireRelease (``acq_rel`` in IR) provides both an Acquire and a Release
barrier (for fences and operations which both read and write memory).

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_acq_rel``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation, and vice versa.

Notes for optimizers
  In general, optimizers should treat this like a nothrow call; the possible
  optimizations are usually not interesting.

Notes for code generation
  This operation has Acquire and Release semantics; see the sections on
  Acquire and Release.

SequentiallyConsistent
----------------------

SequentiallyConsistent (``seq_cst`` in IR) provides Acquire semantics for
loads and Release semantics for stores. Additionally, it guarantees that a
total ordering exists between all SequentiallyConsistent operations.

Relevant standard
  This corresponds to the C++11/C11 ``memory_order_seq_cst``, Java volatile,
  and the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.

Notes for frontends
  If a frontend is exposing atomic operations, these are much easier to reason
  about for the programmer than other kinds of operations, and using them is
  generally a practical performance tradeoff.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call. For
  SequentiallyConsistent loads and stores, the same reorderings are allowed as
  for Acquire loads and Release stores, except that SequentiallyConsistent
  operations may not be reordered.

Notes for code generation
  SequentiallyConsistent loads minimally require the same barriers as Acquire
  operations and SequentiallyConsistent stores require Release
  barriers. Additionally, the code generator must enforce ordering between
  SequentiallyConsistent stores followed by SequentiallyConsistent loads. This
  is usually done by emitting either a full fence before the loads or a full
  fence after the stores; which is preferred varies by architecture.

Atomics and IR optimization
===========================

Predicates for optimizer writers to query:

* ``isSimple()``: A load or store which is not volatile or atomic. This is
  what, for example, memcpyopt would check for operations it might transform.

* ``isUnordered()``: A load or store which is not volatile and at most
  Unordered. This would be checked, for example, by LICM before hoisting an
  operation.

* ``mayReadFromMemory()``/``mayWriteToMemory()``: Existing predicates, but
  note that they return true for any operation which is volatile or at least
  Monotonic.

* ``isStrongerThan`` / ``isAtLeastOrStrongerThan``: These are predicates on
  orderings. They can be useful for passes that are aware of atomics, for
  example to do DSE across a single atomic access, but not across a
  release-acquire pair (see MemoryDependencyAnalysis for an example of this).

* Alias analysis: Note that AA will return ModRef for anything Acquire or
  Release, and for the address accessed by any Monotonic operation.

To support optimizing around atomic operations, make sure you are using the
right predicates; everything should work if that is done (a short sketch
follows below). If your pass should optimize some atomic operations (Unordered
operations in particular), make sure it doesn't replace an atomic load or
store with a non-atomic operation.
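As a minimal sketch (not an actual LLVM pass) of the kind of guard a hoisting
transformation performs with these predicates; the helper name
``canHoistLoad`` is hypothetical:

.. code-block:: c++

  #include "llvm/IR/Instructions.h"

  // Volatile loads and loads stronger than Unordered must stay where they
  // are; isUnordered() rejects both.
  static bool canHoistLoad(const llvm::LoadInst &LI) {
    return LI.isUnordered();
  }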
Some examples of how optimizations interact with various kinds of atomic
operations:

* ``memcpyopt``: An atomic operation cannot be optimized into part of a
  memcpy/memset, including unordered loads/stores. It can pull operations
  across some atomic operations.

* LICM: Unordered loads/stores can be moved out of a loop. It just treats
  monotonic operations like a read+write to a memory location, and anything
  stricter than that like a nothrow call.

* DSE: Unordered stores can be DSE'ed like normal stores. Monotonic stores can
  be DSE'ed in some cases, but it's tricky to reason about, and not especially
  important. It is possible in some cases for DSE to operate across a stronger
  atomic operation, but it is fairly tricky. DSE delegates this reasoning to
  MemoryDependencyAnalysis (which is also used by other passes like GVN).

* Folding a load: Any atomic load from a constant global can be
  constant-folded, because it cannot be observed. Similar reasoning allows
  SROA with atomic loads and stores.

Atomics and Codegen
===================

Atomic operations are represented in the SelectionDAG with ``ATOMIC_*``
opcodes. On architectures which use barrier instructions for all atomic
ordering (like ARM), appropriate fences can be emitted by the AtomicExpand
Codegen pass if ``setInsertFencesForAtomic()`` was used.

The MachineMemOperand for all atomic operations is currently marked as
volatile; this is not correct in the IR sense of volatile, but CodeGen handles
anything marked volatile very conservatively. This should get fixed at some
point.

One very important property of the atomic operations is that if your backend
supports any inline lock-free atomic operations of a given size, you should
support *ALL* operations of that size in a lock-free manner.

When the target implements atomic ``cmpxchg`` or LL/SC instructions (as most
do) this is trivial: all the other operations can be implemented on top of
those primitives. However, on many older CPUs (e.g. ARMv5, SparcV8, Intel
80386) there are atomic load and store instructions, but no ``cmpxchg`` or
LL/SC. Since it would be invalid to implement ``atomic load`` with a native
instruction while implementing ``cmpxchg`` with a library call that takes a
mutex, ``atomic load`` must *also* expand to a library call on such
architectures, so that it can remain atomic with regard to a simultaneous
``cmpxchg`` by using the same mutex.

AtomicExpandPass can help with that: it will expand all atomic operations to
the proper ``__atomic_*`` libcalls for any size above the maximum set by
``setMaxAtomicSizeInBitsSupported`` (which defaults to 0).

On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
fences generate an ``MFENCE``, other fences do not cause any code to be
generated. ``cmpxchg`` uses the ``LOCK CMPXCHG`` instruction. ``atomicrmw
xchg`` uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use ``XADD``,
and all other ``atomicrmw`` operations generate a loop with ``LOCK
CMPXCHG``. Depending on the users of the result, some ``atomicrmw`` operations
can be translated into operations like ``LOCK AND``, but that does not work in
general.
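For illustration, here is how a few C++ read-modify-write operations map onto
the x86 lowering just described (a sketch; the exact instructions depend on
the compiler version):

.. code-block:: c++

  #include <atomic>

  std::atomic<int> counter;

  int add_one()   { return counter.fetch_add(1); }     // atomicrmw add  -> XADD
  int take_five() { return counter.exchange(5); }      // atomicrmw xchg -> XCHG
  int clear_low() { return counter.fetch_and(~0xF); }  // atomicrmw and  -> LOCK CMPXCHG
                                                        // loop, since the result is used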
On ARM (before v8), MIPS, and many other RISC architectures, Acquire, Release,
and SequentiallyConsistent semantics require barrier instructions for every
such operation. Loads and stores generate normal instructions. ``cmpxchg`` and
``atomicrmw`` can be represented using a loop with LL/SC-style instructions
which take some sort of exclusive lock on a cache line (``LDREX`` and
``STREX`` on ARM, etc.).

It is often easiest for backends to use AtomicExpandPass to lower some of the
atomic constructs. Here are some lowerings it can do:

* cmpxchg -> loop with load-linked/store-conditional
  by overriding ``shouldExpandAtomicCmpXchgInIR()``, ``emitLoadLinked()``, and
  ``emitStoreConditional()``
* large loads/stores -> ll-sc/cmpxchg
  by overriding ``shouldExpandAtomicStoreInIR()``/``shouldExpandAtomicLoadInIR()``
* strong atomic accesses -> monotonic accesses + fences by overriding
  ``shouldInsertFencesForAtomic()``, ``emitLeadingFence()``, and
  ``emitTrailingFence()``
* atomic rmw -> loop with cmpxchg or load-linked/store-conditional
  by overriding ``expandAtomicRMWInIR()``
* expansion to ``__atomic_*`` libcalls for unsupported sizes
* part-word atomicrmw/cmpxchg -> target-specific intrinsic by overriding
  ``shouldExpandAtomicRMWInIR``, ``emitMaskedAtomicRMWIntrinsic``,
  ``shouldExpandAtomicCmpXchgInIR``, and ``emitMaskedAtomicCmpXchgIntrinsic``.

For an example of these, look at the ARM backend (first five lowerings) or the
RISC-V backend (last lowering).

AtomicExpandPass supports two strategies for lowering atomicrmw/cmpxchg to
load-linked/store-conditional (LL/SC) loops. The first expands the LL/SC loop
in IR, calling target lowering hooks to emit intrinsics for the LL and SC
operations. However, many architectures have strict requirements for LL/SC
loops to ensure forward progress, such as restrictions on the number and type
of instructions in the loop. It isn't possible to enforce these restrictions
when the loop is expanded in LLVM IR, and so affected targets may prefer to
expand to LL/SC loops at a very late stage (i.e. after register allocation).
AtomicExpandPass can help support lowering of part-word atomicrmw or cmpxchg
using this strategy by producing IR for any shifting and masking that can be
performed outside of the LL/SC loop.

Libcalls: __atomic_*
====================

There are two kinds of atomic library calls that are generated by LLVM. Please
note that both sets of library functions somewhat confusingly share the names
of builtin functions defined by clang. Despite this, the library functions are
not directly related to the builtins: it is *not* the case that ``__atomic_*``
builtins lower to ``__atomic_*`` library calls and ``__sync_*`` builtins lower
to ``__sync_*`` library calls.

The first set of library functions are named ``__atomic_*``. This set has been
"standardized" by GCC, and is described below. (See also `GCC's documentation
<https://gcc.gnu.org/wiki/Atomic/GCCMM/LIbrary>`_.)

LLVM's AtomicExpandPass will translate atomic operations on data sizes above
``MaxAtomicSizeInBitsSupported`` into calls to these functions.
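For example, an atomic object wider than any size the target can support
inline ends up calling these functions; whether the frontend or
AtomicExpandPass performs that lowering depends on the target, but the
resulting libcall is the same. A C++ sketch:

.. code-block:: c++

  #include <atomic>

  struct Payload { char bytes[32]; };   // 256 bits: wider than any native atomic
  std::atomic<Payload> shared_payload;  // not lock-free on current targets

  Payload snapshot() {
    // Lowers to a call to the generic __atomic_load described below.
    return shared_payload.load();
  }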
There are four generic functions, which can be called with data of any size or
alignment::

  void __atomic_load(size_t size, void *ptr, void *ret, int ordering)
  void __atomic_store(size_t size, void *ptr, void *val, int ordering)
  void __atomic_exchange(size_t size, void *ptr, void *val, void *ret, int ordering)
  bool __atomic_compare_exchange(size_t size, void *ptr, void *expected, void *desired, int success_order, int failure_order)

There are also size-specialized versions of the above functions, which can
only be used with *naturally-aligned* pointers of the appropriate size. In the
signatures below, "N" is one of 1, 2, 4, 8, and 16, and "iN" is the
appropriate integer type of that size; if no such integer type exists, the
specialization cannot be used::

  iN __atomic_load_N(iN *ptr, int ordering)
  void __atomic_store_N(iN *ptr, iN val, int ordering)
  iN __atomic_exchange_N(iN *ptr, iN val, int ordering)
  bool __atomic_compare_exchange_N(iN *ptr, iN *expected, iN desired, int success_order, int failure_order)

Finally there are some read-modify-write functions, which are only available
in the size-specific variants (any other sizes use a
``__atomic_compare_exchange`` loop)::

  iN __atomic_fetch_add_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_sub_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_and_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_or_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_xor_N(iN *ptr, iN val, int ordering)
  iN __atomic_fetch_nand_N(iN *ptr, iN val, int ordering)

This set of library functions has some interesting implementation requirements
to take note of:

- They support all sizes and alignments -- including those which cannot be
  implemented natively on any existing hardware. Therefore, they will
  certainly use mutexes for some sizes/alignments.

- As a consequence, they cannot be shipped in a statically linked
  compiler-support library, as they have state which must be shared amongst
  all DSOs loaded in the program. They must be provided in a shared library
  used by all objects.

- The set of atomic sizes supported lock-free must be a superset of the sizes
  any compiler can emit. That is: if a new compiler introduces support for
  inline-lock-free atomics of size N, the ``__atomic_*`` functions must also
  have a lock-free implementation for size N. This is a requirement so that
  code produced by an old compiler (which will have called the ``__atomic_*``
  function) interoperates with code produced by the new compiler (which will
  use the native atomic instruction).

Note that it's possible to write an entirely target-independent implementation
of these library functions by using the compiler atomic builtins themselves to
implement the operations on naturally-aligned pointers of supported sizes, and
a generic mutex implementation otherwise.
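A hedged sketch of what such a target-independent implementation could look
like for the generic load entry point (the function is renamed here so it
cannot clash with the real runtime symbol, only the 4-byte case is shown, and
ordering dispatch is elided; a real implementation such as compiler-rt's
``atomic.c`` shards its locks by address instead of using one global mutex):

.. code-block:: c++

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <mutex>

  static std::mutex fallback_lock;

  extern "C" void example_atomic_load(std::size_t size, void *ptr, void *ret,
                                      int order) {
    (void)order;  // always using seq_cst here is correct, just conservative
    if (size == 4 && reinterpret_cast<std::uintptr_t>(ptr) % 4 == 0) {
      // Lock-free path: use the compiler builtin for a supported size.
      *static_cast<std::uint32_t *>(ret) =
          __atomic_load_n(static_cast<std::uint32_t *>(ptr), __ATOMIC_SEQ_CST);
      return;
    }
    // ... other lock-free sizes elided ...
    std::lock_guard<std::mutex> guard(fallback_lock);  // generic mutex fallback
    std::memcpy(ret, ptr, size);
  }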
Libcalls: __sync_*
==================

Some targets or OS/target combinations can support lock-free atomics, but for
various reasons, it is not practical to emit the instructions inline.

There are two typical examples of this.

Some CPUs support multiple instruction sets which can be switched back and
forth on function-call boundaries. For example, MIPS supports the MIPS16 ISA,
which has a smaller instruction encoding than the usual MIPS32 ISA. ARM,
similarly, has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the
atomic instructions are not encodable. However, those instructions are
available via a function call to a function with the longer encoding.

Additionally, a few OS/target pairs provide kernel-supported lock-free
atomics. ARM/Linux is an example of this: the kernel `provides
<https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt>`_ a
function which on older CPUs contains a "magically-restartable" atomic
sequence (which looks atomic so long as there's only one CPU), and contains
actual atomic instructions on newer multicore models. This sort of
functionality can typically be provided on any architecture, if all CPUs which
are missing atomic compare-and-swap support are uniprocessor (no SMP). This is
almost always the case. The only common architecture without that property is
SPARC -- SPARCV8 SMP systems were common, yet it doesn't support any sort of
compare-and-swap operation.

In either of these cases, the Target in LLVM can claim support for atomics of
an appropriate size, and then implement some subset of the operations via
libcalls to a ``__sync_*`` function. Such functions *must* not use locks in
their implementation, because unlike the ``__atomic_*`` routines used by
AtomicExpandPass, these may be mixed-and-matched with native instructions by
the target lowering.

Further, these routines do not need to be shared, as they are stateless. So,
there is no issue with having multiple copies included in one binary. Thus,
typically these routines are implemented by the statically-linked compiler
runtime support library.

LLVM will emit a call to an appropriate ``__sync_*`` routine if the target
ISelLowering code has set the corresponding ``ATOMIC_CMPXCHG``,
``ATOMIC_SWAP``, or ``ATOMIC_LOAD_*`` operation to "Expand", and if it has
opted-into the availability of those library functions via a call to
``initSyncLibcalls()``.

The full set of functions that may be called by LLVM is (for ``N`` being 1, 2,
4, 8, or 16)::

  iN __sync_val_compare_and_swap_N(iN *ptr, iN expected, iN desired)
  iN __sync_lock_test_and_set_N(iN *ptr, iN val)
  iN __sync_fetch_and_add_N(iN *ptr, iN val)
  iN __sync_fetch_and_sub_N(iN *ptr, iN val)
  iN __sync_fetch_and_and_N(iN *ptr, iN val)
  iN __sync_fetch_and_or_N(iN *ptr, iN val)
  iN __sync_fetch_and_xor_N(iN *ptr, iN val)
  iN __sync_fetch_and_nand_N(iN *ptr, iN val)
  iN __sync_fetch_and_max_N(iN *ptr, iN val)
  iN __sync_fetch_and_umax_N(iN *ptr, iN val)
  iN __sync_fetch_and_min_N(iN *ptr, iN val)
  iN __sync_fetch_and_umin_N(iN *ptr, iN val)

This list doesn't include any function for atomic load or store; all known
architectures support atomic loads and stores directly (possibly by emitting a
fence on either side of a normal load or store).

There's also, somewhat separately, the possibility to lower ``ATOMIC_FENCE``
to ``__sync_synchronize()``. This may or may not happen independent of all the
above, controlled purely by ``setOperationAction(ISD::ATOMIC_FENCE, ...)``.

On AArch64, a variant of the ``__sync_*`` routines is used which contains the
memory order as part of the function name. These routines may determine at
runtime whether the single-instruction atomic operations introduced as part of
the AArch64 Large System Extensions ("LSE") instruction set are available, or
whether they need to fall back to an LL/SC loop.
The following helper functions are implemented in both the ``compiler-rt`` and
``libgcc`` libraries (``N`` is one of 1, 2, 4, 8, and ``M`` is one of 1, 2, 4,
8 and 16, and ``ORDER`` is one of 'relax', 'acq', 'rel', 'acq_rel')::

  iM __aarch64_casM_ORDER(iM expected, iM desired, iM *ptr)
  iN __aarch64_swpN_ORDER(iN val, iN *ptr)
  iN __aarch64_ldaddN_ORDER(iN val, iN *ptr)
  iN __aarch64_ldclrN_ORDER(iN val, iN *ptr)
  iN __aarch64_ldeorN_ORDER(iN val, iN *ptr)
  iN __aarch64_ldsetN_ORDER(iN val, iN *ptr)

Please note that if the LSE instruction set is specified for the AArch64
target, then out-of-line atomic calls are not generated and single-instruction
atomic operations are used instead.
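For example (a sketch; the exact output depends on the compiler version and
flags), compiling the following for AArch64 with out-of-line atomics enabled
(e.g. with ``-moutline-atomics``) produces a call to
``__aarch64_ldadd4_acq_rel``, whereas targeting a CPU with LSE produces a
single ``LDADDAL`` instruction instead:

.. code-block:: c++

  #include <atomic>

  std::atomic<int> counter;

  int bump() {
    return counter.fetch_add(1, std::memory_order_acq_rel);
  }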