LLVM supports instructions which are well-defined in the presence of threads and
asynchronous signals.
* The C++11 ``<atomic>`` header. (`C++11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C11 draft available here
  <http://www.open-std.org/jtc1/sc22/wg14/>`_.)
* Proper semantics for Java-style memory, for both ``volatile`` and regular
  shared variables. (`Java Specification
  <http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html>`_)
* gcc-compatible ``__sync_*`` builtins. (`Description
  <https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html>`_)
* Other scenarios with atomic semantics, including ``static`` variables with
  non-trivial constructors in C++.
This transformation is not allowed for a
pair of volatile stores. On the other hand, a non-volatile non-atomic load can
be moved across a volatile load freely, but not an Unordered load.
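As a small illustrative sketch (hypothetical names, not taken from the
document), the distinction is visible in plain C: neither volatile store below
may be deleted or merged, while the ordinary load of ``plain`` may be scheduled
on either side of them:

.. code-block:: c

  /* Hypothetical example: `v` is volatile, `plain` is ordinary memory. */
  volatile int v;
  int plain;

  int g(void) {
    v = 1;        /* neither volatile store may be deleted or merged... */
    v = 1;        /* ...even though the pair looks redundant */
    return plain; /* this non-atomic load may move across both stores */
  }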
For example, consider the following code:

.. code-block:: c

  /* C code, for readability; run through clang -O2 -S -emit-llvm to get
     equivalent IR */
  int x;
  void f(int* a) {
    for (int i = 0; i < 100; i++) {
      if (a[i])
        x += 1;
    }
  }
The following is equivalent in non-concurrent situations:
.. code-block:: c

  int x;
  void f(int* a) {
    int xtemp = x;
    for (int i = 0; i < 100; i++) {
      if (a[i])
        xtemp += 1;
    }
    x = xtemp;
  }

However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
the same time.
``load atomic`` and ``store atomic`` provide the same basic functionality as
non-atomic loads and stores, but provide additional guarantees in situations
where threads and signals are involved.
Frontends generating atomic instructions generally need to be aware of the
target to some degree; atomic instructions are guaranteed to be lock-free, and
therefore an operation which is wider than the target natively supports can be
impossible to generate.
NotAtomic
---------
(If you are writing a backend for an
architecture which cannot satisfy these restrictions and cares about
concurrency, please send an email to llvm-dev.)
Unordered
---------
It also
guarantees the operation to be lock-free, so it does not depend on the data
being part of a special atomic structure or depend on a separate per-process
global lock.
Note that this guarantee is cheap on common platforms for loads of a native
width, but can be expensive or unavailable for wider operations, like a 64-bit
store on ARM. (A frontend for Java or other "safe" languages would normally
split a 64-bit store on ARM into two 32-bit unordered stores.)
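For illustration, here is a hedged sketch (hypothetical helper, not clang's
actual lowering) of what such a split looks like on a 32-bit target; the pair
is not atomic as a whole, but each half can be an Unordered store:

.. code-block:: c

  #include <stdint.h>

  /* Hypothetical sketch: splitting a 64-bit store into two 32-bit halves. */
  void store_i64_split(uint32_t *halves, uint64_t value) {
    halves[0] = (uint32_t)value;          /* low 32 bits  */
    halves[1] = (uint32_t)(value >> 32);  /* high 32 bits */
  }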
Note also that an Unordered load or store cannot be split into multiple
instructions (or an instruction which does multiple memory operations, like
``LDRD`` on ARM without LPAE, or not naturally-aligned ``LDRD`` on LPAE ARM).
Monotonic
---------
In addition,
it is legal to reorder non-atomic and Unordered loads around Monotonic loads.
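In C11 terms, Monotonic is ``memory_order_relaxed``; a minimal sketch of the
canonical use case, a hypothetical statistics counter that needs atomicity but
no ordering:

.. code-block:: c

  #include <stdatomic.h>

  /* Hypothetical event counter: increments are atomic (no lost updates) but
     impose no ordering on surrounding memory operations. */
  static atomic_long hits;

  void record_hit(void) {
    atomic_fetch_add_explicit(&hits, 1, memory_order_relaxed);
  }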
Acquire
-------
It is
also possible to move stores from before an Acquire load or read-modify-write
operation to after it, and move non-Acquire loads from before an Acquire
operation to after it.
Most architectures provide a barrier which is strong
enough for everything (``dmb`` on ARM, ``sync`` on PowerPC, etc.). Putting
such a fence after the equivalent Monotonic operation is sufficient to
maintain Acquire semantics for a memory operation.
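A minimal C11 sketch of the consuming side of the classic flag/payload
pattern (hypothetical names): the Acquire load is what makes the subsequent
plain read of ``payload`` safe.

.. code-block:: c

  #include <stdatomic.h>
  #include <stdbool.h>

  int payload;        /* plain, non-atomic data */
  atomic_bool ready;  /* written with Release by the producer */

  /* If the Acquire load observes true, it synchronizes-with the producer's
     Release store, so reading `payload` here is not a data race. */
  bool try_consume(int *out) {
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
      *out = payload;
      return true;
    }
    return false;
  }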
Release
-------
It is
also possible to move loads from after a Release store or read-modify-write
operation to before it, and move non-Release stores from after a Release
operation to before it.
A fence before the relevant operation is usually
sufficient for Release. Note that a store-store fence is not sufficient to
implement Release semantics; store-store fences are generally not exposed to
IR because they are extremely difficult to use correctly.
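The matching producer side of the sketch above (again with hypothetical
names): the Release store must stay after the plain store to ``payload``.

.. code-block:: c

  #include <stdatomic.h>
  #include <stdbool.h>

  int payload;        /* plain, non-atomic data */
  atomic_bool ready;

  /* The Release store keeps the plain store to `payload` before it; a
     consumer that Acquire-loads `ready` as true can safely read `payload`. */
  void publish(int value) {
    payload = value;
    atomic_store_explicit(&ready, true, memory_order_release);
  }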
AcquireRelease
--------------
SequentiallyConsistent
----------------------
Relevant standard: This corresponds to the C++11/C11 ``memory_order_seq_cst``,
Java volatile, and the gcc-compatible ``__sync_*`` builtins which do not
specify otherwise.
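A sketch of why the total order matters (hypothetical names): the
store-then-load pattern below, the core of Dekker-style mutual exclusion, is
only correct if both operations are SequentiallyConsistent; Acquire/Release is
not enough.

.. code-block:: c

  #include <stdatomic.h>

  atomic_int flag0, flag1;

  /* Thread 0's half; thread 1 runs the mirror image with the flags swapped.
     With seq_cst, at least one thread must observe the other's flag set. */
  int thread0_enter(void) {
    atomic_store_explicit(&flag0, 1, memory_order_seq_cst);
    return atomic_load_explicit(&flag1, memory_order_seq_cst);
  }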
release-acquire pair (see MemoryDependencyAnalysis for an example of this)
it doesn't replace an atomic load or store with a non-atomic operation.
* Folding a load: Any atomic load from a constant global can be constant-folded,
  because it cannot be observed. Similar reasoning allows SROA with atomic loads
  and stores.
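For instance (a hedged C11 sketch; C17-era compilers accept atomic loads from
const-qualified atomics): since no store to ``limit`` can ever be observed,
the atomic load below can be folded to the constant ``42``.

.. code-block:: c

  #include <stdatomic.h>

  /* A constant global: even a seq_cst atomic load from it can be
     constant-folded, because no conflicting store can be observed. */
  static const atomic_int limit = 42;

  int get_limit(void) {
    return atomic_load_explicit(&limit, memory_order_seq_cst);
  }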
On architectures which use barrier instructions for all atomic ordering (like
ARM), appropriate fences can be emitted by the AtomicExpand Codegen pass if
``shouldInsertFencesForAtomic()`` was used.
One very important property of the atomic operations is that if your target
supports any inline lock-free atomic operations of a given size, you should
support *ALL* operations of that size in a lock-free manner.
On ARM (before v8), MIPS, and many other RISC architectures, Acquire, Release,
and SequentiallyConsistent semantics require barrier instructions for every
such operation. Loads and stores generate normal instructions. ``cmpxchg`` and
``atomicrmw`` can be represented using a loop with LL/SC-style instructions
which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
on ARM, etc.).
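The shape of such a loop can be sketched in portable C11; this is a
hypothetical ``atomic_fetch_max`` helper, with the compare-exchange standing
in for the LL/SC pair:

.. code-block:: c

  #include <stdatomic.h>

  /* Hypothetical fetch-max, showing the retry loop that atomicrmw-style
     operations lower to: load, compute, attempt a conditional store, retry. */
  int atomic_fetch_max(atomic_int *p, int v) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    while (!atomic_compare_exchange_weak_explicit(
               p, &old, (old > v ? old : v),
               memory_order_seq_cst, memory_order_relaxed)) {
      /* On failure, `old` is refreshed with the current value; retry. */
    }
    return old;
  }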
It is often easiest for backends to use AtomicExpandPass to lower some of the
atomic constructs. Here are some lowerings it can do:

* cmpxchg -> loop with load-linked/store-conditional
  by overriding ``shouldExpandAtomicCmpXchgInIR()``, ``emitLoadLinked()``, and
  ``emitStoreConditional()``
* large loads/stores -> ll-sc/cmpxchg
  by overriding ``shouldExpandAtomicStoreInIR()`` and
  ``shouldExpandAtomicLoadInIR()``
* strong atomic accesses -> monotonic accesses + fences by overriding
  ``shouldInsertFencesForAtomic()``, ``emitLeadingFence()``, and
  ``emitTrailingFence()``
* atomic rmw -> loop with cmpxchg or load-linked/store-conditional
  by overriding ``shouldExpandAtomicRMWInIR()``
* expansion to ``__atomic_*`` libcalls for unsupported sizes
For an example of all of these, look at the ARM backend.
There are also size-specialized versions of the above functions, which can only
be used with *naturally-aligned* pointers of the appropriate size. In the
signatures below, "N" is one of 1, 2, 4, 8, and 16, and "iN" is the appropriate
integer type of that size; if no such integer type exists, the specialization
cannot be used:
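For reference, the prototypes take the following shape; this sketch shows
N = 4, with ``int32_t`` standing in for "i32" (the trailing ``int`` arguments
are memory-order values, per the GCC/clang atomic library interface):

.. code-block:: c

  #include <stdbool.h>
  #include <stdint.h>

  /* Size-specialized __atomic_* libcalls, N = 4 shown. */
  int32_t __atomic_load_4(int32_t *ptr, int memorder);
  void    __atomic_store_4(int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_exchange_4(int32_t *ptr, int32_t val, int memorder);
  bool    __atomic_compare_exchange_4(int32_t *ptr, int32_t *expected,
                                      int32_t desired,
                                      int success_order, int failure_order);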
Finally there are some read-modify-write functions, which are only available in
the size-specific variants (any other sizes use a ``__atomic_compare_exchange``
loop):
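Again as a sketch with N = 4; each function returns the value the memory held
before the operation:

.. code-block:: c

  #include <stdint.h>

  /* Size-specific read-modify-write libcalls, N = 4 shown. */
  int32_t __atomic_fetch_add_4(int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_fetch_sub_4(int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_fetch_and_4(int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_fetch_or_4 (int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_fetch_xor_4(int32_t *ptr, int32_t val, int memorder);
  int32_t __atomic_fetch_nand_4(int32_t *ptr, int32_t val, int memorder);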
- They support all sizes and alignments -- including those which cannot be
  implemented natively.
- As a consequence, they cannot be shipped in a statically linked
  compiler-support library, as they have state which must be shared amongst all
  DSOs loaded in the program.
- The set of atomic sizes supported lock-free must be a superset of the sizes
  any compiler can emit. That is: if a new compiler introduces support for
  inline-lock-free atomics of size N, the ``__atomic_*`` functions must also have a
  lock-free implementation for size N. This is a requirement so that code
  which calls the ``__atomic_*`` functions interoperates with code that inlines
  lock-free atomics of the same size.
Note that it's possible to write an entirely target-independent implementation
of these library functions by using the compiler atomic builtins themselves to
implement the operations on naturally-aligned pointers of supported sizes, and a
generic mutex implementation otherwise.
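A minimal sketch of the fallback half of such an implementation (hypothetical
names; a real library would hash the address into a table of locks rather than
use one global mutex):

.. code-block:: c

  #include <pthread.h>
  #include <stddef.h>
  #include <string.h>

  /* Hypothetical generic __atomic_load fallback for sizes with no lock-free
     support: serialize through a lock and copy the bytes out. */
  static pthread_mutex_t atomics_lock = PTHREAD_MUTEX_INITIALIZER;

  void generic_atomic_load(size_t size, void *ptr, void *ret) {
    pthread_mutex_lock(&atomics_lock);
    memcpy(ret, ptr, size);
    pthread_mutex_unlock(&atomics_lock);
  }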
Some targets or OS/target combinations can support lock-free atomics, but for
various reasons, it is not practical to emit the instructions inline.
Some CPUs support multiple instruction sets which can be switched back and forth
on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly,
has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic
instructions are not encodable; however, those instructions are available via a
call to a function built with the longer encoding.
Additionally, a few OS/target pairs provide kernel-supported lock-free
atomics. ARM/Linux is an example of this: the kernel `provides
<https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt>`_ a
function which on older CPUs contains a "magically-restartable" atomic sequence
(which looks atomic so long as there's only one CPU), and contains a real
atomic operation on newer multicore models.
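As a hedged sketch of what using that helper looks like (the fixed address and
signature come from the kernel_user_helpers.txt document linked above; this
only works on ARM/Linux):

.. code-block:: c

  /* ARM/Linux __kuser_cmpxchg lives at a fixed address in the vector page.
     It returns 0 if *ptr was atomically changed from oldval to newval. */
  typedef int (*kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define KUSER_CMPXCHG ((kuser_cmpxchg_t)0xffff0fc0)

  int arm_linux_cas(volatile int *ptr, int expected, int desired) {
    return KUSER_CMPXCHG(expected, desired, ptr) == 0;  /* 1 on success */
  }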
Tricks like this work only because processors without
compare-and-swap support are uniprocessor (no SMP). This is almost always the
case. The only common architecture without that property is SPARC -- SPARCV8 SMP
systems were common, yet it doesn't support any sort of compare-and-swap
operation.
When lowering happens via
AtomicExpandPass, these may be mixed-and-matched with native instructions by the
target lowering.
Typically these routines are implemented by the statically-linked compiler
runtime support library (``libgcc`` or ``compiler-rt``).
The target must set the relevant ``ATOMIC_CMP_SWAP``, ``ATOMIC_SWAP``,
or ``ATOMIC_LOAD_*`` operation to "Expand", and if it has opted into the
``__sync_*`` libcalls, legalization emits a call to the appropriate routine.