==============================================
LLVM Atomic Instructions and Concurrency Guide
==============================================

.. contents::
   :local:

Introduction
============

Historically, LLVM has not had very strong support for concurrency; some minimal
intrinsics were provided, and ``volatile`` was used in some cases to achieve
rough semantics in the presence of concurrency.  However, this is changing;
there are now new instructions which are well-defined in the presence of threads
and asynchronous signals, and the model for existing instructions has been
clarified in the IR.

The atomic instructions are designed specifically to provide readable IR and
optimized code generation for the following:

* The new C++0x ``<atomic>`` header.  (`C++0x draft available here
  <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C1x draft available here
  <http://www.open-std.org/jtc1/sc22/wg14/>`_.)

* Proper semantics for Java-style memory, for both ``volatile`` and regular
  shared variables. (`Java Specification
  <http://java.sun.com/docs/books/jls/third_edition/html/memory.html>`_)

* gcc-compatible ``__sync_*`` builtins. (`Description
  <http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html>`_)

* Other scenarios with atomic semantics, including ``static`` variables with
  non-trivial constructors in C++.

Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++ volatile,
which ensures that every volatile load and store happens and is performed in the
stated order.  A couple examples: if a SequentiallyConsistent store is
immediately followed by another SequentiallyConsistent store to the same
address, the first store can be erased. This transformation is not allowed for a
pair of volatile stores. On the other hand, a non-volatile non-atomic load can
be moved across a volatile load freely, but not an Acquire load.

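To make the contrast concrete, here is a rough C sketch; it assumes a compiler
with C1x ``<stdatomic.h>`` support (such as a recent clang), and the resulting
IR can be inspected with ``clang -O2 -S -emit-llvm``:

.. code-block:: c

  #include <stdatomic.h>

  atomic_int a;
  volatile int v;

  void g(void) {
    /* The first of these SequentiallyConsistent stores may be erased:
       there is a valid execution in which no other thread observes it. */
    atomic_store(&a, 1);
    atomic_store(&a, 2);

    /* Both volatile stores must be emitted, in this order, even though
       the first value is immediately overwritten. */
    v = 1;
    v = 2;
  }
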
This document provides a guide for anyone writing a frontend for LLVM or
working on optimization passes for LLVM, explaining how to deal with
instructions with special semantics in the presence of concurrency.  This is
not intended to be a precise guide to the semantics; the details can get
extremely complicated and unreadable, and are not usually necessary.

.. _Optimization outside atomic:

Optimization outside atomic
===========================

The basic ``'load'`` and ``'store'`` allow a variety of optimizations, but can
lead to undefined results in a concurrent environment; see `NotAtomic`_.  This
section focuses on the one optimizer restriction which applies in concurrent
environments; it gets an extended description because any optimization dealing
with stores needs to be aware of it.

From the optimizer's point of view, the rule is that if there are not any
instructions with atomic ordering involved, concurrency does not matter, with
one exception: if a variable might be visible to another thread or signal
handler, a store cannot be inserted along a path where it might not execute
otherwise.  Take the following example:

.. code-block:: c

  /* C code, for readability; run through clang -O2 -S -emit-llvm to get
     equivalent IR */
  int x;
  void f(int* a) {
    for (int i = 0; i < 100; i++) {
      if (a[i])
        x += 1;
    }
  }

The following is equivalent in non-concurrent situations:

.. code-block:: c

  int x;
  void f(int* a) {
    int xtemp = x;
    for (int i = 0; i < 100; i++) {
      if (a[i])
        xtemp += 1;
    }
    x = xtemp;
  }

However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
the same time. (This example is particularly of interest because before the
concurrency model was implemented, LLVM would perform this transformation.)

Note that speculative loads are allowed; a load which is part of a race returns
``undef``, but does not have undefined behavior.

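For example, the following transformation is legal even though it introduces a
load which may race (a hypothetical before/after sketch in C):

.. code-block:: c

  int x, y;

  /* Original code. */
  void g(int c) {
    if (c)
      y = x;
  }

  /* After speculation.  If another thread is writing x concurrently, the
     hoisted load returns undef -- but the value is only stored to y on
     paths where x would have been loaded anyway, so there is no undefined
     behavior. */
  void g_speculated(int c) {
    int tmp = x;
    if (c)
      y = tmp;
  }
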
Atomic instructions
===================

For cases where simple loads and stores are not sufficient, LLVM provides
various atomic instructions. The exact guarantees provided depend on the
ordering; see `Atomic orderings`_.

``load atomic`` and ``store atomic`` provide the same basic functionality as
non-atomic loads and stores, but provide additional guarantees in situations
where threads and signals are involved.

``cmpxchg`` and ``atomicrmw`` are essentially like an atomic load followed by an
atomic store (where the store is conditional for ``cmpxchg``), but no other
memory operation can happen on any thread between the load and store.  Note that
LLVM's ``cmpxchg`` does not provide quite as many options as the C++0x version.

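In terms of the C1x ``<stdatomic.h>`` operations (an assumed mapping; clang
lowers these to the corresponding IR instructions), a sketch:

.. code-block:: c

  #include <stdatomic.h>

  atomic_int counter;

  int bump(void) {
    /* Typically becomes an 'atomicrmw add' instruction. */
    return atomic_fetch_add(&counter, 1);
  }

  _Bool claim(int expected, int desired) {
    /* Typically becomes a 'cmpxchg' instruction; on failure, 'expected'
       is updated with the value actually observed. */
    return atomic_compare_exchange_strong(&counter, &expected, desired);
  }
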
A ``fence`` provides Acquire and/or Release ordering which is not part of
another operation; it is normally used along with Monotonic memory operations.
A Monotonic load followed by an Acquire fence is roughly equivalent to an
Acquire load.

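For example, these two functions are roughly equivalent (a sketch using the C1x
atomics; ``memory_order_relaxed`` corresponds to Monotonic):

.. code-block:: c

  #include <stdatomic.h>

  atomic_int flag;

  int load_acquire(void) {
    /* An Acquire load. */
    return atomic_load_explicit(&flag, memory_order_acquire);
  }

  int load_monotonic_then_fence(void) {
    /* A Monotonic (relaxed) load followed by an Acquire fence is roughly
       equivalent to the Acquire load above. */
    int v = atomic_load_explicit(&flag, memory_order_relaxed);
    atomic_thread_fence(memory_order_acquire);
    return v;
  }
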
Frontends generating atomic instructions generally need to be aware of the
target to some degree; atomic instructions are guaranteed to be lock-free, and
therefore an instruction which is wider than the target natively supports can be
impossible to generate.

.. _Atomic orderings:

Atomic orderings
================

In order to achieve a balance between performance and necessary guarantees,
there are six levels of atomicity. They are listed in order of strength; each
level includes all the guarantees of the previous level except for
Acquire/Release. (See also `LangRef Ordering <LangRef.html#ordering>`_.)

.. _NotAtomic:

NotAtomic
---------

NotAtomic is the obvious case: a load or store which is not atomic. (This isn't
really a level of atomicity, but is listed here for comparison.) This is
essentially a regular load or store. If there is a race on a given memory
location, loads from that location return ``undef``.

Relevant standard
  This is intended to match shared variables in C/C++, and to be used in any
  other context where memory access is necessary, and a race is impossible. (The
  precise definition is in `LangRef Memory Model <LangRef.html#memmodel>`_.)

Notes for frontends
  The rule is essentially that all memory accessed with basic loads and stores
  by multiple threads should be protected by a lock or other synchronization;
  otherwise, you are likely to run into undefined behavior. If your frontend is
  for a "safe" language like Java, use Unordered to load and store any shared
  variable.  Note that NotAtomic volatile loads and stores are not properly
  atomic; do not try to use them as a substitute. (Per the C/C++ standards,
  volatile does provide some limited guarantees around asynchronous signals, but
  atomics are generally a better solution.)

Notes for optimizers
  Introducing loads to shared variables along a codepath where they would not
  otherwise exist is allowed; introducing stores to shared variables is not. See
  `Optimization outside atomic`_.

Notes for code generation
  The one interesting restriction here is that it is not allowed to write to
  bytes outside of the bytes relevant to a store.  This is mostly relevant to
  unaligned stores: it is not allowed in general to convert an unaligned store
  into two aligned stores of the same width as the unaligned store. Backends are
  also expected to generate an i8 store as an i8 store, and not an instruction
  which writes to surrounding bytes.  (If you are writing a backend for an
  architecture which cannot satisfy these restrictions and cares about
  concurrency, please send an email to llvmdev.)

Unordered
---------

Unordered is the lowest level of atomicity. It essentially guarantees that races
produce somewhat sane results instead of having undefined behavior.  It also
guarantees that the operation is lock-free, so it does not depend on the data
being part of a special atomic structure or on a separate per-process global
lock.  Note that code generation will fail for unsupported atomic operations; if
you need such an operation, use explicit locking.

Relevant standard
  This is intended to match the Java memory model for shared variables.

Notes for frontends
  This cannot be used for synchronization, but is useful for Java and other
  "safe" languages which need to guarantee that the generated code never
  exhibits undefined behavior. Note that this guarantee is cheap on common
  platforms for loads of a native width, but can be expensive or unavailable for
  wider operations, like a 64-bit store on ARM. (A frontend for Java or other
  "safe" languages would normally split a 64-bit store on ARM into two 32-bit
  unordered stores; see the sketch at the end of this section.)

Notes for optimizers
  In terms of the optimizer, this prohibits any transformation that turns a
  single load into multiple loads, turns a store into multiple stores, narrows
  a store, or stores a value which would not be stored otherwise.  Some examples
  of unsafe optimizations are narrowing an assignment into a bitfield,
  rematerializing a load, and turning loads and stores into a memcpy
  call. Reordering unordered operations is safe, though, and optimizers should
  take advantage of that because unordered operations are common in languages
  that need them.

Notes for code generation
  These operations are required to be atomic in the sense that if you use
  unordered loads and unordered stores, a load cannot see a value which was
  never stored.  A normal load or store instruction is usually sufficient, but
  note that an unordered load or store cannot be split into multiple
  instructions (or an instruction which does multiple memory operations, like
  ``LDRD`` on ARM).

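Unordered has no direct C spelling, but the 64-bit split mentioned in the
frontend notes above has roughly this shape (a hypothetical illustration; a
real frontend would emit the two halves as 32-bit ``unordered`` stores in IR
rather than writing C):

.. code-block:: c

  #include <stdint.h>

  /* Hypothetical sketch: storing a Java long on 32-bit ARM as two 32-bit
     halves.  A racing reader may see a mix of old and new halves -- "sane"
     garbage, which the Java memory model permits for longs -- but never
     undefined behavior, because each half is stored atomically. */
  void store_long(uint32_t half[2], uint64_t v) {
    half[0] = (uint32_t)v;
    half[1] = (uint32_t)(v >> 32);
  }
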
Monotonic
---------

Monotonic is the weakest level of atomicity that can be used in synchronization
primitives, although it does not provide any general synchronization. It
essentially guarantees that if you take all the operations affecting a specific
address, a consistent ordering exists.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_relaxed``; see those
  standards for the exact definition.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.  The
  guarantees in terms of synchronization are very weak, so make sure these are
  only used in a pattern which you know is correct.  Generally, these would
  either be used for atomic operations which do not protect other memory (like
  the atomic counter sketched at the end of this section), or along with a
  ``fence``.

Notes for optimizers
  In terms of the optimizer, this can be treated as a read+write on the relevant
  memory location (and alias analysis will take advantage of that). In addition,
  it is legal to reorder non-atomic and Unordered loads around Monotonic
  loads. CSE/DSE and a few other optimizations are allowed, but Monotonic
  operations are unlikely to be used in ways which would make those
  optimizations useful.

Notes for code generation
  Code generation is essentially the same as that for unordered for loads and
  stores.  No fences are required.  ``cmpxchg`` and ``atomicrmw`` are required
  to appear as a single operation.

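The atomic counter mentioned in the notes for frontends is the typical use; a
sketch with the C1x atomics (``memory_order_relaxed`` corresponds to
Monotonic):

.. code-block:: c

  #include <stdatomic.h>

  /* An event counter that protects no other memory: Monotonic is enough,
     since all that is needed is a consistent ordering of the operations
     on the counter itself. */
  atomic_long events;

  void count_event(void) {
    atomic_fetch_add_explicit(&events, 1, memory_order_relaxed);
  }

  long read_count(void) {
    return atomic_load_explicit(&events, memory_order_relaxed);
  }
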
Acquire
-------

Acquire provides a barrier of the sort necessary to acquire a lock to access
other memory with normal loads and stores.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_acquire``. It should also be
  used for C++0x/C1x ``memory_order_consume``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  It is
  also possible to move stores from before an Acquire load or read-modify-write
  operation to after it, and move non-Acquire loads from before an Acquire
  operation to after it.

Notes for code generation
  Architectures with weak memory ordering (essentially everything relevant today
  except x86 and SPARC) require some sort of fence to maintain the Acquire
  semantics.  The precise fences required vary widely by architecture, but for
  a simple implementation, most architectures provide a barrier which is strong
  enough for everything (``dmb`` on ARM, ``sync`` on PowerPC, etc.).  Putting
  such a fence after the equivalent Monotonic operation is sufficient to
  maintain Acquire semantics for a memory operation.

Release
-------

Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_release``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Release only provides a semantic guarantee when paired with an Acquire
  operation.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  It is
  also possible to move loads from after a Release store or read-modify-write
  operation to before it, and move non-Release stores from after a Release
  operation to before it.

Notes for code generation
  See the section on Acquire; a fence before the relevant operation is usually
  sufficient for Release. Note that a store-store fence is not sufficient to
  implement Release semantics; store-store fences are generally not exposed to
  IR because they are extremely difficult to use correctly.

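The canonical Acquire/Release pairing is publishing data through a flag; a
sketch with the C1x atomics (``memory_order_release`` and
``memory_order_acquire`` correspond to Release and Acquire):

.. code-block:: c

  #include <stdatomic.h>

  int data;          /* ordinary memory, protected by the flag below */
  atomic_int ready;

  void produce(void) {
    data = 42;
    /* Release store: the write to 'data' cannot be moved below it. */
    atomic_store_explicit(&ready, 1, memory_order_release);
  }

  int consume(void) {
    /* Acquire load: once it observes the Release store, the write to
       'data' is visible too. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
      ;
    return data;
  }
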
AcquireRelease
--------------

AcquireRelease (``acq_rel`` in IR) provides both an Acquire and a Release
barrier (for fences and operations which both read and write memory).

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_acq_rel``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation, and vice versa.

Notes for optimizers
  In general, optimizers should treat this like a nothrow call; the possible
  optimizations are usually not interesting.

Notes for code generation
  This operation has Acquire and Release semantics; see the sections on Acquire
  and Release.

SequentiallyConsistent
----------------------

SequentiallyConsistent (``seq_cst`` in IR) provides Acquire semantics for loads
and Release semantics for stores. Additionally, it guarantees that a total
ordering exists between all SequentiallyConsistent operations.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_seq_cst``, Java volatile, and
  the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.

Notes for frontends
  If a frontend is exposing atomic operations, these are much easier to reason
  about for the programmer than other kinds of operations, and using them is
  generally a practical performance tradeoff.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  For
  SequentiallyConsistent loads and stores, the same reorderings are allowed as
  for Acquire loads and Release stores, except that SequentiallyConsistent
  operations may not be reordered.

Notes for code generation
  SequentiallyConsistent loads minimally require the same barriers as Acquire
  operations and SequentiallyConsistent stores require Release
  barriers. Additionally, the code generator must enforce ordering between
  SequentiallyConsistent stores followed by SequentiallyConsistent loads. This
  is usually done by emitting either a full fence before the loads or a full
  fence after the stores; which is preferred varies by architecture.

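The total order matters for store-load ordering, which Acquire/Release alone
does not provide, as in this Dekker-style sketch (C1x atomics; with anything
weaker than ``memory_order_seq_cst``, both functions could return 0):

.. code-block:: c

  #include <stdatomic.h>

  atomic_int x, y;

  int thread1(void) {
    atomic_store_explicit(&x, 1, memory_order_seq_cst);
    return atomic_load_explicit(&y, memory_order_seq_cst);
  }

  int thread2(void) {
    atomic_store_explicit(&y, 1, memory_order_seq_cst);
    return atomic_load_explicit(&x, memory_order_seq_cst);
  }

  /* Because all four operations participate in a single total order, at
     most one of the two loads can return 0 when the functions run on
     concurrent threads. */
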
Atomics and IR optimization
===========================

Predicates for optimizer writers to query:

* ``isSimple()``: A load or store which is not volatile or atomic.  This is
  what, for example, memcpyopt would check for operations it might transform.

* ``isUnordered()``: A load or store which is not volatile and at most
  Unordered. This would be checked, for example, by LICM before hoisting an
  operation.

* ``mayReadFromMemory()``/``mayWriteToMemory()``: Existing predicates, but note
  that they return true for any operation which is volatile or at least
  Monotonic.

* Alias analysis: Note that AA will return ModRef for anything Acquire or
  Release, and for the address accessed by any Monotonic operation.

To support optimizing around atomic operations, make sure you are using the
right predicates; everything should work if that is done.  If your pass should
optimize some atomic operations (Unordered operations in particular), make sure
it doesn't replace an atomic load or store with a non-atomic operation.

Some examples of how optimizations interact with various kinds of atomic
operations:

* ``memcpyopt``: An atomic operation cannot be optimized into part of a
  memcpy/memset, including unordered loads/stores.  It can pull operations
  across some atomic operations.

* LICM: Unordered loads/stores can be moved out of a loop.  It just treats
  monotonic operations like a read+write to a memory location, and anything
  stricter than that like a nothrow call.

* DSE: Unordered stores can be DSE'ed like normal stores.  Monotonic stores can
  be DSE'ed in some cases, but it's tricky to reason about, and not especially
  important.

* Folding a load: Any atomic load from a constant global can be constant-folded,
  because it cannot be observed.  Similar reasoning allows scalarrepl with
  atomic loads and stores.

Atomics and Codegen
===================

Atomic operations are represented in the SelectionDAG with ``ATOMIC_*`` opcodes.
On architectures which use barrier instructions for all atomic ordering (like
ARM), appropriate fences are split out as the DAG is built.

The MachineMemOperand for all atomic operations is currently marked as volatile;
this is not correct in the IR sense of volatile, but CodeGen handles anything
marked volatile very conservatively.  This should get fixed at some point.

Common architectures have some way of representing at least a pointer-sized
lock-free ``cmpxchg``; such an operation can be used to implement all the other
atomic operations which can be represented in IR up to that size.  Backends are
expected to implement all those operations, but not operations which cannot be
implemented in a lock-free manner.  It is expected that backends will give an
error when given an operation which cannot be implemented.  (The LLVM code
generator is not very helpful here at the moment, but hopefully that will
change.)

The implementation of atomics on LL/SC architectures (like ARM) is currently a
bit of a mess; there is a lot of copy-pasted code across targets, and the
representation is relatively unsuited to optimization (it would be nice to be
able to optimize loops involving cmpxchg etc.).

On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
fences generate an ``MFENCE``, other fences do not cause any code to be
generated.  ``cmpxchg`` uses the ``LOCK CMPXCHG`` instruction.
``atomicrmw xchg`` uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use
``XADD``, and all other ``atomicrmw`` operations generate a loop with
``LOCK CMPXCHG``.  Depending on the users of the result, some ``atomicrmw``
operations can be translated into operations like ``LOCK AND``, but that does
not work in general.

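The loop used for the remaining ``atomicrmw`` operations has roughly this shape
(a sketch in C1x atomics of the pattern the backend emits around
``LOCK CMPXCHG``, using ``atomicrmw nand`` as the example):

.. code-block:: c

  #include <stdatomic.h>

  /* Sketch of expanding 'atomicrmw nand' into a compare-exchange loop. */
  int fetch_nand(atomic_int *p, int arg) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    int desired;
    do {
      desired = ~(old & arg);
      /* On failure, 'old' is reloaded with the value currently stored. */
    } while (!atomic_compare_exchange_weak(p, &old, desired));
    return old;
  }
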
On ARM, MIPS, and many other RISC architectures, Acquire, Release, and
SequentiallyConsistent semantics require barrier instructions for every such
operation. Loads and stores generate normal instructions.  ``cmpxchg`` and
``atomicrmw`` can be represented using a loop with LL/SC-style instructions
which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
on ARM, etc.). At the moment, the IR does not provide any way to represent a
weak ``cmpxchg`` which would not require a loop.