Lines Matching refs:stores
177 Furthermore, the stores committed by a CPU to the memory system may not be
178 perceived by the loads made by another CPU in the same order as the stores were
247 (*) Overlapping loads and stores within a particular CPU will appear to be
264 (Loads and stores overlap if they are targeted at overlapping pieces of
275 (*) It _must_not_ be assumed that independent loads and stores will be issued
387 A write barrier is a partial ordering on stores only; it is not required
391 memory system as time progresses. All stores _before_ a write barrier
392 will occur _before_ all the stores after the write barrier.
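The write-barrier guarantee matched above (all stores before the barrier committed before all stores after it) is only useful when paired with a read barrier on the observing CPU. A minimal userspace sketch of that pairing, using C11 `atomic_thread_fence()` as a stand-in for the kernel's `smp_wmb()`/`smp_rmb()` (the `data`/`flag` names are illustrative, not from the source):

```c
#include <stdatomic.h>

static int data;
static atomic_int flag;

/* Writer side: the release fence (~ smp_wmb()) orders the store to
 * 'data' before the store to 'flag' as seen by other CPUs. */
static void writer(void)
{
	data = 42;					/* store A */
	atomic_thread_fence(memory_order_release);	/* ~ smp_wmb() */
	atomic_store_explicit(&flag, 1, memory_order_relaxed); /* store B */
}

/* Reader side: the acquire fence (~ smp_rmb()) orders the load of
 * 'flag' before the load of 'data', so seeing flag == 1 guarantees
 * seeing data == 42. */
static int reader(void)
{
	if (atomic_load_explicit(&flag, memory_order_relaxed)) {
		atomic_thread_fence(memory_order_acquire); /* ~ smp_rmb() */
		return data;
	}
	return -1;	/* flag not yet observed */
}
```

Without the reader-side fence, the writer's barrier alone guarantees nothing to the reader; barriers must pair.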
408 only; it is not required to have any effect on stores, independent loads
412 committing sequences of stores to the memory system that the CPU being
415 load touches one of a sequence of stores from another CPU, then by the
416 time the barrier completes, the effects of all the stores prior to that
442 have any effect on stores.
458 A general memory barrier is a partial ordering over both loads and stores.
691 However, stores are not speculated. This means that ordering -is- provided
703 the compiler might combine the store to 'b' with other stores to 'b'.
716 It is tempting to try to enforce ordering on identical stores on both
761 ordering is guaranteed only when the stores differ, for example:
813 Please note once again that the stores to 'b' differ. If they were
865 In short, control dependencies apply only to the stores in the then-clause
877 (*) Control dependencies can order prior loads against later stores.
879 Not prior loads against later loads, nor prior stores against
881 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
884 (*) If both legs of the "if" statement begin with identical stores to
885 the same variable, then those stores must be ordered, either by
887 to carry out the stores. Please note that it is -not- sufficient
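The control-dependency rules matched above (a prior load ordered against later stores, provided the stores in the two legs differ) can be sketched as follows. This is a userspace illustration: the `READ_ONCE()`/`WRITE_ONCE()` macros are minimal volatile-access stand-ins for the kernel's versions, and `a`/`b` are illustrative names:

```c
/* Minimal userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE();
 * the volatile accesses stop the compiler from hoisting or fusing them. */
#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int a, b;

/* The stores to 'b' are control-dependent on the load from 'a': the CPU
 * cannot commit either store until it knows which branch was taken.
 * Because the two stored values differ (1 vs 2), the compiler cannot
 * hoist a common store above the branch and destroy the ordering. */
static void consumer(void)
{
	int q = READ_ONCE(a);

	if (q)
		WRITE_ONCE(b, 1);
	else
		WRITE_ONCE(b, 2);
}
```

If both legs instead began with the identical store `WRITE_ONCE(b, 1)`, the ordering against the load would no longer be guaranteed without an explicit barrier, per the rule matched at file line 884-887.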
970 [!] Note that the stores before the write barrier would normally be expected to
1012 | | +------+ } requires all stores prior to the
1014 | | : +------+ } further stores may take place
1019 | Sequence in which stores are committed to the
1357 CPUs agree on the order in which all stores become visible. However,
1373 Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1469 at least aside from stores. Therefore, the following outcome is possible:
1481 and smp_store_release() are not required to order prior stores against
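The `smp_store_release()`/`smp_load_acquire()` semantics matched above can be sketched in userspace with C11 release/acquire atomics (the `payload`/`ready` names are illustrative, not from the source):

```c
#include <stdatomic.h>

static int payload;
static atomic_int ready;

/* ~ smp_store_release(&ready, 1): every access before the release store
 * is visible to any thread whose acquire load observes it. */
static void publish(int v)
{
	payload = v;
	atomic_store_explicit(&ready, 1, memory_order_release);
}

/* ~ smp_load_acquire(&ready): accesses after the acquire load cannot be
 * reordered before it.  Note the caveat from the text: acquire/release
 * are NOT full barriers; a store before publish() is not ordered against
 * a load after consume() in the same thread. */
static int consume(void)
{
	if (atomic_load_explicit(&ready, memory_order_acquire))
		return payload;
	return -1;	/* not yet published */
}
```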
1541 (*) The compiler is within its rights to reorder loads and stores
1729 (*) The compiler is within its rights to invent stores to a variable,
1797 loads followed by a pair of 32-bit stores. This would result in
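The store-tearing hazard matched above (a plain store split by the compiler into a pair of narrower stores) is what `WRITE_ONCE()` guards against for naturally aligned, machine-word-sized variables. A userspace sketch, with `READ_ONCE()`/`WRITE_ONCE()` as minimal volatile-access stand-ins for the kernel macros:

```c
/* Minimal userspace stand-ins for the kernel macros. */
#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static unsigned long shared;

/* A plain 'shared = v' may legally be split ("torn") by the compiler
 * into several narrower stores, letting a concurrent reader observe a
 * half-written value.  For a naturally aligned machine-word-sized
 * variable, the volatile full-width access forbids that transformation. */
static void set_shared(unsigned long v)
{
	WRITE_ONCE(shared, v);
}
```

The same reasoning applies on the read side: a plain load may be torn or refetched, so concurrent readers should use `READ_ONCE(shared)`.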
1940 This is for use with persistent memory to ensure that stores for which
1945 to ensure that stores have reached a platform durability domain. This ensures
1946 that stores have updated persistent storage before any data access or
2237 order multiple stores before the wake-up with respect to loads of those stored
2718 their own loads and stores as if they had happened in program order.
2786 execution progress, whereas stores can often be deferred without a
2798 (*) loads and stores may be combined to improve performance when talking to