| /kernel/linux/linux-5.10/Documentation/ABI/testing/ |
| D | sysfs-bus-mei |
|    6  Description: Stores the same MODALIAS value emitted by uevent
|    13 Description: Stores mei client device name
|    20 Description: Stores mei client device uuid
|    27 Description: Stores mei client protocol version
|    34 Description: Stores mei client maximum number of connections
|    41 Description: Stores mei client fixed address, if any
|    48 Description: Stores mei client vtag support status
|    55 Description: Stores mei client maximum message length
|
| /kernel/linux/linux-6.6/Documentation/ABI/testing/ |
| D | sysfs-bus-mei |
|    6  Description: Stores the same MODALIAS value emitted by uevent
|    13 Description: Stores mei client device name
|    20 Description: Stores mei client device uuid
|    27 Description: Stores mei client protocol version
|    34 Description: Stores mei client maximum number of connections
|    41 Description: Stores mei client fixed address, if any
|    48 Description: Stores mei client vtag support status
|    55 Description: Stores mei client maximum message length
|
| /kernel/linux/linux-6.6/Documentation/core-api/ |
| D | refcount-vs-atomic.rst |
|    42 stores (all po-earlier instructions) on the same CPU are completed
|    44 It also guarantees that all po-earlier stores on the same CPU
|    45 and all propagated stores from other CPUs must propagate to all
|    50 stores (all po-earlier instructions) on the same CPU are completed
|    52 stores on the same CPU and all propagated stores from other CPUs
|    58 stores (all po-later instructions) on the same CPU are
|    60 po-later stores on the same CPU must propagate to all other CPUs
|    67 then further stores are ordered against this operation.
|    68 Control dependency on stores are not implemented using any explicit
|    69 barriers, but rely on CPU not to speculate on stores. This is only
|    [all …]
|
| /kernel/linux/linux-5.10/Documentation/core-api/ |
| D | refcount-vs-atomic.rst |
|    42 stores (all po-earlier instructions) on the same CPU are completed
|    44 It also guarantees that all po-earlier stores on the same CPU
|    45 and all propagated stores from other CPUs must propagate to all
|    50 stores (all po-earlier instructions) on the same CPU are completed
|    52 stores on the same CPU and all propagated stores from other CPUs
|    58 stores (all po-later instructions) on the same CPU are
|    60 po-later stores on the same CPU must propagate to all other CPUs
|    67 then further stores are ordered against this operation.
|    68 Control dependency on stores are not implemented using any explicit
|    69 barriers, but rely on CPU not to speculate on stores. This is only
|    [all …]
|
| /kernel/linux/linux-6.6/tools/perf/pmu-events/arch/x86/knightslanding/ |
| D | uncore-memory.json |
|    3  …"BriefDescription": "Counts the number of read requests and streaming stores that hit in MCDRAM ca…
|    11 …"BriefDescription": "Counts the number of read requests and streaming stores that hit in MCDRAM ca…
|    19 …"BriefDescription": "Counts the number of read requests and streaming stores that miss in MCDRAM c…
|    27 …"BriefDescription": "Counts the number of read requests and streaming stores that miss in MCDRAM c…
|    49 …id memory mode, this event counts all read requests as well as streaming stores that hit or miss i…
|    63 …hybrid. In cache and hybrid memory mode, this event counts all streaming stores, writebacks and, r…
|
| /kernel/linux/linux-6.6/tools/memory-model/Documentation/ |
| D | control-dependencies.txt |
|    11  One such challenge is that control dependencies order only later stores.
|    31  However, stores are not speculated. This means that ordering is
|    43  the compiler might fuse the store to "b" with other stores. Worse yet,
|    60  identical stores on both branches of the "if" statement as follows:
|    104 guaranteed only when the stores differ, for example:
|    212 only to the stores in the then-clause and else-clause of the "if" statement
|    219 (*) Control dependencies can order prior loads against later stores.
|    221 Not prior loads against later loads, nor prior stores against
|    224 stores and later loads, smp_mb().
|    226 (*) If both legs of the "if" statement contain identical stores to
|    [all …]
|
| /kernel/linux/linux-6.6/tools/perf/util/ |
| D | mem-events.h |
|    61 u32 store; /* count of all stores in trace */
|    62 u32 st_uncache; /* stores to uncacheable address */
|    64 u32 st_l1hit; /* count of stores that hit L1D */
|    65 u32 st_l1miss; /* count of stores that miss L1D */
|    66 u32 st_na; /* count of stores with memory level is not available */
|    89 u32 nomap; /* count of load/stores with no phys addrs */
|
| /kernel/linux/linux-6.6/arch/powerpc/include/asm/ |
| D | barrier.h |
|    19  * providing an ordering (separately) for (a) cacheable stores and (b)
|    20  * loads and stores to non-cacheable memory (e.g. I/O devices).
|    22  * mb() prevents loads and stores being reordered across this point.
|    24  * wmb() prevents stores being reordered across this point.
|    32  * doesn't order loads with respect to previous stores. Lwsync can be
|    109 * pmem_wmb() ensures that all stores for which the modification
|
| /kernel/linux/linux-5.10/arch/ia64/include/asm/ |
| D | barrier.h |
|    23 * wmb(): Guarantees that all preceding stores to memory-
|    25 * stores and that all following stores will be
|    26 * visible only after all previous stores.
|    52 * IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
|
| /kernel/linux/linux-6.6/arch/ia64/include/asm/ |
| D | barrier.h |
|    23 * wmb(): Guarantees that all preceding stores to memory-
|    25 * stores and that all following stores will be
|    26 * visible only after all previous stores.
|    52 * IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
|
| /kernel/linux/linux-6.6/arch/mips/include/asm/octeon/ |
| D | octeon.h |
|    211 * stores; if clear, SYNCWS and SYNCS only order
|    212 * unmarked stores. SYNCWSMARKED has no effect when
|    222 * loads/stores can use XKPHYS addresses with
|    225 /* R/W If set (and UX set), user-level loads/stores
|    229 * loads/stores can use XKPHYS addresses with
|    232 /* R/W If set (and UX set), user-level loads/stores
|    235 /* R/W If set, all stores act as SYNCW (NOMERGE must
|    238 /* R/W If set, no stores merge, and all stores reach
|    265 /* R/W If set, CVMSEG is available for loads/stores in
|    268 /* R/W If set, CVMSEG is available for loads/stores in
|    [all …]
|
| /kernel/linux/linux-5.10/arch/mips/include/asm/octeon/ |
| D | octeon.h |
|    212 * stores; if clear, SYNCWS and SYNCS only order
|    213 * unmarked stores. SYNCWSMARKED has no effect when
|    223 * loads/stores can use XKPHYS addresses with
|    226 /* R/W If set (and UX set), user-level loads/stores
|    230 * loads/stores can use XKPHYS addresses with
|    233 /* R/W If set (and UX set), user-level loads/stores
|    236 /* R/W If set, all stores act as SYNCW (NOMERGE must
|    239 /* R/W If set, no stores merge, and all stores reach
|    266 /* R/W If set, CVMSEG is available for loads/stores in
|    269 /* R/W If set, CVMSEG is available for loads/stores in
|    [all …]
|
| /kernel/linux/linux-5.10/tools/perf/util/ |
| D | mem-events.h |
|    56 u32 store; /* count of all stores in trace */
|    57 u32 st_uncache; /* stores to uncacheable address */
|    59 u32 st_l1hit; /* count of stores that hit L1D */
|    60 u32 st_l1miss; /* count of stores that miss L1D */
|    78 u32 nomap; /* count of load/stores with no phys adrs */
|
| /kernel/linux/linux-5.10/tools/arch/powerpc/include/asm/ |
| D | barrier.h |
|    15 * providing an ordering (separately) for (a) cacheable stores and (b)
|    16 * loads and stores to non-cacheable memory (e.g. I/O devices).
|    18 * mb() prevents loads and stores being reordered across this point.
|    20 * wmb() prevents stores being reordered across this point.
|
| /kernel/linux/linux-6.6/tools/arch/powerpc/include/asm/ |
| D | barrier.h |
|    15 * providing an ordering (separately) for (a) cacheable stores and (b)
|    16 * loads and stores to non-cacheable memory (e.g. I/O devices).
|    18 * mb() prevents loads and stores being reordered across this point.
|    20 * wmb() prevents stores being reordered across this point.
|
| /kernel/linux/linux-5.10/arch/powerpc/include/asm/ |
| D | barrier.h |
|    19  * providing an ordering (separately) for (a) cacheable stores and (b)
|    20  * loads and stores to non-cacheable memory (e.g. I/O devices).
|    22  * mb() prevents loads and stores being reordered across this point.
|    24  * wmb() prevents stores being reordered across this point.
|    32  * doesn't order loads with respect to previous stores. Lwsync can be
|    123 * pmem_wmb() ensures that all stores for which the modification
|
| /kernel/linux/linux-5.10/tools/memory-model/Documentation/ |
| D | explanation.txt |
|    102 device, stores it in a buffer, and sets a flag to indicate the buffer
|    134 Thus, P0 stores the data in buf and then sets flag. Meanwhile, P1
|    140 This pattern of memory accesses, where one CPU stores values to two
|    197 it, as loads can obtain values only from earlier stores.
|    202 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
|    206 P0 stores 1 to buf before storing 1 to flag, since it executes
|    222 each CPU stores to its own shared location and then loads from the
|    270 W: P0 stores 1 to flag executes before
|    273 Z: P0 stores 1 to buf executes before
|    274 W: P0 stores 1 to flag.
|    [all …]
|
| /kernel/linux/linux-5.10/arch/sparc/kernel/ |
| D | dtlb_prot.S |
|    12 * [TL == 0] 1) User stores to readonly pages.
|    13 * [TL == 0] 2) Nucleus stores to user readonly pages.
|    14 * [TL > 0] 3) Nucleus stores to user readonly stack frame.
|    20 membar #Sync ! Synchronize stores
|
| /kernel/linux/linux-6.6/arch/sparc/kernel/ |
| D | dtlb_prot.S |
|    12 * [TL == 0] 1) User stores to readonly pages.
|    13 * [TL == 0] 2) Nucleus stores to user readonly pages.
|    14 * [TL > 0] 3) Nucleus stores to user readonly stack frame.
|    20 membar #Sync ! Synchronize stores
|
| /kernel/linux/linux-5.10/tools/arch/ia64/include/asm/ |
| D | barrier.h |
|    25 * wmb(): Guarantees that all preceding stores to memory-
|    27 * stores and that all following stores will be
|    28 * visible only after all previous stores.
|
| /kernel/linux/linux-6.6/tools/arch/ia64/include/asm/ |
| D | barrier.h |
|    25 * wmb(): Guarantees that all preceding stores to memory-
|    27 * stores and that all following stores will be
|    28 * visible only after all previous stores.
|
| /kernel/linux/linux-6.6/tools/testing/selftests/kvm/x86_64/ |
| D | pmu_event_filter_test.c |
|    112 uint64_t stores; member
|    505 const uint64_t stores = rdmsr(msr_base + 1); in masked_events_guest_test() local
|    516 pmc_results.stores = rdmsr(msr_base + 1) - stores; in masked_events_guest_test()
|    588 * For each test, the guest enables 3 PMU counters (loads, stores,
|    589 * loads + stores). The filter is then set in KVM with the masked events
|    610 .msg = "Only allow stores.",
|    619 .msg = "Only allow loads + stores.",
|    629 .msg = "Only allow loads and stores.",
|    640 .msg = "Only allow loads and loads + stores.",
|    650 .msg = "Only allow stores and loads + stores.",
|    [all …]
|
| /kernel/linux/linux-5.10/arch/sparc/lib/ |
| D | M7memset.S |
|    32  * For small 6 or fewer bytes stores, bytes will be stored.
|    34  * For less than 32 bytes stores, align the address on 4 byte boundary.
|    41  * Using BIS stores, set the first long word of each
|    46  * Using BIS stores, set the first long word of each of
|    66  * similar to prefetching for normal stores.
|    71  * BIS stores must be followed by a membar #StoreStore. The benefit of
|    79  * store and the final stores.
|    167 ! Use long word stores.
|    179 and %o2, 63, %o3 ! %o3 = bytes left after blk stores.
|    187 ! initial cache-clearing stores
|
| /kernel/linux/linux-6.6/arch/sparc/lib/ |
| D | M7memset.S |
|    32  * For small 6 or fewer bytes stores, bytes will be stored.
|    34  * For less than 32 bytes stores, align the address on 4 byte boundary.
|    41  * Using BIS stores, set the first long word of each
|    46  * Using BIS stores, set the first long word of each of
|    66  * similar to prefetching for normal stores.
|    71  * BIS stores must be followed by a membar #StoreStore. The benefit of
|    79  * store and the final stores.
|    167 ! Use long word stores.
|    179 and %o2, 63, %o3 ! %o3 = bytes left after blk stores.
|    187 ! initial cache-clearing stores
|
| /kernel/linux/linux-6.6/tools/perf/pmu-events/arch/powerpc/power10/ |
| D | translation.json |
|    10 …efDescription": "Stores completed from S2Q (2nd-level store queue). This event includes regular st…
|