| /kernel/linux/linux-5.10/tools/perf/Documentation/ |
| D | perf-c2c.txt | 32 for cachelines with highest contention - highest number of HITM accesses. 178 - cacheline percentage of all Remote/Local HITM accesses 184 - sum of all cachelines accesses 187 - sum of all load accesses 190 - sum of all store accesses 193 L1Hit - store accesses that hit L1 194 L1Miss - store accesses that missed L1 200 - count of LLC load accesses, includes LLC hits and LLC HITMs 203 - count of remote load accesses, includes remote hits and remote HITMs 206 - count of local and remote DRAM accesses [all …]
|
| /kernel/linux/linux-4.19/tools/perf/Documentation/ |
| D | perf-c2c.txt | 29 for cachelines with highest contention - highest number of HITM accesses. 159 - sum of all cachelines accesses 162 - cacheline percentage of all Remote/Local HITM accesses 168 Total - all store accesses 169 L1Hit - store accesses that hit L1 170 L1Miss - store accesses that missed L1 173 - count of local and remote DRAM accesses 176 - count of all accesses that missed LLC 179 - sum of all load accesses 190 - % of Remote/Local HITM accesses for given offset within cacheline [all …]
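Both perf-c2c.txt entries above describe the report columns for cacheline contention (HITM counts, load/store breakdowns). As a hedged illustration of the workload class those columns are meant to expose, here is a minimal false-sharing sketch in userspace C; the structure layout, loop counts and function names are purely illustrative. Recording it with `perf c2c record` and viewing it with `perf c2c report` (the tool these documents describe) would typically rank this cacheline high in the HITM listing.

```c
/* Hypothetical false-sharing demo: two threads update adjacent fields that
 * share one cacheline, producing the HITM contention perf c2c reports on. */
#include <pthread.h>
#include <stdio.h>

static struct {
	long a;			/* updated by thread 1 */
	long b;			/* updated by thread 2, same cacheline as 'a' */
} shared;

static void *bump_a(void *arg)
{
	(void)arg;
	for (long i = 0; i < 100000000L; i++)
		shared.a++;
	return NULL;
}

static void *bump_b(void *arg)
{
	(void)arg;
	for (long i = 0; i < 100000000L; i++)
		shared.b++;
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, bump_a, NULL);
	pthread_create(&t2, NULL, bump_b, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("%ld %ld\n", shared.a, shared.b);
	return 0;
}
```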
|
| /kernel/linux/linux-5.10/include/linux/ |
| D | kcsan-checks.h | 51 * Accesses within the atomic region may appear to race with other accesses but 64 * Accesses within the atomic region may appear to race with other accesses but 75 * kcsan_atomic_next - consider following accesses as atomic 77 * Force treating the next n memory accesses for the current context as atomic 80 * @n: number of following memory accesses to treat as atomic. 87 * Set the access mask for all accesses for the current context if non-zero. 116 * Scoped accesses are implemented by appending @sa to an internal list for the 172 * Only use these to disable KCSAN for accesses in the current compilation unit; 237 * Check for atomic accesses: if atomic accesses are not ignored, this simply 261 * readers, to avoid data races, all these accesses must be marked; even [all …]
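The kcsan-checks.h entry above mentions kcsan_atomic_next(), which forces the next n accesses in the current context to be treated as atomic. A hedged sketch of how a caller might use it; the surrounding function and variable are hypothetical:

```c
/* Hypothetical use of kcsan_atomic_next(): the racy read below is intentional,
 * so ask KCSAN to treat the next access in this context as atomic. */
#include <linux/kcsan-checks.h>

static int shared_flag;

static int peek_flag(void)
{
	kcsan_atomic_next(1);	/* the next 1 memory access is considered atomic */
	return shared_flag;
}
```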
|
| /kernel/linux/linux-5.10/Documentation/core-api/ |
| D | unaligned-memory-access.rst | 2 Unaligned Memory Accesses 15 unaligned accesses, why you need to write code that doesn't cause them, 22 Unaligned memory accesses occur when you try to read N bytes of data starting 59 - Some architectures are able to perform unaligned memory accesses 61 - Some architectures raise processor exceptions when unaligned accesses 64 - Some architectures raise processor exceptions when unaligned accesses 72 memory accesses to happen, your code will not work correctly on certain 103 to pad structures so that accesses to fields are suitably aligned (assuming 136 lead to unaligned accesses when accessing fields that do not satisfy 183 Here is another example of some code that could cause unaligned accesses:: [all …]
|
| /kernel/linux/linux-4.19/Documentation/ |
| D | unaligned-memory-access.txt | 2 UNALIGNED MEMORY ACCESSES 15 unaligned accesses, why you need to write code that doesn't cause them, 22 Unaligned memory accesses occur when you try to read N bytes of data starting 59 - Some architectures are able to perform unaligned memory accesses 61 - Some architectures raise processor exceptions when unaligned accesses 64 - Some architectures raise processor exceptions when unaligned accesses 72 memory accesses to happen, your code will not work correctly on certain 103 to pad structures so that accesses to fields are suitably aligned (assuming 136 lead to unaligned accesses when accessing fields that do not satisfy 183 Here is another example of some code that could cause unaligned accesses:: [all …]
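Both copies of the unaligned-memory-access document explain why dereferencing a pointer that is not naturally aligned can fault or be slow on some architectures, and point to the unaligned-access helpers as the portable fix. A hedged sketch under that assumption; the packet layout and offset are made up for illustration:

```c
/* Hypothetical parser: the 32-bit field at offset 5 of a byte buffer is not
 * 4-byte aligned, so use the unaligned helper instead of a plain dereference. */
#include <asm/unaligned.h>
#include <linux/types.h>

static u32 parse_length_field(const u8 *pkt)
{
	return get_unaligned_le32(pkt + 5);	/* safe on all architectures */
}
```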
|
| /kernel/linux/linux-5.10/Documentation/driver-api/ |
| D | device-io.rst | 10 Bus-Independent Device Accesses 30 part of the CPU's address space is interpreted not as accesses to 31 memory, but as accesses to a device. Some architectures define devices 54 historical accident, these are named byte, word, long and quad accesses. 55 Both read and write accesses are supported; there is no prefetch support 119 Port Space Accesses 127 addresses is generally not as fast as accesses to the memory mapped 136 Accesses to this space are provided through a set of functions which 137 allow 8-bit, 16-bit and 32-bit accesses; also known as byte, word and 143 that accesses to their ports are slowed down. This functionality is
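The device-io.rst entry above describes byte, word, long and quad MMIO accessors, with port-space accesses handled separately. A hedged sketch of the memory-mapped "long" (32-bit) case; the device and register offsets are hypothetical:

```c
/* Hypothetical device reset using 32-bit MMIO accessors after ioremap(). */
#include <linux/io.h>
#include <linux/errno.h>
#include <linux/types.h>

#define DEMO_REG_CTRL	0x00	/* made-up register offsets */
#define DEMO_REG_STAT	0x04

static int demo_reset(phys_addr_t base)
{
	void __iomem *regs = ioremap(base, 0x100);
	u32 stat;

	if (!regs)
		return -ENOMEM;

	writel(0x1, regs + DEMO_REG_CTRL);	/* 32-bit ("long") write */
	stat = readl(regs + DEMO_REG_STAT);	/* 32-bit read */
	iounmap(regs);

	return stat ? 0 : -EIO;
}
```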
|
| /kernel/linux/linux-4.19/Documentation/i2c/ |
| D | i2c-topology | 136 This means that accesses to D2 are locked out for the full duration 137 of the entire operation. But accesses to D3 are possibly interleaved 196 This means that accesses to both D2 and D3 are locked out for the full 241 When device D1 is accessed, accesses to D2 are locked out for the 243 are locked). But accesses to D3 and D4 are possibly interleaved at 244 any point. Accesses to D3 lock out D1 and D2, but accesses to D4 262 When device D1 is accessed, accesses to D2 and D3 are locked out 264 root adapter). But accesses to D4 are possibly interleaved at any 275 mux. In that case, any interleaved accesses to D4 might close M2 296 When D1 is accessed, accesses to D2 are locked out for the full [all …]
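The locking behaviour described in the i2c-topology entries applies per adapter transfer. As a hedged sketch of why that matters to a client driver, here is a single i2c_transfer() call carrying a write-then-read pair: the two messages cannot be interleaved with another client's accesses on the same segment of the bus. The register layout and helper name are hypothetical.

```c
/* Hypothetical register read: one i2c_transfer() call, two messages. */
#include <linux/i2c.h>
#include <linux/errno.h>

static int demo_read_reg(struct i2c_client *client, u8 reg, u8 *val)
{
	struct i2c_msg msgs[2] = {
		{ .addr = client->addr, .flags = 0,        .len = 1, .buf = &reg },
		{ .addr = client->addr, .flags = I2C_M_RD, .len = 1, .buf = val  },
	};
	int ret = i2c_transfer(client->adapter, msgs, 2);

	if (ret < 0)
		return ret;
	return ret == 2 ? 0 : -EIO;
}
```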
|
| /kernel/linux/linux-5.10/Documentation/dev-tools/ |
| D | kcsan.rst | 94 instrumentation or e.g. DMA accesses. These reports will only be generated if 100 It may be desirable to disable data race detection for specific accesses, 105 any data races due to accesses in ``expr`` should be ignored and resulting 140 accesses are aligned writes up to word size. 190 In an execution, two memory accesses form a *data race* if they *conflict*, 194 Accesses and Data Races" in the LKMM`_. 196 .. _"Plain Accesses and Data Races" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/to… 236 KCSAN relies on observing that two accesses happen concurrently. Crucially, we 243 address set up, and then observe the watchpoint to fire, two accesses to the 253 compiler instrumenting plain accesses. For each instrumented plain access: [all …]
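The kcsan.rst entry above refers to marking accesses in ``expr`` whose data races should be ignored; that is the data_race() macro. A hedged sketch of a typical use; the counter and its purpose are hypothetical:

```c
/* Hypothetical diagnostics read: concurrent writers exist and a stale or torn
 * value is acceptable, so wrap the load in data_race() to silence KCSAN. */
#include <linux/compiler.h>

static unsigned long stats_counter;

static unsigned long stats_snapshot(void)
{
	return data_race(stats_counter);
}
```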
|
| /kernel/linux/linux-5.10/arch/arm/include/uapi/asm/ |
| D | byteorder.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
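The byteorder.h and swab.h comments above describe how byte and word accesses appear in little- versus big-endian mode; the headers ultimately back helpers such as swab32() and cpu_to_le32(). A hedged, purely illustrative use of those helpers (the function is not taken from the headers themselves):

```c
/* Illustrative only: unconditional byte reversal vs. endian-aware conversion. */
#include <linux/types.h>
#include <linux/swab.h>
#include <asm/byteorder.h>

static bool demo_endian(u32 host_val)
{
	u32 reversed = swab32(host_val);	/* always reverses the four bytes   */
	__le32 wire  = cpu_to_le32(host_val);	/* reverses only on big-endian CPUs */

	/* Round-tripping through either helper pair recovers the original value. */
	return le32_to_cpu(wire) == host_val && swab32(reversed) == host_val;
}
```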
|
| /kernel/linux/linux-4.19/arch/arm/include/uapi/asm/ |
| D | byteorder.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| /kernel/linux/linux-5.10/tools/perf/pmu-events/arch/nds32/n13/ |
| D | atcpmu.json | 75 "PublicDescription": "uITLB accesses", 78 "BriefDescription": "V3 uITLB accesses" 81 "PublicDescription": "uDTLB accesses", 84 "BriefDescription": "V3 uDTLB accesses" 87 "PublicDescription": "MTLB accesses", 90 "BriefDescription": "V3 MTLB accesses" 108 "BriefDescription": "V3 ILM accesses"
|
| /kernel/linux/linux-5.10/Documentation/i2c/ |
| D | i2c-topology.rst | 152 This means that accesses to D2 are locked out for the full duration 153 of the entire operation. But accesses to D3 are possibly interleaved 216 This means that accesses to both D2 and D3 are locked out for the full 261 When device D1 is accessed, accesses to D2 are locked out for the 263 are locked). But accesses to D3 and D4 are possibly interleaved at 264 any point. Accesses to D3 lock out D1 and D2, but accesses to D4 282 When device D1 is accessed, accesses to D2 and D3 are locked out 284 root adapter). But accesses to D4 are possibly interleaved at any 295 mux. In that case, any interleaved accesses to D4 might close M2 316 When D1 is accessed, accesses to D2 are locked out for the full [all …]
|
| /kernel/linux/linux-5.10/tools/perf/pmu-events/arch/x86/amdzen2/ |
| D | recommended.json | 12 "BriefDescription": "All L1 Data Cache Accesses", 17 "BriefDescription": "All L2 Cache Accesses", 24 "BriefDescription": "L2 Cache Accesses from L1 Instruction Cache Misses (including prefetch)", 30 "BriefDescription": "L2 Cache Accesses from L1 Data Cache Misses (including prefetch)", 35 "BriefDescription": "L2 Cache Accesses from L2 HWPF", 90 "BriefDescription": "L3 Accesses",
|
| /kernel/linux/linux-5.10/tools/perf/pmu-events/arch/x86/amdzen1/ |
| D | recommended.json | 12 "BriefDescription": "All L1 Data Cache Accesses", 17 "BriefDescription": "All L2 Cache Accesses", 24 "BriefDescription": "L2 Cache Accesses from L1 Instruction Cache Misses (including prefetch)", 30 "BriefDescription": "L2 Cache Accesses from L1 Data Cache Misses (including prefetch)", 35 "BriefDescription": "L2 Cache Accesses from L2 HWPF", 90 "BriefDescription": "L3 Accesses",
|
| /kernel/linux/linux-4.19/Documentation/driver-api/ |
| D | device-io.rst | 10 Bus-Independent Device Accesses 30 part of the CPU's address space is interpreted not as accesses to 31 memory, but as accesses to a device. Some architectures define devices 54 historical accident, these are named byte, word, long and quad accesses. 55 Both read and write accesses are supported; there is no prefetch support 164 Port Space Accesses 172 addresses is generally not as fast as accesses to the memory mapped 181 Accesses to this space are provided through a set of functions which 182 allow 8-bit, 16-bit and 32-bit accesses; also known as byte, word and 188 that accesses to their ports are slowed down. This functionality is
|
| /kernel/linux/linux-5.10/tools/memory-model/Documentation/ |
| D | explanation.txt | 32 24. PLAIN ACCESSES AND DATA RACES 86 factors such as DMA and mixed-size accesses.) But on multiprocessor 87 systems, with multiple CPUs making concurrent accesses to shared 140 This pattern of memory accesses, where one CPU stores values to two 151 accesses by the CPUs. 276 In short, if a memory model requires certain accesses to be ordered, 278 if those accesses would form a cycle, then the memory model predicts 305 Atomic read-modify-write accesses, such as atomic_inc() or xchg(), 312 logical computations, control-flow instructions, or accesses to 342 po-loc is a sub-relation of po. It links two memory accesses when the [all …]
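The explanation.txt entry above alludes to the pattern "where one CPU stores values to two" locations and another CPU observes them, and to the ordering the memory model requires between such accesses. A hedged sketch of that message-passing pattern with the ordering made explicit; the variable names are illustrative:

```c
/* Hypothetical message passing: release/acquire order the flag and payload. */
#include <linux/compiler.h>
#include <asm/barrier.h>

static int payload;
static int flag;

static void producer(void)		/* runs on CPU 0 */
{
	WRITE_ONCE(payload, 42);
	smp_store_release(&flag, 1);	/* publish: orders the payload store */
}

static int consumer(void)		/* runs on CPU 1 */
{
	if (smp_load_acquire(&flag))	/* acquire: orders the payload load */
		return READ_ONCE(payload);
	return -1;
}
```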
|
| /kernel/linux/linux-4.19/arch/arm/include/asm/ |
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| /kernel/linux/linux-5.10/arch/arm/include/asm/ |
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| /kernel/linux/linux-4.19/lib/ |
| D | Kconfig.kasan | 14 designed to find out-of-bounds accesses and use-after-free bugs. 16 of 4.9.2 or later. Detection of out of bounds accesses to stack or 53 memory accesses. This is faster than outline (in some workloads 65 out of bounds accesses, use after free. It is useful for testing
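The Kconfig.kasan entry above says KASAN is designed to find out-of-bounds accesses and use-after-free bugs. A hedged sketch of the kind of bug it reports, in the spirit of the kernel's KASAN self-tests; the function is hypothetical and deliberately broken:

```c
/* Deliberate out-of-bounds write: with CONFIG_KASAN enabled this produces a
 * slab-out-of-bounds report. Do not copy into real code. */
#include <linux/slab.h>

static void kasan_oob_demo(void)
{
	char *buf = kmalloc(8, GFP_KERNEL);

	if (!buf)
		return;
	buf[8] = 'x';	/* one byte past the allocation */
	kfree(buf);
}
```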
|
| /kernel/linux/linux-4.19/Documentation/admin-guide/hw-vuln/ |
| D | special-register-buffer-data-sampling.rst | 7 infer values returned from special register accesses. Special register 8 accesses are accesses to off-core registers. According to Intel's evaluation, 69 accesses from other logical processors will be delayed until the special 81 #. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other 83 legacy locked cache-line-split accesses. 90 processors' memory accesses. The opt-out mechanism does not affect Intel SGX
|
| /kernel/linux/linux-5.10/Documentation/admin-guide/hw-vuln/ |
| D | special-register-buffer-data-sampling.rst | 7 infer values returned from special register accesses. Special register 8 accesses are accesses to off-core registers. According to Intel's evaluation, 69 accesses from other logical processors will be delayed until the special 81 #. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other 83 legacy locked cache-line-split accesses. 90 processors' memory accesses. The opt-out mechanism does not affect Intel SGX
|
| /kernel/linux/linux-5.10/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/ |
| D | uncore-l3c.json | 5 "BriefDescription": "Total read accesses", 6 "PublicDescription": "Total read accesses", 12 "BriefDescription": "Total write accesses", 13 "PublicDescription": "Total write accesses",
|
| /kernel/linux/linux-5.10/Documentation/devicetree/bindings/ |
| D | common-properties.txt | 13 - big-endian: Boolean; force big endian register accesses 16 - little-endian: Boolean; force little endian register accesses 19 - native-endian: Boolean; always use register accesses matched to the 30 default to LE for their MMIO accesses.
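The common-properties.txt entry above documents the big-endian, little-endian and native-endian booleans that steer a device's register accesses. A hedged sketch of how a driver might honour them; the function and register are hypothetical, and of_device_is_big_endian() evaluates exactly these properties:

```c
/* Hypothetical register read that respects the DT endianness properties. */
#include <linux/of.h>
#include <linux/io.h>
#include <linux/types.h>

static u32 demo_read_reg(struct device_node *np, void __iomem *reg)
{
	if (of_device_is_big_endian(np))
		return ioread32be(reg);	/* "big-endian" (or BE "native-endian") */
	return ioread32(reg);		/* default little-endian accesses */
}
```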
|