| /kernel/linux/linux-5.10/mm/ |
| D | zpool.c |
|   109  * the requested module, if needed, but there is no guarantee the module will
|   150  * Implementations must guarantee this to be thread-safe.
|   209  * Implementations must guarantee this to be thread-safe,
|   234  * Implementations must guarantee this to be thread-safe.
|   250  * Implementations must guarantee this to be thread-safe.
|   271  * Implementations must guarantee this to be thread-safe.
|   286  * This frees previously allocated memory. This does not guarantee
|   290  * Implementations must guarantee this to be thread-safe,
|   313  * Implementations must guarantee this to be thread-safe.
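The zpool.c hits all state the driver contract: every zpool backend must make its operations thread-safe, so callers need no lock of their own around pool calls. A minimal caller-side sketch, assuming the 5.10-era signatures (the trailing ops argument was dropped in later kernels) and abbreviated error handling; zpool_demo() is a hypothetical function:

    #include <linux/zpool.h>
    #include <linux/string.h>

    static int zpool_demo(void)
    {
            struct zpool *pool;
            unsigned long handle;
            void *mem;

            pool = zpool_create_pool("zbud", "demo", GFP_KERNEL, NULL);
            if (!pool)
                    return -ENOMEM;

            /* Thread-safe by contract: no caller-side lock needed. */
            if (zpool_malloc(pool, 64, GFP_KERNEL, &handle)) {
                    zpool_destroy_pool(pool);
                    return -ENOMEM;
            }

            mem = zpool_map_handle(pool, handle, ZPOOL_MM_RW);
            memset(mem, 0, 64);
            zpool_unmap_handle(pool, handle);

            zpool_free(pool, handle);
            zpool_destroy_pool(pool);
            return 0;
    }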
|
| /kernel/linux/linux-6.6/mm/ |
| D | zpool.c |
|   101  * the requested module, if needed, but there is no guarantee the module will
|   141  * Implementations must guarantee this to be thread-safe.
|   192  * Implementations must guarantee this to be thread-safe,
|   214  * Implementations must guarantee this to be thread-safe.
|   230  * Implementations must guarantee this to be thread-safe.
|   251  * Implementations must guarantee this to be thread-safe.
|   266  * This frees previously allocated memory. This does not guarantee
|   270  * Implementations must guarantee this to be thread-safe,
|
| /kernel/linux/linux-6.6/rust/kernel/ |
| D | allocator.rs |
|   25  // The alignment requirement exceeds the slab guarantee, thus try to enlarge the size   in krealloc_aligned()
|   26  // to use the "power-of-two" size/alignment guarantee (see comments in `kmalloc()` for   in krealloc_aligned()
|   30  // `layout.align()`, so `next_power_of_two` gives enough alignment guarantee.   in krealloc_aligned()
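The allocator.rs comments lean on kmalloc()'s power-of-two guarantee: a power-of-two-sized allocation comes from a slab cache aligned to that size, so rounding the size up buys the stricter alignment. An illustrative C rendering of the same trick, not the actual Rust implementation; kmalloc_with_align() is a hypothetical helper and align must itself be a power of two, as Rust's Layout guarantees:

    #include <linux/slab.h>
    #include <linux/log2.h>
    #include <linux/minmax.h>

    static void *kmalloc_with_align(size_t size, size_t align, gfp_t gfp)
    {
            /* Small alignments are already covered by the slab guarantee;
             * beyond that, round up to a power of two >= align so the
             * returned block is aligned to that power of two. */
            if (align > ARCH_SLAB_MINALIGN)
                    size = roundup_pow_of_two(max_t(size_t, size, align));
            return kmalloc(size, gfp);
    }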
|
| D | build_assert.rs |
|   8   /// If the compiler or optimizer cannot guarantee that `build_error!` can never
|   36  /// will panic. If the compiler or optimizer cannot guarantee the condition will
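build_assert! turns a condition the compiler cannot prove into a build failure; C's BUILD_BUG_ON() gives the same guarantee: if the condition does not fold to true at compile time, compilation fails instead of deferring to a runtime panic. A small sketch, with RING_SIZE as a hypothetical constant:

    #include <linux/build_bug.h>
    #include <linux/log2.h>

    #define RING_SIZE 64    /* hypothetical; must stay a power of two */

    static unsigned int ring_mask(unsigned int idx)
    {
            /* If the compiler cannot prove this, the build breaks;
             * that is the C analogue of build_assert!'s guarantee. */
            BUILD_BUG_ON(!is_power_of_2(RING_SIZE));
            return idx & (RING_SIZE - 1);
    }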
|
| /kernel/linux/linux-5.10/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/ |
| D | types.h |
|   129  * The alignment is required to guarantee that bits 0 and 1 of @next will be
|   133  * This guarantee is important for few reasons:
|   136  * which encode PageTail() in bit 0. The guarantee is needed to avoid
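These types.h hits explain why struct callback_head carries an alignment attribute: the guaranteed alignment keeps the low bit(s) of any pointer to it clear, so they can carry a tag, as page->compound_head does with bit 0 for PageTail(). A sketch of that tagging idiom; tag_head()/untag_head() are hypothetical helpers:

    #include <linux/types.h>

    /* Bit 0 of a suitably aligned pointer is guaranteed zero, so it can
     * encode extra state without widening the structure. */
    static inline unsigned long tag_head(struct callback_head *head)
    {
            return (unsigned long)head | 1UL;
    }

    static inline struct callback_head *untag_head(unsigned long v)
    {
            return (struct callback_head *)(v & ~1UL);
    }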
|
| /kernel/linux/linux-6.6/kernel/sched/ |
| D | membarrier.c |
|   20   * order to enforce the guarantee that any writes occurring on CPU0 before
|   42   * and r2 == 0. This violates the guarantee that membarrier() is
|   56   * order to enforce the guarantee that any writes occurring on CPU1 before
|   77   * the guarantee that membarrier() is supposed to provide.
|   181  * A sync_core() would provide this guarantee, but   in ipi_sync_core()
|   214  * guarantee that no memory access following registration is reordered   in ipi_sync_rq_state()
|   224  * guarantee that no memory access prior to exec is reordered after   in membarrier_exec_mmap()
|   443  * mm and in the current runqueue to guarantee that no memory   in sync_runqueues_membarrier_state()
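The membarrier.c comments walk through litmus tests showing why each smp_mb() is needed to uphold the syscall's guarantee: when an expedited membarrier() call returns, every running thread of the calling process has executed a full memory barrier. A hedged userspace sketch of the register-then-use pattern that guarantee serves:

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int membarrier(int cmd, unsigned int flags, int cpu_id)
    {
            return syscall(__NR_membarrier, cmd, flags, cpu_id);
    }

    int main(void)
    {
            /* One-time registration, required before PRIVATE_EXPEDITED. */
            if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0, 0))
                    return 1;

            /* When this returns, every running thread of this process
             * has executed a full memory barrier. */
            membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0);
            return 0;
    }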
|
| /kernel/linux/linux-6.6/include/linux/ |
| D | rbtree_latch.h |
|   9   * lockless lookups; we cannot guarantee they return a correct result.
|   21  * However, while we have the guarantee that there is at all times one stable
|   22  * copy, this does not guarantee an iteration will not observe modifications.
|   61  * guarantee on which of the elements matching the key is found. See
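rbtree_latch.h layers the latch technique onto rbtrees: writers keep two copies and flip a sequence count between updates, so lockless readers always find one stable copy (though, per line 22, an iteration may still observe modifications). A minimal sketch of the underlying seqcount-latch pattern using the 6.6 seqlock API; struct data and the helpers are illustrative names:

    #include <linux/seqlock.h>

    struct data {
            unsigned long val;
    };

    static seqcount_latch_t latch = SEQCNT_LATCH_ZERO(latch);
    static struct data copy[2];

    /* Writer: flip the sequence, then update the copy readers left. */
    static void latch_write(const struct data *src)
    {
            raw_write_seqcount_latch(&latch);
            copy[0] = *src;
            raw_write_seqcount_latch(&latch);
            copy[1] = *src;
    }

    /* Lockless reader: retry until a stable copy was observed. */
    static struct data latch_read(void)
    {
            struct data d;
            unsigned int seq;

            do {
                    seq = raw_read_seqcount_latch(&latch);
                    d = copy[seq & 1];
            } while (raw_read_seqcount_latch_retry(&latch, seq));
            return d;
    }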
|
| D | types.h |
|   223  * The alignment is required to guarantee that bit 0 of @next will be
|   227  * This guarantee is important for few reasons:
|   230  * which encode PageTail() in bit 0. The guarantee is needed to avoid
|
| /kernel/linux/linux-5.10/include/linux/ |
| D | rbtree_latch.h |
|   9   * lockless lookups; we cannot guarantee they return a correct result.
|   21  * However, while we have the guarantee that there is at all times one stable
|   22  * copy, this does not guarantee an iteration will not observe modifications.
|   61  * guarantee on which of the elements matching the key is found. See
|
| D | types.h |
|   206  * The alignment is required to guarantee that bit 0 of @next will be
|   210  * This guarantee is important for few reasons:
|   213  * which encode PageTail() in bit 0. The guarantee is needed to avoid
|
| /kernel/linux/linux-5.10/fs/verity/ |
| D | Kconfig |
|   50  used to provide an authenticity guarantee for verity files, as
|   53  authenticity guarantee.
|
| /kernel/linux/linux-5.10/arch/x86/include/asm/vdso/ |
| D | gettimeofday.h |
|   203  * Note: The kernel and hypervisor must guarantee that cpu ID   in vread_pvclock()
|   207  * preemption, it cannot guarantee that per-CPU pvclock time   in vread_pvclock()
|   213  * guarantee than we get with a normal seqlock.   in vread_pvclock()
|   215  * On Xen, we don't appear to have that guarantee, but Xen still   in vread_pvclock()
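The vread_pvclock() comments contrast the pvclock version protocol with a normal seqlock: on KVM, the hypervisor updates a vCPU's pvclock page only from that vCPU, which is the stronger guarantee; Xen does not appear to promise this. A hedged sketch of the version-check read loop, with the TSC-delta math of the real reader omitted; pvclock_sketch_read() is an illustrative name:

    #include <linux/compiler.h>
    #include <asm/barrier.h>
    #include <asm/pvclock.h>

    static u64 pvclock_sketch_read(const struct pvclock_vcpu_time_info *pvti)
    {
            u32 version;
            u64 t;

            do {
                    /* Odd version: an update is in flight; retry. */
                    version = READ_ONCE(pvti->version);
                    smp_rmb();                  /* version before data  */
                    t = READ_ONCE(pvti->system_time);
                    smp_rmb();                  /* data before re-check */
            } while ((version & 1) || version != READ_ONCE(pvti->version));

            return t;
    }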
|
| /kernel/linux/linux-6.6/arch/x86/include/asm/vdso/ |
| D | gettimeofday.h |
|   204  * Note: The kernel and hypervisor must guarantee that cpu ID   in vread_pvclock()
|   208  * preemption, it cannot guarantee that per-CPU pvclock time   in vread_pvclock()
|   214  * guarantee than we get with a normal seqlock.   in vread_pvclock()
|   216  * On Xen, we don't appear to have that guarantee, but Xen still   in vread_pvclock()
|
| /kernel/linux/linux-5.10/Documentation/networking/ |
| D | page_pool.rst |
|   63  This lockless guarantee naturally comes from running under a NAPI softirq.
|   64  The protection doesn't strictly have to be NAPI, any guarantee that allocating
|   87  must guarantee safe context (e.g NAPI), since it will recycle the page
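page_pool's lockless guarantee holds because allocation and direct recycling run in the same NAPI context. A hedged driver-side sketch against the 5.10 API; rx_init() and rx_poll_one() are illustrative names:

    #include <net/page_pool.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static struct page_pool *pool;

    static int rx_init(struct device *dev)
    {
            struct page_pool_params pp = {
                    .order     = 0,
                    .pool_size = 256,
                    .nid       = NUMA_NO_NODE,
                    .dev       = dev,
            };

            pool = page_pool_create(&pp);
            return PTR_ERR_OR_ZERO(pool);
    }

    static void rx_poll_one(void)
    {
            struct page *page = page_pool_dev_alloc_pages(pool);

            if (!page)
                    return;
            /* ... receive into the page ... */

            /* Safe only because we run in the pool's NAPI context. */
            page_pool_recycle_direct(pool, page);
    }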
|
| /kernel/linux/linux-5.10/fs/xfs/ |
| D | kmem.c |
|   60  * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned
|   61  * to the @align_mask. We only guarantee alignment up to page size, we'll clamp
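A hedged generic rendering of what those two lines describe: honour @align_mask only up to page size, and fall back to vmalloc(), whose result is always page-aligned, when kmalloc()'s placement misses. alloc_io_aligned() is an illustrative stand-in, not the XFS function:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *alloc_io_aligned(size_t size, int align_mask)
    {
            void *ptr;

            /* The clamp: alignment is honoured only up to page size. */
            if (align_mask >= PAGE_SIZE)
                    align_mask = PAGE_SIZE - 1;

            ptr = kmalloc(size, GFP_KERNEL);
            if (ptr) {
                    if (!((unsigned long)ptr & align_mask))
                            return ptr;
                    kfree(ptr);     /* placement missed the alignment */
            }
            return vmalloc(size);   /* always page aligned */
    }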
|
| /kernel/linux/linux-6.6/kernel/printk/ |
| D | printk_ringbuffer.c |
|   455  * Guarantee the state is loaded before copying the descriptor   in desc_read()
|   487  * 1. Guarantee the descriptor content is loaded before re-checking   in desc_read()
|   503  * 2. Guarantee the record data is loaded before re-checking the   in desc_read()
|   677  * 1. Guarantee the block ID loaded in   in data_push_tail()
|   704  * 2. Guarantee the descriptor state loaded in   in data_push_tail()
|   744  * Guarantee any descriptor states that have transitioned to   in data_push_tail()
|   829  * Guarantee any descriptor states that have transitioned to   in desc_push_tail()
|   839  * Guarantee the last state load from desc_read() is before   in desc_push_tail()
|   891  * Guarantee the head ID is read before reading the tail ID.   in desc_reserve()
|   925  * 1. Guarantee the tail ID is read before validating the   in desc_reserve()
|   [all …]
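Nearly every printk_ringbuffer.c hit is one half of a memory-barrier pairing: "guarantee A is loaded before B". A hedged sketch of the recurring load/copy/re-check shape those comments annotate; struct rec and rec_read() are illustrative names, not the ringbuffer's real types:

    #include <linux/atomic.h>
    #include <linux/string.h>

    struct rec {
            atomic_long_t state;
            char text[64];
    };

    static bool rec_read(struct rec *r, char *out)
    {
            unsigned long state = atomic_long_read(&r->state);

            if (!state)                     /* not yet committed */
                    return false;

            smp_rmb();    /* guarantee state is loaded before the data    */
            memcpy(out, r->text, sizeof(r->text));

            smp_rmb();    /* guarantee data is loaded before re-checking  */
            return atomic_long_read(&r->state) == state;
    }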
|
| /kernel/linux/linux-5.10/kernel/printk/ |
| D | printk_ringbuffer.c |
|   455  * Guarantee the state is loaded before copying the descriptor   in desc_read()
|   485  * 1. Guarantee the descriptor content is loaded before re-checking   in desc_read()
|   501  * 2. Guarantee the record data is loaded before re-checking the   in desc_read()
|   675  * 1. Guarantee the block ID loaded in   in data_push_tail()
|   702  * 2. Guarantee the descriptor state loaded in   in data_push_tail()
|   742  * Guarantee any descriptor states that have transitioned to   in data_push_tail()
|   827  * Guarantee any descriptor states that have transitioned to   in desc_push_tail()
|   837  * Guarantee the last state load from desc_read() is before   in desc_push_tail()
|   889  * Guarantee the head ID is read before reading the tail ID.   in desc_reserve()
|   923  * 1. Guarantee the tail ID is read before validating the   in desc_reserve()
|   [all …]
|
| /kernel/linux/linux-6.6/Documentation/locking/ |
| D | spinlocks.rst |
|   19   spinlock itself will guarantee the global lock, so it will guarantee that
|   117  guarantee the same kind of exclusive access, and it will be much faster.
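Line 19 states the core guarantee (everything between lock and unlock is exclusive); line 117 notes that a single atomic operation can give the same exclusivity faster. Both patterns, sketched in the style the document itself uses:

    #include <linux/spinlock.h>
    #include <linux/atomic.h>

    static DEFINE_SPINLOCK(my_lock);
    static int shared_count;
    static atomic_t fast_count = ATOMIC_INIT(0);

    static void bump(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&my_lock, flags);
            shared_count++;                 /* exclusive access guaranteed */
            spin_unlock_irqrestore(&my_lock, flags);
    }

    static void bump_fast(void)
    {
            atomic_inc(&fast_count);        /* same exclusivity, no lock */
    }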
|
| /kernel/linux/linux-5.10/Documentation/locking/ |
| D | spinlocks.rst |
|   19   spinlock itself will guarantee the global lock, so it will guarantee that
|   117  guarantee the same kind of exclusive access, and it will be much faster.
|
| /kernel/linux/linux-6.6/Documentation/core-api/ |
| D | refcount-vs-atomic.rst |
|   84   Memory ordering guarantee changes:
|   97   Memory ordering guarantee changes:
|   108  Memory ordering guarantee changes:
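The three hits mark the sections listing how ordering weakens when moving from atomic_t to refcount_t: for example, atomic_dec_and_test()'s full barrier becomes release ordering (plus acquire on success) in refcount_dec_and_test(), which is still sufficient for the canonical put-then-free. A sketch of that canonical pattern:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct obj {
            refcount_t ref;
            /* payload ... */
    };

    static void obj_get(struct obj *o)
    {
            refcount_inc(&o->ref);          /* unordered, like atomic_inc() */
    }

    static void obj_put(struct obj *o)
    {
            /* Weaker ordering than atomic_dec_and_test()'s full barrier,
             * yet still enough to make the free below safe. */
            if (refcount_dec_and_test(&o->ref))
                    kfree(o);
    }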
|
| /kernel/linux/linux-6.6/rust/alloc/vec/ |
| D | is_zero.rs |
|   120  // `Option<num::NonZeroU32>` and similar have a representation guarantee that
|   121  // they're the same size as the corresponding `u32` type, as well as a guarantee
|   189  // SAFETY: This is *not* a stable layout guarantee, but
|
| /kernel/linux/linux-5.10/Documentation/core-api/ |
| D | refcount-vs-atomic.rst |
|   84   Memory ordering guarantee changes:
|   97   Memory ordering guarantee changes:
|   108  Memory ordering guarantee changes:
|
| /kernel/linux/linux-6.6/Documentation/driver-api/ |
| D | reset.rst |
|   87   Exclusive resets on the other hand guarantee direct control.
|   99   is no guarantee that calling reset_control_assert() on a shared reset control
|   152  The reset control API does not guarantee the order in which the individual
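Line 87 is the key contrast: an exclusive reset control guarantees that assert/deassert take immediate effect, which a shared (refcounted) control cannot promise. A hedged consumer-side sketch; hw_reset() is an illustrative name:

    #include <linux/reset.h>
    #include <linux/delay.h>

    static int hw_reset(struct device *dev)
    {
            struct reset_control *rc;

            rc = devm_reset_control_get_exclusive(dev, NULL);
            if (IS_ERR(rc))
                    return PTR_ERR(rc);

            reset_control_assert(rc);    /* line is really asserted now */
            udelay(10);
            reset_control_deassert(rc);
            return 0;
    }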
|
| /kernel/linux/linux-6.6/tools/memory-model/Documentation/ |
| D | ordering.txt |
|   101  with void return types) do not guarantee any ordering whatsoever. Nor do
|   106  operations such as atomic_read() do not guarantee full ordering, and
|   130  such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
|   150  atomic_inc() implementations do not guarantee full ordering, thus
|   278  from "x" instead of writing to it. Then an smp_wmb() could not guarantee
|   501  and further do not guarantee "atomic" access. For example, the compiler
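Lines 101-150 make one point repeatedly: void-returning atomic RMWs such as atomic_inc() order nothing by themselves; when ordering is needed, the smp_mb__before_atomic()/smp_mb__after_atomic() helpers supply the full barrier. A minimal sketch:

    #include <linux/atomic.h>

    static atomic_t ctr = ATOMIC_INIT(0);
    static int payload;

    static void publish(void)
    {
            WRITE_ONCE(payload, 42);
            smp_mb__before_atomic();  /* order the store before the inc */
            atomic_inc(&ctr);         /* by itself: no ordering at all  */
    }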
|
| /kernel/linux/linux-6.6/Documentation/driver-api/usb/ |
| D | anchors.rst |
|   55  Therefore no guarantee is made that the URBs have been unlinked when
|   82  destinations in one anchor you have no guarantee the chronologically
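Line 55 is exactly the distinction between the two teardown calls: usb_unlink_anchored_urbs() is asynchronous, so there is no guarantee the URBs are gone when it returns, while usb_kill_anchored_urbs() does guarantee completion. A sketch of both sides; the teardown helpers are illustrative names:

    #include <linux/usb.h>

    static struct usb_anchor anchor;    /* init_usb_anchor(&anchor) at probe */

    static void teardown_atomic(void)
    {
            /* Callable in atomic context, but asynchronous: no guarantee
             * the URBs are unlinked when this returns. */
            usb_unlink_anchored_urbs(&anchor);
    }

    static void teardown_sync(void)
    {
            /* Sleeps, and guarantees every anchored URB has completed. */
            usb_kill_anchored_urbs(&anchor);
    }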
|