/kernel/linux/linux-5.10/mm/
  zpool.c
    109: * the requested module, if needed, but there is no guarantee the module will
    150: * Implementations must guarantee this to be thread-safe.
    209: * Implementations must guarantee this to be thread-safe,
    234: * Implementations must guarantee this to be thread-safe.
    250: * Implementations must guarantee this to be thread-safe.
    271: * Implementations must guarantee this to be thread-safe.
    286: * This frees previously allocated memory. This does not guarantee
    290: * Implementations must guarantee this to be thread-safe,
    313: * Implementations must guarantee this to be thread-safe.

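These notes are part of the zpool API contract: every backend must implement the ops thread-safe. A minimal sketch of a hypothetical caller (pool type, sizes, and error handling are illustrative assumptions, not taken from zpool.c):

    #include <linux/zpool.h>
    #include <linux/string.h>

    static int demo_zpool(void)
    {
    	struct zpool *pool;
    	unsigned long handle;
    	char *buf;
    	int ret;

    	/* "zbud" may be loaded on demand, but there is no guarantee
    	 * the module is available; always check for failure. */
    	pool = zpool_create_pool("zbud", "demo", GFP_KERNEL, NULL);
    	if (!pool)
    		return -ENOMEM;

    	ret = zpool_malloc(pool, 64, GFP_KERNEL, &handle);
    	if (ret)
    		goto out;

    	buf = zpool_map_handle(pool, handle, ZPOOL_MM_RW);
    	memset(buf, 0, 64);
    	zpool_unmap_handle(pool, handle);

    	/* Freeing does not guarantee the memory goes back to the system. */
    	zpool_free(pool, handle);
    out:
    	zpool_destroy_pool(pool);
    	return ret;
    }
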
/kernel/linux/linux-4.19/mm/
  zpool.c
    107: * the requested module, if needed, but there is no guarantee the module will
    148: * Implementations must guarantee this to be thread-safe.
    206: * Implementations must guarantee this to be thread-safe,
    231: * Implementations must guarantee this to be thread-safe.
    252: * Implementations must guarantee this to be thread-safe.
    267: * This frees previously allocated memory. This does not guarantee
    271: * Implementations must guarantee this to be thread-safe,
    294: * Implementations must guarantee this to be thread-safe.

/kernel/linux/linux-5.10/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/
  types.h
    129: * The alignment is required to guarantee that bits 0 and 1 of @next will be
    133: * This guarantee is important for few reasons:
    136: * which encode PageTail() in bit 0. The guarantee is needed to avoid

/kernel/linux/linux-4.19/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/
  types.h
    133: * The alignment is required to guarantee that bits 0 and 1 of @next will be
    137: * This guarantee is important for few reasons:
    140: * which encode PageTail() in bit 0. The guarantee is needed to avoid

/kernel/linux/linux-5.10/include/linux/
  rbtree_latch.h
    9: * lockless lookups; we cannot guarantee they return a correct result.
    21: * However, while we have the guarantee that there is at all times one stable
    22: * copy, this does not guarantee an iteration will not observe modifications.
    61: * guarantee on which of the elements matching the key is found. See

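The rbtree_latch comments describe lookups that run locklessly under RCU and tolerate, but do not order, concurrent modification. A rough caller sketch; the `my_*` node type and helpers are hypothetical, only the latch_tree_* API is from the header:

    #include <linux/rbtree_latch.h>
    #include <linux/rcupdate.h>
    #include <linux/kernel.h>

    struct my_node {
    	unsigned long key;              /* hypothetical payload */
    	struct latch_tree_node lt;
    };

    static bool my_less(struct latch_tree_node *a, struct latch_tree_node *b)
    {
    	return container_of(a, struct my_node, lt)->key <
    	       container_of(b, struct my_node, lt)->key;
    }

    static int my_comp(void *key, struct latch_tree_node *n)
    {
    	unsigned long k = *(unsigned long *)key;
    	unsigned long nk = container_of(n, struct my_node, lt)->key;

    	return k < nk ? -1 : (k > nk ? 1 : 0);
    }

    static const struct latch_tree_ops my_ops = { .less = my_less, .comp = my_comp };

    /* Writers serialize externally and use latch_tree_insert()/latch_tree_erase();
     * erased nodes may only be freed after an RCU grace period. */
    static struct latch_tree_root my_root;

    /* Lockless reader: per the comments above, with duplicate keys there is
     * no guarantee which matching element is found. */
    static bool my_contains(unsigned long key)
    {
    	bool found;

    	rcu_read_lock();
    	found = latch_tree_find(&key, &my_root, &my_ops) != NULL;
    	rcu_read_unlock();
    	return found;
    }
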
/kernel/linux/linux-5.10/include/linux/
  types.h
    206: * The alignment is required to guarantee that bit 0 of @next will be
    210: * This guarantee is important for few reasons:
    213: * which encode PageTail() in bit 0. The guarantee is needed to avoid

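The types.h comment explains why struct callback_head is aligned: bit 0 of a pointer to it is then always zero and can carry a flag, as the page code's PageTail() encoding does. A generic, hypothetical illustration of that pointer-tagging trick (not the mm code itself):

    #include <linux/types.h>

    /* With at least 2-byte alignment, bit 0 of a valid pointer is 0 and can
     * hold a flag, provided it is masked off before dereferencing. */
    #define TAG0	0x1UL

    static inline void *ptr_set_tag(void *p)
    {
    	return (void *)((unsigned long)p | TAG0);
    }

    static inline bool ptr_has_tag(const void *p)
    {
    	return (unsigned long)p & TAG0;
    }

    static inline void *ptr_clear_tag(void *p)
    {
    	return (void *)((unsigned long)p & ~TAG0);
    }
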
/kernel/linux/linux-4.19/arch/mips/include/asm/
  barrier.h
    40: * - The barrier does not guarantee the order in which instruction fetches are
    58: *   to guarantee that memory reference results are visible across operating
    60: *   implementations on entry to and exit from Debug Mode to guarantee that
    92: * - The barrier does not guarantee the order in which instruction fetches are

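These comments quote the architectural definition of the MIPS SYNC instruction. A minimal sketch of a full barrier as inline assembly (a simplification; the real header builds several barrier flavours from this):

    /* A full SYNC orders the memory accesses around it, but, as the comment
     * above notes, it does not guarantee instruction-fetch ordering. */
    static inline void mips_full_sync(void)
    {
    	__asm__ __volatile__("sync" : : : "memory");
    }
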
/kernel/linux/linux-4.19/include/linux/
  rbtree_latch.h
    9: * lockless lookups; we cannot guarantee they return a correct result.
    21: * However, while we have the guarantee that there is at all times one stable
    22: * copy, this does not guarantee an iteration will not observe modifications.
    61: * guarantee on which of the elements matching the key is found. See

/kernel/linux/linux-4.19/include/linux/
  types.h
    214: * The alignment is required to guarantee that bit 0 of @next will be
    218: * This guarantee is important for few reasons:
    221: * which encode PageTail() in bit 0. The guarantee is needed to avoid

/kernel/linux/linux-5.10/fs/verity/
  Kconfig
    50: used to provide an authenticity guarantee for verity files, as
    53: authenticity guarantee.

/kernel/linux/linux-5.10/arch/x86/include/asm/vdso/
  gettimeofday.h
    203: * Note: The kernel and hypervisor must guarantee that cpu ID  [in vread_pvclock()]
    207: * preemption, it cannot guarantee that per-CPU pvclock time  [in vread_pvclock()]
    213: * guarantee than we get with a normal seqlock.  [in vread_pvclock()]
    215: * On Xen, we don't appear to have that guarantee, but Xen still  [in vread_pvclock()]

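These notes concern vread_pvclock(), which relies on the pvclock version protocol for its seqlock-like guarantee. A simplified paraphrase of that read loop (migration checks and error handling omitted; only a sketch of the pattern):

    #include <asm/pvclock.h>
    #include <asm/msr.h>

    /* Retry while the version is odd (update in progress) or changes across
     * the reads; the kernel and hypervisor must guarantee the pvti we read
     * stays the right one for this CPU for the result to be meaningful. */
    static u64 pvclock_read_ns(const struct pvclock_vcpu_time_info *pvti)
    {
    	u32 version;
    	u64 ns;

    	do {
    		version = READ_ONCE(pvti->version);
    		smp_rmb();	/* version load before the data loads */
    		ns = pvti->system_time +
    		     pvclock_scale_delta(rdtsc_ordered() - pvti->tsc_timestamp,
    					 pvti->tsc_to_system_mul,
    					 pvti->tsc_shift);
    		smp_rmb();	/* data loads before re-checking version */
    	} while ((version & 1) || READ_ONCE(pvti->version) != version);

    	return ns;
    }
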
/kernel/linux/linux-5.10/Documentation/networking/
  page_pool.rst
    63: This lockless guarantee naturally comes from running under a NAPI softirq.
    64: The protection doesn't strictly have to be NAPI, any guarantee that allocating
    87: must guarantee safe context (e.g NAPI), since it will recycle the page

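The context rules are easiest to see in a driver-style sketch. This loosely mirrors the setup example in page_pool.rst; the function names and pool size are hypothetical:

    #include <net/page_pool.h>
    #include <linux/dma-mapping.h>
    #include <linux/numa.h>

    /* Hypothetical per-RX-queue pool, used from NAPI context so the
     * lockless fast path applies. */
    static struct page_pool *rx_pool_create(struct device *dev)
    {
    	struct page_pool_params params = {
    		.order		= 0,
    		.flags		= 0,		/* driver does its own DMA mapping */
    		.pool_size	= 256,		/* illustrative ring size */
    		.nid		= NUMA_NO_NODE,
    		.dev		= dev,
    		.dma_dir	= DMA_FROM_DEVICE,
    	};

    	return page_pool_create(&params);	/* ERR_PTR() on failure */
    }

    static struct page *rx_refill(struct page_pool *pool)
    {
    	return page_pool_dev_alloc_pages(pool);
    }

    static void rx_recycle(struct page_pool *pool, struct page *page)
    {
    	/* Caller must guarantee safe (NAPI) context, per the doc above. */
    	page_pool_recycle_direct(pool, page);
    }
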
/kernel/linux/linux-5.10/fs/xfs/
  kmem.c
    60: * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned
    61: * to the @align_mask. We only guarantee alignment up to page size, we'll clamp

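A simplified sketch of the pattern those two lines describe (not the XFS code itself): try a heap allocation, and if it does not satisfy the alignment mask, fall back to vmalloc(), whose returns are always page aligned; only alignment up to page size can be guaranteed this way:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    static void *alloc_aligned_io(size_t size, unsigned long align_mask)
    {
    	void *ptr;

    	if (align_mask >= PAGE_SIZE)
    		align_mask = PAGE_SIZE - 1;	/* clamp to what vmalloc can give */

    	ptr = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
    	if (ptr) {
    		if (!((unsigned long)ptr & align_mask))
    			return ptr;
    		kfree(ptr);			/* misaligned; try again below */
    	}
    	return vmalloc(size);			/* guaranteed page aligned */
    }
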
/kernel/linux/linux-4.19/arch/x86/entry/vdso/
  vclock_gettime.c
    111: * Note: The kernel and hypervisor must guarantee that cpu ID  [in vread_pvclock()]
    115: * preemption, it cannot guarantee that per-CPU pvclock time  [in vread_pvclock()]
    121: * guarantee than we get with a normal seqlock.  [in vread_pvclock()]
    123: * On Xen, we don't appear to have that guarantee, but Xen still  [in vread_pvclock()]

/kernel/linux/linux-4.19/Documentation/RCU/Design/Memory-Ordering/
  Tree-RCU-Memory-Ordering.html
    13: grace-period memory ordering guarantee is provided.
    16: <li> <a href="#What Is Tree RCU's Grace Period Memory Ordering Guarantee?">
    17: What Is Tree RCU's Grace Period Memory Ordering Guarantee?</a>
    25: <h3><a name="What Is Tree RCU's Grace Period Memory Ordering Guarantee?">
    26: What Is Tree RCU's Grace Period Memory Ordering Guarantee?</a></h3>
    37: <p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
    44: <p>RCU updaters use this guarantee by splitting their updates into
    53: <p>The RCU implementation provides this guarantee using a network
    136: RCU's grace-period memory ordering guarantee to extend to any
    262: <p>Tree RCU's grace-period memory-ordering guarantee is provided by
    [all …]

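Match 44 refers to the classic updater pattern: split the update into a removal phase and a reclamation phase separated by a grace period. A minimal sketch with a hypothetical `gp` pointer and `struct foo`:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct foo { int a; };			/* hypothetical */

    static struct foo __rcu *gp;
    static DEFINE_SPINLOCK(gp_lock);	/* serializes updaters */

    static void update_foo(struct foo *newp)
    {
    	struct foo *old;

    	spin_lock(&gp_lock);
    	old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
    	rcu_assign_pointer(gp, newp);	/* removal: new readers see newp */
    	spin_unlock(&gp_lock);

    	synchronize_rcu();		/* wait for pre-existing readers */
    	kfree(old);			/* reclamation: guaranteed unobserved */
    }
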
/kernel/linux/linux-4.19/Documentation/locking/
  spinlocks.txt
    14: spinlock itself will guarantee the global lock, so it will guarantee that
    108: guarantee the same kind of exclusive access, and it will be much faster.

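For reference, the minimal pattern the spinlocks document builds on, with a hypothetical shared counter:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(counter_lock);	/* hypothetical lock and data */
    static unsigned long counter;

    void counter_inc(void)
    {
    	unsigned long flags;

    	/* The _irqsave form also excludes local interrupts, so the
    	 * critical section cannot deadlock against an interrupt handler. */
    	spin_lock_irqsave(&counter_lock, flags);
    	counter++;
    	spin_unlock_irqrestore(&counter_lock, flags);
    }
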
/kernel/linux/linux-4.19/Documentation/core-api/
  refcount-vs-atomic.rst
    77: Memory ordering guarantee changes:
    90: Memory ordering guarantee changes:
    101: Memory ordering guarantee changes:

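These three entries describe how ordering guarantees change when converting atomic_t counters to refcount_t. A typical get/put pair under those rules (object type hypothetical):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct obj {
    	refcount_t ref;
    	/* ... payload ... */
    };

    static void obj_get(struct obj *o)
    {
    	refcount_inc(&o->ref);	/* like atomic_inc(): no ordering guarantee */
    }

    static void obj_put(struct obj *o)
    {
    	/* refcount_dec_and_test() gives release ordering plus a control
    	 * dependency, weaker than the fully ordered atomic_dec_and_test(),
    	 * but still enough to guarantee all prior accesses to the object
    	 * complete before it is freed. */
    	if (refcount_dec_and_test(&o->ref))
    		kfree(o);
    }
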
/kernel/linux/linux-5.10/Documentation/locking/
  spinlocks.rst
    19: spinlock itself will guarantee the global lock, so it will guarantee that
    117: guarantee the same kind of exclusive access, and it will be much faster.

/kernel/linux/linux-5.10/kernel/printk/
  printk_ringbuffer.c
    455: * Guarantee the state is loaded before copying the descriptor  [in desc_read()]
    485: * 1. Guarantee the descriptor content is loaded before re-checking  [in desc_read()]
    501: * 2. Guarantee the record data is loaded before re-checking the  [in desc_read()]
    675: * 1. Guarantee the block ID loaded in  [in data_push_tail()]
    702: * 2. Guarantee the descriptor state loaded in  [in data_push_tail()]
    742: * Guarantee any descriptor states that have transitioned to  [in data_push_tail()]
    827: * Guarantee any descriptor states that have transitioned to  [in desc_push_tail()]
    837: * Guarantee the last state load from desc_read() is before  [in desc_push_tail()]
    889: * Guarantee the head ID is read before reading the tail ID.  [in desc_reserve()]
    923: * 1. Guarantee the tail ID is read before validating the  [in desc_reserve()]
    [all …]

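All of these comments document barrier pairings. The recurring "load state, copy, re-check state" shape from desc_read() looks roughly like this generic sketch (the record type and "committed" test are hypothetical, not the printk code):

    #include <linux/atomic.h>
    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct record {				/* hypothetical */
    	atomic_long_t	state;
    	char		data[64];
    };

    static bool state_readable(unsigned long state)
    {
    	return state & 1;		/* hypothetical "committed" bit */
    }

    static int read_record(struct record *r, char *out, size_t len)
    {
    	unsigned long state = atomic_long_read(&r->state);

    	if (!state_readable(state))
    		return -EINVAL;

    	smp_rmb();	/* guarantee the state is loaded before copying */

    	if (len > sizeof(r->data))
    		len = sizeof(r->data);
    	memcpy(out, r->data, len);

    	smp_rmb();	/* guarantee the copy is done before re-checking */

    	if (atomic_long_read(&r->state) != state)
    		return -EAGAIN;	/* record was reused; discard the copy */
    	return 0;
    }
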
/kernel/linux/linux-4.19/net/smc/
  smc_cdc.c
    54: /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */  [in smc_cdc_tx_handler()]
    202: /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */  [in smcd_cdc_msg_send()]
    256: /* guarantee 0 <= peer_rmbe_space <= peer_rmbe_size */  [in smc_cdc_msg_recv_action()]
    268: /* guarantee 0 <= bytes_to_rcv <= rmb_desc->len */  [in smc_cdc_msg_recv_action()]
    278: * under send_lock to guarantee arrival in seqno-order  [in smc_cdc_msg_recv_action()]

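These comments all record the same invariant: the cursor arithmetic must keep the space counters within [0, buffer length]. A hypothetical sketch of maintaining such a bound (the real code derives the value from cursor differences and serializes under the connection's locks):

    #include <linux/atomic.h>
    #include <linux/kernel.h>

    /* Assumes the caller serializes updates (e.g. under a send_lock),
     * as the comments above imply. */
    static void set_sndbuf_space(atomic_t *space, int free_bytes, int buf_len)
    {
    	/* guarantee 0 <= sndbuf_space <= buf_len */
    	atomic_set(space, clamp(free_bytes, 0, buf_len));
    }
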
/kernel/linux/linux-5.10/Documentation/core-api/
  refcount-vs-atomic.rst
    84: Memory ordering guarantee changes:
    97: Memory ordering guarantee changes:
    108: Memory ordering guarantee changes:

/kernel/linux/linux-4.19/Documentation/
  memory-barriers.txt
    334: of the standard containing this guarantee is Section 3.14, which
    384: A write memory barrier gives a guarantee that all the STORE operations
    438: A read barrier is a data dependency barrier plus a guarantee that all the
    455: A general memory barrier gives a guarantee that all the LOAD and STORE
    528: There are certain things that the Linux kernel memory barriers do not guarantee:
    530: (*) There is no guarantee that any of the memory accesses specified before a
    535: (*) There is no guarantee that issuing a memory barrier on one CPU will have
    540: (*) There is no guarantee that a CPU will see the correct order of effects
    545: (*) There is no guarantee that some intervening piece of off-the-CPU
    882: However, they do -not- guarantee any other sort of ordering:
    [all …]

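These excerpts come from the sections defining write and read barriers and their limits. The canonical pairing those sections build toward, as a sketch (variable names hypothetical):

    #include <linux/compiler.h>
    #include <asm/barrier.h>

    static int data, ready;		/* hypothetical shared variables */

    /* CPU 0 */
    static void producer(void)
    {
    	WRITE_ONCE(data, 42);
    	smp_wmb();		/* order the data store before the flag store */
    	WRITE_ONCE(ready, 1);
    }

    /* CPU 1 */
    static int consumer(void)
    {
    	if (!READ_ONCE(ready))
    		return -EAGAIN;
    	smp_rmb();		/* pairs with the smp_wmb() above */
    	return READ_ONCE(data);	/* guaranteed to observe 42 */
    }
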
/kernel/linux/linux-5.10/Documentation/driver-api/usb/
  anchors.rst
    55: Therefore no guarantee is made that the URBs have been unlinked when
    82: destinations in one anchor you have no guarantee the chronologically

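A minimal sketch of the anchor API those caveats apply to: anchor URBs as they are submitted, then tear them all down on disconnect. The driver type and function names are hypothetical; only the usb_* calls are from the API:

    #include <linux/usb.h>

    struct my_usb_dev {			/* hypothetical driver state */
    	struct usb_anchor anchor;
    };

    static void my_init(struct my_usb_dev *d)
    {
    	init_usb_anchor(&d->anchor);
    }

    static int my_submit(struct my_usb_dev *d, struct urb *urb)
    {
    	int ret;

    	usb_anchor_urb(urb, &d->anchor);
    	ret = usb_submit_urb(urb, GFP_KERNEL);
    	if (ret)
    		usb_unanchor_urb(urb);	/* failed submit: take it back off */
    	return ret;
    }

    static void my_disconnect(struct my_usb_dev *d)
    {
    	/* Synchronously kills every anchored URB. The async variant,
    	 * usb_unlink_anchored_urbs(), makes no guarantee the URBs are
    	 * unlinked by the time it returns (the doc's point above). */
    	usb_kill_anchored_urbs(&d->anchor);
    }
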
/kernel/linux/linux-4.19/Documentation/driver-api/usb/
  anchors.rst
    55: Therefore no guarantee is made that the URBs have been unlinked when
    82: destinations in one anchor you have no guarantee the chronologically

/kernel/linux/linux-5.10/Documentation/
  memory-barriers.txt
    332: of the standard containing this guarantee is Section 3.14, which
    382: A write memory barrier gives a guarantee that all the STORE operations
    436: A read barrier is a data dependency barrier plus a guarantee that all the
    453: A general memory barrier gives a guarantee that all the LOAD and STORE
    524: There are certain things that the Linux kernel memory barriers do not guarantee:
    526: (*) There is no guarantee that any of the memory accesses specified before a
    531: (*) There is no guarantee that issuing a memory barrier on one CPU will have
    536: (*) There is no guarantee that a CPU will see the correct order of effects
    541: (*) There is no guarantee that some intervening piece of off-the-CPU
    878: However, they do -not- guarantee any other sort of ordering:
    [all …]
