| /kernel/linux/linux-6.6/kernel/locking/ |
| D | rwbase_rt.c |
    8 * 2) Remove the reader BIAS to force readers into the slow path
    9 * 3) Wait until all readers have left the critical section
    14 * 2) Set the reader BIAS, so readers can use the fast path again
    15 * 3) Unlock rtmutex, to release blocked readers
    34 * active readers. A blocked writer would force all newly incoming readers
    45 * The lock/unlock of readers can run in fast paths: lock and unlock are only
    58 * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is in rwbase_read_trylock()
    61 for (r = atomic_read(&rwb->readers); r < 0;) { in rwbase_read_trylock()
    62 if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1))) in rwbase_read_trylock()
    122 atomic_inc(&rwb->readers); in __rwbase_read_lock()
    [all …]
|
| D | percpu-rwsem.c |
    60 * Conversely, any readers that increment their sem->read_count after in __percpu_down_read_trylock()
    113 * We use EXCLUSIVE for both readers and writers to preserve FIFO order,
    114 * and play games with the return value to allow waking multiple readers.
    116 * Specifically, we wake readers until we've woken a single writer, or until a
    138 return !reader; /* wake (readers until) 1 writer */ in percpu_rwsem_wake_function()
    204 * newly arriving readers increment a given counter, they will immediately
    230 /* Notify readers to take the slow path. */ in percpu_down_write()
    235 * Having sem->block set makes new readers block. in percpu_down_write()
    248 /* Wait for all active readers to complete. */ in percpu_down_write()
    262 * that new readers might fail to see the results of this writer's in percpu_up_write()
|
| D | rwsem.c |
    38 * - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
    55 * is involved. Ideally we would like to track all the readers that own
    109 * 1) rwsem_mark_wake() for readers -- set, clear
    296 * The lock is owned by readers when
    301 * Having some reader bits set is not enough to guarantee a readers owned
    302 * lock as the readers may be in the process of backing out from the count
    350 RWSEM_WAKE_READERS, /* Wake readers only */
    362 * Magic number to batch-wakeup waiting readers, even when writers are
    409 * Implies rwsem_del_waiter() for all woken readers.
    433 * Readers, on the other hand, will block as they in rwsem_mark_wake()
    [all …]
|
| D | qrwlock.c |
    24 * Readers come here when they cannot get the lock without waiting in queued_read_lock_slowpath()
    28 * Readers in interrupt context will get the lock immediately in queued_read_lock_slowpath()
    80 /* Set the waiting flag to notify readers that a writer is pending */ in queued_write_lock_slowpath()
    83 /* When no more readers or writers, set the locked flag */ in queued_write_lock_slowpath()
|
| /kernel/linux/linux-6.6/include/linux/ |
| D | rwbase_rt.h |
    12 atomic_t readers; member
    18 .readers = ATOMIC_INIT(READER_BIAS), \
    25 atomic_set(&(rwbase)->readers, READER_BIAS); \
    31 return atomic_read(&rwb->readers) != READER_BIAS; in rw_base_is_locked()
    36 return atomic_read(&rwb->readers) > 0; in rw_base_is_contended()
|
| /kernel/linux/linux-6.6/Documentation/RCU/ |
| D | checklist.rst |
    30 One final exception is where RCU readers are used to prevent
    40 RCU does allow *readers* to run (almost) naked, but *writers* must
    85 The whole point of RCU is to permit readers to run without
    86 any locks or atomic operations. This means that readers will
    99 locks (that are acquired by both readers and writers)
    100 that guard per-element state. Fields that the readers
    106 c. Make updates appear atomic to readers. For example,
    110 appear to be atomic to RCU readers, nor will sequences
    118 d. Carefully order the updates and the reads so that readers
    138 a. Readers must maintain proper ordering of their memory
    [all …]
|
| D | rcu.rst |
    10 must be long enough that any readers accessing the item being deleted have
    21 The advantage of RCU's two-part approach is that RCU readers need
    26 in read-mostly situations. The fact that RCU readers need not
    30 if the RCU readers give no indication when they are done?
    32 Just as with spinlocks, RCU readers are not permitted to
    42 same effect, but require that the readers manipulate CPU-local
|
| D | whatisRCU.rst |
    56 Section 1, though most readers will profit by reading this section at
    79 new versions of these data items), and can run concurrently with readers.
    81 readers is the semantics of modern CPUs guarantee that readers will see
    85 removal phase. Because reclaiming data items can disrupt any readers
    87 not start until readers no longer hold references to those data items.
    91 reclamation phase until all readers active during the removal phase have
    93 callback that is invoked after they finish. Only readers that are active
    101 readers cannot gain a reference to it.
    103 b. Wait for all previous readers to complete their RCU read-side
    106 c. At this point, there cannot be any readers who hold references
    [all …]
|
| /kernel/linux/linux-5.10/kernel/rcu/ |
| D | sync.c |
    28 * rcu_sync_enter_start - Force readers onto slow path for multiple updates
    58 * If it is called by rcu_sync_enter() it signals that all the readers were
    67 * readers back onto their fastpaths (after a grace period). If both
    70 * rcu_sync_exit(). Otherwise, set all state back to idle so that readers
    107 * rcu_sync_enter() - Force readers onto slowpath
    110 * This function is used by updaters who need readers to make use of
    113 * tells readers to stay off their fastpaths. A later call to
    159 * rcu_sync_exit() - Allow readers back onto fast path after grace period
    163 * now allow readers to make use of their fastpaths after a grace period
    165 * calls to rcu_sync_is_idle() will return true, which tells readers that
|
| /kernel/linux/linux-6.6/kernel/rcu/ |
| D | sync.c |
    28 * rcu_sync_enter_start - Force readers onto slow path for multiple updates
    58 * If it is called by rcu_sync_enter() it signals that all the readers were
    67 * readers back onto their fastpaths (after a grace period). If both
    70 * rcu_sync_exit(). Otherwise, set all state back to idle so that readers
    107 * rcu_sync_enter() - Force readers onto slowpath
    110 * This function is used by updaters who need readers to make use of
    113 * tells readers to stay off their fastpaths. A later call to
    159 * rcu_sync_exit() - Allow readers back onto fast path after grace period
    163 * now allow readers to make use of their fastpaths after a grace period
    165 * calls to rcu_sync_is_idle() will return true, which tells readers that
|
| /kernel/linux/linux-5.10/Documentation/RCU/ |
| D | checklist.rst |
    30 One final exception is where RCU readers are used to prevent
    40 RCU does allow -readers- to run (almost) naked, but -writers- must
    80 The whole point of RCU is to permit readers to run without
    81 any locks or atomic operations. This means that readers will
    94 locks (that are acquired by both readers and writers)
    96 the readers refrain from accessing can be guarded by
    101 c. Make updates appear atomic to readers. For example,
    105 appear to be atomic to RCU readers, nor will sequences
    111 readers see valid data at all phases of the update.
    128 a. Readers must maintain proper ordering of their memory
    [all …]
|
| D | whatisRCU.rst |
    47 Section 1, though most readers will profit by reading this section at
    70 new versions of these data items), and can run concurrently with readers.
    72 readers is the semantics of modern CPUs guarantee that readers will see
    76 removal phase. Because reclaiming data items can disrupt any readers
    78 not start until readers no longer hold references to those data items.
    82 reclamation phase until all readers active during the removal phase have
    84 callback that is invoked after they finish. Only readers that are active
    92 readers cannot gain a reference to it.
    94 b. Wait for all previous readers to complete their RCU read-side
    97 c. At this point, there cannot be any readers who hold references
    [all …]
|
| D | rcu.rst |
    10 must be long enough that any readers accessing the item being deleted have
    22 The advantage of RCU's two-part approach is that RCU readers need
    27 in read-mostly situations. The fact that RCU readers need not
    31 if the RCU readers give no indication when they are done?
    33 Just as with spinlocks, RCU readers are not permitted to
    43 same effect, but require that the readers manipulate CPU-local
|
| /kernel/linux/linux-5.10/Documentation/locking/ |
| D | lockdep-design.rst |
    405 spin_lock() or write_lock()), non-recursive readers (i.e. shared lockers, like
    406 down_read()) and recursive readers (recursive shared lockers, like rcu_read_lock()).
    410 r: stands for non-recursive readers.
    411 R: stands for recursive readers.
    412 S: stands for all readers (non-recursive + recursive), as both are shared lockers.
    413 N: stands for writers and non-recursive readers, as both are not recursive.
    417 Recursive readers, as their name indicates, are the lockers allowed to acquire
    421 While non-recursive readers will cause a self deadlock if trying to acquire inside
    424 The difference between recursive readers and non-recursive readers is because:
    425 recursive readers get blocked only by a write lock *holder*, while non-recursive
    [all …]
|
| D | seqlock.rst |
    9 lockless readers (read-only retry loops), and no writer starvation. They
    23 is odd and indicates to the readers that an update is in progress. At
    25 even again which lets readers make progress.
    153 from interruption by readers. This is typically the case when the read
    195 1. Normal Sequence readers which never block a writer but they must
    206 2. Locking readers which will wait if a writer or another locking reader
    218 according to a passed marker. This is used to avoid lockless readers
|
| /kernel/linux/linux-6.6/Documentation/locking/ |
| D | lockdep-design.rst |
    405 spin_lock() or write_lock()), non-recursive readers (i.e. shared lockers, like
    406 down_read()) and recursive readers (recursive shared lockers, like rcu_read_lock()).
    410 r: stands for non-recursive readers.
    411 R: stands for recursive readers.
    412 S: stands for all readers (non-recursive + recursive), as both are shared lockers.
    413 N: stands for writers and non-recursive readers, as both are not recursive.
    417 Recursive readers, as their name indicates, are the lockers allowed to acquire
    421 While non-recursive readers will cause a self deadlock if trying to acquire inside
    424 The difference between recursive readers and non-recursive readers is because:
    425 recursive readers get blocked only by a write lock *holder*, while non-recursive
    [all …]
|
| D | seqlock.rst |
    9 lockless readers (read-only retry loops), and no writer starvation. They
    23 is odd and indicates to the readers that an update is in progress. At
    25 even again which lets readers make progress.
    153 from interruption by readers. This is typically the case when the read
    195 1. Normal Sequence readers which never block a writer but they must
    206 2. Locking readers which will wait if a writer or another locking reader
    218 according to a passed marker. This is used to avoid lockless readers
|
| /kernel/linux/linux-5.10/kernel/locking/ |
| D | percpu-rwsem.c |
    58 * Conversely, any readers that increment their sem->read_count after in __percpu_down_read_trylock()
    111 * We use EXCLUSIVE for both readers and writers to preserve FIFO order,
    112 * and play games with the return value to allow waking multiple readers.
    114 * Specifically, we wake readers until we've woken a single writer, or until a
    136 return !reader; /* wake (readers until) 1 writer */ in percpu_rwsem_wake_function()
    194 * newly arriving readers increment a given counter, they will immediately
    219 /* Notify readers to take the slow path. */ in percpu_down_write()
    224 * Having sem->block set makes new readers block. in percpu_down_write()
    237 /* Wait for all active readers to complete. */ in percpu_down_write()
    250 * that new readers might fail to see the results of this writer's in percpu_up_write()
|
| D | rwsem.c |
    36 * - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
    37 * - Bit 1: RWSEM_RD_NONSPINNABLE - Readers cannot spin on this lock.
    42 * bits will be set to disable optimistic spinning by readers and writers.
    45 * to acquire the lock via optimistic spinning, but not readers. Similar
    59 * is involved. Ideally we would like to track all the readers that own
    63 * is short and there aren't that many readers around. It makes readers
    70 * 2) There are just too many readers contending the lock causing it to
    76 * groups that contain readers that acquire the lock together smaller
    85 * acquire the write lock. Similarly, readers that observe the setting
    146 * 1) rwsem_mark_wake() for readers.
    [all …]
|
| D | qrwlock.c |
    24 * Readers come here when they cannot get the lock without waiting in queued_read_lock_slowpath()
    28 * Readers in interrupt context will get the lock immediately in queued_read_lock_slowpath()
    74 /* Set the waiting flag to notify readers that a writer is pending */ in queued_write_lock_slowpath()
    77 /* When no more readers or writers, set the locked flag */ in queued_write_lock_slowpath()
|
| /kernel/linux/linux-6.6/drivers/misc/ibmasm/ |
| D | event.c |
    30 list_for_each_entry(reader, &sp->event_buffer->readers, node) in wake_up_event_readers()
    39 * event readers.
    40 * There is no reader marker in the buffer, therefore readers are
    73 * Called by event readers (initiated from user space through the file
    123 list_add(&reader->node, &sp->event_buffer->readers); in ibmasm_event_reader_register()
    153 INIT_LIST_HEAD(&buffer->readers); in ibmasm_event_buffer_init()
|
| /kernel/linux/linux-5.10/drivers/misc/ibmasm/ |
| D | event.c |
    30 list_for_each_entry(reader, &sp->event_buffer->readers, node) in wake_up_event_readers()
    39 * event readers.
    40 * There is no reader marker in the buffer, therefore readers are
    73 * Called by event readers (initiated from user space through the file
    123 list_add(&reader->node, &sp->event_buffer->readers); in ibmasm_event_reader_register()
    153 INIT_LIST_HEAD(&buffer->readers); in ibmasm_event_buffer_init()
|
| /kernel/linux/linux-5.10/drivers/misc/cardreader/ |
| D | Kconfig |
    9 Alcor Micro card readers support access to many types of memory cards,
    20 Realtek card readers support access to many types of memory cards,
    29 Select this option to get support for Realtek USB 2.0 card readers
|
| /kernel/linux/linux-6.6/drivers/misc/cardreader/ |
| D | Kconfig |
    9 Alcor Micro card readers support access to many types of memory cards,
    20 Realtek card readers support access to many types of memory cards,
    29 Select this option to get support for Realtek USB 2.0 card readers
|
| /kernel/linux/linux-6.6/fs/btrfs/ |
| D | locking.c |
    115 * - try-lock semantics for readers and writers
    325 * if there are pending readers no new writers would be allowed to come in and
    331 atomic_set(&lock->readers, 0); in btrfs_drew_lock_init()
    340 if (atomic_read(&lock->readers)) in btrfs_drew_try_write_lock()
    345 /* Ensure writers count is updated before we check for pending readers */ in btrfs_drew_try_write_lock()
    347 if (atomic_read(&lock->readers)) { in btrfs_drew_try_write_lock()
    360 wait_event(lock->pending_writers, !atomic_read(&lock->readers)); in btrfs_drew_write_lock()
    372 atomic_inc(&lock->readers); in btrfs_drew_read_lock()
    391 if (atomic_dec_and_test(&lock->readers)) in btrfs_drew_read_unlock()
|