
Searched full:lock (Results 1 – 25 of 346) sorted by relevance


/Documentation/locking/
robust-futex-ABI.rst
56 pointer to a single linked list of 'lock entries', one per lock,
58 to itself, 'head'. The last 'lock entry' points back to the 'head'.
61 address of the associated 'lock entry', plus or minus, of what will
62 be called the 'lock word', from that 'lock entry'. The 'lock word'
63 is always a 32 bit word, unlike the other words above. The 'lock
65 of the thread holding the lock in the bottom 30 bits. See further
69 the address of the 'lock entry', during list insertion and removal,
73 Each 'lock entry' on the single linked list starting at 'head' consists
74 of just a single word, pointing to the next 'lock entry', or back to
75 'head' if there are no more entries. In addition, nearby to each 'lock
[all …]
lockdep-design.rst
8 Lock-class
15 tens of thousands of) instantiations. For example a lock in the inode
17 lock class.
19 The validator tracks the 'usage state' of lock-classes, and it tracks
20 the dependencies between different lock-classes. Lock usage indicates
21 how a lock is used with regard to its IRQ contexts, while lock
22 dependency can be understood as lock order, where L1 -> L2 suggests that
26 continuing effort to prove lock usages and dependencies are correct or
29 A lock-class's behavior is constructed by its instances collectively:
30 when the first instance of a lock-class is used after bootup the class
[all …]
mutex-design.rst
28 (->owner) to keep track of the lock state during its lifetime. Field owner
29 actually contains `struct task_struct *` to the current lock owner and it is
34 CONFIG_MUTEX_SPIN_ON_OWNER=y systems use a spinner MCS lock (->osq), described
38 taken, depending on the state of the lock:
40 (i) fastpath: tries to atomically acquire the lock by cmpxchg()ing the owner with
42 against 0UL, so all 3 state bits above have to be 0). If the lock is
46 while the lock owner is running and there are no other tasks ready
48 that if the lock owner is running, it is likely to release the lock
49 soon. The mutex spinners are queued up using MCS lock so that only
52 The MCS lock (proposed by Mellor-Crummey and Scott) is a simple spinlock
[all …]
lockstat.rst
2 Lock Statistics
14 Because things like lock contention can severely impact performance.
19 Lockdep already has hooks in the lock functions and maps lock instances to
20 lock classes. We build on that (see Documentation/locking/lockdep-design.rst).
21 The graph below shows the relation between the lock functions and the various
26 lock _____
44 lock, unlock - the regular lock functions
51 - number of lock contentions that involved x-cpu data
53 - number of lock acquisitions that had to wait
56 - shortest (non-0) time we ever had to wait for a lock
[all …]
spinlocks.rst
19 spinlock itself will guarantee the global lock, so it will guarantee that
21 lock. This works well even under UP, so the code does _not_ need to
45 NOTE! The spin-lock is safe only when you **also** use the lock itself
59 to change the variables it has to get an exclusive write lock.
79 The above kind of lock may be useful for complex data structures like
81 itself. The read lock allows many concurrent readers. Anything that
82 **changes** the list will have to get the write lock.
87 Also, you cannot "upgrade" a read-lock to a write-lock, so if you at _any_
89 to get the write-lock at the very beginning.
100 The single spin-lock primitives above are by no means the only ones. They
[all …]
locktypes.rst
6 Lock types and their rules
19 This document conceptually describes these lock types and provides rules
23 Lock categories
37 Sleeping lock types:
46 On PREEMPT_RT kernels, these lock types are converted to sleeping locks:
71 On non-PREEMPT_RT kernels, these lock types are also spinning locks:
76 Spinning locks implicitly disable preemption and the lock / unlock functions
89 The aforementioned lock types except semaphores have strict owner
92 The context (task) that acquired the lock must release it.
135 rw_semaphore is a multiple readers and single writer lock mechanism.
[all …]
ww-mutex-design.rst
39 If the transaction holding the lock is younger, the locking transaction waits.
40 If the transaction holding the lock is older, the locking transaction backs off
43 If the transaction holding the lock is younger, the locking transaction
44 wounds the transaction holding the lock, requesting it to die.
45 If the transaction holding the lock is older, it waits for the other
60 Compared to normal mutexes two additional concepts/objects show up in the lock
65 acquired when starting the lock acquisition. This ticket is stored in the
70 W/w class: In contrast to normal mutexes the lock class needs to be explicit for
71 w/w mutexes, since it is required to initialize the acquire context. The lock
74 Furthermore there are three different classes of w/w lock acquire functions:
[all …]
rt-mutex.rst
49 lock->owner holds the task_struct pointer of the owner. Bit 0 is used to
50 keep track of the "lock has waiters" state:
55 NULL 0 lock is free (fast acquire possible)
56 NULL 1 lock is free and has waiters and the top waiter
57 is going to take the lock [1]_
58 taskpointer 0 lock is held (fast release possible)
59 taskpointer 1 lock is held and has waiters [2]_
63 possible when bit 0 of lock->owner is 0.
65 .. [1] It also can be a transitional state when grabbing the lock
66 with ->wait_lock is held. To prevent any fast path cmpxchg to the lock,
[all …]
hwspinlock.rst
47 API will usually want to communicate the lock's id to the remote core
67 Retrieve the global lock id for an OF phandle-based specific lock.
69 to get the global lock id of a specific hwspinlock, so that it can
72 The function returns a lock id number on success, -EPROBE_DEFER if
103 Lock a previously-assigned hwspinlock with a timeout limit (specified in
119 Lock a previously-assigned hwspinlock with a timeout limit (specified in
135 Lock a previously-assigned hwspinlock with a timeout limit (specified in
152 Lock a previously-assigned hwspinlock with a timeout limit (specified in
156 Caution: the user must protect the routine that takes the hardware lock with a mutex
157 or spinlock to avoid deadlock, so that the user can do some time-consuming
[all …]
percpu-rw-semaphore.rst
9 cores take the lock for reading, the cache line containing the semaphore
14 instruction in the lock and unlock path. On the other hand, locking for
18 The lock is declared with "struct percpu_rw_semaphore" type.
19 The lock is initialized with percpu_init_rwsem; it returns 0 on success and
21 The lock must be freed with percpu_free_rwsem to avoid a memory leak.
23 The lock is locked for read with percpu_down_read, percpu_up_read and
26 The idea of using RCU for optimized rw-lock was introduced by
rt-mutex-design.rst
36 priority process, C is the lowest, and B is in between. A tries to grab a lock
37 that C owns and must wait and lets C run to release the lock. But in the
41 to release the lock, because for all we know, B is a CPU hog and will
42 never give C a chance to release the lock. This is called unbounded priority
47 grab lock L1 (owned by C)
65 process blocks on a lock owned by the current process. To make this easier
68 This time, when A blocks on the lock owned by C, C would inherit the priority
70 the high priority of A. As soon as C releases the lock, it loses its
90 lock
91 - In this document from now on, I will use the term lock when
[all …]
futex-requeue-pi.rst
26 /* caller must lock mutex */
29 lock(cond->__data.__lock);
34 lock(cond->__data.__lock);
37 lock(mutex);
42 lock(cond->__data.__lock);
48 has waiters. Note that pthread_cond_wait() attempts to lock the
60 /* caller must lock mutex */
63 lock(cond->__data.__lock);
68 lock(cond->__data.__lock);
76 lock(cond->__data.__lock);
[all …]
locktorture.rst
2 Kernel Lock Torture Test Operation
18 acquire the lock and hold it for specific amount of time, thus simulating
19 different critical region behaviors. The amount of contention on the lock
34 Number of kernel threads that will stress exclusive lock
39 Number of kernel threads that will stress shared lock
45 Type of lock to torture. By default, only spinlocks will
50 Simulates a buggy lock implementation.
59 read/write lock() and unlock() rwlock pairs.
135 (A): Lock type that is being tortured -- torture_type parameter.
137 (B): Number of writer lock acquisitions. If dealing with a read/write
[all …]
/Documentation/mm/
split_page_table_lock.rst
2 Split page table lock
7 multi-threaded applications due to high contention on the lock. To improve
8 scalability, split page table lock was introduced.
10 With split page table lock we have separate per-table lock to serialize
11 access to the table. At the moment we use split lock for PTE and PMD
14 There are helpers to lock/unlock a table and other accessor functions:
17 maps PTE and takes PTE table lock, returns pointer to PTE with
18 pointer to its PTE table lock, or returns NULL if no PTE table;
21 lock (not taken), or returns NULL if no PTE table;
24 lock (not taken) and the value of its pmd entry, or returns NULL
[all …]
/Documentation/driver-api/soundwire/
locking.rst
9 - Bus lock
11 - Message lock
13 Bus lock
16 SoundWire Bus lock is a mutex and is part of Bus data structure
17 (sdw_bus) which is used for every Bus instance. This lock is used to
26 Message lock
29 SoundWire message transfer lock. This mutex is part of
30 Bus data structure (sdw_bus). This lock is used to serialize the message
42 a. Acquire Message lock.
47 c. Release Message lock
[all …]
/Documentation/arch/x86/
buslock.rst
6 Bus lock detection and handling
16 A split lock is any atomic operation whose operand crosses two cache lines.
20 A bus lock is acquired through either split locked access to writeback (WB)
31 #AC exception for split lock detection
34 Beginning with the Tremont Atom CPU split lock operations may raise an
35 Alignment Check (#AC) exception when a split lock operation is attempted.
37 #DB exception for bus lock detection
41 instruction acquires a bus lock and is executed. This allows the kernel to
47 The kernel #AC and #DB handlers handle bus lock based on the kernel
51 |split_lock_detect=|#AC for split lock |#DB for bus lock |
[all …]
/Documentation/arch/s390/
vfio-ap-locking.rst
16 The Matrix Devices Lock (drivers/s390/crypto/vfio_ap_private.h)
28 The Matrix Devices Lock (matrix_dev->mdevs_lock) is implemented as a global
29 mutex contained within the single object of struct ap_matrix_dev. This lock
31 (matrix_dev->mdev_list). This lock must be held while reading from, writing to
35 The KVM Lock (include/linux/kvm_host.h)
42 struct mutex lock;
46 The KVM Lock (kvm->lock) controls access to the state data for a KVM guest. This
47 lock must be held by the vfio_ap device driver while one or more AP adapters,
54 The Guests Lock (drivers/s390/crypto/vfio_ap_private.h)
66 The Guests Lock (matrix_dev->guests_lock) controls access to the
[all …]
/Documentation/networking/devlink/
index.rst
15 the devlink instance lock is already held. Drivers can take the instance
16 lock by calling ``devl_lock()``. It is also held all callbacks of devlink
19 Drivers are encouraged to use the devlink instance lock for their own needs.
21 Drivers need to be cautious when taking devlink instance lock and
22 taking RTNL lock at the same time. Devlink instance lock needs to be taken
23 first, only after that RTNL lock could be taken.
32 - Lock ordering should be maintained. If driver needs to take instance
33 lock of both nested and parent instances at the same time, devlink
34 instance lock of the parent instance should be taken first, only then
35 instance lock of the nested instance could be taken.
/Documentation/filesystems/
gfs2-glocks.rst
11 1. A spinlock (gl_lockref.lock) which protects the internal state such
13 2. A non-blocking bit lock, GLF_LOCK, which is used to prevent other
15 thread takes this lock, it must then call run_queue (usually via the
19 The gl_holders list contains all the queued lock requests (not
25 There are three lock states that users of the glock layer can request,
27 to the following DLM lock modes:
30 Glock mode DLM lock mode
32 UN IV/NL Unlocked (no DLM lock associated with glock) or NL
39 shared lock mode, SH. In GFS2 the DF mode is used exclusively for direct I/O
40 operations. The glocks are basically a lock plus some routines which deal
[all …]
directory-locking.rst
22 * lock the directory we are accessing (shared)
26 * lock the directory we are accessing (exclusive)
30 * lock the parent (exclusive)
32 * lock the victim (exclusive)
36 * lock the parent (exclusive)
38 * lock the source (exclusive; probably could be weakened to shared)
42 * lock the parent (exclusive)
55 * lock the filesystem
57 * lock the parents in "ancestors first" order (exclusive). If neither is an
58 ancestor of the other, lock the parent of source first.
[all …]
dlmfs.rst
61 Once you're heartbeating, DLM lock 'domains' can be easily created /
71 dlmfs handles lock caching automatically for the user, so a lock
72 request for an already acquired lock will not generate another DLM
82 Lock value blocks can be read and written to a resource via read(2)
97 The open(2) call will not return until your lock has been granted or
99 operation. If the lock succeeds, you'll get an fd.
102 not automatically create inodes for existing lock resources.
105 Open Flag Lock Request Type
121 could not lock the resource then open(2) will return ETXTBUSY.
123 close(2) drops the lock associated with your fd.
/Documentation/sound/cards/
img-spdif-in.rst
25 rates. The active rate can be obtained by reading the 'SPDIF In Lock Frequency'
36 * name='SPDIF In Lock Frequency',index=0
38 This control returns the active capture rate, or 0 if a lock has not been
41 * name='SPDIF In Lock TRK',index=0
47 * name='SPDIF In Lock Acquire Threshold',index=0
49 This control is used to change the threshold at which a lock is acquired.
51 * name='SPDIF In Lock Release Threshold',index=0
53 This control is used to change the threshold at which a lock is released.
/Documentation/translations/zh_CN/locking/
mutex-design.rst
111 void mutex_lock(struct mutex *lock);
112 void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
113 int mutex_trylock(struct mutex *lock);
117 int mutex_lock_interruptible_nested(struct mutex *lock,
119 int mutex_lock_interruptible(struct mutex *lock);
123 int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
127 void mutex_unlock(struct mutex *lock);
131 int mutex_is_locked(struct mutex *lock);
/Documentation/admin-guide/device-mapper/
vdo-design.rst
41 design attempts to be lock-free.
54 each zone has an implicit lock on the structures it manages for all its
335 to the application. The data_vio pool is protected by a spin lock.
346 2. The data_vio places a claim (the "logical lock") on the logical address
356 lock holder that it is waiting. Most notably, a new data_vio waiting
357 for a logical lock will flush the previous lock holder out of the
361 This stage requires the data_vio to get an implicit lock on the
374 data_vio to lock the page-node that needs to be allocated. This
375 lock, like the logical block lock in step 2, is a hashtable entry
379 The implicit logical zone lock is released while the allocation is
[all …]
/Documentation/devicetree/bindings/pinctrl/
cirrus,madera.yaml
64 fll1-lock, fll2-clk, fll2-lock, fll3-clk,
65 fll3-lock, fllao-clk, fllao-lock, opclk,
66 opclk-async, pwm1, pwm2, spdif, asrc1-in1-lock,
67 asrc1-in2-lock, asrc2-in1-lock, asrc2-in2-lock,
