Lines Matching +full:cpu +full:- +full:release +full:- +full:addr
23 See :ref:`Documentation/process/volatile-considered-harmful.rst
26 local_t is very similar to atomic_t. If the counter is per CPU and only
27 updated by one CPU, local_t is probably more appropriate. Please see
28 :ref:`Documentation/core-api/local_ops.rst <local_ops>` for the semantics of
35 #define atomic_set(v, i) ((v)->counter = (i))
59 return -ENOMEM;
60 atomic_set(&k->counter, 0);
70 #define atomic_read(v) ((v)->counter)
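The atomic_set()/atomic_read() fragments above are plain stores and loads of v->counter. A user-space sketch of the same idea with C11 `<stdatomic.h>` (the kernel's atomic_t is not available outside the kernel, so the `my_*` names here are illustrative stand-ins, not the real API):

```c
#include <stdatomic.h>
#include <assert.h>

/* User-space stand-in for the kernel's atomic_t counter. */
typedef struct { atomic_int counter; } my_atomic_t;

/* Analogue of atomic_set(): a plain store of the new value. */
static void my_atomic_set(my_atomic_t *v, int i)
{
	atomic_store_explicit(&v->counter, i, memory_order_relaxed);
}

/* Analogue of atomic_read(): a plain load of the counter. */
static int my_atomic_read(my_atomic_t *v)
{
	return atomic_load_explicit(&v->counter, memory_order_relaxed);
}
```

As in the fragment at line 60, initialization is just a set to zero before the counter is shared.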
155 variable a is set at boot time before the second CPU is brought online
179 Don't even -think- about doing this without proper use of memory barriers,
305 Preceding a non-value-returning read-modify-write atomic operation with
307 provides the same full ordering that is provided by value-returning
308 read-modify-write atomic operations.
312 obj->dead = 1;
314 atomic_dec(&obj->ref_count);
319 "1" to obj->dead will be globally visible to other cpus before the
324 to other cpus before the "obj->dead = 1;" assignment.
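The obj->dead/atomic_dec() ordering described above can be sketched in user-space C11, where a seq_cst fence plays the role of the kernel's smp_mb__before_atomic() full barrier (the struct layout and function name are illustrative, not from the source):

```c
#include <stdatomic.h>
#include <assert.h>

struct obj {
	int dead;		/* plain flag, as in the fragment */
	atomic_int ref_count;
};

/* Mark the object dead, then drop a reference.  The fence orders the
 * plain store to obj->dead before the (relaxed) decrement, so other
 * threads that observe the decremented count also observe dead == 1. */
static void obj_kill(struct obj *o)
{
	o->dead = 1;
	atomic_thread_fence(memory_order_seq_cst);
	atomic_fetch_sub_explicit(&o->ref_count, 1, memory_order_relaxed);
}
```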
335 obj->active = 1;
336 list_add(&obj->list, head);
341 list_del(&obj->list);
342 obj->active = 0;
347 BUG_ON(obj->active);
356 obj = list_entry(head->next, struct obj, list);
357 atomic_inc(&obj->refcnt);
372 obj->ops->poke(obj);
373 if (atomic_dec_and_test(&obj->refcnt))
384 if (atomic_dec_and_test(&obj->refcnt))
395 Given the above scheme, it must be the case that the obj->active
399 Otherwise, the counter could fall to zero, yet obj->active would still
403 cpu 0 cpu 1
408 obj->active = 0 ...
415 BUG() triggers since obj->active
417 obj->active update visibility occurs
423 obj->active update does.
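The get/put reference-counting scheme discussed above (atomic_inc() on get, atomic_dec_and_test() on put, destroy on the final put) can be sketched in user-space C11; the `node_*` names and the `destroyed` flag are illustrative stand-ins for the obj_get()/obj_put()/obj_destroy() pattern in the source:

```c
#include <stdatomic.h>
#include <assert.h>

struct node {
	atomic_int refcnt;
	int destroyed;		/* stand-in: a real put would free the object */
};

/* Analogue of obj_get(): take a new reference. */
static void node_get(struct node *n)
{
	atomic_fetch_add_explicit(&n->refcnt, 1, memory_order_relaxed);
}

/* Analogue of obj_put() built on an atomic_dec_and_test() equivalent:
 * fetch_sub returns the previous value, so seeing 1 means this caller
 * dropped the last reference and must destroy the object. */
static int node_put(struct node *n)
{
	if (atomic_fetch_sub_explicit(&n->refcnt, 1,
				      memory_order_acq_rel) == 1) {
		n->destroyed = 1;	/* stand-in for obj_destroy() */
		return 1;
	}
	return 0;
}
```

The acq_rel ordering on the decrement is what makes it safe to tear the object down on the final put: all prior accesses to the object happen before the destruction.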
425 As a historical note, 32-bit Sparc used to allow use of only
426 24 bits of its atomic_t type. This was because it used 8 bits
428 type instruction. However, 32-bit Sparc has since been moved over
429 to a "hash table of spinlocks" scheme that allows the full 32-bit
449 native endianness of the cpu. ::
451 void set_bit(unsigned long nr, volatile unsigned long *addr);
452 void clear_bit(unsigned long nr, volatile unsigned long *addr);
453 void change_bit(unsigned long nr, volatile unsigned long *addr);
456 indicated by "nr" on the bit mask pointed to by "addr".
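The set_bit()/clear_bit()/change_bit() prototypes above can be sketched in user-space C11 as atomic or/and/xor on an array of words; the `my_*` names are illustrative, and `_Atomic` words stand in for the kernel's `volatile unsigned long *`:

```c
#include <stdatomic.h>
#include <limits.h>
#include <assert.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Atomically set bit "nr" in the bitmask at "addr". */
static void my_set_bit(unsigned long nr, _Atomic unsigned long *addr)
{
	atomic_fetch_or_explicit(addr + nr / BITS_PER_LONG,
				 1UL << (nr % BITS_PER_LONG),
				 memory_order_relaxed);
}

/* Atomically clear bit "nr". */
static void my_clear_bit(unsigned long nr, _Atomic unsigned long *addr)
{
	atomic_fetch_and_explicit(addr + nr / BITS_PER_LONG,
				  ~(1UL << (nr % BITS_PER_LONG)),
				  memory_order_relaxed);
}

/* Atomically toggle bit "nr". */
static void my_change_bit(unsigned long nr, _Atomic unsigned long *addr)
{
	atomic_fetch_xor_explicit(addr + nr / BITS_PER_LONG,
				  1UL << (nr % BITS_PER_LONG),
				  memory_order_relaxed);
}
```

The relaxed ordering mirrors the kernel rule that these void-returning bitops imply no memory barriers of their own.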
461 int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
462 int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
463 int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
477 paths using these interfaces, so on 64-bit if the bit is set in the
478 upper 32 bits then testers will never see that.
492 obj->dead = 1;
493 if (test_and_set_bit(0, &obj->flags))
495 obj->killed = 1;
498 "obj->dead = 1;" is visible to cpus before the atomic memory operation
501 "obj->killed = 1;" is visible.
505 int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);
508 pointed to by "addr".
531 There are two special bitops with lock barrier semantics (acquire/release,
532 same as spinlocks). These operate in the same way as their non-_lock/unlock
533 postfixed variants, except that they provide acquire/release semantics,
537 int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
538 void clear_bit_unlock(unsigned long nr, unsigned long *addr);
539 void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
541 The __clear_bit_unlock version is non-atomic; however, it still implements
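The lock bitops above can be sketched in user-space C11 for a single word (so `nr` is assumed smaller than the word width): acquire ordering on the fetch_or that takes the bit, release ordering on the fetch_and that drops it, mirroring spinlock semantics. The `my_*` names are illustrative:

```c
#include <stdatomic.h>
#include <assert.h>

/* Analogue of test_and_set_bit_lock(): returns 0 if the bit was clear
 * (lock acquired), non-zero if it was already set. */
static int my_test_and_set_bit_lock(unsigned long nr,
				    _Atomic unsigned long *addr)
{
	unsigned long mask = 1UL << nr;

	return (atomic_fetch_or_explicit(addr, mask,
					 memory_order_acquire) & mask) != 0;
}

/* Analogue of clear_bit_unlock(): drop the bit with release semantics,
 * publishing everything done inside the critical section. */
static void my_clear_bit_unlock(unsigned long nr,
				_Atomic unsigned long *addr)
{
	atomic_fetch_and_explicit(addr, ~(1UL << nr), memory_order_release);
}
```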
545 Finally, there are non-atomic versions of the bitmask operations
546 provided. They are used in contexts where some other higher-level SMP
548 expensive non-atomic operations may be used in the implementation.
552 void __set_bit(unsigned long nr, volatile unsigned long *addr);
553 void __clear_bit(unsigned long nr, volatile unsigned long *addr);
554 void __change_bit(unsigned long nr, volatile unsigned long *addr);
555 int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
556 int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
557 int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
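The double-underscore variants above are ordinary read-modify-write sequences, safe only when the caller already serializes access (for example, under a lock). Two of them can be sketched in plain C; the `my__*` names are illustrative:

```c
#include <limits.h>
#include <assert.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Non-atomic analogue of __set_bit(): a plain |= on the word. */
static void my__set_bit(unsigned long nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Non-atomic analogue of __test_and_clear_bit(): read the old bit,
 * then clear it, with no atomicity between the two steps. */
static int my__test_and_clear_bit(unsigned long nr, unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *p = addr + nr / BITS_PER_LONG;
	int old = (*p & mask) != 0;

	*p &= ~mask;
	return old;
}
```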
559 These non-atomic variants also do not require any special memory
563 memory-barrier semantics as the atomic and bit operations returning
580 lock release.
583 architecture-neutral version implemented in lib/dec_and_lock.c,
626 Let's use cas() to build a pseudo-C atomic_dec_and_lock()::
636 new = old - 1;
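Only two lines of the pseudo-C survive in these matches, so here is a runnable user-space sketch of the same cas()-based atomic_dec_and_lock() idea, using C11 compare-and-swap and an atomic_flag spinlock as a stand-in for the kernel spinlock (all names and the exact control flow are assumptions, not the source's listing): decrements that cannot hit zero go through cas() without the lock; only the final 1 -> 0 transition takes the lock.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative stand-in for the kernel spinlock. */
static atomic_flag demo_lock = ATOMIC_FLAG_INIT;

/* Returns true with demo_lock held when the count dropped to zero;
 * returns false (lock not held) otherwise. */
static bool my_atomic_dec_and_lock(atomic_int *atomic)
{
	int old = atomic_load_explicit(atomic, memory_order_relaxed);

	for (;;) {
		if (old == 1)
			break;			/* would hit zero: slow path */
		int new = old - 1;		/* mirrors "new = old - 1;" */
		if (atomic_compare_exchange_weak_explicit(atomic, &old, new,
							  memory_order_acq_rel,
							  memory_order_relaxed))
			return false;		/* decremented locklessly */
		/* cas failed: 'old' was reloaded, retry */
	}

	/* Slow path: take the lock, then re-decrement under it, since the
	 * count may have been raised again in the meantime. */
	while (atomic_flag_test_and_set_explicit(&demo_lock,
						 memory_order_acquire))
		;				/* spin */
	if (atomic_fetch_sub_explicit(atomic, 1, memory_order_acq_rel) == 1)
		return true;			/* hit zero; caller unlocks */
	atomic_flag_clear_explicit(&demo_lock, memory_order_release);
	return false;
}
```

The re-check under the lock is the essential part of the scheme: between the failed fast path and the lock acquisition, another thread may have taken a new reference, in which case the caller must not tear the object down.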