1 Semantics and Behavior of Atomic and 2 Bitmask Operations 3 4 David S. Miller 5 6 This document is intended to serve as a guide to Linux port 7maintainers on how to implement atomic counter, bitops, and spinlock 8interfaces properly. 9 10 The atomic_t type should be defined as a signed integer. 11Also, it should be made opaque such that any kind of cast to a normal 12C integer type will fail. Something like the following should 13suffice: 14 15 typedef struct { int counter; } atomic_t; 16 17Historically, counter has been declared volatile. This is now discouraged. 18See Documentation/volatile-considered-harmful.txt for the complete rationale. 19 20local_t is very similar to atomic_t. If the counter is per CPU and only 21updated by one CPU, local_t is probably more appropriate. Please see 22Documentation/local_ops.txt for the semantics of local_t. 23 24The first operations to implement for atomic_t's are the initializers and 25plain reads. 26 27 #define ATOMIC_INIT(i) { (i) } 28 #define atomic_set(v, i) ((v)->counter = (i)) 29 30The first macro is used in definitions, such as: 31 32static atomic_t my_counter = ATOMIC_INIT(1); 33 34The initializer is atomic in that the return values of the atomic operations 35are guaranteed to be correct reflecting the initialized value if the 36initializer is used before runtime. If the initializer is used at runtime, a 37proper implicit or explicit read memory barrier is needed before reading the 38value with atomic_read from another thread. 39 40The second interface can be used at runtime, as in: 41 42 struct foo { atomic_t counter; }; 43 ... 44 45 struct foo *k; 46 47 k = kmalloc(sizeof(*k), GFP_KERNEL); 48 if (!k) 49 return -ENOMEM; 50 atomic_set(&k->counter, 0); 51 52The setting is atomic in that the return values of the atomic operations by 53all threads are guaranteed to be correct reflecting either the value that has 54been set with this operation or set with another operation. A proper implicit 55or explicit memory barrier is needed before the value set with the operation 56is guaranteed to be readable with atomic_read from another thread. 57 58Next, we have: 59 60 #define atomic_read(v) ((v)->counter) 61 62which simply reads the counter value currently visible to the calling thread. 63The read is atomic in that the return value is guaranteed to be one of the 64values initialized or modified with the interface operations if a proper 65implicit or explicit memory barrier is used after possible runtime 66initialization by any other thread and the value is modified only with the 67interface operations. atomic_read does not guarantee that the runtime 68initialization by any other thread is visible yet, so the user of the 69interface must take care of that with a proper implicit or explicit memory 70barrier. 71 72*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! *** 73 74Some architectures may choose to use the volatile keyword, barriers, or inline 75assembly to guarantee some degree of immediacy for atomic_read() and 76atomic_set(). This is not uniformly guaranteed, and may change in the future, 77so all users of atomic_t should treat atomic_read() and atomic_set() as simple 78C statements that may be reordered or optimized away entirely by the compiler 79or processor, and explicitly invoke the appropriate compiler and/or memory 80barrier for each use case. Failure to do so will result in code that may 81suddenly break when used with different architectures or compiler 82optimizations, or even changes in unrelated code which changes how the 83compiler optimizes the section accessing atomic_t variables. 84 85*** YOU HAVE BEEN WARNED! *** 86 87Properly aligned pointers, longs, ints, and chars (and unsigned 88equivalents) may be atomically loaded from and stored to in the same 89sense as described for atomic_read() and atomic_set(). The ACCESS_ONCE() 90macro should be used to prevent the compiler from using optimizations 91that might otherwise optimize accesses out of existence on the one hand, 92or that might create unsolicited accesses on the other. 93 94For example consider the following code: 95 96 while (a > 0) 97 do_something(); 98 99If the compiler can prove that do_something() does not store to the 100variable a, then the compiler is within its rights transforming this to 101the following: 102 103 tmp = a; 104 if (a > 0) 105 for (;;) 106 do_something(); 107 108If you don't want the compiler to do this (and you probably don't), then 109you should use something like the following: 110 111 while (ACCESS_ONCE(a) < 0) 112 do_something(); 113 114Alternatively, you could place a barrier() call in the loop. 115 116For another example, consider the following code: 117 118 tmp_a = a; 119 do_something_with(tmp_a); 120 do_something_else_with(tmp_a); 121 122If the compiler can prove that do_something_with() does not store to the 123variable a, then the compiler is within its rights to manufacture an 124additional load as follows: 125 126 tmp_a = a; 127 do_something_with(tmp_a); 128 tmp_a = a; 129 do_something_else_with(tmp_a); 130 131This could fatally confuse your code if it expected the same value 132to be passed to do_something_with() and do_something_else_with(). 133 134The compiler would be likely to manufacture this additional load if 135do_something_with() was an inline function that made very heavy use 136of registers: reloading from variable a could save a flush to the 137stack and later reload. To prevent the compiler from attacking your 138code in this manner, write the following: 139 140 tmp_a = ACCESS_ONCE(a); 141 do_something_with(tmp_a); 142 do_something_else_with(tmp_a); 143 144For a final example, consider the following code, assuming that the 145variable a is set at boot time before the second CPU is brought online 146and never changed later, so that memory barriers are not needed: 147 148 if (a) 149 b = 9; 150 else 151 b = 42; 152 153The compiler is within its rights to manufacture an additional store 154by transforming the above code into the following: 155 156 b = 42; 157 if (a) 158 b = 9; 159 160This could come as a fatal surprise to other code running concurrently 161that expected b to never have the value 42 if a was zero. To prevent 162the compiler from doing this, write something like: 163 164 if (a) 165 ACCESS_ONCE(b) = 9; 166 else 167 ACCESS_ONCE(b) = 42; 168 169Don't even -think- about doing this without proper use of memory barriers, 170locks, or atomic operations if variable a can change at runtime! 171 172*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! *** 173 174Now, we move onto the atomic operation interfaces typically implemented with 175the help of assembly code. 176 177 void atomic_add(int i, atomic_t *v); 178 void atomic_sub(int i, atomic_t *v); 179 void atomic_inc(atomic_t *v); 180 void atomic_dec(atomic_t *v); 181 182These four routines add and subtract integral values to/from the given 183atomic_t value. The first two routines pass explicit integers by 184which to make the adjustment, whereas the latter two use an implicit 185adjustment value of "1". 186 187One very important aspect of these two routines is that they DO NOT 188require any explicit memory barriers. They need only perform the 189atomic_t counter update in an SMP safe manner. 190 191Next, we have: 192 193 int atomic_inc_return(atomic_t *v); 194 int atomic_dec_return(atomic_t *v); 195 196These routines add 1 and subtract 1, respectively, from the given 197atomic_t and return the new counter value after the operation is 198performed. 199 200Unlike the above routines, it is required that explicit memory 201barriers are performed before and after the operation. It must be 202done such that all memory operations before and after the atomic 203operation calls are strongly ordered with respect to the atomic 204operation itself. 205 206For example, it should behave as if a smp_mb() call existed both 207before and after the atomic operation. 208 209If the atomic instructions used in an implementation provide explicit 210memory barrier semantics which satisfy the above requirements, that is 211fine as well. 212 213Let's move on: 214 215 int atomic_add_return(int i, atomic_t *v); 216 int atomic_sub_return(int i, atomic_t *v); 217 218These behave just like atomic_{inc,dec}_return() except that an 219explicit counter adjustment is given instead of the implicit "1". 220This means that like atomic_{inc,dec}_return(), the memory barrier 221semantics are required. 222 223Next: 224 225 int atomic_inc_and_test(atomic_t *v); 226 int atomic_dec_and_test(atomic_t *v); 227 228These two routines increment and decrement by 1, respectively, the 229given atomic counter. They return a boolean indicating whether the 230resulting counter value was zero or not. 231 232It requires explicit memory barrier semantics around the operation as 233above. 234 235 int atomic_sub_and_test(int i, atomic_t *v); 236 237This is identical to atomic_dec_and_test() except that an explicit 238decrement is given instead of the implicit "1". It requires explicit 239memory barrier semantics around the operation. 240 241 int atomic_add_negative(int i, atomic_t *v); 242 243The given increment is added to the given atomic counter value. A 244boolean is return which indicates whether the resulting counter value 245is negative. It requires explicit memory barrier semantics around the 246operation. 247 248Then: 249 250 int atomic_xchg(atomic_t *v, int new); 251 252This performs an atomic exchange operation on the atomic variable v, setting 253the given new value. It returns the old value that the atomic variable v had 254just before the operation. 255 256 int atomic_cmpxchg(atomic_t *v, int old, int new); 257 258This performs an atomic compare exchange operation on the atomic value v, 259with the given old and new values. Like all atomic_xxx operations, 260atomic_cmpxchg will only satisfy its atomicity semantics as long as all 261other accesses of *v are performed through atomic_xxx operations. 262 263atomic_cmpxchg requires explicit memory barriers around the operation. 264 265The semantics for atomic_cmpxchg are the same as those defined for 'cas' 266below. 267 268Finally: 269 270 int atomic_add_unless(atomic_t *v, int a, int u); 271 272If the atomic value v is not equal to u, this function adds a to v, and 273returns non zero. If v is equal to u then it returns zero. This is done as 274an atomic operation. 275 276atomic_add_unless requires explicit memory barriers around the operation 277unless it fails (returns 0). 278 279atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0) 280 281 282If a caller requires memory barrier semantics around an atomic_t 283operation which does not return a value, a set of interfaces are 284defined which accomplish this: 285 286 void smp_mb__before_atomic_dec(void); 287 void smp_mb__after_atomic_dec(void); 288 void smp_mb__before_atomic_inc(void); 289 void smp_mb__after_atomic_inc(void); 290 291For example, smp_mb__before_atomic_dec() can be used like so: 292 293 obj->dead = 1; 294 smp_mb__before_atomic_dec(); 295 atomic_dec(&obj->ref_count); 296 297It makes sure that all memory operations preceding the atomic_dec() 298call are strongly ordered with respect to the atomic counter 299operation. In the above example, it guarantees that the assignment of 300"1" to obj->dead will be globally visible to other cpus before the 301atomic counter decrement. 302 303Without the explicit smp_mb__before_atomic_dec() call, the 304implementation could legally allow the atomic counter update visible 305to other cpus before the "obj->dead = 1;" assignment. 306 307The other three interfaces listed are used to provide explicit 308ordering with respect to memory operations after an atomic_dec() call 309(smp_mb__after_atomic_dec()) and around atomic_inc() calls 310(smp_mb__{before,after}_atomic_inc()). 311 312A missing memory barrier in the cases where they are required by the 313atomic_t implementation above can have disastrous results. Here is 314an example, which follows a pattern occurring frequently in the Linux 315kernel. It is the use of atomic counters to implement reference 316counting, and it works such that once the counter falls to zero it can 317be guaranteed that no other entity can be accessing the object: 318 319static void obj_list_add(struct obj *obj, struct list_head *head) 320{ 321 obj->active = 1; 322 list_add(&obj->list, head); 323} 324 325static void obj_list_del(struct obj *obj) 326{ 327 list_del(&obj->list); 328 obj->active = 0; 329} 330 331static void obj_destroy(struct obj *obj) 332{ 333 BUG_ON(obj->active); 334 kfree(obj); 335} 336 337struct obj *obj_list_peek(struct list_head *head) 338{ 339 if (!list_empty(head)) { 340 struct obj *obj; 341 342 obj = list_entry(head->next, struct obj, list); 343 atomic_inc(&obj->refcnt); 344 return obj; 345 } 346 return NULL; 347} 348 349void obj_poke(void) 350{ 351 struct obj *obj; 352 353 spin_lock(&global_list_lock); 354 obj = obj_list_peek(&global_list); 355 spin_unlock(&global_list_lock); 356 357 if (obj) { 358 obj->ops->poke(obj); 359 if (atomic_dec_and_test(&obj->refcnt)) 360 obj_destroy(obj); 361 } 362} 363 364void obj_timeout(struct obj *obj) 365{ 366 spin_lock(&global_list_lock); 367 obj_list_del(obj); 368 spin_unlock(&global_list_lock); 369 370 if (atomic_dec_and_test(&obj->refcnt)) 371 obj_destroy(obj); 372} 373 374(This is a simplification of the ARP queue management in the 375 generic neighbour discover code of the networking. Olaf Kirch 376 found a bug wrt. memory barriers in kfree_skb() that exposed 377 the atomic_t memory barrier requirements quite clearly.) 378 379Given the above scheme, it must be the case that the obj->active 380update done by the obj list deletion be visible to other processors 381before the atomic counter decrement is performed. 382 383Otherwise, the counter could fall to zero, yet obj->active would still 384be set, thus triggering the assertion in obj_destroy(). The error 385sequence looks like this: 386 387 cpu 0 cpu 1 388 obj_poke() obj_timeout() 389 obj = obj_list_peek(); 390 ... gains ref to obj, refcnt=2 391 obj_list_del(obj); 392 obj->active = 0 ... 393 ... visibility delayed ... 394 atomic_dec_and_test() 395 ... refcnt drops to 1 ... 396 atomic_dec_and_test() 397 ... refcount drops to 0 ... 398 obj_destroy() 399 BUG() triggers since obj->active 400 still seen as one 401 obj->active update visibility occurs 402 403With the memory barrier semantics required of the atomic_t operations 404which return values, the above sequence of memory visibility can never 405happen. Specifically, in the above case the atomic_dec_and_test() 406counter decrement would not become globally visible until the 407obj->active update does. 408 409As a historical note, 32-bit Sparc used to only allow usage of 41024-bits of its atomic_t type. This was because it used 8 bits 411as a spinlock for SMP safety. Sparc32 lacked a "compare and swap" 412type instruction. However, 32-bit Sparc has since been moved over 413to a "hash table of spinlocks" scheme, that allows the full 32-bit 414counter to be realized. Essentially, an array of spinlocks are 415indexed into based upon the address of the atomic_t being operated 416on, and that lock protects the atomic operation. Parisc uses the 417same scheme. 418 419Another note is that the atomic_t operations returning values are 420extremely slow on an old 386. 421 422We will now cover the atomic bitmask operations. You will find that 423their SMP and memory barrier semantics are similar in shape and scope 424to the atomic_t ops above. 425 426Native atomic bit operations are defined to operate on objects aligned 427to the size of an "unsigned long" C data type, and are least of that 428size. The endianness of the bits within each "unsigned long" are the 429native endianness of the cpu. 430 431 void set_bit(unsigned long nr, volatile unsigned long *addr); 432 void clear_bit(unsigned long nr, volatile unsigned long *addr); 433 void change_bit(unsigned long nr, volatile unsigned long *addr); 434 435These routines set, clear, and change, respectively, the bit number 436indicated by "nr" on the bit mask pointed to by "ADDR". 437 438They must execute atomically, yet there are no implicit memory barrier 439semantics required of these interfaces. 440 441 int test_and_set_bit(unsigned long nr, volatile unsigned long *addr); 442 int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr); 443 int test_and_change_bit(unsigned long nr, volatile unsigned long *addr); 444 445Like the above, except that these routines return a boolean which 446indicates whether the changed bit was set _BEFORE_ the atomic bit 447operation. 448 449WARNING! It is incredibly important that the value be a boolean, 450ie. "0" or "1". Do not try to be fancy and save a few instructions by 451declaring the above to return "long" and just returning something like 452"old_val & mask" because that will not work. 453 454For one thing, this return value gets truncated to int in many code 455paths using these interfaces, so on 64-bit if the bit is set in the 456upper 32-bits then testers will never see that. 457 458One great example of where this problem crops up are the thread_info 459flag operations. Routines such as test_and_set_ti_thread_flag() chop 460the return value into an int. There are other places where things 461like this occur as well. 462 463These routines, like the atomic_t counter operations returning values, 464require explicit memory barrier semantics around their execution. All 465memory operations before the atomic bit operation call must be made 466visible globally before the atomic bit operation is made visible. 467Likewise, the atomic bit operation must be visible globally before any 468subsequent memory operation is made visible. For example: 469 470 obj->dead = 1; 471 if (test_and_set_bit(0, &obj->flags)) 472 /* ... */; 473 obj->killed = 1; 474 475The implementation of test_and_set_bit() must guarantee that 476"obj->dead = 1;" is visible to cpus before the atomic memory operation 477done by test_and_set_bit() becomes visible. Likewise, the atomic 478memory operation done by test_and_set_bit() must become visible before 479"obj->killed = 1;" is visible. 480 481Finally there is the basic operation: 482 483 int test_bit(unsigned long nr, __const__ volatile unsigned long *addr); 484 485Which returns a boolean indicating if bit "nr" is set in the bitmask 486pointed to by "addr". 487 488If explicit memory barriers are required around clear_bit() (which 489does not return a value, and thus does not need to provide memory 490barrier semantics), two interfaces are provided: 491 492 void smp_mb__before_clear_bit(void); 493 void smp_mb__after_clear_bit(void); 494 495They are used as follows, and are akin to their atomic_t operation 496brothers: 497 498 /* All memory operations before this call will 499 * be globally visible before the clear_bit(). 500 */ 501 smp_mb__before_clear_bit(); 502 clear_bit( ... ); 503 504 /* The clear_bit() will be visible before all 505 * subsequent memory operations. 506 */ 507 smp_mb__after_clear_bit(); 508 509There are two special bitops with lock barrier semantics (acquire/release, 510same as spinlocks). These operate in the same way as their non-_lock/unlock 511postfixed variants, except that they are to provide acquire/release semantics, 512respectively. This means they can be used for bit_spin_trylock and 513bit_spin_unlock type operations without specifying any more barriers. 514 515 int test_and_set_bit_lock(unsigned long nr, unsigned long *addr); 516 void clear_bit_unlock(unsigned long nr, unsigned long *addr); 517 void __clear_bit_unlock(unsigned long nr, unsigned long *addr); 518 519The __clear_bit_unlock version is non-atomic, however it still implements 520unlock barrier semantics. This can be useful if the lock itself is protecting 521the other bits in the word. 522 523Finally, there are non-atomic versions of the bitmask operations 524provided. They are used in contexts where some other higher-level SMP 525locking scheme is being used to protect the bitmask, and thus less 526expensive non-atomic operations may be used in the implementation. 527They have names similar to the above bitmask operation interfaces, 528except that two underscores are prefixed to the interface name. 529 530 void __set_bit(unsigned long nr, volatile unsigned long *addr); 531 void __clear_bit(unsigned long nr, volatile unsigned long *addr); 532 void __change_bit(unsigned long nr, volatile unsigned long *addr); 533 int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr); 534 int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr); 535 int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr); 536 537These non-atomic variants also do not require any special memory 538barrier semantics. 539 540The routines xchg() and cmpxchg() need the same exact memory barriers 541as the atomic and bit operations returning values. 542 543Spinlocks and rwlocks have memory barrier expectations as well. 544The rule to follow is simple: 545 5461) When acquiring a lock, the implementation must make it globally 547 visible before any subsequent memory operation. 548 5492) When releasing a lock, the implementation must make it such that 550 all previous memory operations are globally visible before the 551 lock release. 552 553Which finally brings us to _atomic_dec_and_lock(). There is an 554architecture-neutral version implemented in lib/dec_and_lock.c, 555but most platforms will wish to optimize this in assembler. 556 557 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock); 558 559Atomically decrement the given counter, and if will drop to zero 560atomically acquire the given spinlock and perform the decrement 561of the counter to zero. If it does not drop to zero, do nothing 562with the spinlock. 563 564It is actually pretty simple to get the memory barrier correct. 565Simply satisfy the spinlock grab requirements, which is make 566sure the spinlock operation is globally visible before any 567subsequent memory operation. 568 569We can demonstrate this operation more clearly if we define 570an abstract atomic operation: 571 572 long cas(long *mem, long old, long new); 573 574"cas" stands for "compare and swap". It atomically: 575 5761) Compares "old" with the value currently at "mem". 5772) If they are equal, "new" is written to "mem". 5783) Regardless, the current value at "mem" is returned. 579 580As an example usage, here is what an atomic counter update 581might look like: 582 583void example_atomic_inc(long *counter) 584{ 585 long old, new, ret; 586 587 while (1) { 588 old = *counter; 589 new = old + 1; 590 591 ret = cas(counter, old, new); 592 if (ret == old) 593 break; 594 } 595} 596 597Let's use cas() in order to build a pseudo-C atomic_dec_and_lock(): 598 599int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) 600{ 601 long old, new, ret; 602 int went_to_zero; 603 604 went_to_zero = 0; 605 while (1) { 606 old = atomic_read(atomic); 607 new = old - 1; 608 if (new == 0) { 609 went_to_zero = 1; 610 spin_lock(lock); 611 } 612 ret = cas(atomic, old, new); 613 if (ret == old) 614 break; 615 if (went_to_zero) { 616 spin_unlock(lock); 617 went_to_zero = 0; 618 } 619 } 620 621 return went_to_zero; 622} 623 624Now, as far as memory barriers go, as long as spin_lock() 625strictly orders all subsequent memory operations (including 626the cas()) with respect to itself, things will be fine. 627 628Said another way, _atomic_dec_and_lock() must guarantee that 629a counter dropping to zero is never made visible before the 630spinlock being acquired. 631 632Note that this also means that for the case where the counter 633is not dropping to zero, there are no memory ordering 634requirements. 635