			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = A;
	B = 4;		y = B;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	x=LOAD A->3,	y=LOAD B->4
	STORE A=3,	STORE B=4,	y=LOAD B->4,	x=LOAD A->3
	STORE A=3,	x=LOAD A->3,	STORE B=4,	y=LOAD B->4
	STORE A=3,	x=LOAD A->3,	y=LOAD B->2,	STORE B=4
	STORE A=3,	y=LOAD B->2,	STORE B=4,	x=LOAD A->3
	STORE A=3,	y=LOAD B->2,	x=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	x=LOAD A->3,	y=LOAD B->4
	STORE B=4, ...
	...
and can thus result in four different combinations of values:

	x == 1, y == 2
	x == 1, y == 4
	x == 3, y == 2
	x == 3, y == 4


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = P; D = *Q;

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = *X; *X = b;

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	*X = c; d = *X;

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; Y = *A;

     we may get either of:

	STORE *A = X; Y = LOAD *A;
	STORE *A = Y = X;
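
The abstract model above can be poked at from userspace.  What follows is a
minimal sketch - ordinary C11 code, not kernel code - that mirrors the first
example, with relaxed atomics standing in for the plain accesses; across
repeated runs, any of the four (x, y) outcomes listed above may be observed:

	#include <stdatomic.h>
	#include <pthread.h>
	#include <stdio.h>

	static atomic_int A, B;		/* stand-ins for A and B above */
	static int x, y;

	static void *cpu1(void *unused)
	{
		atomic_store_explicit(&A, 3, memory_order_relaxed);
		atomic_store_explicit(&B, 4, memory_order_relaxed);
		return NULL;
	}

	static void *cpu2(void *unused)
	{
		x = atomic_load_explicit(&A, memory_order_relaxed);
		y = atomic_load_explicit(&B, memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		atomic_store(&A, 1);	/* { A == 1, B == 2 } */
		atomic_store(&B, 2);
		pthread_create(&t1, NULL, cpu1, NULL);
		pthread_create(&t2, NULL, cpu2, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		printf("x == %d, y == %d\n", x, y);
		return 0;
	}

Build with something like "cc -O2 -pthread litmus.c".  Note that mere
interleaving of the two threads already yields all four outcomes; the relaxed
atomics additionally permit the compiler and CPU reorderings described above.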
=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.
     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) LOCK operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the LOCK operation will appear to happen after the LOCK
     operation with respect to the other components of the system.

     Memory operations that occur before a LOCK operation may appear to happen
     after it completes.

     A LOCK operation should almost always be paired with an UNLOCK operation.


 (6) UNLOCK operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the UNLOCK operation will appear to happen before
     the UNLOCK operation with respect to the other components of the system.

     Memory operations that occur after an UNLOCK operation may appear to
     happen before it completes.

     LOCK and UNLOCK operations are guaranteed to appear with respect to each
     other strictly in the order specified.

     The use of LOCK and UNLOCK operations generally precludes the need for
     other sorts of memory barrier (but note the exceptions mentioned in the
     subsection "MMIO write barrier").  A sketch of this one-way permeability
     is given at the end of this subsection.


Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of
arch-specific code.
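
To make the one-way permeability of (5) and (6) concrete, here is a minimal
sketch; the lock "mylock" and the locations *A, *B and *C are purely
illustrative, and the comments carry the point:

	*A = a;			/* may migrate past the LOCK, into the   */
	spin_lock(&mylock);	/* section: a LOCK is a one-way barrier  */
	*B = b;			/* guaranteed to stay inside the section */
	spin_unlock(&mylock);	/* an UNLOCK is also one-way...          */
	*C = c;			/* ...so this store may migrate up into  */
				/* the section, past the UNLOCK          */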
WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/PCI/PCI-DMA-mapping.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	P = &B;
			Q = P;
			D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	P = &B;
			Q = P;
			<data dependency barrier>
			D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).
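
Rendered as kernel-style C, the barriered version above might look like the
following sketch (the structure and names are illustrative;
smp_read_barrier_depends() compiles to nothing on most architectures, but is
essential on the DEC Alpha):

	struct foo {
		int a;
	};

	struct foo *p;			/* the shared pointer P above   */

	/* CPU 1 */
	void publish(struct foo *new)
	{
		new->a = 4;			/* B = 4                      */
		smp_wmb();			/* <write barrier>            */
		p = new;			/* P = &B                     */
	}

	/* CPU 2 */
	int consume(void)
	{
		struct foo *q = p;		/* Q = P                      */
		smp_read_barrier_depends();	/* <data dependency barrier>  */
		return q->a;			/* D = *Q                     */
	}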

Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1		CPU 2
	===============	===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	P = 1;
			Q = P;
			<data dependency barrier>
			D = M[Q];


The data dependency barrier is very important to the RCU system, for example.
See rcu_dereference() in include/linux/rcupdate.h.  This permits the current
target of an RCU'd pointer to be replaced with a new modified target, without
the replacement target appearing to be incompletely initialised.

See also the subsection on "Cache coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier to make it work correctly.  Consider the following bit of
code:

	q = &a;
	if (p)
		q = &b;
	<data dependency barrier>
	x = *q;

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit by
attempting to predict the outcome in advance.  In such a case what's actually
required is:

	q = &a;
	if (p)
		q = &b;
	<read barrier>
	x = *q;


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read
barrier, though a general barrier would also be viable.  Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:

	CPU 1		CPU 2
	===============	===============
	a = 1;
	<write barrier>
	b = 2;		x = b;
			<read barrier>
			y = a;

Or:

	CPU 1		CPU 2
	===============	===============================
	a = 1;
	<write barrier>
	b = &a;		x = b;
			<data dependency barrier>
			y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.  A kernel-style sketch of the first pairing is given at the
end of this subsection.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                           CPU 2
	===============                 ===============
	a = 1;           }----   --->{ v = c;
	b = 2;           }    \  /   { w = d;
	<write barrier>        \        <read barrier>
	c = 3;           }    /  \   { x = a;
	d = 4;           }----   --->{ y = b;
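
As kernel-style C, the first pairing above might be sketched like this (the
variables are illustrative; the producer's smp_wmb() pairs with the consumer's
smp_rmb()):

	int a, b;

	/* CPU 1 */
	void producer(void)
	{
		a = 1;
		smp_wmb();		/* commit a before b            */
		b = 2;
	}

	/* CPU 2 */
	void consumer(void)
	{
		int x, y;

		x = b;
		smp_rmb();		/* pairs with the smp_wmb()     */
		y = a;

		if (x == 2)
			BUG_ON(y != 1);	/* guaranteed by the pairing    */
	}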

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	           |
	           | Sequence in which stores are committed to the
	           | memory system by CPU 1
	           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.
If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier - lesser varieties of compiler barrier do not exist.

The compiler barrier has no direct effect on the CPU, which may then reorder
things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE		MANDATORY		SMP CONDITIONAL
	===============	=======================	===========================
	GENERAL		mb()			smp_mb()
	WRITE		wmb()			smp_wmb()
	READ		rmb()			smp_rmb()
	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. a[b] would have to load the value of
b before loading a[b]), however there is no guarantee in the C specification
that the compiler may not speculate the value of b (eg. is equal to 1) and load
a[b] before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).  There is also the
problem of a compiler reloading b after having loaded a[b], thus having a newer
copy of b than a[b].  A consensus has not yet been reached about these
problems, however the ACCESS_ONCE macro is a good place to start looking.
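
As an illustration of what the compiler alone may do, consider busy-waiting on
a shared flag (the variable is illustrative).  Without help, the compiler is
entitled to read the flag once and spin forever on the cached value; barrier()
or ACCESS_ONCE() forces it to re-read the flag on each pass.  This is a sketch
of the compiler-level problem only - CPU-level ordering still needs the
barriers above:

	extern int flag;

	void spin_broken(void)
	{
		while (!flag)
			;		/* compiler may hoist the load */
	}

	void spin_fixed(void)
	{
		while (!ACCESS_ONCE(flag))
			;		/* flag is re-read every time  */
	}

	/* Alternatively: */
	void spin_fixed_too(void)
	{
		while (!flag)
			barrier();	/* forces the re-read          */
	}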

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.


There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic_dec();
 (*) smp_mb__after_atomic_dec();
 (*) smp_mb__before_atomic_inc();
 (*) smp_mb__after_atomic_inc();

     These are for use with atomic add, subtract, increment and decrement
     functions that don't return a value, especially when used for reference
     counting.  These functions do not imply memory barriers.

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) smp_mb__before_clear_bit(void);
 (*) smp_mb__after_clear_bit(void);

     These are used in a similar way to the atomic inc/dec barriers.  They are
     typically used for bitwise unlocking operations, so care must be taken as
     there are no implicit memory barriers here either.

     Consider implementing an unlock operation of some nature by clearing a
     locking bit.  The clear_bit() would then need to be barriered like this:

	smp_mb__before_clear_bit();
	clear_bit( ... );

     This prevents memory operations before the clear leaking to after it.  See
     the subsection on "Locking functions" with reference to UNLOCK operation
     implications.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch-specific code.

LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "LOCK" operations and "UNLOCK" operations
for each construct.  These operations all imply certain barriers:

 (1) LOCK operation implication:

     Memory operations issued after the LOCK will be completed after the LOCK
     operation has completed.

     Memory operations issued before the LOCK may be completed after the LOCK
     operation has completed.

 (2) UNLOCK operation implication:

     Memory operations issued before the UNLOCK will be completed before the
     UNLOCK operation has completed.

     Memory operations issued after the UNLOCK may be completed before the
     UNLOCK operation has completed.

 (3) LOCK vs LOCK implication:

     All LOCK operations issued before another LOCK operation will be completed
     before that LOCK operation.

 (4) LOCK vs UNLOCK implication:

     All LOCK operations issued before an UNLOCK operation will be completed
     before the UNLOCK operation.

     All UNLOCK operations issued before a LOCK operation will be completed
     before the LOCK operation.

 (5) Failed conditional LOCK implication:

     Certain variants of the LOCK operation may fail, either due to being
     unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.

[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
    barriers is that the effects of instructions outside of a critical section
    may seep into the inside of the critical section.

A LOCK followed by an UNLOCK may not be assumed to be a full memory barrier
because it is possible for an access preceding the LOCK to happen after the
LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
two accesses can themselves then cross:

	*A = a;
	LOCK
	UNLOCK
	*B = b;

may occur as:

	LOCK, STORE *B, STORE *A, UNLOCK

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	LOCK
	*C = c;
	*D = d;
	UNLOCK
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	LOCK, *C, *D,	UNLOCK, *E
	*A, *B, *C,	LOCK, *D,	UNLOCK, *E, *F
	*A, *B,		LOCK, *C,	UNLOCK, *D, *E, *F
	*B,		LOCK, *C, *D,	UNLOCK, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (LOCK equivalent) and enable interrupts
(UNLOCK equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some other
means.


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


=================================
INTER-CPU LOCKING BARRIER EFFECTS
=================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


LOCKS VS MEMORY ACCESSES
------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	*A = a;				*E = e;
	LOCK M				LOCK Q
	*B = b;				*F = f;
	*C = c;				*G = g;
	UNLOCK M			UNLOCK Q
	*D = d;				*H = h;

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M

But it won't see any of:

	*B, *C or *D preceding LOCK M
	*A, *B or *C following UNLOCK M
	*F, *G or *H preceding LOCK Q
	*E, *F or *G following UNLOCK Q


However, if the following occurs:

	CPU 1				CPU 2
	===============================	===============================
	*A = a;
	LOCK M		[1]
	*B = b;
	*C = c;
	UNLOCK M	[1]
	*D = d;				*E = e;
					LOCK M		[2]
					*F = f;
					*G = g;
					UNLOCK M	[2]
					*H = h;

CPU 3 might see:

	*E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
		LOCK M [2], *H, *F, *G, UNLOCK M [2], *D

But assuming CPU 1 gets the lock first, CPU 3 won't see any of:

	*B, *C, *D, *F, *G or *H preceding LOCK M [1]
	*A, *B or *C following UNLOCK M [1]
	*F, *G or *H preceding LOCK M [2]
	*A, *B, *C, *E, *F or *G following UNLOCK M [2]


LOCKS VS I/O ACCESSES
---------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
	down_xxx()
	Queue waiter
	Sleep
					up_yyy()
					LOAD waiter->task;
					STORE waiter->task;
	Woken up by other event
	<preempt>
	Resume processing
	down_xxx() returns
	call foo()
	foo() clobbers *waiter
	</preempt>
					LOAD waiter->list.next;
					--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.
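
Expressed as C, the barriered wakeup sequence above might look something like
the following sketch - modelled on, but not identical to, the kernel's rwsem
code; the function name is illustrative:

	static void wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct task_struct *tsk;

		list_del(&waiter->list);	/* LOAD waiter->list.next    */
		tsk = waiter->task;		/* LOAD waiter->task         */
		smp_mb();			/* all reads of *waiter come
						 * before the store below    */
		waiter->task = NULL;		/* STORE waiter->task; the
						 * waiter may now vanish off
						 * its own stack             */
		wake_up_process(tsk);		/* CALL wakeup               */
		put_task_struct(tsk);		/* RELEASE task              */
	}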

ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_cmpxchg();
	atomic_inc_return();
	atomic_dec_return();
	atomic_add_return();
	atomic_sub_return();
	atomic_inc_and_test();
	atomic_dec_and_test();
	atomic_sub_and_test();
	atomic_add_negative();
	atomic_add_unless();	/* when it succeeds (returns 1) */
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement LOCK-class and UNLOCK-class operations.  These should be used
in preference to other operations when implementing locking primitives, because
their implementations can be optimised on many architectures; a sketch of a
simple bit lock built on them is given at the end of this subsection.

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.
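
As an illustration, a simple bit lock built on these primitives might be
sketched as follows (the lock word and bit number are illustrative):

	static unsigned long lock_word;

	static void example_bit_lock(void)
	{
		while (test_and_set_bit_lock(0, &lock_word))	/* LOCK-class   */
			cpu_relax();
	}

	static void example_bit_unlock(void)
	{
		clear_bit_unlock(0, &lock_word);		/* UNLOCK-class */
	}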

ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt-disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers (a sketch is given at the end of
this subsection).  If this isn't sufficient then an mmiowb() may need to be
used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.
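
For the ethernet card example above, one such synchronous load would be a
read-back from the card before interrupts are re-enabled - a sketch, assuming
that reading this hypothetical card's address register back is harmless:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	q = readw(ADDR);	/* synchronous load: both stores must reach
				 * the card before this read can complete  */
	LOCAL IRQ ENABLE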

==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available; a sketch of
     supplying the ordering by hand follows this list.

 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
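
For instance, a driver using the relaxed accessors for a status/data register
pair would have to supply the ordering itself with a mandatory barrier - a
sketch, with illustrative register offsets:

	u32 status, data;

	status = readl_relaxed(dev->regs + STATUS);	/* unordered...      */
	rmb();						/* ...so order them  */
	data = readl_relaxed(dev->regs + DATA);	/* reads STATUS first        */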
In concrete terms, this means that the CPU must be considered to execute its
instruction stream in any order it feels like - or even in parallel - provided
that if an instruction in the stream depends on an earlier instruction, then
that earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->           :       <----------- Memory ----------->
	                            :
	+--------+    +--------+    :    +--------+    +-----------+
	|        |    |        |    :    |        |    |           |    +--------+
	|  CPU   |    | Memory |    :    |  CPU   |    |           |    |        |
	|  Core  |--->| Access |-------->|  Cache |<-->|           |    |        |
	|        |    | Queue  |    :    |        |    |           |--->| Memory |
	|        |    |        |    :    |        |    |           |    |        |
	+--------+    +--------+    :    +--------+    |           |    |        |
	                            :                  |   Cache   |    +--------+
	                            :                  | Coherency |
	                            :                  | Mechanism |    +--------+
	+--------+    +--------+    :    +--------+    |           |    |        |
	|        |    |        |    :    |        |    |           |    |        |
	|  CPU   |    | Memory |    :    |  CPU   |    |           |--->| Device |
	|  Core  |--->| Access |-------->|  Cache |<-->|           |    |        |
	|        |    | Queue  |    :    |        |    |           |    |        |
	|        |    |        |    :    |        |    |           |    +--------+
	+--------+    +--------+    :    +--------+    +-----------+
	                            :
	                            :

Although any particular load or store may not actually appear outside of the
CPU that issued it, since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned, since the cache coherency mechanisms will
migrate the cacheline over to the accessing CPU and propagate the effects upon
conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an
instruction to complete.  The classic "store buffering" pattern sketched below
is one visible consequence of this queueing.
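In the following hedged sketch (the variables and functions are invented;
imagine the two functions running concurrently on different CPUs), each CPU's
store may still be sitting in that CPU's memory access queue when its
subsequent load is performed, so without the barriers both r1 and r2 could be
observed as zero.  A full smp_mb() on each side forbids that outcome; note
that smp_wmb() or smp_rmb() would not be strong enough here, since a store
must be ordered against a later load:

	int x, y;
	int r1, r2;

	void cpu1_thread(void)		/* runs on CPU 1 */
	{
		x = 1;
		smp_mb();	/* force the store to x out of the access
				 * queue before the load of y is performed */
		r1 = y;
	}

	void cpu2_thread(void)		/* runs on CPU 2 */
	{
		y = 1;
		smp_mb();
		r2 = x;
	}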
What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends
on the properties of the memory window through which devices are accessed
and/or the use of any special device communication instructions the CPU may
have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that
     cache to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();                      Make sure change to v is visible before
	                                 change to p
	<A:modify v=2>                  v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	...
	                q = p;
	                x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>  <C:busy>
	                <C:queue v=2>
	p = &v;         q = p;
	                <D:request p>
	<B:modify p=&v> <D:commit p=&v>
	                <D:read p>
	                x = *q;
	                <C:read *q>     Reads from v before v updated in cache
	                <C:unbusy>
	                <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>  <C:busy>
	                <C:queue v=2>
	p = &v;         q = p;
	                <D:request p>
	<B:modify p=&v> <D:commit p=&v>
	                <D:read p>
	                smp_read_barrier_depends()
	                <C:unbusy>
	                <C:commit v=2>
	                x = *q;
	                <C:read *q>     Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but they must coordinate between the
various cachelets for normal memory accesses.  The Alpha's semantics remove
the need for that coordination in the absence of a memory barrier, which is
why the barrier must be supplied explicitly there.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.
In such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory-mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
from those of the window directed at ordinary RAM.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO
accesses may, in effect, overtake accesses to cached memory that were emitted
earlier.  A memory barrier isn't sufficient in such a case; rather, the cache
must be flushed between the cached memory write and the MMIO access if the
two are in any way dependent, as the sketch below illustrates.
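As a hedged illustration (the ring, descriptor and doorbell names are invented
for the example, and it is assumed that the descriptor ring was mapped for
streaming DMA beforehand): on a system that is not DMA-coherent, the streaming
DMA API can be used to perform whatever cache flushing the architecture
requires before an MMIO doorbell write tells the device to look at memory:

	/* Hypothetical descriptor layout */
	struct desc {
		dma_addr_t addr;
		u32 len;
	};

	static void card_post_buffer(struct device *dev, struct desc *ring,
				     dma_addr_t ring_dma, unsigned int tail,
				     void __iomem *doorbell,
				     dma_addr_t buf, u32 len)
	{
		ring[tail].addr = buf;		/* writes to cacheable RAM */
		ring[tail].len = len;

		/* Flush the descriptor's cacheline out to RAM so that the
		 * device will see the new contents; a memory barrier alone
		 * cannot achieve this on a non-coherent system. */
		dma_sync_single_for_device(dev,
					   ring_dma + tail * sizeof(struct desc),
					   sizeof(struct desc), DMA_TO_DEVICE);

		writel(tail, doorbell);		/* uncached MMIO: goes straight
						 * to the device */
	}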
=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = *A;
	*B = b;
	c = *C;
	d = *D;
	*E = e;

then the programmer would expect that the CPU will complete the memory
operation for each instruction before moving on to the next one, leading to a
definite sequence of operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and PCI
     devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the
     cache - there's no guarantee that the coherency management will be
     propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = *A;
	*A = V;
	*A = W;
	X = *A;
	*A = Y;
	Z = *A;

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.  For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without a write barrier, it can be assumed that the effect of the
storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier, be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
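If the intermediate values genuinely matter - because a device or another CPU
is watching the location, say - the compiler must be told not to optimise the
accesses away.  The following is a minimal sketch using the kernel's explicit
compiler barrier; note that barrier() constrains only the compiler, not the
CPU:

	*A = V;
	barrier();	/* the compiler must now assume that *A may have been
			 * read, so the store of V cannot be discarded */
	*A = W;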
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary, as it synchronises
both caches with the memory coherence system, making it appear that a change
to a pointer and the appearance of the new data it points to occur in the
right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access