Reference-count design for elements of lists/arrays protected by RCU.

Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            read_lock(&list_lock);
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    write_lock(&list_lock);                 ...
    add_element                             read_unlock(&list_lock);
    ...                                     ...
    write_unlock(&list_lock);           }
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     write_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        kfree(el);                          remove_element
    ...                                     write_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                kfree(el);
                                            ...
                                        }

If this list/array is made lock-free using RCU, as in changing the
write_lock() in add() and delete() to spin_lock() and changing read_lock()
in search_and_reference() to rcu_read_lock(), the atomic_inc() in
search_and_reference() could potentially hold a reference to an element
which has already been deleted from the list/array.  Use
atomic_inc_not_zero() in this scenario as follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 if (!atomic_inc_not_zero(&el->rc)) {
    spin_lock(&list_lock);                      rcu_read_unlock();
                                                return FAIL;
    add_element                             }
    ...                                     ...
    spin_unlock(&list_lock);                rcu_read_unlock();
}                                       }

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       remove_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                call_rcu(&el->head, el_free);
                                            ...
                                        }

Sometimes, a reference to the element needs to be obtained in the
update (write) stream.  In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock.  One might instead
use atomic_inc() in such cases.

It is not always convenient to deal with "FAIL" in the
search_and_reference() code path.  In such cases, the
atomic_dec_and_test() may be moved from delete() to el_free()
as follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    spin_lock(&list_lock);                  ...
    add_element                             rcu_read_unlock();
    ...                                 }
    spin_unlock(&list_lock);
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        kfree(el);                          remove_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            call_rcu(&el->head, el_free);
                                            ...
                                        }

5.
void el_free(struct rcu_head *rhp)
{
    release_referenced();
}

The key point is that the initial reference added by add() is not removed
until after a grace period has elapsed following removal.  This means that
search_and_reference() cannot find this element, which means that the value
of el->rc cannot increase.  Thus, once it reaches zero, there are no
readers that can or ever will be able to reference the element.  The
element can therefore safely be freed.  This in turn guarantees that if
any reader finds the element, that reader may safely acquire a reference
without checking the value of the reference counter.
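To make this concrete, below is a minimal sketch of the third example
above, filled in with real Linux-kernel list and RCU primitives.  The
element type struct el, its key field, and the el_list/list_lock globals
are illustrative assumptions rather than part of the pseudocode above:

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct el {
    struct list_head list;
    struct rcu_head head;
    atomic_t rc;
    long key;                           /* Illustrative search key. */
};

static LIST_HEAD(el_list);
static DEFINE_SPINLOCK(list_lock);

int add(long key)
{
    struct el *el = kmalloc(sizeof(*el), GFP_KERNEL);

    if (!el)
        return -ENOMEM;
    el->key = key;
    atomic_set(&el->rc, 1);             /* Initial reference. */
    spin_lock(&list_lock);
    list_add_rcu(&el->list, &el_list);
    spin_unlock(&list_lock);
    return 0;
}

/*
 * No "FAIL" path:  the initial reference is not dropped until a grace
 * period after removal, so any element found here has el->rc >= 1 and
 * a plain atomic_inc() is safe.
 */
struct el *search_and_reference(long key)
{
    struct el *el;

    rcu_read_lock();
    list_for_each_entry_rcu(el, &el_list, list) {
        if (el->key == key) {
            atomic_inc(&el->rc);        /* Reference held past unlock. */
            rcu_read_unlock();
            return el;
        }
    }
    rcu_read_unlock();
    return NULL;
}

void release_referenced(struct el *el)
{
    if (atomic_dec_and_test(&el->rc))
        kfree(el);
}

/* Invoked a grace period after list_del_rcu(); drops the initial reference. */
static void el_free(struct rcu_head *rhp)
{
    release_referenced(container_of(rhp, struct el, head));
}

void delete(struct el *el)
{
    spin_lock(&list_lock);
    list_del_rcu(&el->list);
    spin_unlock(&list_lock);
    call_rcu(&el->head, el_free);
}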
In cases where delete() can sleep, synchronize_rcu() can be called from
delete(), so that el_free() can be subsumed into delete() as follows:

4.
delete()
{
    spin_lock(&list_lock);
    ...
    remove_element
    spin_unlock(&list_lock);
    ...
    synchronize_rcu();
    if (atomic_dec_and_test(&el->rc))
        kfree(el);
    ...
}
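For completeness, here is how this sleepable delete() might look when
mapped onto the illustrative struct el from the earlier sketch.  The
trade-off is that synchronize_rcu() blocks the caller for a full grace
period, whereas call_rcu() returns immediately, so this variant buys
simplicity (no rcu_head callback) at the cost of delete() latency:

void delete(struct el *el)
{
    spin_lock(&list_lock);
    list_del_rcu(&el->list);
    spin_unlock(&list_lock);
    synchronize_rcu();                  /* Wait for pre-existing readers. */
    if (atomic_dec_and_test(&el->rc))   /* Drop the initial reference. */
        kfree(el);
}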