
                Semantics and Behavior of Atomic and
                         Bitmask Operations

                          David S. Miller

        This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

        The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

        typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t. If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate. Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

        #define ATOMIC_INIT(i)          { (i) }
        #define atomic_set(v, i)        ((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic
operations are guaranteed to correctly reflect the initialized value if
the initializer is used before runtime.  If the initializer is used at
runtime, a proper implicit or explicit read memory barrier is needed
before reading the value with atomic_read from another thread.

The second interface can be used at runtime, as in:

        struct foo { atomic_t counter; };
        ...

        struct foo *k;

        k = kmalloc(sizeof(*k), GFP_KERNEL);
        if (!k)
                return -ENOMEM;
        atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations
by all threads are guaranteed to correctly reflect either the value that
has been set with this operation or set with another operation.  A proper
implicit or explicit memory barrier is needed before the value set with
the operation is guaranteed to be readable with atomic_read from another
thread.

Next, we have:

        #define atomic_read(v)  ((v)->counter)

which simply reads the counter value currently visible to the calling
thread.  The read is atomic in that the return value is guaranteed to be
one of the values initialized or modified with the interface operations,
provided that a proper implicit or explicit memory barrier is used after
possible runtime initialization by any other thread and that the value is
modified only with the interface operations.  atomic_read does not
guarantee that the runtime initialization by any other thread is visible
yet, so the user of the interface must take care of that with a proper
implicit or explicit memory barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or
inline assembly to guarantee some degree of immediacy for atomic_read()
and atomic_set().  This is not uniformly guaranteed, and may change in
the future, so all users of atomic_t should treat atomic_read() and
atomic_set() as simple C statements that may be reordered or optimized
away entirely by the compiler or processor, and explicitly invoke the
appropriate compiler and/or memory barrier for each use case.  Failure
to do so will result in code that may suddenly break when used with
different architectures or compiler optimizations, or even changes in
unrelated code which changes how the compiler optimizes the section
accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example, consider the following code:

        while (a > 0)
                do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights to transform this
into the following:

        tmp = a;
        if (tmp > 0)
                for (;;)
                        do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

        while (ACCESS_ONCE(a) > 0)
                do_something();

Alternatively, you could place a barrier() call in the loop, as shown
below.
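
A minimal sketch of that alternative: barrier() tells the compiler that
memory may have changed, so "a" must be re-loaded on every iteration:

        while (a > 0) {
                do_something();
                barrier();      /* forces a fresh load of "a" */
        }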

For another example, consider the following code:

        tmp_a = a;
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

        tmp_a = a;
        do_something_with(tmp_a);
        tmp_a = a;
        do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

        tmp_a = ACCESS_ONCE(a);
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

        if (a)
                b = 9;
        else
                b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

        b = 42;
        if (a)
                b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

        if (a)
                ACCESS_ONCE(b) = 9;
        else
                ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move on to the atomic operation interfaces typically implemented
with the help of assembly code.

        void atomic_add(int i, atomic_t *v);
        void atomic_sub(int i, atomic_t *v);
        void atomic_inc(atomic_t *v);
        void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.
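
For example, a simple event counter needs only these SMP-safe,
unordered updates (a hedged sketch; nr_events and batch are
hypothetical names, not from this document):

        static atomic_t nr_events = ATOMIC_INIT(0);

        atomic_inc(&nr_events);         /* record one event */
        atomic_add(batch, &nr_events);  /* record a batch of events */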

Next, we have:

        int atomic_inc_return(atomic_t *v);
        int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.
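
Because each caller sees a distinct return value, these routines are a
natural fit for handing out unique IDs (a hedged sketch; alloc_id and
next_id are hypothetical names):

        static atomic_t next_id = ATOMIC_INIT(0);

        int alloc_id(void)
        {
                /* Each concurrent caller gets a different value. */
                return atomic_inc_return(&next_id);
        }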

Let's move on:

        int atomic_add_return(int i, atomic_t *v);
        int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

        int atomic_inc_and_test(atomic_t *v);
        int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation,
as above.
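
atomic_dec_and_test() is the workhorse of reference counting: exactly
one thread observes the transition to zero, and only that thread may
free the object.  A hedged sketch, mirroring the obj example later in
this document:

        void obj_put(struct obj *obj)
        {
                /* Last reference gone: nobody else can see obj. */
                if (atomic_dec_and_test(&obj->refcnt))
                        kfree(obj);
        }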

        int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

        int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

        int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v,
setting the given new value.  It returns the old value that the atomic
variable v had just before the operation.
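
Because the old value is returned atomically, atomic_xchg() can drain a
counter in a single step (a hedged sketch; work_pending and process()
are hypothetical):

        static atomic_t work_pending = ATOMIC_INIT(0);
        int pending;

        pending = atomic_xchg(&work_pending, 0); /* grab and reset */
        if (pending)
                process(pending);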

        int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values. Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.
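
atomic_cmpxchg() is typically wrapped in a read-modify-retry loop when
no ready-made routine performs the desired update.  A hedged sketch of
a saturating increment (the function name is hypothetical):

        int atomic_inc_saturating(atomic_t *v)
        {
                int old, new, ret;

                for (;;) {
                        old = atomic_read(v);
                        if (old == INT_MAX)
                                return old;     /* already saturated */
                        new = old + 1;
                        ret = atomic_cmpxchg(v, old, new);
                        if (ret == old)
                                return new;     /* our update won */
                        /* *v changed under us; retry from the top. */
                }
        }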

Finally:

        int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero.  If v is equal to u, then it returns zero.  This is
done as an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).

atomic_inc_not_zero() is equivalent to atomic_add_unless(v, 1, 0).
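
atomic_inc_not_zero() is the standard way to take a new reference only
if the object is still live, i.e. its count has not already reached
zero (a hedged sketch; obj_get_if_live is a hypothetical name):

        struct obj *obj_get_if_live(struct obj *obj)
        {
                /* Refuse to resurrect an object already being freed. */
                if (!atomic_inc_not_zero(&obj->refcnt))
                        return NULL;
                return obj;
        }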

If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

        void smp_mb__before_atomic_dec(void);
        void smp_mb__after_atomic_dec(void);
        void smp_mb__before_atomic_inc(void);
        void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
        atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj, struct list_head *head)
{
        obj->active = 1;
        list_add(&obj->list, head);
}

static void obj_list_del(struct obj *obj)
{
        list_del(&obj->list);
        obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
        BUG_ON(obj->active);
        kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
        if (!list_empty(head)) {
                struct obj *obj;

                obj = list_entry(head->next, struct obj, list);
                atomic_inc(&obj->refcnt);
                return obj;
        }
        return NULL;
}

void obj_poke(void)
{
        struct obj *obj;

        spin_lock(&global_list_lock);
        obj = obj_list_peek(&global_list);
        spin_unlock(&global_list_lock);

        if (obj) {
                obj->ops->poke(obj);
                if (atomic_dec_and_test(&obj->refcnt))
                        obj_destroy(obj);
        }
}

void obj_timeout(struct obj *obj)
{
        spin_lock(&global_list_lock);
        obj_list_del(obj);
        spin_unlock(&global_list_lock);

        if (atomic_dec_and_test(&obj->refcnt))
                obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the
 generic neighbour discovery code of the networking.  Olaf Kirch
 found a bug with respect to memory barriers in kfree_skb() that
 exposed the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

        cpu 0                           cpu 1
        obj_poke()                      obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                        obj_list_del(obj);
                                        obj->active = 0 ...
                                        ... visibility delayed ...
                                        atomic_dec_and_test()
                                        ... refcnt drops to 1 ...
        atomic_dec_and_test()
        ... refcnt drops to 0 ...
        obj_destroy()
        BUG() triggers since obj->active
        still seen as one
                                        obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to allow usage of only 24 bits
of its atomic_t type.  This was because it used 8 bits as a spinlock
for SMP safety.  Sparc32 lacked a "compare and swap" type instruction.
However, 32-bit Sparc has since been moved over to a "hash table of
spinlocks" scheme that allows the full 32-bit counter to be realized.
Essentially, an array of spinlocks is indexed based upon the address of
the atomic_t being operated on, and that lock protects the atomic
operation.  Parisc uses the same scheme.
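
In outline, such a scheme looks like the following (a hedged sketch
modeled on the sparc32 approach; the hash size, shift, and function
name are illustrative, not the actual port's code):

#define ATOMIC_HASH_SIZE        4
#define ATOMIC_HASH(a)  \
        (&__atomic_hash[(((unsigned long)(a)) >> 8) & (ATOMIC_HASH_SIZE - 1)])

static spinlock_t __atomic_hash[ATOMIC_HASH_SIZE];

int atomic_add_return_hashed(int i, atomic_t *v)
{
        unsigned long flags;
        int ret;

        /* The hashed lock, not a hardware primitive, provides atomicity. */
        spin_lock_irqsave(ATOMIC_HASH(v), flags);
        ret = (v->counter += i);
        spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
        return ret;
}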

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.
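
A bitmask operated on by these routines is therefore normally declared
as an array of unsigned longs, e.g. via the kernel's DECLARE_BITMAP()
helper (the flag count and name here are hypothetical):

        #define OBJ_NR_FLAGS    64

        /* Same as: unsigned long obj_flags[BITS_TO_LONGS(OBJ_NR_FLAGS)]; */
        static DECLARE_BITMAP(obj_flags, OBJ_NR_FLAGS);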

        void set_bit(unsigned long nr, volatile unsigned long *addr);
        void clear_bit(unsigned long nr, volatile unsigned long *addr);
        void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" in the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

        int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32 bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

        obj->dead = 1;
        if (test_and_set_bit(0, &obj->flags))
                /* ... */;
        obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

        int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

This returns a boolean indicating whether bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

        void smp_mb__before_clear_bit(void);
        void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

        /* All memory operations before this call will
         * be globally visible before the clear_bit().
         */
        smp_mb__before_clear_bit();
        clear_bit( ... );

        /* The clear_bit() will be visible before all
         * subsequent memory operations.
         */
        smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

        int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
        void clear_bit_unlock(unsigned long nr, unsigned long *addr);
        void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics.  This can be useful if the lock itself is
protecting the other bits in the word.
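
Used as a bit spinlock, the pair might look like this (a hedged sketch;
the bit number and the word holding it are hypothetical):

        #define MY_LOCK_BIT     0

        if (!test_and_set_bit_lock(MY_LOCK_BIT, &word)) {
                /* Acquire semantics: the accesses below cannot be
                 * reordered before the bit becomes set.
                 */
                /* ... exclusive access to the data the bit guards ... */
                clear_bit_unlock(MY_LOCK_BIT, &word);
        } else {
                /* Bit already set: someone else holds the "lock". */
        }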

Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

        void __set_bit(unsigned long nr, volatile unsigned long *addr);
        void __clear_bit(unsigned long nr, volatile unsigned long *addr);
        void __change_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.
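
For example, when a spinlock already serializes every access to a
bitmap, the cheaper variants are sufficient (a hedged sketch; map_lock
and the bitmap are hypothetical):

        spin_lock(&map_lock);
        __set_bit(nr, bitmap);  /* safe: map_lock excludes all others */
        spin_unlock(&map_lock);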

The routines xchg() and cmpxchg() require exactly the same memory
barriers as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.
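
Together, these two rules make updates performed inside a critical
section safe to publish (a hedged sketch; shared_count is hypothetical):

        spin_lock(&lock);       /* globally visible before the store below */
        shared_count++;         /* globally visible before the release */
        spin_unlock(&lock);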

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero,
atomically acquire the given spinlock and perform the decrement of the
counter to zero.  If it does not drop to zero, do nothing with the
spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

        long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
        long old, new, ret;

        while (1) {
                old = *counter;
                new = old + 1;

                ret = cas(counter, old, new);
                if (ret == old)
                        break;
        }
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
        long old, new, ret;
        int went_to_zero;

        went_to_zero = 0;
        while (1) {
                old = atomic_read(atomic);
                new = old - 1;
                if (new == 0) {
                        went_to_zero = 1;
                        spin_lock(lock);
                }
                ret = cas(atomic, old, new);
                if (ret == old)
                        break;
                if (went_to_zero) {
                        spin_unlock(lock);
                        went_to_zero = 0;
                }
        }

        return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.
