From: Ingo Molnar

Here's a detailed analysis of atomic_dec_return(): there are 8 types of
instruction reordering that can happen at the beginning and at the end
of a critical section.

Firstly, here's the programmed order of instructions:

	pre-read
	pre-write
	[critical-START]
	critical-read
	critical-write
	[critical-END]
	post-read
	post-write

The following reorderings are possible:

 - a pre-read crossing forwards into (and over) the critical section: OK
 - a pre-write crossing forwards into (and over) the critical section: OK
 - a critical-read crossing backwards across critical-START: BAD
 - a critical-write crossing backwards across critical-START: BAD
 - a critical-read crossing forwards across critical-END: BAD
 - a critical-write crossing forwards across critical-END: BAD
 - a post-read crossing backwards into (and over) the critical section: OK
 - a post-write crossing backwards into (and over) the critical section: OK

So critical-START needs to be a read and a write barrier for reads and
writes happening after it, i.e. a memory barrier that only allows
instruction reordering in a forward direction, not in a backwards
direction. critical-END needs to be a read and write barrier that only
lets instructions into the critical section in a backwards direction -
and is a full memory barrier otherwise.

AFAICS we currently don't have such a 'half-conductor / diode' memory
barrier primitive: smp_mb() is a full barrier in both directions. But
let's assume they existed, and let's call them smp_mb_forwards() and
smp_mb_backwards().

Furthermore, the locking and unlocking instructions must not cross into
the critical section, so the lock sequence must be at least:

	lock
	smp_rmb_for_lock_forwards()
	smp_mb_backwards()
	... critical section ...
	smp_mb_forwards()
	smp_wmb_for_lock_backwards()
	unlock

And yet another requirement is that two subsequent critical sections
for the same lock must not reorder the 'lock' and 'unlock' instructions:

	smp_mb_forwards()
	unlock
	... unlocked code ...
	lock
	smp_mb_backwards()

i.e. 'lock' and 'unlock' must not switch places. So the most relaxed
list of requirements would be:

	smp_wmb_for_lock_backwards()
	lock
	smp_wmb_for_lock_forwards()
	smp_mb_backwards()
	... critical section ...
	smp_mb_forwards()
	smp_wmb_for_lock_backwards()
	unlock

I also think this is the 'absolute minimum memory ordering requirement'
for critical sections: relaxing this any further is not possible
without breaking critical sections. I doubt many (in fact, any) CPUs
are capable of expressing it in such a fine-grained way. With our
existing primitives, probably the closest one would be:

	lock
	smp_mb();
	...
	smp_mb();
	unlock

as long as the CPU always executes the lock and unlock stores (which go
to the same address) in program order. (Is there any CPU that doesn't
do that?)

In that sense, both atomic_dec_return() and atomic_inc_return() are
indeed incorrect (for the use of mutexes), e.g. on ppc64. They are both
done via:

	eieio
	... atomic-dec ...
	isync

eieio is stronger than smp_wmb() - it is a barrier for system memory
and IO space memory writes. isync is a read barrier - it throws away
all speculative register contents. So the sequence is roughly
equivalent to:

	smp_wmb();
	... atomic-dec ...
	smp_rmb();

This fulfills the requirement that the lock sequence itself not leak
into the critical section, but (if I got the ppc64 semantics right) it
does not prevent a write within the critical section from crossing
backwards out over the smp_rmb() and getting reordered with the
atomic-dec - violating the critical section rules.
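To make the 'lock; smp_mb(); ...; smp_mb(); unlock' scheme concrete,
here is a minimal sketch of a decrement-based fastpath with the
ordering made explicit - it mirrors what the patch below does to the
generic headers. The __example_* names and the two slowpath helpers are
placeholders for illustration only, not real kernel interfaces:

	#include <asm/atomic.h>		/* atomic_t, atomic_dec_return(), atomic_inc_return() */
	#include <asm/system.h>		/* smp_mb() */
	#include <linux/compiler.h>	/* unlikely() */

	/* hypothetical slowpath helpers, for illustration only: */
	extern void __example_lock_slowpath(atomic_t *count);
	extern void __example_unlock_slowpath(atomic_t *count);

	/*
	 * Lock fastpath: 1 -> 0 means the lock was taken uncontended.
	 * The smp_mb() after the atomic op keeps critical-section reads
	 * and writes from moving backwards across the lock acquisition.
	 */
	static inline void __example_mutex_lock(atomic_t *count)
	{
		if (unlikely(atomic_dec_return(count) < 0))
			__example_lock_slowpath(count);
		else
			smp_mb();
	}

	/*
	 * Unlock fastpath: the smp_mb() before the atomic op keeps
	 * critical-section reads and writes from moving forwards across
	 * the release.
	 */
	static inline void __example_mutex_unlock(atomic_t *count)
	{
		smp_mb();
		if (unlikely(atomic_inc_return(count) <= 0))
			__example_unlock_slowpath(count);
	}

This relies only on the atomic op and the smp_mb() not reordering with
each other, which every architecture guarantees.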
Some other architectures are safe by accident. E.g. Alpha's
atomic_dec_return() does:

	smp_mb();
	... atomic-dec ...
	smp_mb();

which is overkill: a full read and write barrier on both sides.

Sparc64's atomic_dec_return() does yet another thing:

	membar StoreLoad | LoadLoad
	... atomic-load ...
	... atomic-conditional-store ...
	membar StoreLoad | StoreStore

AFAICS this violates the requirements: a load from within the critical
section may go before the atomic-conditional-store, and may thus be
executed before the critical section acquires the lock.

On MIPS, atomic_dec_return() does what is equivalent to:

	... atomic-dec ...
	smp_mb();

which is fine for a lock sequence, but atomic_inc_return() is not
adequate for an unlock sequence:

	... atomic-inc ...
	smp_mb();

because this allows reads and writes within the critical section to
reorder with the atomic-inc instruction.

To sum it up: atomic_dec/inc_return() alone is not enough to implement
critical sections on a number of architectures. atomic_xchg() seems to
have similar problems too.

The patch below adds the smp_mb() barriers to the generic headers,
which should now fulfill all the ordering requirements on every
architecture. It only relies on one property of the atomic primitives:
that they won't get reordered with respect to themselves, so an
atomic_inc_return() and an atomic_dec_return() cannot switch places.

Signed-off-by: Ingo Molnar
Signed-off-by: Andrew Morton
---

 include/asm-generic/mutex-dec.h  |   11 +++++++++--
 include/asm-generic/mutex-xchg.h |    8 +++++++-
 2 files changed, 16 insertions(+), 3 deletions(-)

diff -puN include/asm-generic/mutex-dec.h~mutex-subsystem-memory-ordering-fixes include/asm-generic/mutex-dec.h
--- 25/include/asm-generic/mutex-dec.h~mutex-subsystem-memory-ordering-fixes	Fri Jan  6 17:34:57 2006
+++ 25-akpm/include/asm-generic/mutex-dec.h	Fri Jan  6 17:34:57 2006
@@ -21,6 +21,8 @@
 do {									\
 	if (unlikely(atomic_dec_return(count) < 0))			\
 		fail_fn(count);						\
+	else								\
+		smp_mb();						\
 } while (0)
 
 /**
@@ -38,8 +40,10 @@ __mutex_fastpath_lock_retval(atomic_t *c
 {
 	if (unlikely(atomic_dec_return(count) < 0))
 		return fail_fn(count);
-	else
+	else {
+		smp_mb();
 		return 0;
+	}
 }
 
 /**
@@ -57,6 +61,7 @@ __mutex_fastpath_lock_retval(atomic_t *c
  */
 #define __mutex_fastpath_unlock(count, fail_fn)				\
 do {									\
+	smp_mb();							\
 	if (unlikely(atomic_inc_return(count) <= 0))			\
 		fail_fn(count);						\
 } while (0)
@@ -92,8 +97,10 @@ __mutex_fastpath_trylock(atomic_t *count
 	 * the mutex state would be.
 	 */
 #ifdef __HAVE_ARCH_CMPXCHG
-	if (likely(atomic_cmpxchg(count, 1, 0)) == 1)
+	if (likely(atomic_cmpxchg(count, 1, 0)) == 1) {
+		smp_mb();
 		return 1;
+	}
 	return 0;
 #else
 	return fail_fn(count);
diff -puN include/asm-generic/mutex-xchg.h~mutex-subsystem-memory-ordering-fixes include/asm-generic/mutex-xchg.h
--- 25/include/asm-generic/mutex-xchg.h~mutex-subsystem-memory-ordering-fixes	Fri Jan  6 17:34:57 2006
+++ 25-akpm/include/asm-generic/mutex-xchg.h	Fri Jan  6 17:34:57 2006
@@ -26,6 +26,8 @@
 do {									\
 	if (unlikely(atomic_xchg(count, 0) != 1))			\
 		fail_fn(count);						\
+	else								\
+		smp_mb();						\
 } while (0)
 
@@ -44,8 +46,10 @@ __mutex_fastpath_lock_retval(atomic_t *c
 {
 	if (unlikely(atomic_xchg(count, 0) != 1))
 		return fail_fn(count);
-	else
+	else {
+		smp_mb();
 		return 0;
+	}
 }
 
 /**
@@ -62,6 +66,7 @@ __mutex_fastpath_lock_retval(atomic_t *c
  */
 #define __mutex_fastpath_unlock(count, fail_fn)				\
 do {									\
+	smp_mb();							\
 	if (unlikely(atomic_xchg(count, 1) != 0))			\
 		fail_fn(count);						\
 } while (0)
@@ -104,6 +109,7 @@ __mutex_fastpath_trylock(atomic_t *count
 		if (prev < 0)
 			prev = 0;
 	}
+	smp_mb();
 	return prev;
 }
_