From: Ravikiran G Thirumalai

Right now we have on mainline (non PREEMPT case):

                        i386                    x86_64
-----------------------------------------------------------------------------
spin_lock_irq           cli when spin           cli when spin
spin_lock_irqsave       spin with intr enabled  spin with intr enabled

The posted patchset changed this to:

                        i386                    x86_64
-----------------------------------------------------------------------------
spin_lock_irq           cli when spin           spin with intr enabled
spin_lock_irqsave       spin with intr enabled  spin with intr enabled

Here goes the i386 part as well for spin_lock_irq.

Signed-off-by: Ravikiran Thirumalai
Cc: Ingo Molnar
Cc: Andi Kleen
Cc: Michael Davidson
Cc: Pravin B. Shelar
Cc: Shai Fultheim
Signed-off-by: Andrew Morton
---

 include/asm-i386/spinlock.h |   17 ++++++++++++++++-
 1 files changed, 16 insertions(+), 1 deletion(-)

diff -puN include/asm-i386/spinlock.h~spin_lock_irq-enable-interrupts-while-spinning-i386-implementation include/asm-i386/spinlock.h
--- a/include/asm-i386/spinlock.h~spin_lock_irq-enable-interrupts-while-spinning-i386-implementation
+++ a/include/asm-i386/spinlock.h
@@ -82,7 +82,22 @@ static inline void __raw_spin_lock_flags
 		CLI_STI_INPUT_ARGS
 		: "memory" CLI_STI_CLOBBERS);
 }
-# define __raw_spin_lock_irq(lock) __raw_spin_lock(lock)
+
+static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
+{
+	asm volatile("\n1:\t"
+		     LOCK_PREFIX " ; decb %0\n\t"
+		     "jns 3f\n"
+		     STI_STRING "\n"
+		     "2:\t"
+		     "rep;nop\n\t"
+		     "cmpb $0,%0\n\t"
+		     "jle 2b\n\t"
+		     CLI_STRING "\n"
+		     "jmp 1b\n"
+		     "3:\n\t"
+		     : "+m" (lock->slock) : : "memory");
+}
 #endif

 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
_
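
For anyone who does not read x86 inline asm fluently, the loop above can be
sketched as a rough user-space analogue in C11 atomics.  Everything below is
hypothetical illustration, not kernel code: demo_spinlock_t is a made-up type,
and the local_irq_enable()/local_irq_disable() stubs merely stand in for the
STI_STRING/CLI_STRING instructions.  The encoding also differs (the real lock
byte is 1 when free and gets decremented; here 0 means free, which is simpler
to express with an exchange).  The shape of the algorithm is the same: try to
take the lock, and on failure re-enable interrupts while spinning read-only,
then disable them again just before retrying.

/*
 * Illustrative user-space analogue of the asm loop above.  Not kernel
 * code: demo_spinlock_t, local_irq_enable() and local_irq_disable()
 * are hypothetical stand-ins for this sketch.
 */
#include <stdatomic.h>

static void local_irq_enable(void)  { }	/* stands in for STI */
static void local_irq_disable(void) { }	/* stands in for CLI */

typedef struct {
	atomic_int slock;	/* 0 = unlocked, 1 = locked in this model */
} demo_spinlock_t;

/* Caller enters with interrupts disabled, as with spin_lock_irq(). */
static void demo_spin_lock_irq(demo_spinlock_t *lock)
{
	/* "decb %0; jns 3f": one atomic attempt to grab the lock. */
	while (atomic_exchange_explicit(&lock->slock, 1,
					memory_order_acquire)) {
		local_irq_enable();		/* STI_STRING */
		/*
		 * "2: rep;nop; cmpb $0,%0; jle 2b": spin read-only,
		 * with interrupts enabled, until the lock looks free.
		 */
		while (atomic_load_explicit(&lock->slock,
					    memory_order_relaxed))
			;			/* rep;nop / cpu_relax() */
		local_irq_disable();		/* CLI_STRING */
		/* "jmp 1b": retry the atomic acquire. */
	}
}

static void demo_spin_unlock(demo_spinlock_t *lock)
{
	atomic_store_explicit(&lock->slock, 0, memory_order_release);
}

The point of the change is visible in the middle of that loop: a contended
spin_lock_irq() now sits in the read-only spin with interrupts enabled rather
than spinning behind cli, which is exactly the behavior the tables above
already describe for x86_64.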