linux/include/asm-generic
Peter Zijlstra 54cf809b95 locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()
Similar to commits:

  51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")

qspinlock suffers from the fact that the _Q_LOCKED_VAL store is
unordered inside the ACQUIRE of the lock.

And while this is not a problem for the regular mutually exclusive
critical-section usage of spinlocks, it breaks creative locking like:

	spin_lock(A)			spin_lock(B)
	spin_unlock_wait(B)		if (!spin_is_locked(A))
	do_something()			  do_something()

In that case both CPUs can end up running do_something() at the same
time, because our _Q_LOCKED_VAL store can drop past the
spin_unlock_wait() / spin_is_locked() loads (even on x86!!).

To avoid making the normal case slower, add smp_mb()s to the less used
spin_unlock_wait() / spin_is_locked() side of things to avoid this
problem.
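
For reference, a minimal sketch of what the barrier placement looks
like on the asm-generic qspinlock side (names follow
include/asm-generic/qspinlock.h; the exact hunks in the committed
patch may differ in detail):

	static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
	{
		/*
		 * Order a prior spin_lock()'s _Q_LOCKED_VAL store against
		 * the load below, so that store cannot drop past this read.
		 */
		smp_mb();
		return atomic_read(&lock->val) & _Q_LOCKED_MASK;
	}

	static __always_inline void queued_spin_unlock_wait(struct qspinlock *lock)
	{
		/* Same ordering requirement as queued_spin_is_locked(). */
		smp_mb();
		while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
			cpu_relax();
	}

The full barrier is confined to these two helpers, so the common
lock/unlock fast paths pay no extra cost.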

Reported-and-tested-by: Davidlohr Bueso <dave@stgolabs.net>
Reported-by: Giovanni Gherdovich <ggherdovich@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org   # v4.2 and later
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-20 19:30:32 -07:00