author    | Heiko Carstens <heiko.carstens@de.ibm.com> | 2005-09-03 15:58:05 -0700
committer | Linus Torvalds <torvalds@evo.osdl.org>     | 2005-09-05 00:06:29 -0700
commit    | 9513e5e3f5a6b429da8a9fd4330f71f1e547c8e0
tree      | 7585e2271d2fc3393aa2368cd7dad85d7552cd97
parent    | c563077e526d130b8c9aab4e75116551eb5fdc2d
[PATCH] s390: spinlock corner case
On s390 the lock value used for spinlocks is the lower 32 bits of the PSW address of the lock holder, i.e. the caller's return address. If that address happens to lie on a four gigabyte boundary, its lower 32 bits are all zero, so the stored lock value is 0 and the lock appears to be unlocked. This allows other cpus to grab the same lock and enter a lock-protected code path concurrently. In theory this can happen if the vmalloc area for the code of a module crosses a 4 GB boundary.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
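
To make the corner case described above concrete, here is a minimal user-space sketch. This is illustration only, not kernel code: the 4 GB address, the variable names and the main() wrapper are assumptions chosen for the example. It shows why truncating the return address to the 32-bit lock word can produce 0, and why the patch ORs bit 0 into it:

    #include <inttypes.h>
    #include <stdio.h>

    /* Illustration only, not kernel code: the caller's return address is
     * stored into the 32-bit lock word.  An address on a 4 GB boundary
     * truncates to 0, which is exactly the value that means "unlocked". */
    int main(void)
    {
            unsigned long long pc = 0x100000000ULL;  /* hypothetical return address on a 4 GB boundary */

            uint32_t before = (uint32_t) pc;         /* pre-patch lock value: 0, looks unlocked */
            uint32_t after  = (uint32_t) (1 | pc);   /* patched lock value: bit 0 set, never 0  */

            printf("pre-patch lock value: %#" PRIx32 "\n", before);  /* prints 0   */
            printf("patched lock value:   %#" PRIx32 "\n", after);   /* prints 0x1 */
            return 0;
    }

Setting bit 0 is harmless because s390 instructions are halfword aligned, so the low bit of a genuine return address is always zero anyway.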
-rw-r--r-- | include/asm-s390/spinlock.h | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/asm-s390/spinlock.h b/include/asm-s390/spinlock.h
index 8ff10300f7e..321b23bba1e 100644
--- a/include/asm-s390/spinlock.h
+++ b/include/asm-s390/spinlock.h
@@ -47,7 +47,7 @@ extern int _raw_spin_trylock_retry(spinlock_t *lp, unsigned int pc);
 
 static inline void _raw_spin_lock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (unlikely(_raw_compare_and_swap(&lp->lock, 0, pc) != 0))
 		_raw_spin_lock_wait(lp, pc);
@@ -55,7 +55,7 @@ static inline void _raw_spin_lock(spinlock_t *lp)
 
 static inline int _raw_spin_trylock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (likely(_raw_compare_and_swap(&lp->lock, 0, pc) == 0))
 		return 1;