author     Linus Torvalds <torvalds@linux-foundation.org>    2010-01-12 18:16:42 -0800
committer  H. Peter Anvin <hpa@zytor.com>                    2010-01-13 22:39:50 -0800
commit     bafaecd11df15ad5b1e598adc7736afcd38ee13d
tree       99b676d1ecc202358fe67acd095aa2c1f1ef2b1f
parent     5d0b7235d83eefdafda300656e97d368afcafc9a
x86-64: support native xadd rwsem implementation
This one is much faster than the spinlock-based fallback rwsem code,
with certain artificial benchmarks having shown 300%+ improvement on
threaded page faults etc.
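For reference, the whole fast path is just one locked xadd followed by a
sign test. A minimal userspace sketch of that idea (illustrative only,
not the actual kernel code, and with a made-up read bias):

    /* Sketch of the "lock xadd + sign test" fast path (x86-64, gcc/clang). */
    #include <stdio.h>

    typedef long rwsem_count_t;

    struct rw_semaphore { rwsem_count_t count; };

    /* Atomically add 'delta' to sem->count and return the new value. */
    static inline rwsem_count_t rwsem_xadd(struct rw_semaphore *sem,
                                           rwsem_count_t delta)
    {
        rwsem_count_t old = delta;
        asm volatile("lock; xaddq %0, %1"
                     : "+r" (old), "+m" (sem->count)
                     : : "memory", "cc");
        return old + delta;     /* xadd left the old count in %0 */
    }

    int main(void)
    {
        struct rw_semaphore sem = { 0 };

        /* down_read fast path: a negative result means a writer is
         * active or waiting, and we would drop into the slow path. */
        if (rwsem_xadd(&sem, 1 /* hypothetical READ_BIAS */) < 0)
            puts("slow path: rwsem_down_read_failed()");
        else
            puts("fast path: got the read lock");

        rwsem_xadd(&sem, -1);   /* up_read */
        return 0;
    }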
Again, note the 32767-thread limit here. So this really does need that
whole "make rwsem_count_t be 64-bit and fix the BIAS values to match"
extension on top of it, but that is conceptually a totally independent
issue.
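To make the limit concrete: it falls out of how the 32-bit count packs
its fields. Using the bias constants of the existing rwsem code (shown
here only as a sketch of the layout):

    /* 32-bit count layout:
     *   bits  0..15: active count (readers + writer)
     *   bits 16..31: waiting bias
     */
    #define RWSEM_UNLOCKED_VALUE     0x00000000
    #define RWSEM_ACTIVE_BIAS        0x00000001
    #define RWSEM_ACTIVE_MASK        0x0000ffff
    #define RWSEM_WAITING_BIAS      (-0x00010000)
    #define RWSEM_ACTIVE_READ_BIAS   RWSEM_ACTIVE_BIAS
    #define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

    /* The 32768th active reader's xadd carries into the waiting field,
     * hence the ~32767-thread ceiling; a 64-bit rwsem_count_t with
     * correspondingly scaled BIAS values removes it. */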
NOT TESTED! The original patch that this all was based on was tested by
KAMEZAWA Hiroyuki, but maybe I screwed up something when I created the
cleaned-up series, so caveat emptor..
Also note that it _may_ be a good idea to mark some more registers as
clobbered on x86-64 in the inline asms instead of saving/restoring them.
They are inline functions, but they are only used in places where there
are not a lot of live registers _anyway_, so listing, for example, the
clobbers of %r8-%r11 in the asm wouldn't make the fast-path code any
worse, and would make the slow-path code smaller (see the sketch below).
(Not that the slow path really matters to that degree. Saving a few
unnecessary registers is the _least_ of our problems when we hit the slow
path. The instruction/cycle counting really only matters in the fast
path.)
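A hypothetical userspace sketch of what the clobber list buys (again not
the kernel asm): declaring %r8 clobbered makes the compiler keep live
values out of it across the asm, so the asm body itself never has to
push/pop it:

    #include <stdio.h>

    /* The asm trashes %r8 freely and just says so in the clobber list,
     * instead of saving/restoring it itself. */
    static long xadd_with_scratch(long *count, long delta)
    {
        long old = delta;
        asm volatile("movq %0, %%r8\n\t"           /* scratch work in %r8 */
                     "lock; xaddq %%r8, %1\n\t"
                     "movq %%r8, %0"
                     : "+r" (old), "+m" (*count)
                     :
                     : "memory", "cc", "r8");      /* declared, not saved */
        return old;                                /* the old count */
    }

    int main(void)
    {
        long count = 0;
        long old = xadd_with_scratch(&count, 1);
        printf("old=%ld new=%ld\n", old, count);
        return 0;
    }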
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <alpine.LFD.2.00.1001121810410.17145@localhost.localdomain>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>