author     Michael S. Tsirkin <mst@redhat.com>    2013-05-26 17:32:13 +0300
committer  Ingo Molnar <mingo@kernel.org>         2013-05-28 09:41:11 +0200
commit     114276ac0a3beb9c391a410349bd770653e185ce (patch)
tree       d5bfaac722054c5a576647edf8642058fae298a7 /include/linux/kernel.h
parent     016be2e55d98aee0b97b94b200d6e0e110c8392a (diff)
mm, sched: Drop voluntary schedule from might_fault()
might_fault() is called from functions like copy_to_user(),
which most callers expect to be very fast - on the order of a
couple of instructions.
So functions like memcpy_toiovec() call them many times in a loop.
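For illustration, here is a rough sketch of such a copy loop, modeled
on the memcpy_toiovec() of that era in net/core/iovec.c (simplified,
not the verbatim kernel source); every copy_to_user() call in the loop
begins with a might_fault() check:

int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len)
{
	while (len > 0) {
		if (iov->iov_len) {
			int copy = min_t(unsigned int, iov->iov_len, len);

			/* copy_to_user() starts with might_fault() */
			if (copy_to_user(iov->iov_base, kdata, copy))
				return -EFAULT;
			kdata += copy;
			len -= copy;
			iov->iov_len -= copy;
			iov->iov_base += copy;
		}
		iov++;
	}
	return 0;
}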
But might_fault() calls might_sleep(), and with CONFIG_PREEMPT_VOLUNTARY
this results in a function call on every invocation.
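For reference, an approximate sketch of the relevant definitions in
that era's include/linux/kernel.h (reconstructed, not quoted verbatim):
the rescheduling half of might_sleep() is might_resched(), which
CONFIG_PREEMPT_VOLUNTARY wires to the out-of-line _cond_resched(), so
every might_fault() turns into a real function call that may reschedule:

#ifdef CONFIG_PREEMPT_VOLUNTARY
# define might_resched()	_cond_resched()	/* out-of-line call, may schedule */
#else
# define might_resched()	do { } while (0)
#endif

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
  void __might_sleep(const char *file, int line, int preempt_offset);
# define might_sleep() \
	do { __might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
#else
# define might_sleep()	do { might_resched(); } while (0)
#endif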
Let's not do this - just call __might_sleep(), which produces a
diagnostic for sleeping within an atomic section, and drop the
might_preempt() part.
Here's a test sending traffic between the VM and the host, with the
host built with CONFIG_PREEMPT_VOLUNTARY:
before:
incoming: 7122.77 Mb/s
outgoing: 8480.37 Mb/s
after:
incoming: 8619.24 Mb/s
outgoing: 9455.42 Mb/s
As a side effect, this fixes an issue pointed out by Ingo:
might_fault() might schedule differently depending on whether
PROVE_LOCKING is set. Now there is no preemption point in either
case, so behaviour is consistent.
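For context, a simplified sketch of how the two might_fault() variants
are selected in include/linux/kernel.h (the CONFIG_PROVE_LOCKING
version is defined out of line in mm/memory.c, where it adds lockdep
annotations); after this series neither variant contains a voluntary
preemption point:

#ifdef CONFIG_PROVE_LOCKING
void might_fault(void);		/* out-of-line version in mm/memory.c */
#else
static inline void might_fault(void)
{
	__might_sleep(__FILE__, __LINE__, 0);	/* diagnostic only, never reschedules */
}
#endif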
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1369577426-26721-10-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/kernel.h')
-rw-r--r--   include/linux/kernel.h   2
1 file changed, 1 insertion, 1 deletion
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index e9ef6d6b51d..24719eaa120 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -198,7 +198,7 @@ void might_fault(void);
 #else
 static inline void might_fault(void)
 {
-	might_sleep();
+	__might_sleep(__FILE__, __LINE__, 0);
 }
 #endif