path: root/mm/page_io.c
author	Andrea Arcangeli <aarcange@redhat.com>	2011-10-31 17:08:26 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-10-31 17:30:48 -0700
commit	7b6efc2bc4f19952b25ebf9b236e5ac43cd386c2 (patch)
tree	bae674bd95329a498a5f2cc5d9c23bf5a4a54305 /mm/page_io.c
parent	ebed48460be5abd86d9a24fa7c66378e58109f30 (diff)
mremap: avoid sending one IPI per page
This replaces ptep_clear_flush() with ptep_get_and_clear() and a single flush_tlb_range() at the end of the loop, to avoid sending one IPI for each page.

The mmu_notifier_invalidate_range_start/end section is enlarged accordingly, but this is not going to fundamentally change things. It was more by accident that the region under mremap was, for the most part, still available to secondary MMUs: the primary MMU was never allowed to reliably access that region for the duration of the mremap (modulo trapping SIGSEGV on the old address range, which sounds impractical and flaky). If users want secondary MMUs not to lose access to a large region under mremap, they should reduce the mremap size accordingly in userland and run multiple calls.

Overall this will run faster, so it is actually going to reduce the time the region is under mremap for the primary MMU, which should provide a net benefit to apps.

For KVM this is a noop because guest physical memory is never mremapped; there is simply no point in moving it while the guest runs. One target of this optimization is JVM GC (so it is unrelated to the mmu notifier logic).

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
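A minimal sketch of the batching pattern the commit message describes, written as a simplified move_ptes()-style loop. This is illustrative only: the function name move_ptes_sketch and its reduced parameter list are invented for the example, and the real mm/mremap.c code additionally handles the new vma, page table locking and the mmu notifier calls.

/*
 * Illustrative sketch (not the actual patched function): move PTEs from
 * old_addr..old_end to new_addr, deferring the TLB flush to the end of
 * the loop instead of flushing (and sending an IPI) per page.
 */
static void move_ptes_sketch(struct vm_area_struct *vma, struct mm_struct *mm,
			     pte_t *old_pte, pte_t *new_pte,
			     unsigned long old_addr, unsigned long old_end,
			     unsigned long new_addr)
{
	unsigned long start = old_addr;

	for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
				   new_pte++, new_addr += PAGE_SIZE) {
		pte_t pte;

		if (pte_none(*old_pte))
			continue;
		/*
		 * Before the change: ptep_clear_flush(vma, old_addr, old_pte)
		 * flushed the TLB for every page, one IPI at a time.
		 * After the change: only clear the PTE here ...
		 */
		pte = ptep_get_and_clear(mm, old_addr, old_pte);
		pte = move_pte(pte, vma->vm_page_prot, old_addr, new_addr);
		set_pte_at(mm, new_addr, new_pte, pte);
	}

	/* ... and flush the whole moved range once, with a single IPI. */
	flush_tlb_range(vma, start, old_end);
}

The trade-off is the one stated above: the range stays invalidated for secondary MMUs for the whole (enlarged) mmu_notifier_invalidate_range_start/end section, in exchange for one range flush instead of one IPI per page.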
Diffstat (limited to 'mm/page_io.c')
0 files changed, 0 insertions, 0 deletions