author     Suresh Siddha <suresh.b.siddha@intel.com>  2012-03-22 17:01:25 -0700
committer  H. Peter Anvin <hpa@zytor.com>             2012-03-22 17:23:48 -0700
commit     a6fca40f1d7f3e232c9de27c1cebbb9f787fbc4f (patch)
tree       53206c42ab7bdc85e026d2023203209ef43e689d /arch/x86/mm/tlb.c
parent     722bc6b16771ed80871e1fd81c86d3627dda2ac8 (diff)
x86, tlb: Switch cr3 in leave_mm() only when needed
Currently leave_mm() unconditionally switches cr3 to swapper_pg_dir.
But there is no need to change cr3 if we have already left that mm.
intel_idle(), for example, calls leave_mm() on every deep C-state entry,
where the CPU flushes the TLB for us. Similarly, flush_tlb_all() also
calls leave_mm() whenever the TLB is in the LAZY state. Both of these
paths are improved by this change.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1332460885.16101.147.camel@sbsiddha-desk.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Diffstat (limited to 'arch/x86/mm/tlb.c')
-rw-r--r--  arch/x86/mm/tlb.c  8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d6c0418c3e4..125bcad1b75 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -61,11 +61,13 @@ static DEFINE_PER_CPU_READ_MOSTLY(int, tlb_vector_offset);
  */
 void leave_mm(int cpu)
 {
+	struct mm_struct *active_mm = percpu_read(cpu_tlbstate.active_mm);
 	if (percpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
 		BUG();
-	cpumask_clear_cpu(cpu,
-			  mm_cpumask(percpu_read(cpu_tlbstate.active_mm)));
-	load_cr3(swapper_pg_dir);
+	if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
+		cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
+		load_cr3(swapper_pg_dir);
+	}
 }
 EXPORT_SYMBOL_GPL(leave_mm);
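
For context (not part of this patch), the sketch below shows roughly how the two
callers named in the commit message reach leave_mm(). It is paraphrased from
kernels of that era and the exact code may differ slightly.

/* Sketch of the lazy-TLB callers mentioned in the commit message.
 * Paraphrased from kernels of this vintage; not part of this patch.
 */

/* arch/x86/mm/tlb.c: IPI handler behind flush_tlb_all().  When this CPU is in
 * lazy-TLB mode, it drops the lazy mm entirely instead of flushing it again
 * on the next context switch.
 */
static void do_flush_tlb_all(void *info)
{
	__flush_tlb_all();
	if (percpu_read(cpu_tlbstate.state) == TLBSTATE_LAZY)
		leave_mm(smp_processor_id());
}

/* drivers/idle/intel_idle.c (roughly): deep C-states whose entry flushes the
 * TLB in hardware call leave_mm() beforehand, so no cross-CPU TLB shootdown
 * is needed for this CPU while it sleeps.
 */
	if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
		leave_mm(cpu);

With this patch, repeated leave_mm() calls on an already-left mm (as happens on
every deep C-state entry) skip both the cpumask update and the cr3 reload.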