| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2011-11-16 14:38:16 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2011-12-06 08:33:52 +0100 |
| commit | 0f5a2601284237e2ba089389fd75d67f77626cef (patch) | |
| tree | 37eedc660f09a36cfbd6b2a2c28e8cd0d1dbe167 /arch/x86/kernel/cpu/perf_event.c | |
| parent | d6c1c49de577fa292af2449817364b7d89b574d8 (diff) | |
perf: Avoid a useless pmu_disable() in the perf-tick
Gleb writes:
> Currently pmu is disabled and re-enabled on each timer interrupt even
> when no rotation or frequency adjustment is needed. On Intel CPU this
> results in two writes into PERF_GLOBAL_CTRL MSR per tick. On bare metal
> it does not cause significant slowdown, but when running perf in a virtual
> machine it leads to 20% slowdown on my machine.
Cure this by keeping a perf_event_context::nr_freq counter that counts the
number of active events that require frequency adjustments and use this in a
similar fashion to the already existing nr_events != nr_active test in
perf_rotate_context().
By being able to exclude both rotation and frequency adjustments a priori for
the common case, we can avoid the otherwise superfluous PMU disable.
Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-515yhoatehd3gza7we9fapaa@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86/kernel/cpu/perf_event.c')
0 files changed, 0 insertions, 0 deletions