| author | Jiri Olsa <jolsa@redhat.com> | 2011-05-11 13:06:13 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2011-05-11 13:21:23 +0200 |
| commit | 9bbeacf52f66d165739a4bbe9c018d17493a74b5 (patch) | |
| tree | d47a268e184907bc1c962e91f361907f29b4093b /arch/x86/kernel/kprobes.c | |
| parent | 693d92a1bbc9e42681c42ed190bd42b636ca876f (diff) | |
kprobes, x86: Disable irqs during optimized callback
Disable irqs during the optimized callback, so we don't miss any in-irq kprobes.
The following commands:
# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable
cause the optimized kprobes to be missed. None are missed
with the fix applied.
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20110511110613.GB2390@jolsa.brq.redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86/kernel/kprobes.c')
-rw-r--r-- | arch/x86/kernel/kprobes.c | 5 |
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9d156..f1a6244d7d9 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
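The pattern the patch applies (save the irq flags, run the callback body, restore the flags) can be modeled outside the kernel. The sketch below is a userspace analogue, not kernel code; every name in it (timer_probe, optimized_callback_model, current_probe, missed) is invented for illustration. SIGALRM stands in for the timer interrupt, sigprocmask() for local_irq_save()/local_irq_restore(), and a volatile flag for the per-CPU current_kprobe slot.

    /*
     * Userspace sketch of the irq-save pattern used by this fix (illustration
     * only, not kernel code).  Blocking SIGALRM around the callback body plays
     * the role of local_irq_save()/local_irq_restore(): a hit that arrives
     * while current_probe is set would be counted as missed, so the window is
     * closed by deferring delivery until the flag has been cleared again.
     */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <time.h>

    static volatile sig_atomic_t current_probe;  /* models per-CPU current_kprobe */
    static volatile sig_atomic_t missed;         /* models the nmissed counter */

    static void timer_probe(int sig)
    {
    	(void)sig;
    	if (current_probe)
    		missed++;    /* nested hit while a probe is running is dropped */
    	/* otherwise the hit would be handled normally */
    }

    static void optimized_callback_model(void)
    {
    	sigset_t block, saved;

    	sigemptyset(&block);
    	sigaddset(&block, SIGALRM);
    	sigprocmask(SIG_BLOCK, &block, &saved);    /* ~ local_irq_save(flags) */

    	current_probe = 1;
    	/* probe pre-handler work would run here */
    	current_probe = 0;

    	sigprocmask(SIG_SETMASK, &saved, NULL);    /* ~ local_irq_restore(flags) */
    }

    int main(void)
    {
    	struct sigaction sa;
    	struct itimerval it = {
    		.it_interval = { .tv_sec = 0, .tv_usec = 1000 },
    		.it_value    = { .tv_sec = 0, .tv_usec = 1000 },
    	};
    	time_t start = time(NULL);

    	memset(&sa, 0, sizeof(sa));
    	sa.sa_handler = timer_probe;
    	sigemptyset(&sa.sa_mask);
    	sigaction(SIGALRM, &sa, NULL);      /* SIGALRM stands in for the timer irq */
    	setitimer(ITIMER_REAL, &it, NULL);  /* fire roughly every millisecond */

    	while (time(NULL) - start < 2)      /* hammer the callback for ~2 seconds */
    		optimized_callback_model();

    	printf("hits counted as missed: %ld\n", (long)missed);
    	return 0;
    }

Dropping the two sigprocmask() calls reopens the (here very small) window in which a hit can land while current_probe is set, which is the situation the kernel reproducer above provokes by probing smp_apic_timer_interrupt.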