author      Oleg Nesterov <oleg@redhat.com>    2012-08-08 17:11:42 +0200
committer   Oleg Nesterov <oleg@redhat.com>    2012-08-28 18:21:18 +0200
commit      f8ac4ec9c064b330dcc49e03c450fe74298c4622
tree        62d53120d0289719438d27de5f868c36e136ed28   /kernel/events
parent      78f7411668aa0b2006d331f6a288416dd91b8e5d
uprobes: Introduce MMF_HAS_UPROBES
Add the new MMF_HAS_UPROBES flag. It is set by install_breakpoint()
and copied by dup_mmap(); uprobe_pre_sstep_notifier() checks it to
avoid the slow path if the task was never probed. Perhaps it makes
sense to check it in valid_vma(is_register => false) as well.
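The notifier-side check is deliberately cheap: an unlocked test_bit() on
mm->flags before any locking or rbtree lookup. A minimal sketch of that fast
path follows (mm_has_uprobes() is a hypothetical helper used only for
illustration; the actual hunk, which open-codes test_bit(), is in the diff at
the bottom of this page):

    /* Hypothetical helper, for illustration only; the patch open-codes test_bit(). */
    static inline bool mm_has_uprobes(struct mm_struct *mm)
    {
            return test_bit(MMF_HAS_UPROBES, &mm->flags);
    }

    int uprobe_pre_sstep_notifier(struct pt_regs *regs)
    {
            if (!current->mm || !mm_has_uprobes(current->mm))
                    return 0;       /* never probed: skip the slow path entirely */

            /*
             * ... slow path: find the uprobe for this address and arrange
             * single-stepping; elided here, unchanged by this patch ...
             */
            return 1;
    }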
This needs the new dup_mmap()->uprobe_dup_mmap() hook. We can't use
uprobe_reset_state() or put MMF_HAS_UPROBES into MMF_INIT_MASK; we
need oldmm->mmap_sem to avoid the race with uprobe_register() or
mmap() from another thread.
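The fork-side call site is outside this diffstat (it only covers
kernel/events). Per the description above, the hook has to run while
oldmm->mmap_sem is held for writing, which is what closes the race with
uprobe_register() and mmap(). A simplified sketch of how that call site sits
in kernel/fork.c (not the verbatim hunk):

    static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
    {
            int retval = 0;

            down_write(&oldmm->mmap_sem);

            /*
             * With oldmm->mmap_sem held, uprobe_register() or mmap() in another
             * thread cannot set MMF_HAS_UPROBES in oldmm concurrently, so the
             * child cannot miss the flag.
             */
            uprobe_dup_mmap(oldmm, mm);

            /* ... copy the vmas and page tables into mm (elided) ... */

            up_write(&oldmm->mmap_sem);
            return retval;
    }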
Currently we never clear this bit; it can be a false positive after
uprobe_unregister() or uprobe_munmap(), or if dup_mmap() hits a
probed VM_DONTCOPY vma. But this is fine correctness-wise and has
no effect unless the task hits a non-uprobe breakpoint.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Diffstat (limited to 'kernel/events')
-rw-r--r--  kernel/events/uprobes.c | 22
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 3e2996b809b..33870b17e1d 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -647,6 +647,7 @@ static int
 install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
                        struct vm_area_struct *vma, unsigned long vaddr)
 {
+       bool first_uprobe;
        int ret;
 
        /*
@@ -678,7 +679,17 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
                uprobe->flags |= UPROBE_COPY_INSN;
        }
 
+       /*
+        * set MMF_HAS_UPROBES in advance for uprobe_pre_sstep_notifier(),
+        * the task can hit this breakpoint right after __replace_page().
+        */
+       first_uprobe = !test_bit(MMF_HAS_UPROBES, &mm->flags);
+       if (first_uprobe)
+               set_bit(MMF_HAS_UPROBES, &mm->flags);
+
        ret = set_swbp(&uprobe->arch, mm, vaddr);
+       if (ret && first_uprobe)
+               clear_bit(MMF_HAS_UPROBES, &mm->flags);
 
        return ret;
 }
@@ -1032,6 +1043,9 @@ void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned lon
        if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
                return;
 
+       if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags))
+               return;
+
        /* TODO: unmapping uprobe(s) will need more work */
 }
 
@@ -1142,6 +1156,12 @@ void uprobe_reset_state(struct mm_struct *mm)
        mm->uprobes_state.xol_area = NULL;
 }
 
+void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
+{
+       if (test_bit(MMF_HAS_UPROBES, &oldmm->flags))
+               set_bit(MMF_HAS_UPROBES, &newmm->flags);
+}
+
 /*
  * - search for a free slot.
  */
@@ -1507,7 +1527,7 @@ int uprobe_pre_sstep_notifier(struct pt_regs *regs)
 {
        struct uprobe_task *utask;
 
-       if (!current->mm)
+       if (!current->mm || !test_bit(MMF_HAS_UPROBES, &current->mm->flags))
                return 0;
 
        utask = current->utask;
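For completeness, the new hook also needs a prototype plus a no-op stub for
!CONFIG_UPROBES builds; include/linux/uprobes.h is not part of this diffstat,
so the following is an assumed sketch of that header side, not a hunk taken
from this page:

    /* include/linux/uprobes.h (assumed sketch, not shown in the diff above) */
    #ifdef CONFIG_UPROBES
    extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
    #else /* !CONFIG_UPROBES */
    static inline void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
    {
    }
    #endif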