author     Stephane Eranian <eranian@google.com>        2011-10-03 11:38:15 +0200
committer  Arnaldo Carvalho de Melo <acme@redhat.com>   2011-10-07 17:00:31 -0300
commit     e39622ceb169467dbe3d11491745aa1f7f3a92ad (patch)
tree       d542c3548c2537d69f48730a168955149d49dc2f /tools/perf/util
parent     8b1bfdbdb3041c0503c42ef49bab25caabeaa558 (diff)
perf tools: Fix broken number of samples for perf report -n
The perf report -n option was broken because it did not report the
correct number of samples depending on the sorting mode. By default,
samples are sorted by comm,dso,sym, which means that samples for the
same command (binary) get collapsed.

hists__collapse_insert_entry() had a bug whereby it aggregated the
number of events observed (periods) but not the number of samples.
Consequently, the number of samples reported could be below reality.
The percentage remained correct because it is based on the periods.

This patch fixes the problem by also aggregating the number of samples.
Here is an example:

  $ perf report -n --stdio
  12.38%  842  pong  [kernel.kallsyms]  [k] __lock_acquire

Here pong (a ctxsw stress test) is the only program running, and thus
it is the only one responsible for the __lock_acquire samples. If we
change the sorting mode:

  $ perf report -n --stdio --sort=sym
  12.38%  1732  [k] __lock_acquire

the actual number of samples is shown. With the fix:

  $ perf report -n --stdio
  12.38%  1732  pong  [kernel.kallsyms]  [k] __lock_acquire

Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20111003093815.GA6393@quad
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
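For readers unfamiliar with the collapse pass, the following is a minimal,
self-contained sketch of the merge step described above. The struct
sample_entry, collapse_merge() and the numeric values are simplified,
hypothetical stand-ins for perf's hist_entry machinery, not the actual
tools/perf code:

/*
 * Simplified sketch (not perf's real code): entries that compare equal
 * on the sort keys (here: comm and sym) are merged, and BOTH the period
 * and the sample count must be accumulated.
 */
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

struct sample_entry {
	const char *comm;	/* command name                 */
	const char *sym;	/* resolved symbol              */
	uint64_t    period;	/* summed event period          */
	uint32_t    nr_events;	/* number of collected samples  */
};

/* Merge 'he' into 'iter' when the sort keys match. */
static void collapse_merge(struct sample_entry *iter,
			   const struct sample_entry *he)
{
	iter->period    += he->period;
	iter->nr_events += he->nr_events;	/* the accumulation the patch adds */
}

int main(void)
{
	/* Hypothetical samples for the same comm/sym pair. */
	struct sample_entry a = { "pong", "__lock_acquire", 500000, 842 };
	struct sample_entry b = { "pong", "__lock_acquire", 530000, 890 };

	if (!strcmp(a.comm, b.comm) && !strcmp(a.sym, b.sym))
		collapse_merge(&a, &b);

	/* Without the nr_events accumulation, this would still print 842. */
	printf("%s %s: period=%" PRIu64 " samples=%u\n",
	       a.comm, a.sym, a.period, a.nr_events);
	return 0;
}

With the extra accumulation the merged entry reports 1732 samples instead
of 842, which mirrors the effect of the one-line fix in the diff below.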
Diffstat (limited to 'tools/perf/util')
-rw-r--r--  tools/perf/util/hist.c  1
1 file changed, 1 insertion, 0 deletions
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 87ef5c7797d..50c8fece168 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -281,6 +281,7 @@ static bool hists__collapse_insert_entry(struct hists *hists,
 
 		if (!cmp) {
 			iter->period += he->period;
+			iter->nr_events += he->nr_events;
 			if (symbol_conf.use_callchain) {
 				callchain_cursor_reset(&hists->callchain_cursor);
 				callchain_merge(&hists->callchain_cursor, iter->callchain,
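As the commit message notes, the displayed percentage was never wrong
because it is derived from the aggregated period, not from nr_events.
A rough illustration with hypothetical numbers and simplified names
(not perf's output code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical totals: the percentage depends only on periods. */
	uint64_t entry_period = 1238000;	/* period collapsed into one entry */
	uint64_t total_period = 10000000;	/* sum of periods over all entries */
	unsigned int nr_events = 842;		/* under-counted samples (pre-fix) */

	double percent = 100.0 * (double)entry_period / (double)total_period;

	/* Prints 12.38% whether nr_events reads 842 or 1732. */
	printf("%6.2f%%  %u  __lock_acquire\n", percent, nr_events);
	return 0;
}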