|
On 64-bit, local_t is of size long, so we make local64_t an alias.
On 32-bit, we fall back to atomic64_t (an architecture can provide an
optimized 32-bit version).
This new facility is to be used by perf events optimizations.
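A minimal sketch of the shape of this (illustrative accessors only, not
the exact header):
  /* 64-bit: local_t is already a long, so local64_t just wraps it */
  typedef struct { local_t a; } local64_t;
  #define local64_read(l)    local_read(&(l)->a)
  #define local64_add(i, l)  local_add((i), &(l)->a)
  /* 32-bit fallback: wrap atomic64_t instead */
  typedef struct { atomic64_t a; } local64_t;
  #define local64_read(l)    atomic64_read(&(l)->a)
  #define local64_add(i, l)  atomic64_add((i), &(l)->a)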
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-arch@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Currently there are perf_buffer_alloc() + perf_buffer_init() + some
separate bits; fold it all into a single perf_buffer_alloc() and leave
only the attachment to the event separate.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Rename to clarify code.
s/perf_mmap_data/perf_buffer/g and selective s/data/buffer/g
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
On the Netburst PMU we need a second write to a performance counter
due to a CPU erratum.
A simple flag test was chosen instead of alternative instructions
because wrmsrl is already a macro and, with virtualization turned on,
it would need an additional wrapper call, which is more expensive.
NB: we should probably switch to jump labels once that facility
reaches mainline.
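A rough sketch of the flag test when (re)writing the counter (flag and
field names illustrative):
  wrmsrl(hwc->event_base + idx, (u64)(-left) & x86_pmu.cntval_mask);
  /* Netburst erratum: the value only sticks after a second write */
  if (x86_pmu.perfctr_second_write)
          wrmsrl(hwc->event_base + idx, (u64)(-left) & x86_pmu.cntval_mask);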
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20100602212304.GC5264@lenovo>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Clarify some of the transactional group scheduling API details
and change it so that a successful ->commit_txn also closes
the transaction.
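In rough terms the intended calling sequence becomes (a sketch, assuming
->commit_txn() returns 0 on success):
  pmu->start_txn(pmu);
  /* ->enable() the group leader and then every sibling, collecting
     them without touching the hardware yet */
  if (!pmu->commit_txn(pmu))
          return 0;       /* success: a successful commit closes the txn */
  /* on failure, undo the group and call ->cancel_txn() to close it */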
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1274803086.5882.1752.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Add the capability to track data mmap()s. This can be used together
with PERF_SAMPLE_ADDR for data profiling.
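A hedged usage sketch, assuming the new attribute bit is called
mmap_data (next to the existing mmap bit):
  struct perf_event_attr attr = {
          .type        = PERF_TYPE_HARDWARE,
          .config      = PERF_COUNT_HW_CPU_CYCLES,
          .sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR,
          .mmap        = 1,  /* executable mmaps, as before */
          .mmap_data   = 1,  /* also emit PERF_RECORD_MMAP for data mmaps */
  };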
Signed-off-by: Anton Blanchard <anton@samba.org>
[Updated code for stable perf ABI]
Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1274193049-25997-1-git-send-email-ebmunson@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
__DO_TRACE() already calls the callbacks under rcu_read_lock_sched(),
which is sufficient for our needs; avoid doing it again.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Inline perf_swevent_put_recursion_context into perf_tp_event(), this
shrinks the per trace template code footprint and saves a function
call.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Fix 'cpuid10_edx' to comply with the Intel documentation.
According to the Intel Manual, Volume 2A, Table 3-12, the CPUID leaf for
architectural performance monitoring returns, in EDX, two pieces of
information:
1) Number of fixed-function counters (5 bits, not 4)
2) Width of fixed-function counters (8 bits)
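With the fix, the decoded layout looks roughly like this:
  union cpuid10_edx {
          struct {
                  unsigned int num_counters_fixed:5;
                  unsigned int bit_width_fixed:8;
                  unsigned int reserved:19;
          } split;
          unsigned int full;
  };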
Signed-off-by: Livio Soares <livio@eecg.toronto.edu>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/core
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rric/oprofile into perf/urgent
|
|
Frederic reported that frequency driven swevents didn't work properly
and even caused a division-by-zero error.
It turns out there are two bugs: the division-by-zero comes from a
failure to guard against it in perf_calculate_period().
The other was more interesting and turned out to be a wrong comparison
in perf_adjust_period(). The comparison was between an s64 and u64 and
got implicitly converted to an unsigned comparison. The problem is
that period_left is typically < 0, so it ended up being always true.
Cure this by making the local period variables s64.
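The pitfall in isolation (a self-contained illustration, not the kernel
code):
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          int64_t  period_left   = -100;   /* typically negative */
          uint64_t sample_period = 1000;

          /* period_left is implicitly converted to unsigned, so -100
             becomes a huge value and the test is (almost) always true */
          if (period_left > sample_period)
                  printf("mixed s64/u64 comparison: taken\n");

          if (period_left > (int64_t)sample_period)
                  printf("signed comparison: taken\n"); /* not printed */
          return 0;
  }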
Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
So that we don't require that the kernel be configured first, and as we
don't use KERNELRELEASE at all in the -src-pkg targets, we need to add a
new wildcard for targets ending in src-pkg:
On a make mrproper'ed kernel we get this without this patch:
[linux-2.6-tip]$ LANG= make perf-tarbz2-src-pkg
/bin/sh: include/config/kernel.release: No such file or directory
make: *** [include/config/kernel.release] Error 1
[acme@emilia linux-2.6-tip]$
Acked-by: Michal Marek <mmarek@suse.cz>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20100604173552.GA875@ghostprotocols.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Fix potential initial_lfsr buffer overrun.
Writing past the end of the buffer could happen when index == ENTRIES.
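The overrun pattern in isolation (names illustrative):
  unsigned int lfsr[ENTRIES];        /* valid indices: 0 .. ENTRIES-1 */
  lfsr[index] = value;               /* with index == ENTRIES this writes
                                        one element past the end */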
Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
|
In a shared multi-core environment, users want to analyze why their
program was slow. In particular, if the code ran slower only on certain
CPUs due to interference from other programs or kernel threads, the user
should be able to notice that.
Sample usage:
perf record -f -a -- sleep 3
perf report --sort cpu,comm
Workload:
The program runs on 16 CPUs, experiencing interference from an
antagonist on only 4 of them.
Samples: 106218177676 cycles
Overhead CPU Command
........ ... ...............
6.25% 2 program
6.24% 6 program
6.24% 11 program
6.24% 5 program
6.24% 9 program
6.24% 10 program
6.23% 15 program
6.23% 7 program
6.23% 3 program
6.23% 14 program
6.22% 1 program
6.20% 13 program
3.17% 12 program
3.15% 8 program
3.14% 0 program
3.13% 4 program
3.11% 4 antagonist
3.11% 0 antagonist
3.10% 8 antagonist
3.07% 12 antagonist
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20100505181612.GA5091@sharma-home.net>
Signed-off-by: Arun Sharma <aruns@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Simplifying the tools that were using both in sequence and allowing
upcoming simplifications, such as Arun's patch to sort by cpus.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Perf report demangles symbols, but perf annotate does not.
The former uses internal demangling via libbfd or libiberty. The latter
executes objdump, which by default does not demangle symbols.
This patch adds the -C option to the objdump cmdline to enable symbol
demangling.
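In practice this means the objdump command line perf annotate builds
goes from something like:
  objdump --start-address=0x<start> --stop-address=0x<end> -dS /path/to/dso
to:
  objdump --start-address=0x<start> --stop-address=0x<end> -dS -C /path/to/dso
(addresses elided; the exact invocation may differ slightly).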
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4c07b323.2126e30a.6245.0e1e@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
This patch adds the ability to specify an alternate directory to store the
buildid cache (buildids, copy of binaries). By default, it is hardcoded to
$HOME/.debug. This directory contains immutable data. The layout of the
directory is such that no conflicts in filenames are possible. A modification
in a file yields a different buildid and thus a different location in the
subdir hierarchy.
You may want to put the buildid cache elsewhere because of disk space
limitation or simply to share the cache between users. It is also useful for
remote collection vs. local analysis of profiles.
This patch adds a new config option to the perfconfig file. Under the tag
'buildid', there is a dir option. For instance, if you have:
$ cat /etc/perfconfig
[buildid]
dir = /var/cache/perf-buildid
All buildids and binaries will be saved in the specified directory. The
perf record, buildid-list, buildid-cache, report, annotate, and archive
commands will use it to pull information out.
The option can be set in the system-wide perfconfig file or in the
$HOME/.perfconfig file.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4c055fb7.df0ce30a.5f0d.ffffae52@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Useful when people want to try some version of the perf tools and don't
want to download the kernel tarball.
Here is a session using this new target:
[root@emilia linux-2.6-tip]# make help | grep -i perf
perf-tar-src-pkg - Build perf-2.6.35-rc1.tar source tarball
perf-targz-src-pkg - Build perf-2.6.35-rc1.tar.gz source tarball
perf-tarbz2-src-pkg - Build perf-2.6.35-rc1.tar.bz2 source tarball
[root@emilia linux-2.6-tip]# make perf-tarbz2-src-pkg
TAR
[root@emilia linux-2.6-tip]# ls -la perf-2.6.35-rc1.tar.bz2
-rw-r--r-- 1 root root 295731 May 31 11:18 perf-2.6.35-rc1.tar.bz2
[root@emilia linux-2.6-tip]# tar xf perf-2.6.35-rc1.tar.bz2
[root@emilia linux-2.6-tip]# cd perf-2.6.35-rc1
[root@emilia perf-2.6.35-rc1]# ls
arch HEAD include lib tools
[root@emilia perf-2.6.35-rc1]# cd tools/perf
[root@emilia perf]# make -j9 2>&1 | tail
CC arch/x86/util/dwarf-regs.o
CC util/probe-finder.o
CC util/newt.o
CC util/scripting-engines/trace-event-perl.o
CC scripts/perl/Perf-Trace-Util/Context.o
CC perf.o
CC builtin-help.o
AR libperf.a
LINK perf
rm .perf.dev.null
[root@emilia perf]# ./perf record -a sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.262 MB perf.data (~11457 samples) ]
[root@emilia perf]# ./perf report | head -12
# Events: 6K cycles
#
# Overhead Command Shared Object Symbol
# ........ ............... .................. ......
#
4.73% perf [kernel.kallsyms] [k] format_decode
4.49% perf libc-2.12.so [.] _IO_file_underflow_internal
4.38% init [kernel.kallsyms] [k] mwait_idle
3.29% perf [kernel.kallsyms] [k] vsnprintf
2.38% init [kernel.kallsyms] [k] sched_clock_local
2.35% init [kernel.kallsyms] [k] apic_timer_interrupt
1.86% sirq-timer/5 [kernel.kallsyms] [k] find_busiest_group
[root@emilia perf]#
Acked-by: Michal Marek <mmarek@suse.cz>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20100528185357.GA28009@ghostprotocols.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
This patch adds a -C option to stat, record, top to designate a list of CPUs to
monitor. CPUs can be specified as a comma-separated list or as ranges,
with no spaces allowed.
Examples:
$ perf record -a -C0-1,4-7 sleep 1
$ perf top -C0-4
$ perf stat -a -C1,2,3,4 sleep 1
With perf record in per-thread mode with inherit mode on, samples are collected
only when the thread runs on the designated CPUs.
The -C option does not turn on system-wide mode automatically.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bff9496.d345d80a.41fe.7b00@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
It is useful to know which CPU a sample was captured on.
The information is captured with perf record -R but it was
not printed out by perf report -D. This patch adds that.
When -R is not used, cpu is set to -1 to indicate that
the CPU is unknown (it is not captured).
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bff964c.e88cd80a.3106.7d31@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/urgent
|
|
We need to set the long name to the name specified via, for instance,
'perf annotate --vmlinux /path/to/vmlinux'; otherwise it will remain as
'[kernel.kallsyms]' and that will make annotate fail when passing this
as the vmlinux name in the call to objdump.
The way this is set up grew unwieldy; dso__load_vmlinux is the
function that should allocate space for the long name, with callers not
assuming that filenames have been allocated somehow by then (strdup,
dso__build_id_filename, etc).
For now this is the minimal patch; a proper fix will be made for .36.
Reported-by: Stephane Eranian <eranian@google.com>
Tested-by: Stephane Eranian <eranian@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20100604003900.GD10469@ghostprotocols.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The ftrace_preempt_disable/enable functions were added to address a
recursive race caused by the function tracer. The function tracer
traces all functions, which makes it easily susceptible to recursion.
One area was preempt_enable(): this would call the scheduler, and the
scheduler would call the function tracer and loop.
(Or so it was thought.)
The ftrace_preempt_disable/enable was made to protect against recursion
inside the scheduler by storing the NEED_RESCHED flag. If it was
set before the ftrace_preempt_disable() it would not call schedule
on ftrace_preempt_enable(), thinking that if it was set before then
it would have already scheduled unless it was already in the scheduler.
This worked fine except in the case of SMP, where another task would set
the NEED_RESCHED flag for a task on another CPU, and then kick off an
IPI to trigger it. This could cause the NEED_RESCHED to be saved at
ftrace_preempt_disable() but the IPI to arrive in the preempt-disabled
section. The ftrace_preempt_enable() would not call the scheduler
because the flag was already set before entering the section.
This bug would cause a missed preemption check and hurt latencies.
Investigating further, I found that the recursion caused by the function
tracer was not due to schedule(), but due to preempt_schedule(). Now
that preempt_schedule is completely annotated with notrace, the recursion
is no longer an issue.
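For reference, the helpers being removed looked roughly like this:
  static inline int ftrace_preempt_disable(void)
  {
          int resched = need_resched();   /* remember NEED_RESCHED */

          preempt_disable_notrace();
          return resched;
  }

  static inline void ftrace_preempt_enable(int resched)
  {
          if (resched)
                  preempt_enable_no_resched_notrace();  /* skip schedule() */
          else
                  preempt_enable_notrace();
  }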
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The function tracer code uses ftrace_preempt_disable() to disable
preemption instead of normal preempt_disable(). But there's a slight
race condition that may cause it to lose a preemption check.
This was made to keep the function tracer from recursing on itself
by disabling preemption then having the enable call the function tracer
again, causing infinite recursion.
The bug was assumed to happen if the call was just in schedule, but
this is incorrect. The bug is caused by preempt_schedule() which
is called by preempt_enable(). The calling of preempt_enable() when
NEED_RESCHED was set would call preempt_schedule() which would call
the function tracer again.
Making preempt_schedule() and add_preempt_count() notrace
prevents the infinite recursion. This is because
add_preempt_count() stops the preempt_enable() in the
function tracer from calling preempt_schedule() again.
The sub_preempt_count() is also made notrace just to keep it
symmetric.
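The change itself is just annotation; roughly:
  asmlinkage void __sched notrace preempt_schedule(void);
  void notrace add_preempt_count(int val);
  void notrace sub_preempt_count(int val);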
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Frederic reported that because swevents handling doesn't disable IRQs
anymore, we can get a recursion of perf_adjust_period(), once from
overflow handling and once from the tick.
If both call ->disable, we get a double hlist_del_rcu() and trigger
a LIST_POISON2 dereference.
Since we don't actually need to stop/start a swevent to re-program
the hardware (there is no hardware to program), simply nop out these
callbacks for the swevent pmu.
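A minimal sketch of the idea (illustrative; not the full callback table):
  /* software events have no hardware to (re)program, so the
     stop/start style callbacks can simply do nothing */
  static void perf_swevent_void(struct perf_event *event)
  {
  }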
Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1275557609.27810.35218.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/urgent
|
|
When we use plain 'perf buildid-list' we use only what is in the buildid
table in the perf.data header. And those have absolute pathnames because
at 'perf record' time we used __perf_session__process_events and that
doesn't set up the path shortening code in map__new() that happens if
symbol_conf.full_paths is false, the default.
On the other hand, when we use 'perf buildid-list --with-hits' we
process all the events using perf_session__process_events, adding
entries to the global DSO list _after_ removing the current directory
from the DSO name, for presentation purposes.
Because of that we end up having two entries in the DSO list when
recording events for binaries using relative pathnames.
Fix it minimally by setting symbol_conf.full_paths to true when marking
the DSOs with hits in 'perf buildid-list --with-hits', as used by 'perf
archive'.
The right fix, longer term, is to shorten the path only at presentation
time; that will be done for 2.6.36.
Reported-by: Stephane Eranian <eranian@google.com>
Tested-by: Stephane Eranian <eranian@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20100601183837.GC4093@ghostprotocols.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The trace_unhandled() callback does not allow access to event fields;
this patch resolves the problem.
It can also be used as a more Pythonic and flexible way for script
writers to demux event types.
This will, for example, greatly simplify pytimechart event demux.
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <1275340329-2397-1-git-send-email-tardyp@gmail.com>
Signed-off-by: Pierre Tardy <tardyp@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
hist_entry__annotate() runs objdump with the -S option, so the output may
contain lines of any format. If a line starts with a colon, strtoull()
returns 0 and the calculated offset will be negative. This causes perf
annotate to segfault.
Make sure that strtoull() has parsed at least one digit.
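The check in isolation looks roughly like this (variable names
illustrative):
  char *endptr;
  u64 offset = strtoull(line, &endptr, 16);

  /* if endptr == line, no digits were parsed: the line is not an
     objdump address line (e.g. a -S source line starting with ':')
     and must be skipped rather than treated as offset 0 */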
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Konstantin Stepanyuk <konstantin.stepanyuk@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
When forking the child to be traced, we should check the correct
return value from fork() and not a local variable which is otherwise
unused.
Signed-off-by: Borislav Petkov <bp@alien8.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100531211818.GA30175@liondog.tnic>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
|
|
event__process_task() doesn't propagate the comm copy on clone,
but only on process fork. So we lose all the tid:comm resolution
for tasks that aren't a main process thread.
Propagate the per thread granularity to event__process_task() for
pid resolution.
This fixes various unresolved pids in perf sched, especially when
we trace multithread processes. The problem is quickly reproducible
with the messaging benchmark using the multithread mode "-t" :
perf sched record perf bench sched messaging -t
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
|
|
perf sched uses event__process_comm(), which means it can resolve
comms from:
- tasks that have exec'ed (kernel comm events)
- tasks that were running when perf record started the actual
recording (synthesized comm events)
But perf sched can't resolve the pids of tasks that were created
after the recording started.
To solve this, we need to inherit the comms on fork events using
event__process_task().
This fixes various unresolved pids in perf sched, easily visible
with:
perf sched record perf bench sched messaging
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
|
|
When we synthesize the existing running tasks through procfs,
we walk through every thread of a process, queuing one comm
event per tid.
But then on report time, event__process_comm() only creates and
sets the comm on a per process granularity. This is the right
thing for comm events that came from the kernel, as they are
only created on exec. Sub-threads then inherit their comm
from fork events. But that doesn't work with our synthesized
comm events taken from procfs information, as the per thread
granularity is done on comm events directly there.
Hence we need event__process_comm() to work with the tid rather
than the pid. It won't change anything for comm events coming
from the kernel, but this will fix the synthesized ones.
Before:
$ ./perf report -D | grep COMM | grep firefox
0x2c7b8 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c7d0 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c7e8 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c800 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c818 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c830 [0x18]: PERF_RECORD_COMM: firefox:5297
After:
$ ./perf report -D | grep COMM | grep firefox
0x2c7b8 [0x18]: PERF_RECORD_COMM: firefox:5297
0x2c7d0 [0x18]: PERF_RECORD_COMM: firefox:5299
0x2c7e8 [0x18]: PERF_RECORD_COMM: firefox:5300
0x2c800 [0x18]: PERF_RECORD_COMM: firefox:5308
0x2c818 [0x18]: PERF_RECORD_COMM: firefox:5309
0x2c830 [0x18]: PERF_RECORD_COMM: firefox:5312
This fixes various unresolved pids in perf sched.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
|
|
Fix blktrace.c kernel-doc warnings:
Warning(kernel/trace/blktrace.c:858): No description found for parameter 'ignore'
Warning(kernel/trace/blktrace.c:890): No description found for parameter 'ignore'
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20100529114507.c466fc1e.randy.dunlap@oracle.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
If a sample size crosses to the next page boundary, the copy
will be made in more than one step. However we forget to advance
the source offset for the next copy, leading to unexpected double
copies that completely mess up the traces.
This fixes various kinds of bad traces that have irrelevant
data inside, as an example:
geany-4979 [001] 5758.077775: sched_switch: prev_comm=! prev_pid=121
prev_prio=0 prev_state=S|D|Z|X|x ==> next_comm= next_pid=7497072
next_prio=0
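In outline, the copy loop has to advance the source as well as the
destination for each chunk (a sketch, not the verbatim code):
  do {
          unsigned long size = min(handle->size, len);

          memcpy(handle->addr, buf, size);

          len          -= size;
          handle->addr += size;
          buf          += size;   /* the source advance that was missing */
          handle->size -= size;
          /* when handle->size hits 0, move on to the next page */
  } while (len);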
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274988898-5639-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
The transactional API patch between the generic and model-specific
code introduced several important bugs with event scheduling, at
least on X86. If you had pinned events, e.g., watchdog, and were
over-committing the PMU, you would get bogus counts. The bug was
showing up on Intel CPUs because events would move around more
often than on AMD. But the problem also existed on AMD, though
it was harder to expose.
The issues were:
- group_sched_in() was missing a cancel_txn() in the error path
- cpuc->n_added was not properly maintained, leading to missing
actions in hw_perf_enable(), i.e., n_running being 0. You cannot
update n_added until you know the transaction has succeeded. In
case of a failed transaction, n_added was not adjusted back.
- in case of failed transactions, event_sched_out() was called
and eventually invoked x86_disable_event() to touch the HW reg.
But with transactions, on X86, event_sched_in() does not touch
HW registers, it simply collects events into a list. Thus, you
could end up calling x86_disable_event() on a counter which
did not correspond to the current event when idx != -1.
The patch modifies the generic and X86 code to avoid all those problems.
First, we keep track of the number of events added last. In case the
transaction fails, we subtract them from n_added. This approach is
necessary (as opposed to delaying updates to n_added) because not all
event updates use the transaction API, e.g., single events.
Second, we encapsulate the event_sched_in() and event_sched_out() in
group_sched_in() inside the transaction. That makes the operations
symmetrical and you can also detect that you are inside a transaction
and skip the HW reg access by checking cpuc->group_flag.
With this patch, you can now overcommit the PMU even with pinned
system-wide events present and still get valid counts.
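Roughly, the generic side then looks like this (a sketch of
group_sched_in(), not the verbatim code):
  pmu->start_txn(pmu);

  if (event_sched_in(group_event, cpuctx, ctx))
          goto cancel;                    /* leader failed */

  list_for_each_entry(event, &group_event->sibling_list, group_entry) {
          if (event_sched_in(event, cpuctx, ctx))
                  goto group_error;
  }

  if (!pmu->commit_txn(pmu))
          return 0;                       /* success: transaction closed */

  group_error:
          /* event_sched_out() the partially scheduled group; inside a
             transaction this must not touch the HW registers */
  cancel:
          pmu->cancel_txn(pmu);           /* the call missing in the error path */
          return -EAGAIN;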
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274796225.5882.1389.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Steve spotted I forgot to do the destroy under event_mutex.
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274451913.1674.1707.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
tracepoint_probe_unregister() does not synchronize against the probe
callbacks, so do that explicitly. This properly serializes the callbacks
and the free of the data used therein.
Also, use this_cpu_ptr() where possible.
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274438476.1674.1702.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Group siblings don't pin each other or the parent, so when we destroy
events we must make sure to clean up all cross referencing pointers.
In particular, for destruction of a group leader we must be able to
find all its siblings and remove their reference to it.
This means that detaching an event from its context must not detach it
from the group, otherwise we can end up failing to clear all pointers.
Solve this by clearly separating the attachment to a context and
attachment to a group, and keep the group composed until we destroy
the events.
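One way to express that separation (flag names illustrative):
  #define PERF_ATTACH_CONTEXT  0x01   /* event is linked into a context */
  #define PERF_ATTACH_GROUP    0x02   /* event sits on a group sibling list */

  /* event->attach_state tracks both independently, so removing an event
     from its context no longer implies removing it from its group */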
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
In order to move toward separate buffer objects, rework the whole
perf_mmap_data construct to be a more self-sufficient entity, one
with its own lifetime rules.
This greatly sanitizes the whole output redirection code, which
was riddled with bugs and races.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
.. and thus endeth the merge window.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6
* 'slub/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
SLUB: Allow full duplication of kmalloc array for 390
slub: move kmem_cache_node into it's own cacheline
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
mutex: Fix optimistic spinning vs. BKL
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
perf tui: Fix last use_browser problem related to .perfconfig
perf symbols: Add the build id cache to the vmlinux path
perf tui: Reset use_browser if stdout is not a tty
ring-buffer: Move zeroing out excess in page to ring buffer code
ring-buffer: Reset "real_end" when page is filled
|
|
This partially reverts commit 4ec37de89d8c758ee8115e0e64b3f994910789ee
("[IA64] Fix build breakage"), since the commit that made it necessary
got reverted earlier (see commit 35926ff5fba8, 'Revert "cpusets:
randomize node rotor used in cpuset_mem_spread_node()"')
Even if we ever re-introduce this, there is no reason to make
__node_random be some architecture-specific function.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
mm: export generic_pipe_buf_*() to modules
fuse: support splice() reading from fuse device
fuse: allow splice to move pages
mm: export remove_from_page_cache() to modules
mm: export lru_cache_add_*() to modules
fuse: support splice() writing to fuse device
fuse: get page reference for readpages
fuse: use get_user_pages_fast()
fuse: remove unneeded variable
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-kconfig
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-kconfig:
kconfig: Hide error output in find command in streamline_config.pl
kconfig: Fix typo in comment in streamline_config.pl
kconfig: Make a variable local in streamline_config.pl
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/sameo/mfd-2.6
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/mfd-2.6: (47 commits)
mfd: Rename twl5031 sih modules
mfd: Storage class for timberdale should be before const qualifier
mfd: Remove unneeded and dangerous clearing of clientdata
mfd: New AB8500 driver
gpio: Fix inverted rdc321x gpio data out registers
mfd: Change rdc321x resources flags to IORESOURCE_IO
mfd: Move pcf50633 irq related functions to its own file.
mfd: Use threaded irq for pcf50633
mfd: pcf50633-adc: Fix potential race in pcf50633_adc_sync_read
mfd: Fix pcf50633 bitfield logic in interrupt handler
gpio: rdc321x needs to select MFD_CORE
mfd: Use menuconfig for quicker config editing
ARM: AB3550 board configuration and irq for U300
mfd: AB3550 core driver
mfd: AB3100 register access change to abx500 API
mfd: Renamed ab3100.h to abx500.h
gpio: Add TC35892 GPIO driver
mfd: Add Toshiba's TC35892 MFD core
mfd: Delay to mask tsc irq in max8925
mfd: Remove incorrect wm8350 kfree
...
|