|
Provide common sched_clock() infrastructure for platforms to use to
create a 64-bit ns based sched_clock() implementation from a counter
running at a non-variable clock rate.
This implementation is based upon maintaining an epoch for the counter
and an epoch for the nanosecond time. When we desire a sched_clock()
time, we calculate the number of counter ticks since the last epoch
update, convert this to nanoseconds and add it to the epoch nanoseconds.
We regularly refresh these epochs within the counter wrap interval.
We perform a calculation similar to the above, and store the new epochs.
We read and write the epochs in such a way that sched_clock() can easily
(and locklessly) detect when an update is in progress, and repeat the
loading of these constants when they're known not to be stable. The
one caveat is that sched_clock() is not called in the middle of an
update. We achieve that by disabling IRQs.
Finally, if the clock rate is known at compile time, the counter to ns
conversion factors can be specified, allowing sched_clock() to be tightly
optimized. We ensure that these factors are correct by providing an
initialization function which performs a run-time check.
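A minimal sketch of the scheme described above; every name here (epoch_cyc,
epoch_ns, read_counter, mult/shift) is illustrative rather than the kernel's
actual symbols, and the barrier macros are the usual kernel ones.

#include <linux/types.h>

static u64 epoch_ns;           /* ns value captured at the last update       */
static u32 epoch_cyc;          /* counter value captured at the same instant */
static u32 epoch_cyc_copy;     /* lets readers detect an in-progress update  */
static u32 mult, shift;        /* counter -> ns conversion factors           */

extern u32 read_counter(void); /* platform-supplied counter read             */

static inline u64 cyc_to_ns(u64 cyc)
{
        return (cyc * mult) >> shift;
}

unsigned long long sched_clock_sketch(void)
{
        u64 ns;
        u32 cyc;

        /* Retry the loads if an epoch update was caught in progress. */
        do {
                cyc = epoch_cyc;
                smp_rmb();
                ns = epoch_ns;
                smp_rmb();
        } while (cyc != epoch_cyc_copy);

        return ns + cyc_to_ns((read_counter() - cyc) & 0xffffffff);
}

/* Refresh the epochs within the counter wrap interval; per the text above,
 * this runs with IRQs disabled so sched_clock() never sees a half-done
 * update on the local CPU. */
static void update_sched_clock_sketch(void)
{
        u32 cyc = read_counter();
        u64 ns = epoch_ns + cyc_to_ns((cyc - epoch_cyc) & 0xffffffff);

        epoch_cyc = cyc;
        smp_wmb();
        epoch_ns = ns;
        smp_wmb();
        epoch_cyc_copy = cyc;
}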
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Olof Johansson <olof@lixom.net>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We have two places where we create identity mappings - one when we bring
secondary CPUs online, and one where we set up some mappings for soft-
reboot. Combine these two into a single implementation. Also collect
the identity mapping deletion function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The MMU is always configured to read page tables from the L2 cache
so there's little point flushing them out of the L2 cache back to
RAM. Remove these flushes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When we soft-CPU hotplug a CPU, we reset the stack pointer and
jump back to start_secondary(). This allows us to restart as if
the CPU was actually reset.
However, we weren't resetting the frame pointer, which could cause
problems with backtracing. Reset the frame pointer to zero (which
means no parent frame) just like the early assembly code also does.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
smp.c is becoming too large, so split out the TLB maintenance
broadcasting into a separate smp_tlb.c file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When a CPU is hot unplugged, the generic tick code cleans up the
clock event device, but fails to call down to the device's set_mode
function to actually shut the device down.
To work around this, we've historically had a local_timer_stop()
callback out of the hotplug code. However, this adds needless
complexity when we have the clock event device itself available.
Explicitly call the clock event device's set_mode function with
CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shutdown
without any special external callbacks. When/if the generic code
is fixed, percpu_timer_stop() can be killed off.
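For illustration, the explicit shutdown then amounts to calling the device's
set_mode hook directly; percpu_clockevent below is an assumed per-cpu device
name, not necessarily the real one.

#include <linux/clockchips.h>
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(struct clock_event_device, percpu_clockevent);

static void percpu_timer_stop_sketch(void)
{
        unsigned int cpu = smp_processor_id();
        struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu);

        evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
}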
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We used to print a bland error message which gave no clue as to the
failure when we failed to bring up a secondary CPU. Resolve this by
separating the two failure cases.
If boot_secondary() fails, we print a message indicating the returned
error code from boot_secondary():
"CPU%u: failed to boot: %d\n", cpu, ret.
However, if boot_secondary() succeeded, but the CPU did not appear to
mark itself online within the timeout, indicate that it failed to come
online:
"CPU%u: failed to come online\n", cpu
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
* __fixup_smp_on_up has been modified with support for the
THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split
into halfwords in case of misalignment, since we can't rely on
unaligned accesses working before turning the MMU on.
No attempt is made to optimise the aligned case, since the
number of fixups is typically small, and it seems best to keep
the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support
CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity-check to ALT_UP() to ensure that
the content really is the right size (4 bytes).
(No check is done for ALT_SMP(). Possibly, this could be fixed
by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
ALT_SMP...SMP_UP_B) into two macros. In the first case,
ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
to macro limitations) has not been modified: the affected
instruction (mov) has no 16-bit encoding, so the correct
instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
smp_dmb @ uses W() to ensure 4-byte instructions for ALT_SMP()
This avoids assembly failures due to use of W() inside smp_dmb,
when assembling pure-ARM code in the vectors page.
There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on
(!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now
supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2
currently assumes little-endian byte order).
Tested using a single generic realview kernel on:
ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
ARM RealView PBX-A9 (SMP)
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Don't call idle_task_exit() with interrupts disabled, and ensure
that we have a memory barrier after interrupts are disabled but
before signalling that this CPU has shut down.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We always need to wait for the dying CPU to reach a safe state before
taking it down, irrespective of the requirements of the platform.
Move the completion code into the ARM SMP hotplug code rather than
having each platform re-implement this.
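A rough sketch of that shared handshake; the cpu_died completion name and the
five second timeout are illustrative assumptions.

#include <linux/completion.h>
#include <linux/jiffies.h>

static DECLARE_COMPLETION(cpu_died);

/* Dying CPU: signal that it has reached a safe state. */
static void cpu_report_death_sketch(void)
{
        complete(&cpu_died);
}

/* Requesting CPU: wait for the victim before the platform powers it off. */
static void cpu_wait_for_death_sketch(unsigned int cpu)
{
        if (!wait_for_completion_timeout(&cpu_died, msecs_to_jiffies(5000)))
                pr_err("CPU%u: cpu didn't die\n", cpu);
}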
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
All platforms call trace_hardirqs_off() in their secondary startup code,
so move this into the core SMP code - it doesn't need to be in the
per-platform code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
There is a certain amount of smp_prepare_cpus() which doesn't belong
in the platform support code - that is, code which is invariant to the
SMP implementation. Move this code into arch/arm/kernel/smp.c, and
add a platform_ prefix to the original function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Wait for CPUs to indicate that they've stopped, after sending the
stop IPI, rather than blindly continuing on and hoping that they've
stopped in time. Print a warning if we fail to stop the other CPUs.
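Roughly the following bounded wait after the stop IPI has gone out; the one
second bound is an assumption for illustration.

#include <linux/cpumask.h>
#include <linux/delay.h>

static void wait_for_secondaries_to_stop_sketch(void)
{
        unsigned long timeout = USEC_PER_SEC;   /* assumed ~1s bound */

        while (num_online_cpus() > 1 && timeout--)
                udelay(1);

        if (num_online_cpus() > 1)
                pr_warn("SMP: failed to stop secondary CPUs\n");
}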
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to
understand which registers can be modified. Also document which
registers hold values which must be preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The IPI and local timer interrupts weren't being properly accounted
for in /proc/stat. Collect them from the irq_stat structure, and
return their sum.
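A hedged sketch of the summation; the ipi_irqs[] and local_timer_irqs members
of irq_stat, and NR_IPI, are assumed names here.

u64 arch_irq_stat_cpu_sketch(unsigned int cpu)
{
        u64 sum = 0;
        int i;

        for (i = 0; i < NR_IPI; i++)
                sum += __get_irq_stat(cpu, ipi_irqs[i]);

        sum += __get_irq_stat(cpu, local_timer_irqs);

        return sum;
}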
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This separates out the individual IPI interrupt counts from the
total IPI count, which allows better visibility of what IPIs are
being used for.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores, but the
instructions for accessing CP0 and CP1 have changed on PJ4. Add the extra
files needed to support iwmmxt on the PJ4 core.
Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
|
|
As per x86, align the initial column according to how many IRQs we
have. Also, provide an English explanation for the 'LOC:' and
'IPI:' lines.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Move the ipi_count into irq_stat, which allows the ipi_data structure
to be entirely removed.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Provide __inc_irq_stat() and __get_irq_stat() to increment and
read the irq stat counters.
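In terms of the kernel's existing __IRQ_STAT() accessor, the helpers amount to
something like the following (a sketch, not necessarily the exact definitions):

#define __inc_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)++
#define __get_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)

/* Typical call site when handling an IPI (field name illustrative): */
/*     __inc_irq_stat(cpu, ipi_irqs[ipinr]);                         */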
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
send_ipi_message() does nothing except call smp_cross_call(). As
this is a static function, nothing external to this file calls it,
so we can easily clean up this now unnecessary indirection.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
devel-stable
|
|
We should not be incrementing mm_users when we startup a secondary
CPU - doing so results in mm_users incrementing by one each time we
hotplug a CPU, which will eventually wrap, and will cause problems.
Other architectures such as x86 do not increment mm_users, but only
mm_count, so we follow that pattern.
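Conceptually, the secondary startup path then pins init_mm through mm_count
alone, along these lines (a sketch of the era's API, not the literal patch):

#include <linux/mm_types.h>
#include <linux/sched.h>

static void secondary_adopt_init_mm_sketch(void)
{
        struct mm_struct *mm = &init_mm;

        atomic_inc(&mm->mm_count);      /* pin the kernel address space */
        current->active_mm = mm;        /* borrowed lazily, never "used" */
        /* deliberately no atomic_inc(&mm->mm_users) */
}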
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Extend the perf_pmu_register() interface to allow for named and
dynamic pmu types.
Because we need to support the existing static types we cannot use
dynamic types for everything, hence provide a type argument.
If we want to enumerate the PMUs they need a name, provide one.
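As I read the change, the registration interface becomes roughly the
following (prototype and usage hedged, not copied from the patch):

int perf_pmu_register(struct pmu *pmu, const char *name, int type);

/* Existing static PMUs keep their fixed type, e.g.:                     */
/*     perf_pmu_register(&perf_swevent, "software", PERF_TYPE_SOFTWARE); */
/* New PMUs can pass -1 to be assigned a dynamic type:                   */
/*     perf_pmu_register(&my_pmu, "my_pmu", -1);                         */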
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222056.259707703@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
The debug registers can only be manipulated from software if monitor
debug mode is enabled. On some cores, this can never be enabled (i.e.
the corresponding bit in the DSCR is RAZ/WI).
This patch ensures we can handle this hardware configuration and fail
gracefully, rather than blow up the kernel during boot.
Reported-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Avoid adding nasty genirq-specific code to local timers to enable PPI
interrupts. Instead, provide a gic function to do this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Merge reason: Pick up the latest -rc.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
sparse doesn't like per-cpu accesses such as:
static DEFINE_PER_CPU(struct perf_event *, foo[MAXLEN]);
struct perf_event **bar = __get_cpu_var(foo);
and shouts quite loudly about it:
| warning: incorrect type in assignment (different modifiers)
| expected struct perf_event **slots
| got struct perf_event *[noderef] *<noident>
This patch adds casts to these sorts of assignments in hw_breakpoint.c
in order to silence the warnings.
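The silencing boils down to an explicit cast at each such assignment, roughly
(names mirror the snippet quoted above; MAXLEN is a placeholder):

#include <linux/percpu.h>
#include <linux/perf_event.h>

#define MAXLEN 16       /* placeholder value for illustration */

static DEFINE_PER_CPU(struct perf_event *, foo[MAXLEN]);

static void silence_sparse_sketch(void)
{
        /* The cast drops the per-cpu 'noderef' marker that sparse tracks. */
        struct perf_event **bar = (struct perf_event **)__get_cpu_var(foo);

        (void)bar;
}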
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
This patch fixes a trivial style issue in ptrace.c.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Single-stepping a breakpoint requires us to disable it temporarily so that
we don't get stuck in a recursive debug trap. With per-cpu breakpoints this
presents a problem where an interrupt can be taken before the single-step has
completed and a new task is eventually scheduled. This new task will not
hit the breakpoint because it will have been disabled by the earlier
exception handling code.
This patch disallows per-cpu breakpoints on ARM when an overflow handler
is not present. A similar effect can be created by placing breakpoints on
a shell and then running applications there.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The single-stepping code is currently different depending on whether
we are stepping over a breakpoint or a watchpoint. There is no good
reason for this, so let's sort it out.
This patch adds functions for enabling/disabling single-step for
a particular hw_breakpoint and integrates this with the exception
handling code.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The watchpoint single-stepping code calls register_user_hw_breakpoint to
register a mismatch breakpoint for stepping over the watchpoint. This is
performed with preemption disabled, which is unsafe as we may end up scheduling
whilst in_atomic(). Furthermore, using the perf API is rather overkill since
we are already in the hw-breakpoint backend and only require access to reserved
breakpoints anyway.
This patch reworks the watchpoint stepping code so that we don't require
another perf_event for the mismatch breakpoint. Instead, we hold a separate
arch_hw_breakpoint_ctrl struct inside the watchpoint which is used exclusively
for stepping. We can check whether or not stepping is enabled when installing
or uninstalling the watchpoint and operate on the breakpoint accordingly.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
To permit handling of watchpoint exceptions without signalling a
debugger, it is necessary to reserve breakpoint registers for in-kernel
use only.
This patch ensures that we record and subtract the number of reserved
breakpoints from the number of usable breakpoint registers that we
advertise to userspace via the ptrace API.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
On ARM, debug exceptions occur in the form of data or prefetch aborts.
Unlike ordinary aborts, debug exceptions require access to per-cpu banked
registers and data structures which are not saved in the low-level exception
code. For kernels built with CONFIG_PREEMPT, there is an unlikely scenario
that the debug handler ends up running on a different CPU from the one
that originally signalled the event, resulting in random data being read
from the wrong registers.
This patch adds a debug_entry macro to the low-level exception handling
code which checks whether the taken exception is a debug exception. If
it is, the preempt count for the faulting process is incremented. After
the debug handler has finished, the count is decremented.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The current hw_breakpoint code tries to fix up the alignment of
breakpoints so that we can make use of sparse byte-address-select
bits in the control register and give the illusion that we can
set breakpoints on unaligned addresses.
Although this works on v6 cores, v7 forbids this behaviour, instead
requiring breakpoints to be set on aligned addresses and have contiguous
byte-address-select ranges depending on the instruction set in use.
For ARM the only supported breakpoint size is 4 bytes, whilst Thumb-2 also
permits 2-byte breakpoints (watchpoints can be 1, 2, 4 or 8 bytes long).
This patch simplifies the alignment fixup code so that we require
addresses to be aligned to the size of the corresponding breakpoint.
This allows us to handle the common case of breaking on a half-word
aligned Thumb-2 instruction and also allows us to set byte watchpoints
on arbitrary addresses.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The ARMv7 debug architecture doesn't make any guarantees about the
contents of debug control registers following a debug logic reset.
This patch ensures that we reset the control registers when a cpu
comes ONLINE (for example, with hotplug) so that when we enable
monitor mode while inserting a breakpoint we won't exhibit random
behaviour.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
ARMv7 architects a system for saving and restoring the debug registers
across low-power modes. At the heart of this system is a lock register
which, when set, forbids writes to the debug registers. While locked,
writes to debug registers via the co-processor interface will result
in undefined instruction traps. Linux currently doesn't make use of
this feature because we update the debug registers on context switch
anyway, however the status of the lock is IMPLEMENTATION DEFINED on
reset.
This patch ensures that the lock is cleared during boot so that we
can write to the debug registers safely.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Our SMP implementation uses MESI-style cache coherency protocols, so
grouping together data which is mostly only read means that we avoid
unnecessary cache line bouncing when this data would otherwise share a
cache line with frequently written data. In other words, cache lines
associated with read-mostly data are expected to spend most of their
time in the shared state.
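One common way to express this in the kernel (not necessarily what this
particular patch does) is the __read_mostly annotation, which moves a
variable into a section shared only with other read-mostly data:

#include <linux/cache.h>

/* Illustrative variable name; read often, written rarely. */
static unsigned int mostly_read_setting __read_mostly;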
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
For kernels built with PREEMPT_RT, critical sections protected
by standard spinlocks are preemptible. This is not acceptable
for perf as (a) we may be scheduled onto a different CPU whilst
reading/writing banked PMU registers and (b) the latency when
reading the PMU registers becomes unpredictable.
This patch upgrades the pmu_lock spinlock to a raw_spinlock
instead.
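The change amounts to swapping the lock type and its lock/unlock calls,
roughly:

#include <linux/spinlock.h>

/* Before: DEFINE_SPINLOCK(pmu_lock), which PREEMPT_RT turns into a sleeping
 * lock.  After: a raw spinlock that always spins with preemption disabled. */
static DEFINE_RAW_SPINLOCK(pmu_lock);

static void pmu_reg_update_sketch(void)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&pmu_lock, flags);
        /* read/modify/write the banked PMU registers here */
        raw_spin_unlock_irqrestore(&pmu_lock, flags);
}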
Reported-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Russell reported a number of warnings coming from sparse when
checking the ARM perf_event.c files:
| perf_event.c seems to also have problems too:
|
| CHECK arch/arm/kernel/perf_event.c
| arch/arm/kernel/perf_event.c:37:1: warning: symbol 'pmu_lock' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:70:1: warning: symbol 'cpu_hw_events' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1006:1: warning: symbol 'armv6pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1113:1: warning: symbol 'armv6pmu_stop' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1956:6: warning: symbol 'armv7pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:3072:14: warning: incorrect type in argument 1 (different address spaces)
| arch/arm/kernel/perf_event.c:3072:14: expected void const volatile [noderef] <asn:1>*<noident>
| arch/arm/kernel/perf_event.c:3072:14: got struct frame_tail *tail
| arch/arm/kernel/perf_event.c:3074:49: warning: incorrect type in argument 2 (different address spaces)
| arch/arm/kernel/perf_event.c:3074:49: expected void const [noderef] <asn:1>*from
| arch/arm/kernel/perf_event.c:3074:49: got struct frame_tail *tail
This patch resolves these issues so we can live in silence
again.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When kexec is used to start a crash kernel, the other cores
are notified. These non-crashing cores will save their state
in the crash notes and then do nothing.
Signed-off-by: Per Fransson <per.xx.fransson@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The existing code invokes the syscall with rubbish in r7,
due to what looks like an incorrect literal load idiom.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
As we've now removed the spinlock and bitmask, we have nothing left
which requires interrupts to be disabled when sending an IPI. All
current IPI-sending implementations use the GIC, which also does not
require interrupts disabled when calling gic_raise_softirq().
Remove the now unnecessary IRQ disable.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Avoid using bitmasks and locks in the percpu area for IPIs, and instead
use individual software generated interrupts to identify the reason for
the IPI. This avoids the problems of having spinlocks in the percpu
area.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This allows us to use smp_cross_call() to trigger a number of different
software generated interrupts, rather than combining them all on one
SGI. Recover the SGI number via do_IPI.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
If a section is not marked with SHF_ALLOC, it will be discarded
by the module code. Therefore, it is not correct to register
the unwind tables.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
There's no need to keep pointers to the ELF sections available while
the module is loaded - we only need the section pointers while we're
finding and registering the unwind tables, which can all be done during
the finalize stage of loading.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Thumb-2.
The 32-bit conditional branches in Thumb-2 have a shorter range
(+/-512K) than their ARM counterparts (+/-32MB). The linker does
not currently generate trampolines to extend the range of these
Thumb-2 conditional branches, resulting in link errors when vmlinux
is sufficiently large, e.g.:
head.o:(.text+0x464): relocation truncated to fit: R_ARM_THM_JUMP19
This patch forces the longer-range, unconditional branch encoding
by use of an explicit IT instruction. The resulting branches are
triggered on the same conditions as before.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|