path: root/arch/arm/kernel
Age    Commit message    Author
2010-10-12genirq: Query arch for number of early descriptorsThomas Gleixner
sparse irq sets up NR_IRQS_LEGACY irq descriptors and archs then go ahead and allocate more. Use the unused return value of arch_probe_nr_irqs() to let the architecture return the number of early allocations. Fix up all users. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@elte.hu>
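A minimal sketch of the hook's reworked contract (shown as a weak-style default; an architecture override would return however many descriptors it actually set up early):

int __init arch_probe_nr_irqs(void)
{
	/* report the number of pre-allocated irq descriptors instead of
	 * an unused dummy value */
	return NR_IRQS_LEGACY;
}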
2010-10-11Merge branch 'oprofile/urgent' (early part) into oprofile/perfRobert Richter
2010-10-11perf: Add helper function to return number of countersMatt Fleming
The number of counters for the registered pmu is needed in a few places so provide a helper function that returns this number. Signed-off-by: Matt Fleming <matt@console-pimps.org> Tested-by: Will Deacon <will.deacon@arm.com> Acked-by: Paul Mundt <lethal@linux-sh.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Robert Richter <robert.richter@amd.com>
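A hedged sketch of such a helper on the ARM side; the 'armpmu' pointer and its 'num_events' field are illustrative names, not necessarily the exact ones in the driver:

int perf_num_counters(void)
{
	/* number of hardware counters exposed by the registered PMU */
	return armpmu ? armpmu->num_events : 0;
}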
2010-10-08ARM: add register documentation for __enable_mmuRussell King
Add some additional documentation on register usage in __enable_mmu to help complete the overall picture. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: hotplug cpu: move secondary_startup, __enable_mmu to cpuinitRussell King
Move these two functions, both of which are required for secondary CPU booting, into the cpuinit section. Ensure bad processors call __error_p for better diagnostics, rather than just __error. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: hotplug cpu: ensure that __enable_mmu is identity mappedRussell King
__enable_mmu is required to be executed in an identity mapped region to ensure that variances in CPUs do not cause a crash. We currently achieve this by assuming that it will be co-located with __create_page_tables. With hotplug CPU support, this assumption becomes invalid. Implement a better solution which ensures that it will be appropriately mapped no matter where it is placed. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: cleanup lookup_machine_type data and ensure these are placed in __HEADRussell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: hotplug cpu: move __error and __error_p to cpuinit sectionRussell King
__error and __error_p may be used by secondary CPUs, so these need to be in the cpuinit section. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: move __mmap_switched, C-API functions to init sectionRussell King
Move these functions, which are only ever used during boot CPU initialization, to the init section. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: cleanup boot cpu calling __mmap_switchedRussell King
This allows us to relocate __mmap_switched and associated data away from the head section. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: hotplug cpu: Keep processor information, startup code & __lookup_processor_typeRussell King
When hotplug CPU is enabled, we need to keep the list of supported CPUs, their setup functions, and __lookup_processor_type in place so that we can find and initialize secondary CPUs. Move these into the __CPUINIT section. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: hotplug cpu: setup 1:1 map for entire kernel image for secondary CPUsRussell King
Make the entire kernel image available for secondary CPUs rather than just the first MB of memory. This allows the startup code to appear in the cpuinit sections. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: no need for nommu to jump through the hoops that mmu doesRussell King
nommu can jump directly to __mmap_switched without the absolute address branching which the mmuful kernel does. Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: vmlinux.lds: Move unwind tables into _stext.._etextRussell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: vmlinux.lds: Refer to start of .data using _sdata rather than _dataRussell King
Use _sdata as the start of the data section, rather than _data. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08ARM: 6428/1: add cpu_idle_wait() to support CPUidle on SMP systems.Kevin Hilman
In order for CPUidle to work on SMP systems, an implementation of cpu_idle_wait() is needed. This patch duplicates the x86 implementation of cpu_idle_wait() for ARM. Tested-by: Colin Cross <ccross@android.com> Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
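The x86 implementation being duplicated is essentially a no-op IPI broadcast; roughly:

static void do_nothing(void *unused)
{
}

/*
 * Kick every online CPU so it falls out of its current idle routine and
 * picks up the newly installed idle handler on the next idle entry.
 */
void cpu_idle_wait(void)
{
	smp_mb();
	smp_call_function(do_nothing, NULL, 1);
}
EXPORT_SYMBOL_GPL(cpu_idle_wait);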
2010-10-08ARM: 6431/1: fix isb regression on CPU < v7Linus Walleij
The kernel does not compile for my ARM926EJ-S U300 system due to the isb instruction inserted in a generic assembler statement by commit 8925ec4c530094b878e7e28a1fd78e7122afd973, "ARM: 6385/1: setup: detect aliasing I-cache when D-cache is non-aliasing". However, the isb instruction is only available when assembling for v7, so let's use the generic isb() macro from setup.h instead. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Walleij <linus.walleij@stericsson.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
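For context, the generic macro degrades gracefully on older architectures, roughly along these lines (a sketch, not the exact header contents):

#if __LINUX_ARM_ARCH__ >= 7
#define isb() __asm__ __volatile__ ("isb" : : : "memory")
#elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6
/* pre-v7: instruction synchronization via CP15 c7, c5, 4 */
#define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
				    : : "r" (0) : "memory")
#else
#define isb() __asm__ __volatile__ ("" : : : "memory")
#endif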
2010-10-08Merge commit 'v2.6.36-rc7' into perf/coreIngo Molnar
Conflicts: arch/x86/kernel/module.c Merge reason: Resolve the conflict, pick up fixes. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-10-04ARM: 6385/1: setup: detect aliasing I-cache when D-cache is non-aliasingWill Deacon
Currently, the Kernel assumes that if a CPU has a non-aliasing D-cache then the I-cache is also non-aliasing. This may not be true on ARM cores from v6 onwards, which may have aliasing I-caches but non-aliasing D-caches. This patch adds a cpu_has_aliasing_icache function, which is called from cacheid_init and adds CACHEID_VIPT_I_ALIASING to the cacheid when appropriate. A utility macro, icache_is_vipt_aliasing(), is also provided. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
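A sketch of the new cache-ID plumbing; the bit value below is illustrative, not the real definition:

/* set by cacheid_init() when cpu_has_aliasing_icache() detects an
 * aliasing I-cache on a core whose D-cache does not alias */
#define CACHEID_VIPT_I_ALIASING		(1 << 4)	/* illustrative value */
#define icache_is_vipt_aliasing()	cacheid_is(CACHEID_VIPT_I_ALIASING)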
2010-10-04ARM: 6402/1: Don't send IPI in smp_send_stop if there's only one CPUTony Lindgren
There is no need to send an IPI if there's only one CPU, especially when booting systems with CONFIG_SMP_ON_UP that may not even support IPIs. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
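A sketch of the guard; send_ipi_message() is named here as the era's internal IPI helper and is an assumption for illustration:

void smp_send_stop(void)
{
	cpumask_t mask = cpu_online_map;

	cpu_clear(smp_processor_id(), mask);
	/* nothing to stop if we are the only online CPU */
	if (!cpus_empty(mask))
		send_ipi_message(&mask, IPI_CPU_STOP);
}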
2010-10-04ARM: Allow SMP kernels to boot on UP systemsRussell King
UP systems do not implement all the instructions that SMP systems have, so in order to boot a SMP kernel on a UP system, we need to rewrite parts of the kernel. Do this using an 'alternatives' scheme, where the kernel code and data is modified prior to initialization to replace the SMP instructions, thereby rendering the problematical code ineffectual. We use the linker to generate a list of 32-bit word locations and their replacement values, and run through these replacements when we detect a UP system. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
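The real fixup runs in early assembly before the MMU is enabled; a C-style sketch of the idea, with invented names, looks like this:

struct alt_smp_entry {
	u32 *location;		/* address of the SMP-only instruction */
	u32  up_insn;		/* replacement encoding for UP */
};

/* walk the linker-generated list and patch each site when we find
 * ourselves running on a uniprocessor system */
static void __init fixup_smp_on_up(struct alt_smp_entry *entry,
				   struct alt_smp_entry *end)
{
	for (; entry < end; entry++)
		*entry->location = entry->up_insn;
}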
2010-10-04ARM: 6291/1: coresight: move struct tracectx inside etm driverAlexander Shishkin
This is done so as to be able to make use of the coresight components' registers in assembler code (like omap sleep code). Also, there shouldn't be any users of this structure outside the etm driver. Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Alexander Shishkin <virtuoso@slind.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-04ARM: 6412/1: kprobes-decode: add support for MOVW instructionWill Deacon
The MOVW instruction moves a 16-bit immediate into the bottom halfword of the destination register. This patch ensures that kprobes leaves the 16-bit immediate intact, rather than assume a 12-bit immediate and mask out the upper 4 bits. Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
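For reference, the MOVW immediate is split across the instruction word as imm4:imm12; extracting it looks roughly like this (an illustrative helper, not the kprobes decoder itself):

static u32 movw_imm16(u32 insn)
{
	/* imm4 lives in bits [19:16], imm12 in bits [11:0] */
	return ((insn & 0x000f0000) >> 4) | (insn & 0x00000fff);
}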
2010-10-01ARM: add a vma entry for the user accessible vector pageNicolas Pitre
The kernel makes the high vector page visible to user space. This page contains (amongst others) small code segments that can be executed in user space. Make this page visible through ptrace and /proc/<pid>/mem in order to let gdb perform code parsing needed for proper unwinding. For example, the ERESTART_RESTARTBLOCK handler actually has a stack frame -- it returns to a PC value stored on the user's stack. To unwind after a "sleep" system call was interrupted twice, GDB would have to recognize this situation and understand that stack frame layout -- which it currently cannot do. We could fix this by hard-coding addresses in the vector page range into GDB, but that isn't really portable as not all of those addresses are guaranteed to remain stable across kernel releases. And having the gdb process make an exception for this page and get content from its own address space for it looks strange, and it is not future proof either. Being located above PAGE_OFFSET, this vma cannot be deleted by user space code. Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-10-01ARM: SECCOMP supportNicolas Pitre
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-09-27Merge master.kernel.org:/home/rmk/linux-2.6-armLinus Torvalds
* master.kernel.org:/home/rmk/linux-2.6-arm: (28 commits)
  ARM: 6411/1: vexpress: set RAM latencies to 1 cycle for PL310 on ct-ca9x4 tile
  ARM: 6409/1: davinci: map sram using MT_MEMORY_NONCACHED instead of MT_DEVICE
  ARM: 6408/1: omap: Map only available sram memory
  ARM: 6407/1: mmu: Setup MT_MEMORY and MT_MEMORY_NONCACHED L1 entries
  ARM: pxa: remove pr_<level> uses of KERN_<level>
  ARM: pxa168fb: clear enable bit when not active
  ARM: pxa: fix cpu_is_pxa*() not expanding to zero when not configured
  ARM: pxa168: fix corrected reset vector
  ARM: pxa: Use PIO for PI2C communication on Palm27x
  ARM: pxa: Fix Vpac270 gpio_power for MMC
  ARM: 6401/1: plug a race in the alignment trap handler
  ARM: 6406/1: at91sam9g45: fix i2c bus speed
  leds: leds-ns2: fix locking
  ARM: dove: fix __io() definition to use bus based offset
  dmaengine: fix interrupt clearing for mv_xor
  ARM: kirkwood: Unbreak PCIe I/O port
  ARM: Fix build error when using KCONFIG_CONFIG
  ARM: 6383/1: Implement phys_mem_access_prot() to avoid attributes aliasing
  ARM: 6400/1: at91: fix arch_gettimeoffset fallout
  ARM: 6398/1: add proc info for ARM11MPCore/Cortex-A9 from ARM
  ...
2010-09-21Merge commit 'v2.6.36-rc5' into perf/coreIngo Molnar
Merge reason: Pick up the latest fixes in -rc5. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-17arm: fix really nasty sigreturn bugAl Viro
If a signal hits us outside of a syscall and another gets delivered when we are in sigreturn (e.g. because it had been in sa_mask for the first one and got sent to us while we'd been in the first handler), we have a chance of returning from the second handler to location one insn prior to where we ought to return. If r0 happens to contain -513 (-ERESTARTNOINTR), sigreturn will get confused into doing restart syscall song and dance. Incredible joy to debug, since it manifests as random, infrequent and very hard to reproduce double execution of instructions in userland code... The fix is simple - mark it "don't bother with restarts" in wrapper, i.e. set r8 to 0 in sys_sigreturn and sys_rt_sigreturn wrappers, suppressing the syscall restart handling on return from these guys. They can't legitimately return a restart-worthy error anyway.

Testcase:

	#include <unistd.h>
	#include <signal.h>
	#include <stdlib.h>
	#include <sys/time.h>
	#include <errno.h>

	void f(int n)
	{
		__asm__ __volatile__(
			"ldr r0, [%0]\n"
			"b 1f\n"
			"b 2f\n"
			"1:b .\n"
			"2:\n" : : "r"(&n));
	}

	void handler1(int sig) { }
	void handler2(int sig) { raise(1); }
	void handler3(int sig) { exit(0); }

	main()
	{
		struct sigaction s = {.sa_handler = handler2};
		struct itimerval t1 = { .it_value = {1} };
		struct itimerval t2 = { .it_value = {2} };
		signal(1, handler1);
		sigemptyset(&s.sa_mask);
		sigaddset(&s.sa_mask, 1);
		sigaction(SIGALRM, &s, NULL);
		signal(SIGVTALRM, handler3);
		setitimer(ITIMER_REAL, &t1, NULL);
		setitimer(ITIMER_VIRTUAL, &t2, NULL);
		f(-513); /* -ERESTARTNOINTR */
		write(1, "buggered\n", 9);
		return 1;
	}

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-09-17ARM: prevent multiple syscall restartsRussell King
Al Viro reports that calling "sys_sigsuspend(-ERESTARTNOHAND, 0, 0)" with two signals coming and being handled in kernel space results in the syscall restart being done twice. Avoid this by clearing the 'why' flag when we call the signal handling code to prevent further syscall restarts after the first. Acked-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-15Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/coreIngo Molnar
2010-09-09perf: Remove the sysfs bitsPeter Zijlstra
Neither the overcommit nor the reservation sysfs parameter were actually working, remove them as they'll only get in the way. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-09perf: Rework the PMU methodsPeter Zijlstra
Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument. The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state. This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers). It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters). The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state: 1) We disable the counter: a) the pmu has per-counter enable bits, we flip that b) we program a NOP event, preserving the counter state 2) We store the counter state and ignore all read/overflow events Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> Cc: stephane eranian <eranian@googlemail.com> Cc: Robert Richter <robert.richter@amd.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Cc: Lin Ming <ming.m.lin@intel.com> Cc: Yanmin <yanmin_zhang@linux.intel.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: Michael Cree <mcree@orcon.net.nz> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
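The reworked callback set, in rough outline (other members omitted; a sketch of the interface described above, not the full declaration):

struct pmu {
	int	(*add)   (struct perf_event *event, int flags); /* PERF_EF_START */
	void	(*del)   (struct perf_event *event, int flags);
	void	(*start) (struct perf_event *event, int flags); /* PERF_EF_RELOAD */
	void	(*stop)  (struct perf_event *event, int flags); /* PERF_EF_UPDATE */
	void	(*read)  (struct perf_event *event);
};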
2010-09-09perf: Per PMU disablePeter Zijlstra
Changes perf_disable() into perf_pmu_disable(). Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> Cc: stephane eranian <eranian@googlemail.com> Cc: Robert Richter <robert.richter@amd.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Cc: Lin Ming <ming.m.lin@intel.com> Cc: Yanmin <yanmin_zhang@linux.intel.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: Michael Cree <mcree@orcon.net.nz> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-09perf: Reduce perf_disable() usagePeter Zijlstra
Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> Cc: stephane eranian <eranian@googlemail.com> Cc: Robert Richter <robert.richter@amd.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Cc: Lin Ming <ming.m.lin@intel.com> Cc: Yanmin <yanmin_zhang@linux.intel.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: Michael Cree <mcree@orcon.net.nz> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-09perf: Register PMU implementationsPeter Zijlstra
Simple registration interface for struct pmu, this provides the infrastructure for removing all the weak functions. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> Cc: stephane eranian <eranian@googlemail.com> Cc: Robert Richter <robert.richter@amd.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Cc: Lin Ming <ming.m.lin@intel.com> Cc: Yanmin <yanmin_zhang@linux.intel.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: Michael Cree <mcree@orcon.net.nz> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
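A minimal usage sketch, assuming a backend with placeholder callback and function names; only the registration call itself comes from this series:

static struct pmu cpu_pmu = {
	.event_init	= cpu_pmu_event_init,	/* placeholder callback */
	/* ... hardware-specific callbacks ... */
};

static int __init cpu_pmu_register(void)
{
	return perf_pmu_register(&cpu_pmu);
}
early_initcall(cpu_pmu_register);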
2010-09-09perf: Deconstify struct pmuPeter Zijlstra
sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"` Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: paulus <paulus@samba.org> Cc: stephane eranian <eranian@googlemail.com> Cc: Robert Richter <robert.richter@amd.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Cc: Lin Ming <ming.m.lin@intel.com> Cc: Yanmin <yanmin_zhang@linux.intel.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: Michael Cree <mcree@orcon.net.nz> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-08ARM: 6358/1: hw-breakpoint: add HAVE_HW_BREAKPOINT to KconfigWill Deacon
If we're targetting a v6 or v7 core and have at least software perf events available, then automatically add support for hardware breakpoints. Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: S. Karthikeyan <informkarthik@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-08ARM: 6357/1: hw-breakpoint: add new ptrace requests for hw-breakpoint interactionWill Deacon
For debuggers to take advantage of the hw-breakpoint framework in the kernel, it is necessary to expose the API calls via a ptrace interface. This patch exposes the hardware breakpoint framework as a collection of virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS requests. The breakpoints are stored in the debug_info struct of the running thread. Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: S. Karthikeyan <informkarthik@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-08ARM: 6356/1: hw-breakpoint: add ARM backend for the hw-breakpoint frameworkWill Deacon
The hw-breakpoint framework in the kernel requires architecture-specific support in order to install, remove, validate and manage hardware breakpoints. This patch adds initial support for this framework to the ARM architecture, but restricts the number of watchpoints to a single resource to get around the fact that the Data Fault Address Register is unknown when a watchpoint debug exception is taken. On cores with v7 debug, the kernel can handle breakpoint and watchpoint exceptions occurring from userspace. Older cores require clients to handle the exception themselves by registering an appropriate overflow handler or, in the case of ptrace, handling the raised SIGTRAP. The memory-mapped extended debug interface is unsupported due to its unreliability in real implementations. Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: S. Karthikeyan <informkarthik@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6352/1: perf: fix event validationWill Deacon
The validate_event function in the ARM perf events backend has the following problems: 1.) Events that are disabled count towards the cost. 2.) Events associated with other PMUs [for example, software events or breakpoints] do not count towards the cost, but do fail validation, causing the group to fail. This patch changes validate_event so that it ignores events in the PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs. Reported-by: Pawel Moll <pawel.moll@arm.com> Acked-by: Jamie Iles <jamie.iles@picochip.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
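A sketch of the adjusted check (simplified; 'pmu' and 'armpmu' follow the driver's naming):

static int
validate_event(struct cpu_hw_events *cpuc, struct perf_event *event)
{
	struct hw_perf_event fake_event = event->hw;

	/* off events and events owned by another PMU never consume one of
	 * our counters, so count them as always schedulable */
	if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
		return 1;

	return armpmu->get_event_idx(cpuc, &fake_event) >= 0;
}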
2010-09-02ARM: 6341/1: unwind - optimise linked-list searches for modulesPhil Carmody
With several sections per module, and dozens of modules, the searches down the linked list of sections would dominate the lookup time, dwarfing any savings from the binary search within the section. A simple move-to-front optimisation exploits the commonality of the code paths taken, and in simple real-world tests reduces the number of steps in the search to barely more than 1. Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
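The move-to-front step, sketched with the kernel list API (the struct fields and helper names follow the unwinder's conventions but are best-effort guesses):

static const struct unwind_idx *
unwind_find_table_idx(unsigned long addr)
{
	struct unwind_table *table;
	const struct unwind_idx *idx = NULL;

	list_for_each_entry(table, &unwind_tables, list) {
		if (addr >= table->begin_addr && addr < table->end_addr) {
			idx = search_index(addr, table->start, table->stop);
			/* move-to-front: the next lookup for this module
			 * terminates on the first iteration */
			list_move(&table->list, &unwind_tables);
			break;
		}
	}
	return idx;
}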
2010-09-02ARM: 6340/1: module - additional unwind tables for exit/devexit sectionsPhil Carmody
Without these, exit functions cannot be stack-traced, so to speak. This implies that module unloads that perform allocations (don't laugh) will cause noisy warnings on the console when kmemleak is enabled, as it presumes that all code's call chains are traceable. Similarly, BUGs and WARN_ONs will give additional console spam. Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6339/1: module - simplify unwind table handlingPhil Carmody
The various sections are all dealt with similarly, so factor out that common behaviour. (Incorporating Peter Huewe's fix.) Cc: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6338/1: module - simplify code with temporariesPhil Carmody
Less to read. Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6319/1: ftrace: add Thumb-2 support to dynamic ftraceRabin Vincent
Handle the different nop and call instructions for Thumb-2. Also, we need to adjust the recorded mcount_loc addresses because they have the lsb set. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6318/1: ftrace: fix and update dynamic ftraceRabin Vincent
This adds mcount recording and updates dynamic ftrace for ARM to work with the new ftrace dynamic tracing implementation. It also adds support for the mcount format used by newer ARM compilers. With dynamic tracing, mcount() is implemented as a nop. Callsites are patched on startup with nops, and dynamically patched to call to the ftrace_caller() routine as needed. Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
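The core of the dynamic patching is a compare-and-rewrite of each mcount call site; a hedged sketch of that primitive:

/* verify the old instruction at the call site, then install the new one
 * (nop <-> call to ftrace_caller) and flush the icache for that word */
static int ftrace_modify_code(unsigned long pc, unsigned long old,
			      unsigned long new)
{
	unsigned long replaced;

	if (probe_kernel_read(&replaced, (void *)pc, MCOUNT_INSN_SIZE))
		return -EFAULT;

	if (replaced != old)
		return -EINVAL;

	if (probe_kernel_write((void *)pc, &new, MCOUNT_INSN_SIZE))
		return -EPERM;

	flush_icache_range(pc, pc + MCOUNT_INSN_SIZE);
	return 0;
}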
2010-09-02ARM: 6316/1: ftrace: add Thumb-2 supportRabin Vincent
Fix the mcount routines to build and run on a kernel built with the Thumb-2 instruction set by correcting the following errors using the fixes suggested by Catalin Marinas: - Problem: The following assembler errors appear at the "adr r0, ftrace_stub" instruction: entry-common.S: Assembler messages: entry-common.S:179: Error: invalid immediate for address calculation (value = 0x00000004) Fix: The errors don't occur with a non-global symbol, so use one. - Problem: The "mov lr, pc" does not set the lsb when storing the pc in lr. The called function returns with "bx lr", and the mode changes to ARM. Fix: Add a label on the return address and use "adr lr, BSYM(label)". We don't modify the old mcount because it won't be built when using Thumb-2. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6315/1: ftrace: add ENDPROC annotationsRabin Vincent
When building as Thumb-2, the ".type foo, %function" annotation in ENDPROC seems to be required in order for the assembly routines to be recognized as Thumb-2 code. If the ENDPROC annotations are not present, calls to these routines are generated as BLX instead of BL. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-02ARM: 6314/1: ftrace: allow build without frame pointers on ARMRabin Vincent
With a new enough GCC, ARM function tracing can be supported without the need for frame pointers. This is essential for Thumb-2 support, since frame pointers aren't available then. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-01ARM: 6343/1: wire up fanotify and prlimit64 syscalls on ARMMikael Pettersson
The 2.6.36-rc kernel added three new system calls: fanotify_init, fanotify_mark, and prlimit64. This patch wires them up on ARM. The only non-trivial issue here is the u64 argument to sys_fanotify_mark(), but it is the 3rd argument and thus passed in r2/r3 in both kernel and user space, so it causes no problems. Tested with a 2.6.36-rc2 EABI kernel on an ixp4xx machine. Tested-by: Anand Gadiyar <gadiyar@ti.com> Signed-off-by: Mikael Pettersson <mikpe@it.uu.se> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>