path: root/arch
Age  Commit message  Author
2013-10-24  s390/percpu: make use of interlocked-access facility 1 instructions  (Heiko Carstens)
Optimize this_cpu_* functions for 64 bit by making use of new instructions that came with the interlocked-access facility 1 (load-and-*) and the general-instructions-extension facility (asi, agsi). That way we get rid of the compare-and-swap loop in most cases. Code size reduction (defconfig, -march=z196): 11,555 bytes.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/percpu: use generic percpu ops for CONFIG_32BIT  (Heiko Carstens)
Remove the special cases for the this_cpu_* functions for 32 bit in order to make it easier to add additional code for 64 bit. 32 bit will use the generic implementation.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/compat: make psw32_user_bits a constant value again  (Heiko Carstens)
Make psw32_user_bits a constant value again. This is a leftover from the code which allowed the kernel to run in either primary or home space, which was removed with 9a905662 "s390/uaccess: always run the kernel in home space".
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: fix handling of runtime instrumentation psw bit  (Heiko Carstens)
Fix the following bugs:
- When returning from a signal, the signal handler copies the saved psw mask from user space and uses parts of it. In particular it restores the RI bit unconditionally. If the machine doesn't support RI, or RI is disabled for the task, the last lpswe instruction which returns to user space will generate a specification exception. To fix this, check whether the RI bit is allowed to be set and kill the task if not.
- In the compat mode signal handler code the RI bit of the psw mask gets propagated to the mask of the return psw: if user space enables RI in the signal handler, RI will also be enabled after the signal handler is finished. This is different behaviour than with 64 bit tasks, so change it to match the 64 bit semantics, which restore the original RI bit value.
- Fix similar oddities within the ptrace code as well.
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: fix save and restore of the floating-point-control register  (Martin Schwidefsky)
The FPC_VALID_MASK has been used to check the validity of the value to be loaded into the floating-point-control register. With the introduction of the floating-point extension facility and the decimal-floating-point facility, additional bits have been defined which need to be checked in a non-straightforward way. So far these bits have been ignored, which can cause incorrect results for decimal-floating-point operations, e.g. an incorrect rounding mode being set after signal return. The static check with the FPC_VALID_MASK is replaced with a trial load of the floating-point-control value, see test_fp_ctl. In addition an information leak with the padding word between the floating-point-control word and the floating-point registers in the s390_fp_regs is fixed.
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/crypto: fix aes_s390 crypto module unload problem  (Ingo Tuchscherer)
If a machine has no hardware support for the xts-aes or ctr-aes algorithms they are not registered in aes_s390_init. But aes_s390_fini unconditionally unregisters the algorithms which causes crypto_remove_alg to crash. Add two flag variables to remember if xts-aes and ctr-aes have been added.
Signed-off-by: Ingo Tuchscherer <ingo.tuchscherer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
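A minimal sketch of the described fix, with made-up flag names (the real variable names and alg structures in aes_s390.c may differ):

    static int xts_aes_registered, ctr_aes_registered;	/* hypothetical flag names */

    static void __exit example_aes_s390_fini(void)
    {
            /* only unregister what the init function actually registered */
            if (ctr_aes_registered)
                    crypto_unregister_alg(&ctr_aes_alg);
            if (xts_aes_registered)
                    crypto_unregister_alg(&xts_aes_alg);
            crypto_unregister_alg(&aes_alg);
    }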
2013-10-24  s390/appldata: make copy_from_user() invocations provably correct  (Heiko Carstens)
Just change the type of "len" to unsigned int so the compiler can prove that we don't have a buffer overflow (and generates less code). We get rid of these:
In function 'copy_from_user', inlined from 'appldata_interval_handler' at arch/s390/appldata/appldata_base.c:265:
uaccess.h:303: warning: call to 'copy_from_user_overflow' declared with attribute warning: copy_from_user() buffer size is not provably correct
In function 'copy_from_user', inlined from 'appldata_timer_handler' at arch/s390/appldata/appldata_base.c:225:
uaccess.h:303: warning: call to 'copy_from_user_overflow' declared with attribute warning: copy_from_user() buffer size is not provably correct
In function 'copy_from_user', inlined from 'appldata_generic_handler' at arch/s390/appldata/appldata_base.c:333:
uaccess.h:303: warning: call to 'copy_from_user_overflow' declared with attribute warning: copy_from_user() buffer size is not provably correct
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
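A hedged sketch of the pattern the warning is about (handler and buffer names are made up): with an unsigned length clamped against the destination size, the compiler can see that the copy can never overflow.

    /* sketch with hypothetical names, mirroring the pattern described above */
    static int example_handler(const void __user *buffer, size_t *lenp)
    {
            char buf[16];
            unsigned int len;	/* unsigned, so "len <= sizeof(buf)" is provable */

            len = *lenp;
            if (len > sizeof(buf))
                    len = sizeof(buf);
            if (copy_from_user(buf, buffer, len))
                    return -EFAULT;
            /* ... parse buf ... */
            return 0;
    }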
2013-10-24  s390/cmm: make copy_from_user() invocation provably correct  (Heiko Carstens)
Get rid of these two warnings:
In function 'copy_from_user', inlined from 'cmm_timeout_handler' at arch/s390/mm/cmm.c:310:
uaccess.h:303: warning: call to 'copy_from_user_overflow' declared with attribute warning: copy_from_user() buffer size is not provably correct
In function 'copy_from_user', inlined from 'cmm_pages_handler' at arch/s390/mm/cmm.c:270:
uaccess.h:303: warning: call to 'copy_from_user_overflow' declared with attribute warning: copy_from_user() buffer size is not provably correct
Change the "len" type to unsigned int, so we can make sure that there is no buffer overflow. This also generates less code.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/cache: get rid of compile warning  (Heiko Carstens)
Get rid of this one:
arch/s390/kernel/cache.c: In function 'cache_build_info':
arch/s390/kernel/cache.c:144: warning: 'private' may be used uninitialized in this function
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/compat,signal: change return values to -EFAULT  (Heiko Carstens)
Instead of returning the number of bytes not copied and/or -EFAULT, let the signal handler helper functions always return -EFAULT if a user space access failed. This doesn't fix a bug in the current code, but makes it harder to get it wrong in the future. Also "smatch" won't complain anymore about the fact that the number of remaining bytes gets returned instead of -EFAULT.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
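In other words, a helper that used to hand back the non-zero byte count from copy_to_user() now collapses any failure to -EFAULT, roughly like this sketch (function and parameter names are made up):

    static int example_store_sigframe(void __user *uptr, const void *frame, size_t size)
    {
            /* previously the raw copy_to_user() result (bytes not copied)
             * would have been returned to the caller */
            if (copy_to_user(uptr, frame, size))
                    return -EFAULT;
            return 0;
    }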
2013-10-24  s390: Remove unused declaration of zfcpdump_prefix_array[]  (Michael Holzheu)
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/cio: fix error-prone defines  (Peter Oberparleiter)
Missing parentheses may cause problems when the defines are used together with operators of higher precedence.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: Remove zfcpdump NR_CPUS dependency  (Michael Holzheu)
Currently zfcpdump can only collect registers for up to CONFIG_NR_CPUS CPUs. This dependency is not necessary, so remove it by dynamically allocating the save area array.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
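A sketch of the dynamic allocation idea (helper name is hypothetical), assuming one save area pointer per possible CPU instead of a static NR_CPUS-sized array:

    /* hypothetical helper: size the save area array at runtime */
    static struct save_area **example_alloc_save_areas(void)
    {
            return kcalloc(num_possible_cpus(), sizeof(struct save_area *),
                           GFP_KERNEL);
    }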
2013-10-24  s390/ftrace: prepare_ftrace_return() function call order  (Heiko Carstens)
Steven Rostedt noted that s390 is the only architecture which calls ftrace_push_return_trace() before ftrace_graph_entry() and therefore has the small advantage that trace.depth gets initialized automatically. However this small advantage isn't worth the difference and possible subtle breakage that may result from this. So change s390 to have the same function call order as all other architectures: first ftrace_graph_entry(), then ftrace_push_return_trace().
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/crashdump: remove unused variable  (Heiko Carstens)
Get rid of this compile warning:
arch/s390/kernel/crash_dump.c: In function 'copy_from_realmem':
arch/s390/kernel/crash_dump.c:48:6: warning: unused variable 'rc' [-Wunused-variable]
  int rc;
      ^
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: use 'unsigned int' instead of 'unsigned long' for atomic_*_mask()  (Chen Gang)
The type of 'v->counter' is always 'int', and the related inline assembly code also processes 'int', so use 'unsigned int' instead of 'unsigned long' for the 'mask'.
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/gup: handle zero nr_pages case correctly  (Heiko Carstens)
If [__]get_user_pages_fast() gets called with nr_pages == 0, the current code would walk the page tables and keep pinning pages until it hit the first invalid pte (or until the kernel crashed while writing struct page pointers past the end of the pages array). So let's handle at least the nr_pages == 0 case correctly and exit early.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
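A minimal sketch of the early exit (function name made up; the real function continues with the lockless page table walk):

    static int example_gup_fast(unsigned long start, int nr_pages, int write,
                                struct page **pages)
    {
            if (nr_pages <= 0)	/* exit early instead of walking page tables */
                    return 0;
            /* ... lockless page table walk would follow here ... */
            return 0;
    }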
2013-10-24  s390/gup: reduce code duplication between [__]get_user_pages_fast functions  (Heiko Carstens)
Just call __get_user_pages_fast() from get_user_pages_fast() like powerpc. This saves a lot of duplicated code.
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/mm: do not initialize storage keys  (Martin Schwidefsky)
With dirty and referenced bits implemented in software it is unnecessary to initialize the storage key for every page. With this patch not a single storage key operation is done for a system that does not use KVM. For KVM set_pte_at/pgste_set_key will do the initialization for the guest view of the storage key when the mapping for the page is established in the host.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bpf,jit: fix prolog oddity  (Martin Schwidefsky)
The prolog of functions generated by the bpf jit compiler uses an instruction sequence with an "ahi" instruction to create stack space instead of using an "aghi" instruction. Using the 32-bit "ahi" is not wrong as the stack we are operating on is an order-4 allocation which is always aligned to 16KB. But it is more consistent to use an "aghi" as the stack pointer is a 64-bit value.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: cleanup and add sanity checks to control register macros  (Heiko Carstens)
- turn some macros into functions
- merge two almost identical versions for 32/64 bit
- add BUILD_BUG_ON() check to make sure the passed in array is large enough
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
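The array-size sanity check could look roughly like the sketch below (macro name and layout are illustrative, not the actual s390 interface):

    /* illustrative only: fail the build if the array cannot hold cregs low..high */
    #define example_ctl_store(array, low, high) do {			\
            BUILD_BUG_ON(ARRAY_SIZE(array) < (high) - (low) + 1);	\
            /* store control registers low..high into array */		\
    } while (0)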
2013-10-24  s390/pci: implement hibernation hooks  (Sebastian Ott)
Implement architecture-specific functionality when a PCI device is doing a hibernate transition.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/uaccess: always run the kernel in home space  (Martin Schwidefsky)
Simplify the uaccess code by removing the user_mode=home option. The kernel will now always run in the home space mode.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: rename find_first_bit_left() to find_first_bit_inv()  (Heiko Carstens)
find_first_bit_left() and friends have nothing to do with the normal LSB0 bit numbering for big endian machines used in Linux (least significant bit has bit number 0). Instead they use MSB0 bit numbering, where the most significant bit has bit number 0. So rename find_first_bit_left() and friends to find_first_bit_inv(), to avoid any confusion. Also provide inv versions of set_bit, clear_bit and test_bit. This also removes the confusing use of e.g. set_bit() in airq.c, which uses a "be_to_le" bit number conversion that could imply that set_bit_le() could be used instead. But that would be entirely wrong since the _le bitops variant uses yet another bit numbering scheme.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
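Within one 64-bit word the two numbering schemes are mirror images, so the conversion is just an XOR with 63; a generic sketch (not the kernel's helper):

    /* MSB0 bit n of a 64-bit word is LSB0 bit (63 - n), i.e. n ^ 63 */
    static inline unsigned long example_msb0_to_lsb0(unsigned long nr)
    {
            return nr ^ 63;
    }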
2013-10-24  s390/bitops: use flogr instruction to implement __ffs, ffs, __fls, fls and fls64  (Heiko Carstens)
Since the z9-109 we have the flogr instruction, which can be used to implement optimized versions of __ffs, ffs, __fls, fls and fls64. So implement and use them instead of the generic variants. This reduces the size of the kernel image (defconfig, -march=z9-109) by 19,648 bytes.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
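For reference, fls64() has the semantics of the generic C sketch below; the commit implements this with flogr, which effectively provides the count of leading zeros in hardware (the sketch assumes 64-bit longs):

    /* example_fls64(0) == 0, example_fls64(1) == 1, example_fls64(1UL << 63) == 64 */
    static inline int example_fls64(unsigned long word)
    {
            return word ? 64 - __builtin_clzl(word) : 0;
    }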
2013-10-24  s390/bitops: use generic find bit functions / reimplement _left variant  (Heiko Carstens)
Just like all other architectures we should use out-of-line find bit operations, since the inline variants bloat the size of the kernel image. And also like all other architectures we should only supply optimized variants of the __ffs, ffs, etc. primitives. Therefore this patch removes the inlined s390 find bit functions and uses the generic out-of-line variants instead. The optimization of the primitives follows with the next patch. With this patch the functions find_first_bit_left() and find_next_bit_left() have also been reimplemented, since logically they are nothing but a find_first_bit()/find_next_bit() implementation that uses an inverted __fls() instead of __ffs(). Also the restriction that these functions only work on machines which support the "flogr" instruction is gone now. This reduces the size of the kernel image (defconfig, -march=z9-109) by 144,482 bytes. The function build_sched_domains() alone gets reduced from 7 KB to 3.5 KB. We also get rid of unused functions like find_first_bit_le()...
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/s390dbf: add debug_level_enabled() function  (Hendrik Brueckner)
Add the debug_level_enabled() function to check if debug events for a particular level would be logged. This might help to save cycles for debug events that require additional information collection.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
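Typical intended use, as the message suggests, is to guard expensive data collection with the new check; a sketch assuming an existing debug_info area (buffer setup and the level value 6 are made up for illustration):

    static void example_trace(debug_info_t *debug_info, int state)
    {
            char buf[32];

            if (debug_level_enabled(debug_info, 6)) {	/* 6 == chosen debug level */
                    /* only format the event data if it will actually be logged */
                    snprintf(buf, sizeof(buf), "state=%d", state);
                    debug_text_event(debug_info, 6, buf);
            }
    }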
2013-10-24  s390/bitops: optimize set_bit() for constant values  (Heiko Carstens)
Since zEC12 we have the interlocked-access facility 2, which allows the ni/oi/xi instructions to be used to update a single byte in storage with compare-and-swap semantics. So change set_bit(), clear_bit() and change_bit() to generate such code instead of a compare-and-swap loop (or using the load-and-* instruction family), if possible. This reduces the text segment by yet another 8KB (defconfig). Alternatively the long displacement variants niy/oiy/xiy could have been used, but the extended displacement field is usually not needed and would therefore only increase the size of the text segment again.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: remove CONFIG_SMP / simplify non-atomic bitops  (Heiko Carstens)
Remove CONFIG_SMP from the bitops code. This reduces the C code significantly and also generates better code for the SMP case. It means that for !CONFIG_SMP set_bit() and friends now also have compare-and-swap semantics (read: more code). However nobody really cares about !CONFIG_SMP, and this is the trade-off to simplify the SMP code which we do care about. The non-atomic bitops like __set_bit() now also generate better code, because the old code did not have a __builtin_constant_p() check for the CONFIG_SMP case and therefore always generated the inline assembly variant. The inline assemblies for the non-atomic case have now been removed completely, since gcc can produce better code which accesses fewer memory operands. test_bit() also got a bit simplified: it did have a __builtin_constant_p() check, but with two identical code paths for each case (written differently). In the end this mainly reduces the code to be maintained and is not very relevant for code generation, since there are not many non-atomic bitops usages that we care about. (Code reduction, defconfig kernel image before/after: 560 bytes.)
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
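For reference, the non-atomic variant boils down to a plain C read-modify-write along these lines (a generic sketch, not the exact s390 code):

    static inline void example___set_bit(unsigned long nr, unsigned long *ptr)
    {
            unsigned long mask = 1UL << (nr & (BITS_PER_LONG - 1));

            ptr[nr / BITS_PER_LONG] |= mask;	/* no atomicity guarantees */
    }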
2013-10-24  s390/atomic: various small cleanups  (Heiko Carstens)
- add a typecheck to the defines to make sure they operate on an atomic_t
- simplify inline assembly constraints
- keep variable names common between functions
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: optimize atomic_add() for constant values  (Heiko Carstens)
If the interlocked-access facility 1 is available we can use the asi and agsi instructions for interlocked updates if the value to be added is a constant and small (in the range of -128..127). asi and agsi do not return the old or new value, therefore these instructions can only be used for atomic_(add|sub|inc|dec)[64].
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
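A hedged sketch of the idea (constraints and fallback are simplified and may differ from the real code): when the addend is a compile-time constant that fits in a signed byte, a single asi performs the interlocked update.

    static inline void example_atomic_add(int i, atomic_t *v)
    {
            if (__builtin_constant_p(i) && i > -129 && i < 128) {
                    /* add-immediate with interlocked update; no old/new value */
                    asm volatile("asi %0,%1"
                                 : "+Q" (v->counter)
                                 : "i" (i)
                                 : "cc", "memory");
                    return;
            }
            atomic_add_return(i, v);	/* fall back to the existing variant */
    }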
2013-10-24  s390/kprobes: allow kprobes only on known instructions  (Heiko Carstens)
Since we have an in-kernel disassembler we can make sure that there won't be any kprobes set on random data.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/kprobes: use insn_length helper function  (Heiko Carstens)
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: move disassembler function prototypes to proper header file  (Heiko Carstens)
Now that the in-kernel disassembler has its own header file, move the disassembler related function prototypes to that header file.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: move common definitions to a header file  (Suzuki K. Poulose)
The patch moves some of the definitions to a header file. No functional changes involved. I have retained the Copyright Statement from the original file.
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
[Heiko Carstens: rename s390-dis.h to dis.h]
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: rename structures for unique types  (Suzuki K. Poulose)
Rename the 'insn' and 'operand' structures to more canonical names to avoid conflicts. struct insn represents information about an instruction, including the mnemonic, format and opcode. struct operand represents the 'properties' and information on how to interpret the operand value, and doesn't contain the value itself. We rename these structures to avoid a global conflict, i.e.:
1,$s/struct insn/struct s390_insn/g
1,$s/struct operand/struct s390_operand/g
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: make use of interlocked-access facility 1 instructions  (Heiko Carstens)
Same as for bitops: make use of the interlocked-access facility 1 instructions which allow to atomically update storage locations without a compare-and-swap loop.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: implement atomic_sub_return() with atomic_add_return()  (Heiko Carstens)
Get rid of the own atomic_sub_return() implementation. Otherwise we can't make use of the interlocked-access facility 1 instructions for atomic_sub_return(), since there is no "load and subtract" instruction available.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
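In other words, subtraction is expressed as adding the negated value, so the add path can use the interlocked-access instructions where available; roughly:

    static inline int example_atomic_sub_return(int i, atomic_t *v)
    {
            return atomic_add_return(-i, v);
    }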
2013-10-24  s390/kprobes: have more correct if statement in s390_get_insn_slot()  (Heiko Carstens)
When checking whether the insn address is a kernel image or a module address, it should be an if-else-if statement, not two independent if statements. This doesn't really fix a bug, but matches s390_free_insn_slot().
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: always set -march compiler option  (Heiko Carstens)
Currently we only set the -march compiler option if the kbuild system has figured out that the compiler actually supports the selected architecture (cc-option test). As a result, no -march compiler option is set when a cpu architecture not supported by the current compiler is selected. The kernel compile then succeeds, but for the default architecture instead of the (unsupported) selected one. Change this behaviour so that compiles fail if the compiler does not support the selected cpu architecture.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: make use of interlocked-access facility 1 instructions  (Heiko Carstens)
Make use of the interlocked-access facility 1 that was added with the z196 architecture. This facility added new instructions which can atomically update a storage location without a compare-and-swap loop. E.g. setting a bit within a "long" can be done with a single instruction. The size of the kernel image gets ~30kb smaller. Considering that there are approx. 1900 bitops call sites, this means that each call site saves about 15-16 bytes, which is expected.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  ARM: OMAP3: fix dpll4_m3_ck and dpll4_m4_ck dividers  (Tomi Valkeinen)
dpll4_m3_ck and dpll4_m4_ck have divider bit fields which are 6 bits wide. However, only values from 1 to 32 are allowed. This means we have to add divider tables and list the dividers explicitly. I believe the same issue is there for other dpll4_mx_ck clocks, but as I'm not familiar with them, I didn't touch them.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
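Such a divider table could look roughly like the sketch below, assuming the common clock framework's struct clk_div_table (table name and entries are illustrative, with the middle entries omitted):

    /* illustrative table: register value N selects divider N, for N = 1..32 */
    static const struct clk_div_table example_dpll4_mx_div_table[] = {
            { .div = 1,  .val = 1 },
            { .div = 2,  .val = 2 },
            { .div = 3,  .val = 3 },
            /* ... continues up to ... */
            { .div = 32, .val = 32 },
            { /* sentinel */ }
    };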
2013-10-24  ARM: OMAP3: use CLK_SET_RATE_PARENT for dss clocks  (Tomi Valkeinen)
Set CLK_SET_RATE_PARENT flag for dss1_alwon_fck_3430es2, dss1_alwon_fck_3430es1 and dpll4_m4x2_ck so that the DSS's fclk can be configured without the need to get the parent's parent of the fclk.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
2013-10-24  ARM: OMAP4: use CLK_SET_RATE_PARENT for dss_dss_clk  (Tomi Valkeinen)
Set CLK_SET_RATE_PARENT flag for dss_dss_clk so that the DSS's fclk can be configured without the need to get the parent of the fclk.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
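With that flag set, a rate request on the leaf clock is forwarded up to its parent by the clock framework, so a consumer can stay with the clock it already holds; a usage sketch (function and variable names are made up):

    static int example_set_dss_rate(struct clk *dss_fclk, unsigned long rate)
    {
            /* with CLK_SET_RATE_PARENT the request propagates to the divider
             * clock, so no clk_get_parent() chasing is needed */
            return clk_set_rate(dss_fclk, rate);
    }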
2013-10-24  arm64: Export __copy_in_user() to modules  (Catalin Marinas)
This function may be called from loadable modules, so it needs exporting.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
2013-10-24  arm64: cmpxchg: implement cmpxchg64_relaxed  (Will Deacon)
This patch introduces cmpxchg64_relaxed for arm64 using the existing cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits) without barrier semantics.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
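The mapping described above is essentially a define on top of the existing barrier-free primitive, roughly:

    /* sketch: reuse the no-barrier local variant for the relaxed 64-bit op */
    #define cmpxchg64_relaxed(ptr, o, n)	cmpxchg_local((ptr), (o), (n))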
2013-10-24  arm64: lockref: add support for lockless lockrefs using cmpxchg  (Will Deacon)
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can deal with 8-bytes (as one would hope!). This patch wires up the cmpxchg-based lockless lockref implementation for arm64.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
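The lockref code needs a way to test a lock value for being free without taking it; with a ticket lock that is a plain field comparison, as in this sketch (the type and field names here are illustrative, not the arm64 definitions):

    typedef struct { unsigned short owner, next; } example_spinlock_t;

    /* the lock is free when no tickets are outstanding */
    static inline int example_spin_value_unlocked(example_spinlock_t lock)
    {
            return lock.owner == lock.next;
    }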
2013-10-24  arm64: locks: introduce ticket-based spinlock implementation  (Will Deacon)
This patch introduces a ticket lock implementation for arm64, along the same lines as the implementation for arch/arm/.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
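Conceptually a ticket lock works like the generic C sketch below; the actual arm64 version is written with exclusive load/store assembly and wfe/sev hints, which are not shown here.

    typedef struct {
            unsigned short owner;	/* ticket currently being served */
            unsigned short next;	/* next ticket to hand out */
    } example_ticket_lock_t;

    static void example_ticket_lock(example_ticket_lock_t *lock)
    {
            unsigned short ticket =
                    __atomic_fetch_add(&lock->next, 1, __ATOMIC_RELAXED);

            while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != ticket)
                    ;	/* spin until our ticket comes up */
    }

    static void example_ticket_unlock(example_ticket_lock_t *lock)
    {
            __atomic_store_n(&lock->owner, lock->owner + 1, __ATOMIC_RELEASE);
    }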
2013-10-24  microblaze/pci: Drop PowerPC-ism from irq parsing  (Grant Likely)
The Microblaze PCI code copied the PowerPC irq handling, but powerpc needs to handle broken device trees that are not present on Microblaze. This patch removes the powerpc special case and replaces it with a direct of_irq_parse_and_map_pci() call.
Signed-off-by: Grant Likely <grant.likely@linaro.org>
Acked-by: Michal Simek <monstr@monstr.eu>
2013-10-24  of/irq: Create of_irq_parse_and_map_pci() to consolidate arch code.  (Grant Likely)
Several architectures open code effectively the same code block for finding and mapping PCI irqs. This patch consolidates it down to a single function.
Signed-off-by: Grant Likely <grant.likely@linaro.org>
Acked-by: Michal Simek <monstr@monstr.eu>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
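Arch code can then resolve a PCI device's legacy interrupt in a single call; a usage sketch (wrapper name is made up, and error handling is simplified):

    static void example_map_pci_irq(struct pci_dev *pdev)
    {
            /* parse the device tree and map the INTx interrupt for this device */
            int virq = of_irq_parse_and_map_pci(pdev, 0, 0);

            if (virq > 0)
                    pdev->irq = virq;
    }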