path: root/arch/arm64/mm/proc.S
2014-05-09  arm64: mm: use inner-shareable barriers for inner-shareable maintenance  (Will Deacon)

    In order to ensure ordering and completion of inner-shareable maintenance
    instructions (cache and TLB) on AArch64, we can use the -ish suffix to the
    dmb and dsb instructions respectively. This patch updates our low-level
    cache and tlb maintenance routines to use the inner-shareable barrier
    variants where appropriate.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

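For illustration, a minimal sketch of the pattern this enables (instruction
sequence assumed, not the literal proc.S diff): maintenance that targets only
the inner-shareable domain can be completed with an inner-shareable barrier
instead of a full-system one.

	tlbi	vmalle1is		// invalidate TLBs, inner-shareable domain
	dsb	ish			// wait for completion within the
					// inner-shareable domain only
	isb				// synchronise the instruction stream
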
2014-04-03  arm64: Update the TCR_EL1 translation granule definitions for 16K pages  (Catalin Marinas)

    The current TCR register setting in arch/arm64/mm/proc.S assumes that
    TCR_EL1.TG* fields are one bit wide and bit 31 is RES1 (reserved, set to
    1). With the addition of 16K pages (currently unsupported in the kernel),
    the TCR_EL1.TG* fields have been extended to two bits. This patch updates
    the corresponding Linux definitions and drops the bit 31 setting in proc.S
    in favour of the new macros.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Joe Sylve <joe.sylve@gmail.com>

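For reference, the architectural two-bit granule encodings look as follows
(macro names are illustrative, modelled on the style of the kernel's
pgtable-hwdef.h, and not necessarily the exact definitions added by the patch):

	#define TCR_TG0_4K	(UL(0) << 14)	// TG0, bits [15:14]
	#define TCR_TG0_64K	(UL(1) << 14)
	#define TCR_TG0_16K	(UL(2) << 14)
	#define TCR_TG1_16K	(UL(1) << 30)	// TG1, bits [31:30]
	#define TCR_TG1_4K	(UL(2) << 30)
	#define TCR_TG1_64K	(UL(3) << 30)
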
2014-03-13  arm64: Add boot time configuration of Intermediate Physical Address size  (Radha Mohan Chintakuntla)

    ARMv8 supports a range of physical address bit sizes. The PARange bits in
    the ID_AA64MMFR0_EL1 register are read at boot time, and the intermediate
    physical address size bits are written to the translation control
    registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits or the
    levels of translation.

    Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
    Reviewed-by: Will Deacon <Will.deacon@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

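A sketch of the EL1 side of the mechanism (register allocation assumed): read
PARange from the feature register and copy it into the 3-bit IPS field of the
TCR_EL1 value before that value is programmed.

	mrs	x9, ID_AA64MMFR0_EL1	// PARange lives in bits [3:0]
	bfi	x10, x9, #32, #3	// copy into TCR_EL1.IPS, bits [34:32],
					// assuming x10 holds the TCR_EL1 value
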
2014-03-04  arm64: remove unnecessary cache flush at boot  (Mark Rutland)

    Currently we flush the entire dcache at boot within __cpu_setup, but this
    is unnecessary as the booting protocol demands that the dcache is invalid
    and off upon entering the kernel.

    The presence of the cache flush only serves to hide bugs in bootloaders,
    and is not safe in the presence of SMP. In an SMP boot scenario the CPUs
    enter coherency outside of the kernel, and the primary CPU enables its
    caches before bringing up secondary CPUs. Therefore if any secondary CPU
    has an entry in its cache (in violation of the boot protocol), the primary
    CPU might snoop it even if the secondary CPU's cache is disabled.

    The boot-time cache flush only serves to hide a firmware bug, and slows
    down a CPU boot unnecessarily. This patch removes the unnecessary
    boot-time cache flush.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    [catalin.marinas@arm.com: make __flush_dcache_all local only]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2014-01-27  arm64: mm: fix the function name in comment of cpu_do_switch_mm  (Jingoo Han)

    Fix the function name used in the comment above cpu_do_switch_mm, since
    cpu_do_switch_mm is the correct name.

    Signed-off-by: Jingoo Han <jg1.han@samsung.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2013-12-16  arm64: kernel: suspend/resume registers save/restore  (Lorenzo Pieralisi)

    Power management software requires the kernel to save and restore CPU
    registers while going through suspend and resume operations triggered by
    kernel subsystems like CPU idle and suspend to RAM.

    This patch implements code that provides a save and restore mechanism for
    the ARM v8 implementation. Memory for the context is passed as a parameter
    to both the cpu_do_suspend and cpu_do_resume functions, which allows the
    callers to implement context allocation as they deem fit. The registers
    that are saved and restored correspond to the register set actually
    required for the kernel to be up and running, which represents a subset of
    the v8 ISA.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

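A hedged excerpt of what such a save path can look like; the buffer register
and the particular system registers shown here are assumptions chosen for
illustration, not the exhaustive set the real cpu_do_suspend covers.

	// x0 = pointer to the context buffer supplied by the caller
	mrs	x2, tpidr_el0
	mrs	x3, tpidrro_el0
	stp	x2, x3, [x0]
	mrs	x4, ttbr1_el1
	mrs	x5, tcr_el1
	stp	x4, x5, [x0, #16]
	mrs	x6, vbar_el1
	mrs	x7, sctlr_el1
	stp	x6, x7, [x0, #32]
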
2013-12-06  arm64: ensure completion of TLB invalidation  (Mark Rutland)

    Currently there is no dsb between the tlbi in __cpu_setup and the write to
    SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the TLB
    invalidation is not guaranteed to have completed at the point address
    translation is enabled, leading to a number of possible issues including
    incorrect translations and TLB conflict faults.

    This patch moves the tlbi in __cpu_setup above an existing dsb used to
    synchronise I-cache invalidation, ensuring that the TLBs have been
    invalidated at the point the MMU is enabled.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

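The resulting ordering, sketched (operands assumed; at this point in the
series the barrier is still the full-system variant):

	ic	iallu			// invalidate the I-cache
	tlbi	vmalle1is		// invalidate the TLBs
	dsb	sy			// both operations are now complete
					// before the later SCTLR_EL1 write
					// turns the MMU on
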
2013-10-25  arm64: big-endian: set correct endianness on kernel entry  (Matthew Leach)

    The endianness of memory accesses at EL2 and EL1 is configured by
    SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the
    state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that
    they are configured appropriately before performing any memory accesses.

    This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot
    for kernels of either endianness.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

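A sketch of the EL1 half, assuming the SCTLR_EL1 value is being assembled in
x0 (SCTLR_EL1.EE is bit 25 architecturally; the conditional is illustrative,
not the patch's exact construction):

#ifdef CONFIG_CPU_BIG_ENDIAN
	orr	x0, x0, #(1 << 25)	// set SCTLR_EL1.EE: big-endian data at EL1
#else
	bic	x0, x0, #(1 << 25)	// clear it: little-endian data at EL1
#endif
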
2013-09-03  arm64: mm: permit use of tagged pointers at EL0  (Will Deacon)

    TCR.TBI0 can be used to cause hardware address translation to ignore the
    top byte of userspace virtual addresses. Whilst not especially useful in
    standard C programs, this can be used by JITs to `tag' pointers with
    various pieces of metadata.

    This patch enables this bit for AArch64 Linux, and adds a new file to
    Documentation/arm64/ which describes some potential caveats when using
    tagged virtual addresses.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

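As an illustration of what a JIT can do once TBI0 is enabled (register choice
and tag placement are arbitrary):

	// x0 = untagged user pointer, x1 = metadata tag in bits [7:0]
	bfi	x0, x1, #56, #8		// place the tag in bits [63:56]; with
					// TCR_EL1.TBI0 set, translation ignores
					// these bits, so x0 remains directly
					// dereferenceable
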
2013-09-02  arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S  (Catalin Marinas)

    This string has been moved to arch/arm64/kernel/cputable.c.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2013-05-13  arm64: debug: clear mdscr_el1 instead of taking the OS lock  (Will Deacon)

    During boot, we take the debug OS lock before interrupts are enabled. This
    is required to prevent clearing of PSTATE.D on the interrupt entry path,
    which could result in spurious debug exceptions before we've got round to
    resetting things like the hardware breakpoint registers to a sane state.

    A problem with this approach is that taking the OS lock prevents an
    external JTAG debugger from debugging the system, which is especially
    irritating during boot, where JTAG debugging can be most useful.

    This patch clears mdscr_el1 rather than taking the lock, clearing the MDE
    and KDE bits and preventing self-hosted hardware debug exceptions from
    occurring.

    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: stable@vger.kernel.org

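The replacement is essentially a single register write during CPU setup;
sketched below (the use of xzr as the zero source is an assumption about the
exact encoding):

	msr	mdscr_el1, xzr		// clear MDSCR_EL1, including MDE and
					// KDE, so no self-hosted debug
					// exceptions can fire before the debug
					// registers are reset to a sane state
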
2012-09-24  arm64: Do not set the SMP/nAMP processor bit  (Catalin Marinas)

    If such a bit exists on a given CPU, it must be set by the firmware or
    boot-loader prior to starting the kernel (see
    Documentation/arm64/booting.txt).

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2012-09-17  arm64: CPU support  (Catalin Marinas)

    This patch adds AArch64 CPU specific functionality. It assumes that the
    implementation is generic to AArch64 and does not require specific
    identification. Different CPU implementations may require the setting of
    various ACTLR_EL1 bits, but such information is not currently available
    and it should ideally be pushed to firmware.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Olof Johansson <olof@lixom.net>
    Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>