From 230aef7a6a23b6166bd4003bfff5af23c9bd381f Mon Sep 17 00:00:00 2001
From: Anton Blanchard
Date: Wed, 7 Aug 2013 02:01:19 +1000
Subject: powerpc: Handle unaligned ldbrx/stdbrx

Normally when we haven't implemented an alignment handler for a load or
store instruction the process will be terminated.

The alignment handler uses the DSISR (or a pseudo one) to locate the
right handler. Unfortunately ldbrx and stdbrx overlap lfs and stfs so
we incorrectly think ldbrx is an lfs and stdbrx is an stfs.

This bug is particularly nasty - instead of terminating the process we
apply an incorrect fixup and continue on.

With more and more overlapping instructions we should stop creating a
pseudo DSISR and index using the instruction directly, but for now add
a special case to catch ldbrx/stdbrx.

Signed-off-by: Anton Blanchard
Cc:
Signed-off-by: Benjamin Herrenschmidt
---
 arch/powerpc/kernel/align.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

(limited to 'arch/powerpc/kernel/align.c')

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index ee5b690a0be..52e5758ea36 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -764,6 +764,16 @@ int fix_alignment(struct pt_regs *regs)
 	nb = aligninfo[instr].len;
 	flags = aligninfo[instr].flags;
 
+	/* ldbrx/stdbrx overlap lfs/stfs in the DSISR unfortunately */
+	if (IS_XFORM(instruction) && ((instruction >> 1) & 0x3ff) == 532) {
+		nb = 8;
+		flags = LD+SW;
+	} else if (IS_XFORM(instruction) &&
+		   ((instruction >> 1) & 0x3ff) == 660) {
+		nb = 8;
+		flags = ST+SW;
+	}
+
 	/* Byteswap little endian loads and stores */
 	swiz = 0;
 	if (regs->msr & MSR_LE) {
--
cgit v1.2.3-70-g09d2

From 5c2e08231b68a3c8082716a7ed4e972dde406e4a Mon Sep 17 00:00:00 2001
From: Anton Blanchard
Date: Tue, 20 Aug 2013 20:30:07 +1000
Subject: powerpc: Never handle VSX alignment exceptions from kernel

The VSX alignment handler needs to write out the existing VSX state to
memory before operating on it (flush_vsx_to_thread()). If we take a VSX
alignment exception in the kernel bad things will happen. It looks like
we could write the kernel state out to the user process, or we could
handle the kernel exception using data from the user process (depending
if MSR_VSX is set or not).

Worse still, if the code to read or write the VSX state causes an
alignment exception, we will recurse forever. I ended up with hundreds
of megabytes of kernel stack to look through as a result.

Floating point and SPE code have similar issues but already include a
user check. Add the same check to emulate_vsx().

With this patch any unaligned VSX loads and stores in the kernel will
show up as a clear oops rather than silent corruption of kernel or
userspace VSX state, or worse, corruption of a potentially unlimited
amount of kernel memory.

Signed-off-by: Anton Blanchard
Signed-off-by: Benjamin Herrenschmidt
---
 arch/powerpc/kernel/align.c | 4 ++++
 1 file changed, 4 insertions(+)

(limited to 'arch/powerpc/kernel/align.c')

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index 52e5758ea36..a27ccd5dc6b 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -651,6 +651,10 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
 	int sw = 0;
 	int i, j;
 
+	/* userland only */
+	if (unlikely(!user_mode(regs)))
+		return 0;
+
 	flush_vsx_to_thread(current);
 
 	if (reg < 32)
--
cgit v1.2.3-70-g09d2
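
A note on the ldbrx/stdbrx fix: the special case keys off the X-form
encoding. Both instructions carry primary opcode 31 (which is what the
IS_XFORM() check in align.c tests for) and put their extended opcode in
bits 21-30 of the instruction word, the field that
(instruction >> 1) & 0x3ff extracts; 532 selects ldbrx and 660 selects
stdbrx. The user-space sketch below only re-derives that decode from the
values quoted in the patch - the helper names and the hand-built
encodings are illustrative, not kernel code.

#include <stdint.h>
#include <stdio.h>

/* Primary opcode is the top 6 bits; these X-form loads/stores use opcode 31. */
static int is_xform(uint32_t instr)
{
	return (instr >> 26) == 31;
}

/* Extended opcode of an X-form instruction: bits 21-30. */
static unsigned int xform_xop(uint32_t instr)
{
	return (instr >> 1) & 0x3ff;
}

int main(void)
{
	/* Hand-built ldbrx r3,0,r4: opcode 31, RT=3, RA=0, RB=4, XO=532. */
	uint32_t ldbrx  = (31u << 26) | (3u << 21) | (0u << 16) | (4u << 11) | (532u << 1);
	/* Hand-built stdbrx r3,0,r4: same register fields, XO=660. */
	uint32_t stdbrx = (31u << 26) | (3u << 21) | (0u << 16) | (4u << 11) | (660u << 1);

	printf("ldbrx:  X-form=%d xop=%u\n", is_xform(ldbrx), xform_xop(ldbrx));
	printf("stdbrx: X-form=%d xop=%u\n", is_xform(stdbrx), xform_xop(stdbrx));
	return 0;
}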
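
A note on the VSX fix: in align.c a return value of 0 from the emulation
path means the access was not handled, so with this check a kernel-mode
unaligned VSX access propagates back out of fix_alignment() as an
unhandled alignment fault (the "clear oops" the changelog mentions)
rather than flushing and rewriting VSX state that cannot be trusted.
Below is a standalone sketch of that refuse-early pattern only; the
struct and function names are invented for illustration and are not the
kernel's API.

#include <stdbool.h>
#include <stdio.h>

struct fake_regs {
	bool user_mode;		/* stand-in for user_mode(regs) */
};

/*
 * Illustrative stub: return 1 if the unaligned access was emulated,
 * 0 if it was refused so the caller must escalate (for kernel-mode
 * faults that escalation is what surfaces as an oops).
 */
static int emulate_unaligned(struct fake_regs *regs)
{
	if (!regs->user_mode)
		return 0;	/* never touch state we cannot trust */

	/* ... fix up the access against the user process's state ... */
	return 1;
}

int main(void)
{
	struct fake_regs user = { .user_mode = true };
	struct fake_regs kern = { .user_mode = false };

	printf("user-mode fault handled: %d\n", emulate_unaligned(&user));
	printf("kernel-mode fault handled: %d\n", emulate_unaligned(&kern));
	return 0;
}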