path: root/arch/powerpc
Age    Commit message    Author
2013-12-17Merge tag 'v3.13-rc4' into core/lockingIngo Molnar
Merge Linux 3.13-rc4, to refresh this rather old tree with the latest fixes. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-12-16Merge tag 'v3.13-rc4' into perf/coreIngo Molnar
Merge Linux 3.13-rc4, to refresh this branch with the latest fixes. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-12-16powerpc: Full barrier for smp_mb__after_unlock_lock()Paul E. McKenney
The powerpc lock acquisition sequence is as follows: lwarx; cmpwi; bne; stwcx.; lwsync; Lock release is as follows: lwsync; stw; If CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a lock acquisition then a load (say, r1=y), then there is no guarantee of a full memory barrier between the store to 'x' and the load from 'y'. To see this, suppose that CPUs 0 and 1 are hardware threads in the same core that share a store buffer, and that CPU 2 is in some other core, and that CPU 2 does the following: y = 1; sync; r2 = x; If 'x' and 'y' are both initially zero, then the lock acquisition and release sequences above can result in r1 and r2 both being equal to zero, which could not happen if unlock+lock was a full barrier. This commit therefore makes powerpc's smp_mb__after_unlock_lock() be a full barrier. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: linuxppc-dev@lists.ozlabs.org Cc: <linux-arch@vger.kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paulmck@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
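A minimal sketch of what this change amounts to, assuming the usual powerpc definitions where smp_mb() expands to the "sync" instruction; the exact file and surrounding context are not shown here:
    /* sketch: make an unlock followed by a lock act as a full memory barrier on powerpc */
    #define smp_mb__after_unlock_lock()   smp_mb()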
2013-12-13KVM: Use cond_resched() directly and remove useless kvm_resched()Takuya Yoshikawa
Since the commit 15ad7146 ("KVM: Use the scheduler preemption notifiers to make kvm preemptible"), the remaining stuff in this function is a simple cond_resched() call with an extra need_resched() check which was there to avoid dropping VCPUs unnecessarily. Now it is meaningless. Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
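In effect, call sites change roughly like this (hedged sketch, illustrative call site only):
    /* before: kvm_resched(vcpu);   -- only wrapped need_resched() + cond_resched() */
    cond_resched();                 /* after: call the scheduler helper directly */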
2013-12-13powerpc/powernv: Fix OPAL LPC access in Little EndianBenjamin Herrenschmidt
We are passing pointers to the firmware for reads, so we need to properly convert the result, as OPAL is always BE. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc/powernv: Fix endian issue in opal_xscom_readAnton Blanchard
opal_xscom_read uses a pointer to return the data so we need to byteswap it on LE builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
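A hedged sketch of the kind of fix involved; variable names and the exact call site are illustrative, not the verbatim patch:
    __be64 v;
    rc = opal_xscom_read(chip_id, pcb_addr, &v);  /* firmware fills 'v' in big-endian */
    *value = be64_to_cpu(v);                      /* byteswap on LE builds, no-op on BE */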
2013-12-13powerpc: Fix endian issues in crash dump codeAnton Blanchard
A couple more device tree properties that need byte swapping. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc/pseries: Fix endian issues in MSI codeAnton Blanchard
The MSI code is miscalculating quotas in little endian mode. Add the required byteswaps to fix this. Before, we claimed a quota of 65536; after the patch we see the correct value of 256. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc/pseries: Fix PCIE link speed endian issueAnton Blanchard
We need to byteswap ibm,pcie-link-speed-stats. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc/pseries: Fix endian issues in nvram codeAnton Blanchard
The NVRAM code has a number of endian issues. I noticed a very confused error log count: RTAS: 100663330 -------- RTAS event begin -------- 100663330 == 0x06000022. 0x6 LE error logs and 0x22 BE error logs. The pstore code has similar issues - if we write an oops in one endian and attempt to read it in another we get junk. Make both of these formats big endian, and byteswap as required. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc/pseries: Fix endian issues in /proc/ppc64/lparcfgAnton Blanchard
Some obvious issues: cat /proc/ppc64/lparcfg ... partition_id=16777216 ... partition_potential_processors=268435456 Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc: Fix topology core_id endian issue on LE buildsAnton Blanchard
cpu_to_core_id() is missing a byteswap: cat /sys/devices/system/cpu/cpu63/topology/core_id 201326592 Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
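Device-tree cell values are stored big-endian, so a read like this needs a swap on LE builds; a hedged sketch (property name and surrounding code are illustrative):
    const __be32 *reg = of_get_property(np, "reg", NULL);
    if (reg)
            return be32_to_cpup(reg);   /* was effectively: return *reg; wrong on LE */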
2013-12-13powerpc: Fix endian issue in setup-common.cAnton Blanchard
During an LE boot we see: Partition configured for 1073741824 cpus, operating system maximum is 2048. Clearly missing a byteswap here. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-13powerpc: PTRACE_PEEKUSR always returns FPR0Ulrich Weigand
There is a bug in using ptrace to access FPRs via PTRACE_PEEKUSR / PTRACE_POKEUSR. In effect, trying to access any of the FPRs always really accesses FPR0, which does seriously break debugging :-) The problem seems to have been introduced by commit 3ad26e5c4459d (Merge branch 'for-kvm' into next). [ It is indeed a merge conflict between Paul's FPU/VSX state rework and my LE patches - Anton ] Signed-off-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-11powerpc/kvm/booke: Fix build break due to stack frame size warningScott Wood
Commit ce11e48b7fdd256ec68b932a89b397a790566031 ("KVM: PPC: E500: Add userspace debug stub support") added "struct thread_struct" to the stack of kvmppc_vcpu_run(). thread_struct is 1152 bytes on my build, compared to 48 bytes for the recently-introduced "struct debug_reg". Use the latter instead. This fixes the following error: cc1: warnings being treated as errors arch/powerpc/kvm/booke.c: In function 'kvmppc_vcpu_run': arch/powerpc/kvm/booke.c:760:1: error: the frame size of 1424 bytes is larger than 1024 bytes make[2]: *** [arch/powerpc/kvm/booke.o] Error 1 make[1]: *** [arch/powerpc/kvm] Error 2 make[1]: *** Waiting for unfinished jobs.... Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-12-10powerpc: Fix up the kdump base cap to 128MMahesh Salgaonkar
The current logic sets the kdump base to the minimum of 2G and ppc64_rma_size/2. On a PowerNV kernel the first memory block 'memory@0' can be very large, equal to the DIMM size, with the ppc64_rma_size value capped at 1G. Hence on PowerNV the kdump base is set to 512M, causing kdump to fail while allocating the paca array. This is because the pacas need their memory from the RMA region, capped at 256M (see allocate_pacas()). This patch lowers the kdump base cap to 128M so that the kdump kernel can successfully get memory below 256M for paca allocation. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10powernv: Fix VFIO support with PHB3Thadeu Lima de Souza Cascardo
I have recently found out that no iommu_groups could be found under /sys/ on a P8. That prevents PCI passthrough from working. During my investigation, I found out there seems to be a missing iommu_register_group for PHB3. The following patch seems to fix the problem. After applying it, I see iommu_groups under /sys/kernel/iommu_groups/, and can also bind vfio-pci to an adapter, which gives me a device at /dev/vfio/. Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10powerpc/52xx: Re-enable bestcomm driver in defconfigsAnatolij Gustschin
The bestcomm driver has been moved to drivers/dma, so CONFIG_DMADEVICES additionally has to be enabled for this driver to be selected by default. Currently it is not enabled in the configs despite CONFIG_PPC_BESTCOMM=y existing in the config files. Fix it. Signed-off-by: Anatolij Gustschin <agust@denx.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10powerpc/pasemi: Turn on devtmpfs in defconfigOlof Johansson
At least some distros expect it these days; turn it on. Also, random churn from doing a savedefconfig for the first time in a year or so. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10powerpc: Fix PTE page address mismatch in pgtable ctor/dtorHong H. Pham
In pte_alloc_one(), pgtable_page_ctor() is passed an address that has not been converted by page_address() to the newly allocated PTE page. When the PTE is freed, __pte_free_tlb() calls pgtable_page_dtor() with an address to the PTE page that has been converted by page_address(). The mismatch in the PTE's page address causes pgtable_page_dtor() to access invalid memory, so resources for that PTE (such as the page lock) are not properly cleaned up. On PPC32, only SMP kernels are affected. On PPC64, only SMP kernels with 4K page size are affected. This bug was introduced by commit d614bb041209fd7cb5e4b35e11a7b2f6ee8f62b8 "powerpc: Move the pte free routines from common header". On a preempt-rt kernel, a spinlock is dynamically allocated for each PTE in pgtable_page_ctor(). When the PTE is freed, calling pgtable_page_dtor() with a mismatched page address causes a memory leak, as the pointer to the PTE's spinlock is bogus. On mainline, there aren't any immediately obvious symptoms, but the problem still exists here. Fixes: d614bb041209fd7c "powerpc: Move the pte free routines from common header" Cc: Paul Mackerras <paulus@samba.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linux-stable <stable@vger.kernel.org> # v3.10+ Signed-off-by: Hong H. Pham <hong.pham@windriver.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-10powerpc/44x: Fix ocm_block allocationIlia Mirkin
Allocate enough memory for the ocm_block structure, not just a pointer to it. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
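The classic allocation-size bug being fixed looks roughly like this (hedged sketch, variable name illustrative):
    ocm_blk = kzalloc(sizeof(*ocm_blk), GFP_KERNEL);   /* was: kzalloc(sizeof(ocm_blk), ...), i.e. the size of a pointer */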
2013-12-10powerpc: Fix build break with PPC_EARLY_DEBUG_BOOTX=yMichael Ellerman
A kernel configured with PPC_EARLY_DEBUG_BOOTX=y but PPC_PMAC=n and PPC_MAPLE=n will fail to link: btext.c:(.text+0x2d0fc): undefined reference to `.rmci_off' btext.c:(.text+0x2d214): undefined reference to `.rmci_on' Fix it by making the build of rmci_on/off() depend on PPC_EARLY_DEBUG_BOOTX, which also enables the only code that uses them. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09KVM: PPC: Book3S: PR: Enable interrupts earlierAlexander Graf
Now that the svcpu sync is interrupt aware we can enable interrupts earlier in the exit code path again, moving 32bit and 64bit closer together. While at it, document the fact that we're always executing the exit path with interrupts enabled so that the next person doesn't trip over this. Signed-off-by: Alexander Graf <agraf@suse.de>
2013-12-09KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvyAlexander Graf
As soon as we get back to our "highmem" handler in virtual address space we may get preempted. Today the reason we can get preempted is that we replay interrupts and all the lazy logic thinks we have interrupts enabled. However, it's not hard to make the code interruptible and that way we can enable and handle interrupts even earlier. This fixes random guest crashes that happened with CONFIG_PREEMPT=y for me. Signed-off-by: Alexander Graf <agraf@suse.de>
2013-12-09KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpuAlexander Graf
The kvmppc_copy_{to,from}_svcpu functions are publicly visible, so we should also export them in a header for other C files to consume. So far we didn't need this because we only called them from asm code. The next patch will introduce a C caller. Signed-off-by: Alexander Graf <agraf@suse.de>
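The header declarations being added would look roughly like this (hedged sketch; argument order assumed from the description, not verified against the patch):
    extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
                                     struct kvm_vcpu *vcpu);
    extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
                                       struct kvmppc_book3s_shadow_vcpu *svcpu);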
2013-12-09KVM: PPC: Book3S: PR: Don't clobber our exit handler idAlexander Graf
We call a C helper to save all svcpu fields into our vcpu. The C ABI states that r12 is considered volatile. However, we keep our exit handler id in r12 currently. So we need to save it away into a non-volatile register instead that definitely does get preserved across the C call. This bug usually didn't hit anyone yet since gcc is smart enough to generate code that doesn't even need r12 which means it stayed identical throughout the call by sheer luck. But we can't rely on that. Signed-off-by: Alexander Graf <agraf@suse.de>
2013-12-09powerpc/powernv: Get FSP memory errors and plumb into memory poison infrastructure.Mahesh Salgaonkar
Get the memory errors reported by opal and plumb them into the memory poison infrastructure. This patch uses the new messaging channel infrastructure to pull the fsp memory errors to linux. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc/powernv: Add config option for hwpoisoning.Mahesh Salgaonkar
Add config option to enable generic memory hwpoisoning infrastructure for ppc64. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc/mm: Enable _PAGE_NUMA for book3sAneesh Kumar K.V
We steal the _PAGE_COHERENCE bit and use that for indicating NUMA ptes. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc/mm: Only check for _PAGE_PRESENT in set_pte/pmd functionsAneesh Kumar K.V
We want to make sure we don't use these functions when updating a pte or pmd entry that has a valid hpte entry, because these functions don't invalidate them. So limit the check to the _PAGE_PRESENT bit. The numafault core changes use these functions for updating _PAGE_NUMA bits. That should be ok, because when _PAGE_NUMA is set we can be sure that hpte entries are not present. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
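A hedged sketch of the narrower assertion described above, using the powerpc PTE flag name; the exact form of the check in the patch may differ:
    /* only refuse PTEs that are actually present (may have a hashed entry); _PAGE_NUMA updates are fine */
    WARN_ON(pte_val(*ptep) & _PAGE_PRESENT);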
2013-12-09powerpc/mm: Free up _PAGE_COHERENCE for numa fault use laterAneesh Kumar K.V
Set memory coherence always on hash64 config. If a platform cannot have memory coherence always set, it can infer that from _PAGE_NO_CACHE and _PAGE_WRITETHRU, like in lpar. So we don't really need a separate bit for tracking _PAGE_COHERENCE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc/mm: Use HPTE constants when updating hpte bitsAneesh Kumar K.V
Even though we have the same values for the linux PTE bits and the hash PTE bits, use the hash PTE bits when updating a hash PTE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc: Dynamically allocate slb_shadow from memblockJeremy Kerr
Currently, the slb_shadow buffer is our largest symbol: [jk@pablo linux]$ nm --size-sort -r -S obj/vmlinux | head -1 c000000000da0000 0000000000040000 d slb_shadow - we allocate 128 bytes per cpu; so 256k with NR_CPUS=2048. As we have constant initialisers, it's allocated in .text, causing a larger vmlinux image. We may also allocate unnecessary slb_shadow buffers (> no. pacas), since we use the build-time NR_CPUS rather than the run-time nr_cpu_ids. We could move this to the bss, but then we still have the NR_CPUS vs nr_cpu_ids potential for overallocation. This change dynamically allocates the slb_shadow array during initialise_pacas(). At a cost of 104 bytes of text, we save 256k of data: [jk@pablo linux]$ size obj/vmlinux{.orig,} text data bss dec hex filename 9202795 5244676 1169576 15617047 ee4c17 obj/vmlinux.orig 9202899 4982532 1169576 15355007 ea4c7f obj/vmlinux Tested on pseries. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
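A hedged sketch of the run-time allocation described above, assuming a suitable 'limit' below which the buffer must live; the names and exact allocator call are illustrative:
    size_t size = PAGE_ALIGN(sizeof(struct slb_shadow) * nr_cpu_ids);  /* run-time count, not NR_CPUS */
    slb_shadow = __va(memblock_alloc_base(size, PAGE_SIZE, limit));
    memset(slb_shadow, 0, size);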
2013-12-09powerpc: Make slb_shadow a localJeremy Kerr
The only external user of slb_shadow is the pseries lpar code, and it can access it through the paca array instead. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-09powerpc/pci: Use dev_is_pci() to check whether it is pci deviceYijing Wang
Use the standard PCI macro dev_is_pci() instead of directly comparing against pci_bus_type to check whether a device is a PCI device. Signed-off-by: Yijing Wang <wangyijing@huawei.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
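The change is essentially this one-liner (sketch):
    if (dev_is_pci(dev))               /* was: if (dev->bus == &pci_bus_type) */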
2013-12-07powerpc/512x: dts: remove misplaced IRQ spec from 'soc' nodeGerhard Sittig
the 'soc' node in the common .dtsi for MPC5121 has an '#interrupt-cells' property although this node is not an interrupt controller remove this erroneously placed property because starting with v3.13-rc1 lookup and resolution of 'interrupts' specs for peripherals gets misled, emits 'no irq domain found' WARN() messages and breaks the boot process irq: no irq domain found for /soc@80000000 ! ------------[ cut here ]------------ WARNING: at drivers/of/platform.c:171 Modules linked in: CPU: 0 PID: 1 Comm: swapper Tainted: G W 3.13.0-rc1-00001-g8a66234 #8 task: df823bb0 ti: df834000 task.ti: df834000 NIP: c02b5190 LR: c02b5180 CTR: c01cf4e0 REGS: df835c50 TRAP: 0700 Tainted: G W (3.13.0-rc1-00001-g8a66234) MSR: 00029032 <EE,ME,IR,DR,RI> CR: 229a9d42 XER: 20000000 GPR00: c02b5180 df835d00 df823bb0 00000000 00000000 df835b18 ffffffff 00000308 GPR08: c0479cc0 c0480000 c0479cc0 00000308 00000308 00000000 c00040fc 00000000 GPR16: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 df850880 GPR24: df84d670 00000000 00000001 df8561a0 dffffccc df85089c 00000020 00000001 NIP [c02b5190] of_device_alloc+0xf4/0x1a0 LR [c02b5180] of_device_alloc+0xe4/0x1a0 Call Trace: [df835d00] [c02b5180] of_device_alloc+0xe4/0x1a0 (unreliable) [df835d50] [c02b5278] of_platform_device_create_pdata+0x3c/0xc8 [df835d70] [c02b53fc] of_platform_bus_create+0xf8/0x170 [df835dc0] [c02b5448] of_platform_bus_create+0x144/0x170 [df835e10] [c02b55a8] of_platform_bus_probe+0x98/0xe8 [df835e30] [c0437508] mpc512x_init+0x28/0x1c4 [df835e70] [c0435de8] ppc_init+0x4c/0x60 [df835e80] [c0003b28] do_one_initcall+0x150/0x1a4 [df835ef0] [c0432048] kernel_init_freeable+0x114/0x1c0 [df835f30] [c0004114] kernel_init+0x18/0x124 [df835f40] [c000e910] ret_from_kernel_thread+0x5c/0x64 Instruction dump: 409effd4 57c9103a 57de2834 7c89f050 7f83e378 7c972214 7f45d378 48001f55 7c63d278 7c630034 5463d97e 687a0001 <0f1a0000> 2f990000 387b0010 939b0098 ---[ end trace 2257f10e5a20cbdd ]--- ... irq: no irq domain found for /soc@80000000 ! fsl-diu-fb 80002100.display: could not get DIU IRQ fsl-diu-fb: probe of 80002100.display failed with error -22 irq: no irq domain found for /soc@80000000 ! mpc512x_dma 80014000.dma: Error mapping IRQ! mpc512x_dma: probe of 80014000.dma failed with error -22 ... irq: no irq domain found for /soc@80000000 ! fs_enet: probe of 80002800.ethernet failed with error -22 ... irq: no irq domain found for /soc@80000000 ! mpc5121-rtc 80000a00.rtc: mpc5121_rtc_probe: could not request irq: 0 mpc5121-rtc: probe of 80000a00.rtc failed with error -22 ... Cc: Anatolij Gustschin <agust@denx.de> Cc: linuxppc-dev@lists.ozlabs.org Cc: devicetree@vger.kernel.org Signed-off-by: Gerhard Sittig <gsi@denx.de> Signed-off-by: Anatolij Gustschin <agust@denx.de>
2013-12-05powernv: Remove get/set_rtc_time when they are not presentMichael Neuling
Currently we continue to poll get/set_rtc_time even when we know they are not working. This changes it so that if it fails at boot time we remove the ppc_md get/set_rtc_time hooks so that we don't end up polling known broken calls. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
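A hedged sketch of the approach, assuming the OPAL RTC call reports a permanent failure at boot; the error-handling details are illustrative:
    if (rc != OPAL_SUCCESS) {              /* RTC known broken: stop polling it */
            ppc_md.get_rtc_time = NULL;
            ppc_md.set_rtc_time = NULL;
    }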
2013-12-05powerpc: Add real mode cache inhibited IO accessorsMichael Ellerman
These accessors allow us to do cache inhibited accesses when in real mode. They should only be used in real mode. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc: Increase EEH recovery timeout for SR-IOVBrian King
In order to support concurrent adapter firmware download to SR-IOV adapters on pSeries, each VF will see an EEH event where the slot will remain in the unavailable state for the duration of the adapter firmware update, which can take as long as 5 minutes. Extend the EEH recovery timeout to account for this. Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Acked-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/eeh: Output PHB diag-dataGavin Shan
When hitting a frozen PE or fenced PHB, it's always helpful to have the PHB diag-data dumped for further analysis and diagnosis. However, we never dump it in those cases. The patch dumps the PHB diag-data in the backend of eeh_ops::get_log() for the PowerNV platform. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/powernv: Move PHB-diag dump functions aroundGavin Shan
Prior to the completion of PCI enumeration, we actively detect EEH errors on PCI config cycles and dump PHB diag-data if necessary. The EEH backend also dumps PHB diag-data in case of a frozen PE or fenced PHB. However, we are using different functions to dump the PHB diag-data for those 2 cases. The patch merges the functions for dumping PHB diag-data into one so that we can avoid duplicate code. Also, we never dump PHB3 diag-data during PCI config cycles with a frozen PE. The patch fixes that as well. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05PPC: POWERNV: move iommu_add_device earlierAlexey Kardashevskiy
The current implementation of IOMMU on sPAPR does not use iommu_ops and therefore does not call IOMMU API's bus_set_iommu() which 1) sets iommu_ops for a bus 2) registers a bus notifier Instead, PCI devices are added to IOMMU groups from subsys_initcall_sync(tce_iommu_init) which does basically the same thing without using iommu_ops callbacks. However Freescale PAMU driver (https://lkml.org/lkml/2013/7/1/158) implements iommu_ops and when tce_iommu_init is called, every PCI device is already added to some group so there is a conflict. This patch does 2 things: 1. removes the loop in which PCI devices were added to groups and adds explicit iommu_add_device() calls to add devices as soon as they get the iommu_table pointer assigned to them. 2. moves a bus notifier to powernv code in order to avoid conflict with the notifier from Freescale driver. iommu_add_device() and iommu_del_device() are public now. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
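The explicit per-device calls described in point 1 would look roughly like this (hedged sketch at the point where the iommu_table is assigned; helper names as on powerpc, context illustrative):
    set_iommu_table_base(&pdev->dev, tbl);   /* device now knows its iommu_table */
    iommu_add_device(&pdev->dev);            /* add it to the corresponding IOMMU group */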
2013-12-05powerpc/powernv: Move SG list structure to header fileVasant Hegde
Move SG list and entry structure to header file so that it can be used in other places as well. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/powernv: Infrastructure to read opal messages in generic format.Mahesh Salgaonkar
Opal now has a new messaging infrastructure to push messages to linux in a generic format for different types of messages using only one event bit. The format of the opal message is as below: struct opal_msg { uint32_t msg_type; uint32_t reserved; uint64_t params[8]; }; This patch allows clients to subscribe for notification of a specific message type. It is up to the subscribers who registered interest in a specific message type to decipher the messages. The interface to subscribe for notification is: int opal_message_notifier_register(enum OpalMessageType msg_type, struct notifier_block *nb) The notifier will fetch the opal message when available and notify the subscriber with the message type and the opal message. It is the subscriber's responsibility to copy the message data before returning from the notifier callback. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
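Based on the interface quoted above, a subscriber might look roughly like this (hedged sketch; the handler name and the message type constant are illustrative):
    static int my_opal_msg_handler(struct notifier_block *nb,
                                   unsigned long msg_type, void *data)
    {
            struct opal_msg msg = *(struct opal_msg *)data;  /* copy before returning, as required */
            /* ... act on msg.params[] ... */
            return NOTIFY_OK;
    }

    static struct notifier_block my_opal_msg_nb = {
            .notifier_call = my_opal_msg_handler,
    };

    /* somewhere in init code: */
    opal_message_notifier_register(OPAL_MSG_MEM_ERR, &my_opal_msg_nb);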
2013-12-05powerpc/powernv: Machine check exception handling.Mahesh Salgaonkar
Add basic error handling in the machine check exception handler. - If MSR_RI isn't set, we cannot recover. - Check if the disposition is set to OpalMCE_DISPOSITION_RECOVERED. - Check if the address at fault is inside the kernel address space; if not, send SIGBUS to the process if we hit the exception while in userspace. - If the address at fault is not provided and we get a synchronous machine check while in userspace, then kill the task. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/powernv: Remove machine check handling in OPAL.Mahesh Salgaonkar
Now that we are ready to handle machine check directly in linux, do not register with firmware to handle machine check exception. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/book3s: Queue up and process delayed MCE events.Mahesh Salgaonkar
When the machine check real mode handler cannot continue into the host kernel in virtual mode, it returns from the interrupt and we lose the MCE event, which never gets logged. In such a situation, queue up the MCE event so that we can log it later when we get back into the host kernel with r1 pointing to the kernel stack, e.g. during syscall exit. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/book3s: Decode and save machine check event.Mahesh Salgaonkar
Now that we handle machine checks in linux, the MCE decoding should also take place in the linux host. This info is crucial to log before we go down, in case we cannot handle the machine check errors. This patch decodes and populates a machine check event which contains high-level, meaningful MCE information. We do this in real mode C code with the ME bit on. The MCE information is still available on the emergency stack (in pt_regs structure format). Even if we take another exception at this point, the MCE early handler will allocate a new stack frame on top of the current one. So when we return back here we still have our MCE information safe on the current stack. We use a per-cpu buffer to save the high-level MCE information. Each per-cpu buffer is an array of machine check event structures indexed by the per-cpu counter mce_nest_count. The mce_nest_count is incremented every time we enter the machine check early handler in real mode to get the current free slot (index = mce_nest_count - 1). The mce_nest_count is decremented once the MCE info is consumed by the virtual mode machine check exception handler. This patch provides save_mce_event(), get_mce_event() and release_mce_event() generic routines that can be used by machine check handlers to populate and retrieve the event. The routine release_mce_event() will free the event slot so that it can be reused. Callers can invoke get_mce_event() with a release flag either to release the event slot immediately OR keep it so that it can be fetched again. The event slot can also be released at any time by invoking release_mce_event(). This patch also updates the kvm code to invoke get_mce_event() to retrieve the generic mce event rather than paca->opal_mce_evt. The KVM code always calls get_mce_event() with the release flag set to false so that the event is available for the linux host machine. If a machine check occurs while we are in the guest, KVM tries to handle the error. If KVM is able to handle the MC error successfully, it enters the guest and delivers the machine check to the guest. If KVM is not able to handle the MC error, it exits the guest and passes control to the linux host machine check handler, which then logs the MC event and decides how to handle it in the linux host. In the failure case, KVM needs to make sure that the MC event is available for the linux host to consume. Hence KVM always calls get_mce_event() with the release flag set to false and later invokes release_mce_event() only if it succeeds in handling the error. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
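A hedged sketch of the consumer pattern described above (exact signatures and structure name may differ slightly from the patch):
    struct machine_check_event evt;

    if (get_mce_event(&evt, false)) {        /* peek; keep the slot so the host can still log it */
            /* ... try to handle the error ... */
            if (handled)
                    release_mce_event();     /* free the per-cpu slot for reuse */
    }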
2013-12-05powerpc/book3s: Flush SLB/TLBs if we get SLB/TLB machine check errors on power8.Mahesh Salgaonkar
This patch handles the memory errors on power8. If we get a machine check exception due to SLB or TLB errors, then flush SLBs/TLBs and reload SLBs to recover. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-05powerpc/book3s: Flush SLB/TLBs if we get SLB/TLB machine check errors on power7.Mahesh Salgaonkar
If we get a machine check exception due to SLB or TLB errors, then flush SLBs/TLBs and reload SLBs to recover. We do this in real mode before turning on MMU. Otherwise we would run into nested machine checks. If we get a machine check when we are in guest, then just flush the SLBs and continue. This patch handles errors for power7. The next patch will handle errors for power8 Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>