path: root/arch
Age        Commit message        Author
2014-07-29  ARM: dts: Add PMU DT node for exynos5260 SoC  (Vikas Sajjan)
Adds PMU DT node for exynos5260 SoC. Signed-off-by: Vikas Sajjan <vikas.sajjan@samsung.com> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-29  ARM: EXYNOS: Add support for Exynos5410 PMU  (Andreas Faerber)
Exynos initialization code now relies on obtaining the PMU address, so add the new 5410 value to the list of compatible string matches. This unbreaks booting on 5410 based boards. Fixes: fce9e5bb2526 ("ARM: EXYNOS: Add support for mapping PMU base address via DT") Signed-off-by: Andreas Faerber <afaerber@suse.de> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-29  ARM: dts: Add PMU to exynos5410  (Andreas Faerber)
Exynos initialization code now relies on obtaining the PMU address, so prepare a PMU node for Exynos5410. Fixes: fce9e5bb2526 ("ARM: EXYNOS: Add support for mapping PMU base address via DT") Signed-off-by: Andreas Faerber <afaerber@suse.de> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-29  Merge branch 'v3.17-next/power-exynos' into v3.17-next/dt-samsung-2  (Kukjin Kim)
2014-07-28  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull ARM AES crypto fixes from Herbert Xu:
 "This push fixes a regression on ARM where odd-sized blocks supplied to AES may cause crashes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: arm-aes - fix encryption of unaligned data
  crypto: arm64-aes - fix encryption of unaligned data
2014-07-28  Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc  (Linus Torvalds)
Pull powerpc fixes from Ben Herrenschmidt:
 "Here are 3 more small powerpc fixes that should still go into .16. One is a recent regression (MMCR2 business), the other is a trivial endian fix without which FW updates won't work on LE on IBM machines, and the 3rd one turns a BUG_ON into a WARN_ON which is definitely a LOT more friendly especially when the whole thing is about retrieving error logs ..."

* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
  powerpc: Fix endianness of flash_block_list in rtas_flash
  powerpc/powernv: Change BUG_ON to WARN_ON in elog code
  powerpc/perf: Fix MMCR2 handling for EBB
2014-07-28  KVM: PPC: Remove DCR handling  (Alexander Graf)
DCR handling was only needed for 440 KVM. Since we removed it, we can also remove handling of DCR accesses. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  ARM: dts: am437x-gp-evm: Update binding for touchscreen size  (Roger Quadros)
Update the bindings for touchscreen size. Signed-off-by: Roger Quadros <rogerq@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2014-07-28  ARM: dts: am43x-epos-evm: Update binding for touchscreen size  (Roger Quadros)
Update the bindings for touchscreen size. Signed-off-by: Roger Quadros <rogerq@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2014-07-28  KVM: PPC: Expose helper functions for data/inst faults  (Alexander Graf)
We're going to implement guest code interpretation in KVM for some rare corner cases. This code needs to be able to inject data and instruction faults into the guest when it encounters them. Expose generic APIs to do this in a reasonably subarch agnostic fashion. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Separate loadstore emulation from priv emulation  (Alexander Graf)
Today the instruction emulator can get called via 2 separate code paths. It can either be called by MMIO emulation detection code or by privileged instruction traps. This is bad, as both code paths prepare the environment differently. For MMIO emulation we already know the virtual address we faulted on, so instructions there don't have to actually fetch that information. Split out the two separate use cases into separate files. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  Merge tag 'for-3.17/bcm-soc' of git://github.com/broadcom/mach-bcm into next/soc  (Arnd Bergmann)
Merge "ARM: mach-bcm: soc updates for 3.17" from Matt Porter: - BCM Mobile SMP support - BRCM STB platform support * tag 'for-3.17/bcm-soc' of git://github.com/broadcom/mach-bcm: MAINTAINERS: add entry for Broadcom ARM STB architecture ARM: brcmstb: select GISB arbiter and interrupt drivers ARM: brcmstb: add infrastructure for ARM-based Broadcom STB SoCs ARM: configs: enable SMP in bcm_defconfig ARM: add SMP support for Broadcom mobile SoCs Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-07-28  Merge tag 'for-3.17/bcm-dt' of git://github.com/broadcom/mach-bcm into next/dt  (Arnd Bergmann)
Merge "ARM: mach-bcm: dt updatees for 3.17" from Matt Porter: - BCM Mobile SMP support - BRCM STB platform support * tag 'for-3.17/bcm-dt' of git://github.com/broadcom/mach-bcm: ARM: brcmstb: dts: add a reference DTS for Broadcom 7445 ARM: brcmstb: gic: add compatible string for Broadcom Brahma15 ARM: brcmstb: add misc. DT bindings for brcmstb ARM: brcmstb: add CPU binding for Broadcom Brahma15 ARM: dts: enable SMP support for bcm21664 ARM: dts: enable SMP support for bcm28155 devicetree: bindings: document Broadcom CPU enable method Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-07-28  Merge tag 'v3.16-rc6' into next/dt  (Arnd Bergmann)
Update to Linux 3.16-rc6 as a dependency for the broadcom changes. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-07-28  Merge tag 'mvebu-dt-3.17-4' of git://git.infradead.org/linux-mvebu into next/dt  (Arnd Bergmann)
Merge "mvebu DT changes for v3.17 (round 4)" from Jason Cooper" - Armada XP - New board, Lenovo ix4-300d NAS - Add Lenovo to vendor-prefixes - Dove - Add LCD controllers * tag 'mvebu-dt-3.17-4' of git://git.infradead.org/linux-mvebu: ARM: mvebu: Add dts definition for Lenovo Iomega ix4-300d NAS of: Add Lenovo Group Ltd. to the vendor-prefixes list. ARM: dts: dove: add DT LCD controllers Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-07-28  KVM: PPC: Handle magic page in kvmppc_ld/st  (Alexander Graf)
We use kvmppc_ld and kvmppc_st to emulate load/store instructions that may as well access the magic page. Special case it out so that we can properly access it. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Use kvm_read_guest in kvmppc_ld  (Alexander Graf)
We have a nice and handy helper to read from guest physical address space, so we should make use of it in kvmppc_ld as we already do for its counterpart in kvmppc_st. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Remove kvmppc_bad_hva()  (Alexander Graf)
We have a proper define for invalid HVA numbers. Use those instead of the ppc specific kvmppc_bad_hva(). Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Move kvmppc_ld/st to common code  (Alexander Graf)
We have enough common infrastructure now to resolve GVA->GPA mappings at runtime. With this we can move our book3s specific helpers to load / store in guest virtual address space to common code as well. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Implement kvmppc_xlate for all targets  (Alexander Graf)
We have a nice API to find the translated GPAs of a GVA including protection flags. So far we only use it on Book3S, but there's no reason the same shouldn't be used on BookE as well. Implement a kvmppc_xlate() version for BookE and clean it up to make it more readable in general. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: BOOK3S: HV: Update compute_tlbie_rb to handle 16MB base page  (Aneesh Kumar K.V)
When calculating the lower bits of AVA field, use the shift count based on the base page size. Also add the missing segment size and remove stale comment. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  ARM: brcmstb: dts: add a reference DTS for Broadcom 7445  (Marc Carino)
Add a sample DTS which will allow bootup of a board populated with the BCM7445 chip. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Matt Porter <mporter@linaro.org> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  crypto: arm-aes - fix encryption of unaligned data  (Mikulas Patocka)
Fix the same alignment bug as in arm64 - we need to pass residue unprocessed bytes as the last argument to blkcipher_walk_done. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org # 3.13+ Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-28  crypto: arm64-aes - fix encryption of unaligned data  (Mikulas Patocka)
cryptsetup fails on arm64 when using kernel encryption via AF_ALG socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937 The bug is caused by incorrect handling of unaligned data in arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned on 8 bytes, but not on 16 bytes. It opens AF_ALG socket and uses the socket to encrypt data in the buffer. The arm64 crypto accelerator causes data corruption or crashes in the scatterwalk_pagedone. This patch fixes the bug by passing the residue bytes that were not processed as the last parameter to blkcipher_walk_done. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
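A minimal sketch of the loop shape this fix describes, not the exact aes-glue.c code: when the cipher walk hands back a chunk that is not a whole number of AES blocks, the unprocessed residue must be passed to blkcipher_walk_done() instead of 0, so the walker re-presents it. The helper name aes_ecb_encrypt_blocks and the ctx handling are illustrative assumptions; blkcipher_walk_init/virt/done and AES_BLOCK_SIZE come from <crypto/algapi.h> and <crypto/aes.h>.

    /* Hedged sketch of the pattern, not the real driver function. */
    static int ecb_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                           struct scatterlist *src, unsigned int nbytes)
    {
            struct blkcipher_walk walk;
            unsigned int blocks;
            int err;

            blkcipher_walk_init(&walk, dst, src, nbytes);
            err = blkcipher_walk_virt(desc, &walk);

            while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
                    /* hypothetical block-encrypt helper */
                    aes_ecb_encrypt_blocks(walk.dst.virt.addr,
                                           walk.src.virt.addr, blocks);
                    /* the fix: report the unaligned tail instead of 0,
                     * so unprocessed bytes are walked again */
                    err = blkcipher_walk_done(desc, &walk,
                                              walk.nbytes % AES_BLOCK_SIZE);
            }
            return err;
    }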
2014-07-28  ARM: brcmstb: select GISB arbiter and interrupt drivers  (Brian Norris)
Signed-off-by: Brian Norris <computersforpeace@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  ARM: mvebu: Add dts definition for Lenovo Iomega ix4-300d NAS  (Benoit Masson)
The Lenovo Iomega ix4-300d is a 4-bay SATA NAS with dual GbE and USB 2.0 & 3.0, powered by a Marvell Armada XP MV78230 dual-core CPU. http://shop.lenovo.com/us/en/servers/network-storage/lenovoemc/ix4-300d/ Signed-off-by: Benoit Masson <yahoo@perenite.com> Link: https://lkml.kernel.org/r/1406503839-4662-1-git-send-email-yahoo@perenite.com Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-07-28  ARM: brcmstb: add infrastructure for ARM-based Broadcom STB SoCs  (Marc Carino)
The BCM7xxx series of Broadcom SoCs are used primarily in set-top boxes. This patch adds machine support for the ARM-based Broadcom SoCs. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  ARM: configs: enable SMP in bcm_defconfig  (Alex Elder)
Also explicitly set CONFIG_NR_CPUS to 2, limiting it to the most we currently need. Signed-off-by: Ray Jui <rjui@broadcom.com> Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  ARM: add SMP support for Broadcom mobile SoCs  (Alex Elder)
This patch adds SMP support for BCM281XX and BCM21664 family SoCs. This feature is controlled with a distinct config option such that an SMP-enabled multi-v7 binary can be configured to run these SoCs in uniprocessor mode. Since this SMP functionality is used for multiple Broadcom mobile chip families, the config option is called ARCH_BCM_MOBILE_SMP (for lack of a better name).

On SoCs of this type, the secondary core is not held in reset on power-on. Instead it loops in a ROM-based holding pen. To release it, one must write into a special register a jump address whose low-order bits have been replaced with the secondary core's id, then trigger an event with SEV. On receipt of an event, the ROM code will examine the register's contents, and if the low-order bits match its cpu id, it will clear them and write the value back to the register just prior to jumping to the address specified.

The location of the special register is defined in the device tree using a "secondary-boot-reg" property in a node whose "enable-method" matches.

Derived from code originally provided by Ray Jui <rjui@broadcom.com>

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Matt Porter <mporter@linaro.org>
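A hedged sketch of the handshake described above; boot_reg_va is assumed to be an ioremap() of the address taken from the "secondary-boot-reg" property, and the exact helpers used by the real mach-bcm code may differ.

    /* Illustrative only: release one secondary core from the ROM pen. */
    static int bcm_boot_secondary(unsigned int cpu, struct task_struct *idle)
    {
            u32 val;

            /* entry point is aligned, so the low bits can carry the cpu
             * id that the ROM code matches against */
            val = (virt_to_phys(secondary_startup) & ~0x3) | cpu;

            writel(val, boot_reg_va);   /* writel implies the needed barrier */
            sev();                      /* wake the core spinning on WFE */
            return 0;
    }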
2014-07-28  ARM: dts: enable SMP support for bcm21664  (Alex Elder)
Define nodes representing the two Cortex-A9 CPUs in a bcm21664 SoC. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  ARM: dts: enable SMP support for bcm28155  (Alex Elder)
Define nodes representing the two Cortex A9 CPUs in a bcm28155 SoC. Signed-off-by: Ray Jui <rjui@broadcom.com> Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Matt Porter <mporter@linaro.org>
2014-07-28  KVM: PPC: Book3S: Provide different CAPs based on HV or PR mode  (Alexander Graf)
With Book3S KVM we can create both PR and HV VMs in parallel on the same machine. That gives us new challenges on the CAPs we return - both have different capabilities. When we get asked about CAPs on the kvm fd, there's nothing we can do. We can try to be smart and assume we're running HV if HV is available, PR otherwise. However with the newly added VM CHECK_EXTENSION we can now ask for capabilities directly on a VM which knows whether it's PR or HV. With this patch I can successfully expose KVM PVINFO data to user space in the PR case, fixing magic page mapping for PAPR guests. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-28  KVM: Rename and add argument to check_extension  (Alexander Graf)
In preparation to make the check_extension function available to VM scope we add a struct kvm * argument to the function header and rename the function accordingly. It will still be called from the /dev/kvm fd, but with a NULL argument for struct kvm *. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
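A rough sketch of the shape of that change, with hedged names rather than the exact kvm_main.c diff: the extension check now takes a struct kvm *, so the VM-scoped ioctl can pass its own VM while the /dev/kvm path passes NULL.

    /* Illustrative only. */
    long kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);
        /* was: long kvm_dev_ioctl_check_extension(long ext); */

    /* /dev/kvm fd: no VM exists yet, so arch code sees kvm == NULL */
    r = kvm_vm_ioctl_check_extension(NULL, arg);

    /* VM fd (per-VM KVM_CHECK_EXTENSION): pass the VM so PR- vs.
     * HV-specific capabilities can be answered correctly */
    r = kvm_vm_ioctl_check_extension(kvm, arg);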
2014-07-28  Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8  (Stewart Smith)
The POWER8 processor has a Micro Partition Prefetch Engine, which is a fancy way of saying "has a way to store and load the contents of the L2, or of the L2 plus the MRU way of the L3, cache". We initiate storing of the log (a list of addresses) using the logmpp instruction and start restore by writing to an SPR.

The logmpp instruction takes its parameters in a single 64-bit register:
- starting address of the table in which to store the log of L2/L2+L3 cache contents
  - 32kB for L2
  - 128kB for L2+L3
  - aligned relative to the maximum size of the table (32kB or 128kB)
- log control (no-op, L2 only, L2 and L3, abort logout)

We should abort any ongoing logging before initiating a new one.

To initiate restore, we write to the MPPR SPR. The format of what to write to the SPR is similar to the logmpp instruction parameter:
- starting address of the table to read from (same alignment requirements)
- table size (no data, until end of table)
- prefetch rate (from fastest possible to slower, about every 8, 16, 24 or 32 cycles)

The idea behind loading and storing the contents of the L2/L3 cache is to reduce memory latency in a system that is frequently swapping vcores on a physical CPU. The best-case scenario for doing this is when some vcores are doing very cache-heavy workloads; the worst case is when they have about zero cache hits, so we just generate needless memory operations.

This implementation just does L2 store/load. In my benchmarks this proves to be useful.

Benchmark 1:
- 16-core POWER8
- 3x Ubuntu 14.04 LTS guests (LE) with 8 VCPUs each
- no split core/SMT
- two guests running the sysbench memory test: sysbench --test=memory --num-threads=8 run
- one guest running apache bench (of the default HTML page): ab -n 490000 -c 400 http://localhost/

This benchmark aims to measure the performance of a real-world application (apache) where the other guests are cache-hot with their own workloads. The sysbench memory benchmark does pointer-sized writes to a (small) memory buffer in a loop. In this benchmark, with this patch I can see an improvement both in requests per second (~5%) and in mean and median response times (again, about 5%). The spread of minimum and maximum response times was largely unchanged.

Benchmark 2:
- same VM config as benchmark 1
- all three guests running the sysbench memory benchmark

This benchmark aims to see if there is a positive or negative effect on this cache-heavy benchmark. Due to the nature of the benchmark (stores) we may not see a difference in performance, but rather, hopefully, an improvement in consistency of performance (when a vcore is switched in, it doesn't have to wait many times for cachelines to be pulled in). The results are indeed improvements in consistency rather than in raw performance: with this patch, the few outliers in duration go away and we get more consistent performance in each guest.

Benchmark 3:
- same 3 guests and CPU configuration as benchmarks 1 and 2
- two idle guests
- 1 guest running the STREAM benchmark

This scenario also saw a performance improvement with this patch. On the Copy and Scale workloads from STREAM, I got a 5-6% improvement; for Add and Triad, it was around 10% (or more).

Benchmark 4:
- same 3 guests as the previous benchmarks
- two guests running sysbench --memory, a distinctly different cache-heavy workload
- one guest running the STREAM benchmark

Similar improvements to benchmark 3.

Benchmark 5:
- 1 guest, 8 VCPUs, Ubuntu 14.04
- host configured with split core (SMT8, subcores-per-core=4)
- STREAM benchmark

In this benchmark, we see a 10-20% performance improvement across the board of STREAM benchmark results with this patch.

Based on preliminary investigation and microbenchmarks by Prerna Saxena <prerna@linux.vnet.ibm.com>

Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
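A heavily hedged sketch of the save/restore sequence described above. SPRN_MPPR, the field constants and the logmpp() wrapper are taken from the commit text and should be read as assumptions about this series, not verified header names.

    /* Illustrative only: mpp_buf_phys is assumed to already be aligned
     * to the table size (32kB for the L2-only log). */
    static void mpp_save_l2(unsigned long mpp_buf_phys)
    {
            /* abort any logout already in flight, then start a new one */
            mtspr(SPRN_MPPR, mpp_buf_phys | PPC_MPPR_FETCH_ABORT);
            logmpp(mpp_buf_phys | PPC_LOGMPP_LOG_L2);   /* hypothetical wrapper
                                                         * around the logmpp insn */
    }

    static void mpp_restore_l2(unsigned long mpp_buf_phys)
    {
            /* writing MPPR kicks off prefetch of the logged cache lines */
            mtspr(SPRN_MPPR, mpp_buf_phys | PPC_MPPR_FETCH_WHOLE_TABLE);
    }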
2014-07-28  Split out struct kvmppc_vcore creation to separate function  (Stewart Smith)
No code changes, just split it out into a function so that, with the addition of micro partition prefetch buffer allocation (in a subsequent patch), it looks neater and doesn't require excessive indentation. Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Book3S: Make kvmppc_ld return a more accurate error indication  (Paul Mackerras)
At present, kvmppc_ld calls kvmppc_xlate, and if kvmppc_xlate returns any error indication, it returns -ENOENT, which is taken to mean an HPTE-not-found error. However, the error could have been a segment fault (no SLB entry found) or a permission error. Similarly, kvmppc_pte_to_hva currently does permission checking, but any error from it is taken by kvmppc_ld to mean that the access is an emulated MMIO access. Also, kvmppc_ld does no execute-permission checking. This fixes these problems by (a) returning any error from kvmppc_xlate directly, (b) moving the permission check from kvmppc_pte_to_hva into kvmppc_ld, and (c) adding an execute-permission check to kvmppc_ld. This is similar to what was done for kvmppc_st() by commit 82ff911317c3 ("KVM: PPC: Deflect page write faults properly in kvmppc_st"). Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
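A condensed sketch of the resulting flow under the assumptions of this series (the XLATE_* enums come from the "Implement kvmppc_xlate for all targets" change; this is not the exact final function, which also handles the magic page and SRCU):

    int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
                  bool data)
    {
            struct kvmppc_pte pte;
            int rc;

            rc = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
                              XLATE_READ, &pte);
            if (rc)
                    return rc;              /* (a) real xlate error, not -ENOENT */

            *eaddr = pte.raddr;

            if (!pte.may_read)
                    return -EPERM;          /* (b) permission check moved here */
            if (!data && !pte.may_execute)
                    return -ENOEXEC;        /* (c) new execute check for ifetches */

            if (kvm_read_guest(vcpu->kvm, pte.raddr, ptr, size))
                    return EMULATE_DO_MMIO; /* not backed by memory */

            return EMULATE_DONE;
    }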
2014-07-28  KVM: PPC: Book3S PR: Take SRCU read lock around RTAS kvm_read_guest() call  (Paul Mackerras)
This does for PR KVM what c9438092cae4 ("KVM: PPC: Book3S HV: Take SRCU read lock around kvm_read_guest() call") did for HV KVM, that is, eliminate a "suspicious rcu_dereference_check() usage!" warning by taking the SRCU lock around the call to kvmppc_rtas_hcall(). It also fixes a return of RESUME_HOST to return EMULATE_FAIL instead, since kvmppc_h_pr() is supposed to return EMULATE_* values. Signed-off-by: Paul Mackerras <paulus@samba.org> Cc: stable@vger.kernel.org Signed-off-by: Alexander Graf <agraf@suse.de>
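The pattern involved is the standard KVM SRCU bracket around guest-memory access; a minimal sketch of what "take the SRCU read lock around the call" looks like (the surrounding H_RTAS case in kvmppc_h_pr() is elided):

    int rc, srcu_idx;

    srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
    rc = kvmppc_rtas_hcall(vcpu);            /* ends up in kvm_read_guest() */
    srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);

    if (rc)
            return EMULATE_FAIL;             /* not RESUME_HOST: kvmppc_h_pr()
                                              * is expected to return EMULATE_* */
    return EMULATE_DONE;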
2014-07-28  KVM: PPC: Book3S: Fix LPCR one_reg interface  (Alexey Kardashevskiy)
Unfortunately, the LPCR got defined as a 32-bit register in the one_reg interface. This is unfortunate because KVM allows userspace to control the DPFD (default prefetch depth) field, which is in the upper 32 bits. The result is that DPFD always get set to 0, which reduces performance in the guest. We can't just change KVM_REG_PPC_LPCR to be a 64-bit register ID, since that would break existing userspace binaries. Instead we define a new KVM_REG_PPC_LPCR_64 id which is 64-bit. Userspace can still use the old KVM_REG_PPC_LPCR id, but it now only modifies those fields in the bottom 32 bits that userspace can modify (ILE, TC and AIL). If userspace uses the new KVM_REG_PPC_LPCR_64 id, it can modify DPFD as well. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paul Mackerras <paulus@samba.org> Cc: stable@vger.kernel.org Signed-off-by: Alexander Graf <agraf@suse.de>
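A hedged sketch of how the two register IDs can coexist inside the existing set_one_reg switch; the preserve_top32 flag on kvmppc_set_lpcr() is an assumption about this series' helper shape, not a verified signature.

    /* Illustrative fragment of the one_reg set path. */
    case KVM_REG_PPC_LPCR:
            /* legacy 32-bit id: DPFD lives in the upper half, so keep it */
            kvmppc_set_lpcr(vcpu, set_reg_val(id, *val), true /* preserve_top32 */);
            break;
    case KVM_REG_PPC_LPCR_64:
            /* new 64-bit id: userspace may now set DPFD as well */
            kvmppc_set_lpcr(vcpu, set_reg_val(id, *val), false);
            break;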
2014-07-28  KVM: PPC: Remove 440 support  (Alexander Graf)
The 440 target hasn't been functioning properly for a few releases, and I was the only one fixing a very serious bug before, which indicates to me that nobody else was using it either. Furthermore, KVM on 440 is slow to the point of being unusable. We don't have to carry along completely unused code. Remove 440 and give us one less thing to worry about. Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Remove comment saying SPRG1 is used for vcpu pointer  (Bharat Bhushan)
Scott Wood pointed out that we are no longer using SPRG1 for the vcpu pointer, but are instead using SPRN_SPRG_THREAD <=> SPRG3 (thread->vcpu), so this comment is no longer valid. Note: SPRN_SPRG3R is not supported (I do not see any need as of now), and if we want to support this in the future then we will have to shift to using SPRG1 for the VCPU pointer. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Booke-hv: Add one reg interface for SPRG9  (Bharat Bhushan)
We now support SPRG9 for the guest, so also add a one_reg interface for it. Note: changes are in bookehv code only, as we do not have SPRG9 on booke-pr. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  kvm: ppc: bookehv: Save/restore SPRN_SPRG9 on guest entry/exit  (Bharat Bhushan)
SPRN_SPRG9 is used by the debug interrupt handler, so saving/restoring it is required for debug support. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Bookehv: Get vcpu's last instruction for emulation  (Mihai Caraman)
On book3e, KVM uses the dedicated load-external-pid (lwepx) instruction to read the guest's last instruction on the exit path. lwepx exceptions (DTLB_MISS, DSI and LRAT), generated by loading a guest address, need to be handled by KVM. These exceptions are generated in a substituted guest translation context (EPLC[EGS] = 1) from host context (MSR[GS] = 0).

Currently, KVM hooks only interrupts generated from guest context (MSR[GS] = 1), doing minimal checks on the fast path to avoid host performance degradation. lwepx exceptions originate from host state (MSR[GS] = 0), which implies additional checks in the DO_KVM macro (besides the current MSR[GS] = 1 check) by looking at the Exception Syndrome Register (ESR[EPID]) and the External PID Load Context Register (EPLC[EGS]). Doing this on each data TLB miss exception is obviously too intrusive for the host.

Instead, read the guest's last instruction from kvmppc_load_last_inst() by searching for the physical address and kmapping it. This addresses the TODO for TLB eviction and execute-but-not-read entries, and allows us to get rid of lwepx until we are able to handle failures.

A simple stress benchmark shows a 1% sys performance degradation compared with the previous approach (lwepx without failure handling):

  time for i in `seq 1 10000`; do /bin/echo > /dev/null; done

  real 0m 8.85s          vs.     real 0m 8.84s
  user 0m 4.34s                  user 0m 4.36s
  sys  0m 4.48s                  sys  0m 4.44s

A solution that uses lwepx and handles its exceptions in KVM would be to temporarily hijack the interrupt vector from the host. This imposes additional synchronization for cores like the FSL e6500 that share host IVOR registers between hardware threads. This optimized solution can be developed later, on top of this patch.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Allow kvmppc_get_last_inst() to fail  (Mihai Caraman)
On book3e, the guest's last instruction is read on the exit path using the load external pid (lwepx) dedicated instruction. This load operation may fail due to TLB eviction and execute-but-not-read entries. This patch lays down the path for an alternative way to read the guest's last instruction, by allowing the kvmppc_get_last_inst() function to fail. Architecture-specific implementations of kvmppc_load_last_inst() may read the last guest instruction and instruct the emulation layer to re-execute the guest in case of failure. Make the kvmppc_get_last_inst() definition common between architectures. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
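A sketch of the caller-side contract this introduces; the INST_GENERIC enum value is taken from this series and treated as an assumption, and the surrounding exit-handler code is elided.

    /* In an emulation path: fetching the last guest instruction may now
     * fail, in which case we simply re-enter the guest and retry. */
    u32 inst;
    int rc;

    rc = kvmppc_get_last_inst(vcpu, INST_GENERIC, &inst);
    if (rc == EMULATE_AGAIN)
            return RESUME_GUEST;     /* arch code will refetch on the next exit */
    if (rc == EMULATE_FAIL)
            return RESUME_HOST;

    return kvmppc_emulate_instruction(run, vcpu);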
2014-07-28  KVM: PPC: Book3s: Remove kvmppc_read_inst() function  (Mihai Caraman)
In the context of replacing kvmppc_ld() function calls with a version of kvmppc_get_last_inst() which is allowed to fail, Alex Graf suggested this: "If we get EMULATE_AGAIN, we just have to make sure we go back into the guest. No need to inject an ISI into the guest - it'll do that all by itself. With an error-returning kvmppc_get_last_inst we can just completely get rid of kvmppc_read_inst() and only use kvmppc_get_last_inst() instead." As an intermediate step, get rid of kvmppc_read_inst() and only use kvmppc_ld() instead. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: Book3e: Add TLBSEL/TSIZE defines for MAS0/1  (Mihai Caraman)
Add missing defines MAS0_GET_TLBSEL() and MAS1_GET_TSIZE() for Book3E. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  KVM: PPC: e500mc: Revert "add load inst fixup"  (Mihai Caraman)
The commit 1d628af7 "add load inst fixup" made an attempt to handle failures generated by reading the guest's current instruction. The fixup code that was added works by chance, hiding the real issue. The load external pid (lwepx) instruction, used by KVM to read guest instructions, is executed in a substituted guest translation context (EPLC[EGS] = 1). In consequence, lwepx's TLB error and data storage interrupts need to be handled by KVM, even though these interrupts are generated from host context (MSR[GS] = 0), where lwepx is executed. Currently, KVM hooks only interrupts generated from guest context (MSR[GS] = 1), doing minimal checks on the fast path to avoid host performance degradation. As a result, the host kernel handles lwepx faults by searching for the faulting guest data address (loaded in DEAR) in its own Logical Partition ID (LPID) 0 context. In case a host translation is found, execution returns to the lwepx instruction instead of the fixup, the host ending up in an infinite loop. Revert the commit "add load inst fixup". The lwepx issue will be addressed in a subsequent patch without needing fixup code. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  kvm: ppc: Add SPRN_EPR get helper function  (Bharat Bhushan)
kvmppc_set_epr() is already defined in asm/kvm_ppc.h, so rename and move the get_epr helper function to the same file. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> [agraf: remove duplicate return] Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  kvm: ppc: booke: Use the shared struct helpers for SPRN_SPRG0-7  (Bharat Bhushan)
Use the kvmppc_set_sprg[0-7]() and kvmppc_get_sprg[0-7]() helper functions. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-07-28  kvm: ppc: booke: Add shared struct helpers of SPRN_ESR  (Bharat Bhushan)
Add and use the kvmppc_set_esr() and kvmppc_get_esr() helper functions. Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
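A simplified sketch of what such a shared-struct accessor pair looks like; the in-tree helpers are generated by a wrapper macro and also byte-swap when the shared page is in the other endianness, which is ignored here for brevity.

    /* Simplified, illustrative accessors for the KVM/guest shared page. */
    static inline u64 kvmppc_get_esr(struct kvm_vcpu *vcpu)
    {
            return vcpu->arch.shared->esr;
    }

    static inline void kvmppc_set_esr(struct kvm_vcpu *vcpu, u64 val)
    {
            vcpu->arch.shared->esr = val;
    }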