Age | Commit message | Author |
|
Contrary to the comment, the guest CP0_EPC register cannot be set via
kvm_regs, since it is distinct from the guest PC. Add the EPC register
to the KVM_{GET,SET}_ONE_REG ioctl interface.
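As an illustration only (not part of this patch), userspace can then read
the guest EPC through the standard one-reg ioctl. This sketch assumes a
vcpu fd obtained from KVM_CREATE_VCPU and the KVM_REG_MIPS_CP0_EPC register
ID exposed by the MIPS KVM headers:
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Read the guest CP0_EPC via KVM_GET_ONE_REG (error handling omitted). */
  static uint64_t get_guest_epc(int vcpu_fd)
  {
          uint64_t epc = 0;
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_MIPS_CP0_EPC,
                  .addr = (uintptr_t)&epc,
          };

          ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
          return epc;
  }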
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: David Daney <david.daney@cavium.com>
Cc: Sanjay Lal <sanjayl@kymasys.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
When MIPS KVM needs to write a TLB entry for the guest, it reads the
CP0_Random register, uses it to set CP0_Index, and writes the TLB entry
using the TLBWI instruction (tlb_write_indexed()).
However, there is a dedicated instruction for exactly this, TLBWR
(tlb_write_random()), so use that instead.
This also happens to fix an issue on Ingenic XBurst cores, where the
same TLB entry was replaced each time, preventing forward progress on
stores because the CPU alternated between TLB load misses on the
instruction fetch and TLB store misses on the data access.
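Roughly, the change on the guest TLB write path looks like this (a
simplified sketch using the helpers from asm/mipsregs.h, with hazard
barriers and locking elided; not the literal KVM code):
  /* Before: choose an index from CP0_Random by hand, then write indexed. */
  write_c0_index(read_c0_random() % current_cpu_data.tlbsize);
  tlb_write_indexed();

  /* After: let the hardware pick the entry to replace. */
  tlb_write_random();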
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal <sanjayl@kymasys.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
MIPS KVM uses mips32_SyncICache to synchronise the icache with the
dcache after dynamically modifying guest instructions or writing the
guest exception vector. However, this uses rdhwr to get the SYNCI step,
which causes a reserved instruction exception on Ingenic XBurst cores.
It makes more sense to use local_flush_icache_range() instead, which
does the same thing but is more portable.
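For example, after patching a single 32-bit guest instruction at kernel
virtual address addr, the sync could be done as below (a sketch; the real
call sites and lengths are KVM-specific):
  #include <asm/cacheflush.h>

  /* Make the modified instruction visible to instruction fetch. */
  local_flush_icache_range((unsigned long)addr,
                           (unsigned long)addr + sizeof(u32));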
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal <sanjayl@kymasys.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Export the local_flush_icache_range function pointer for GPL modules so
that it can be used by KVM for syncing the icache after binary
translation of trapping instructions.
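The export itself is a one-liner next to the pointer definition in the
MIPS cache code, along these lines (sketch, assuming the pointer lives in
arch/mips/mm/cache.c):
  void (*local_flush_icache_range)(unsigned long start, unsigned long end);
  EXPORT_SYMBOL_GPL(local_flush_icache_range);   /* usable by GPL modules such as KVM */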
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal <sanjayl@kymasys.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Each MIPS KVM guest has its own copy of the KVM exception vector. This
contains the TLB refill exception handler at offset 0x000, the general
exception handler at offset 0x180, and interrupt exception handlers at
offset 0x200 for the case where Cause_IV=1. A common handler is copied
to offset 0x2000, and offset 0x3000 is used to temporarily store k1
during entry from the guest.
However, the amount of memory allocated for this purpose is calculated
as 0x200 rounded up to the next page boundary, which is insufficient if
4KB pages are in use. This can lead to the common handler at offset
0x2000 being overwritten and to infinitely recursive exceptions on the
next exit from the guest.
Increase the minimum size from 0x200 to 0x4000 to cover the full area
that is actually used.
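A simplified sketch of the sizing logic (not the literal allocation code):
  /* Layout within the per-guest exception area:
   *   0x0000  TLB refill handler
   *   0x0180  general exception handler
   *   0x0200  interrupt handlers (Cause_IV=1)
   *   0x2000  common handler (copied here)
   *   0x3000  k1 scratch save slot
   */
  size   = 0x4000;                                  /* was 0x200 before this fix */
  gebase = kzalloc(ALIGN(size, PAGE_SIZE), GFP_KERNEL);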
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal <sanjayl@kymasys.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next
Several minor fixes and cleanups for KVM:
1. Fix flag check for gdb support
2. Remove unnecessary vcpu start
3. Remove code duplication for sigp interrupts
4. Better DAT handling for the TPROT instruction
5. Correct addressing exception for standby memory
|
|
It will be used later on by the sllv and srlv instructions.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/6723/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Sometimes it's useful to let the user, while doing performance research,
know which IEEE754 exceptions have caused FP emulation, and how often,
when running a specific application. This patch adds 5 more files to
/sys/kernel/debug/mips/fpuemustats/, whose filenames begin with "ieee754".
These stats are in addition to the existing cp1ops, cp1xops, errors,
loads and stores, which may not be useful in understanding the reasons
for IEEE754 exceptions.
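Each new counter is just another debugfs entry alongside the existing
ones; a hypothetical example (the directory dentry and counter variable
names here are made up for illustration):
  #include <linux/debugfs.h>

  static u32 ieee754_overflow;            /* hypothetical counter */

  /* fpuemu_dir: dentry for /sys/kernel/debug/mips/fpuemustats/ (illustrative name) */
  debugfs_create_u32("ieee754_overflow", 0444, fpuemu_dir, &ieee754_overflow);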
[ralf@linux-mips.org: Fixed reject due to other changes to the kernel
FP assist software.]
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven.Hill@imgtec.com
Cc: james.hogan@imgtec.com
Patchwork: http://patchwork.linux-mips.org/patch/7044/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The patch was tested on devices with 64 MiB and 256 MiB of RAM. It
documents every part nicely and drops this hacky piece of code:
max = off | ((128 << 20) - 1);
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Patchwork: https://patchwork.linux-mips.org/patch/6808/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Based on an original patch from Jeng-fang (Nick) Wang.
When standby memory is specified for a guest Linux, but no virtual memory
has been allocated on the QEMU host backing that guest, the guest memory
detection process encounters a memory access exception which is not thrown
from the KVM handle_tprot() instruction-handler function. The access
exception comes from sie64a returning EFAULT, which then passes an
addressing exception to the guest. Unfortunately this does not do the
proper PSW fixup (nullifying vs. suppressing), so the guest will get a
fault for the wrong address.
Let's just intercept the tprot instruction all the time to do the right
thing, rather than going down the page fault handler path for standby
memory. tprot is only used by Linux during startup, so a few extra exits
should be OK.
Without this patch, standby memory cannot be used with KVM.
Signed-off-by: Nick Wang <jfwang@us.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
|
|
This patch removes the start of a VCPU when delivering a RESTART interrupt.
Interrupt delivery is called from kvm_arch_vcpu_ioctl_run, so the VCPU is
already considered started - there is no need to call kvm_s390_vcpu_start;
that function will exit early anyway.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
|
|
This patch fixes a minor bug when updating the guest debug settings:
we should check the debug flags passed in, not the ones that are already
set. This does no real harm, but too many (for now unused) flags could be
set internally without an error being reported.
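Conceptually the check moves from the flags already stored on the vcpu to
the flags the user just passed in, along these lines (illustrative;
VALID_GUESTDBG_FLAGS stands in for whatever mask the code actually uses):
  /* before */
  if (vcpu->guest_debug & ~VALID_GUESTDBG_FLAGS)
          return -EINVAL;

  /* after */
  if (dbg->control & ~VALID_GUESTDBG_FLAGS)
          return -EINVAL;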
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
|
|
We have all the logic to inject interrupts available in
kvm_s390_inject_vcpu(), so let's use it instead of
injecting irqs into the list manually in the sigp code.
SIGP stop is special because we have to check the
action_flags before injecting the interrupt. As
the action_flags are not available in kvm_s390_inject_vcpu()
we leave the code for the stop order untouched for now.
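As an illustration, injecting e.g. an emergency signal then becomes a
single call to the common helper instead of open-coded list handling (a
sketch; field names per struct kvm_s390_interrupt):
  struct kvm_s390_interrupt irq = {
          .type = KVM_S390_INT_EMERGENCY,
          .parm = cpu_addr,               /* address of the signalling CPU */
  };

  rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);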
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
The TPROT instruction can be used to check the accessibility of storage
for any kind of logical address. So far, our handler only supported
real addresses. This patch now also enables support for addresses that
have to be translated via DAT first. And while we're at it, change the
code to use the common KVM function gfn_to_hva_prot() to check for the
validity and writability of the memory page.
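A rough sketch of the memslot check using the common helper (the DAT
translation step and the exact condition-code handling are left out):
  bool writable;
  unsigned long hva;

  hva = gfn_to_hva_prot(vcpu->kvm, gpa_to_gfn(gpa), &writable);
  if (kvm_is_error_hva(hva))
          return -EFAULT;                 /* not backed -> addressing exception */
  cc = writable ? 0 : 1;                  /* TPROT condition code, simplified */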
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
This patch adds a function for translating logical guest addresses into
physical guest addresses without touching the memory at the given location.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Pull ARM fixes from Russell King:
"The usual random collection of relatively small ARM fixes"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: 8063/1: bL_switcher: fix individual online status reporting of removed CPUs
ARM: 8064/1: fix v7-M signal return
ARM: 8057/1: amba: Add Qualcomm vendor ID.
ARM: 8052/1: unwind: Fix handling of "Pop r4-r[4+nnn],r14" opcode
ARM: 8051/1: put_user: fix possible data corruption in put_user
ARM: 8048/1: fix v7-M setup stack location
|
|
When configuring ftrace kprobe events with the "stacktrace" option
enabled on ARM, no stacktrace is recorded after the kprobe event is
triggered. The root cause is that save_stack_trace_regs() is not
implemented.
Implement save_stack_trace_regs() for ARM, so that ftrace can call this
architecture-specific function to record the stacktrace into the ring
buffer.
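On ARM this amounts to seeding a struct stackframe from the saved
registers and walking it, much like the existing save_stack_trace_tsk()
path (a simplified sketch, not the exact patch):
  void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
  {
          struct stack_trace_data data = { .trace = trace, .skip = trace->skip };
          struct stackframe frame = {
                  .fp = regs->ARM_fp,
                  .sp = regs->ARM_sp,
                  .lr = regs->ARM_lr,
                  .pc = regs->ARM_pc,
          };

          walk_stackframe(&frame, save_trace, &data);
  }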
After this fix, stacktrace can be recorded, for example:
# mount -t debugfs nodev /sys/kernel/debug
# echo "p:netrx net_rx_action" >> /sys/kernel/debug/tracing/kprobe_events
# echo 1 > /sys/kernel/debug/tracing/events/kprobes/netrx/enable
# echo 1 > /sys/kernel/debug/tracing/options/stacktrace
# echo 1 > /sys/kernel/debug/tracing/tracing_on
# ping 127.0.0.1 -c 1
# echo 0 > /sys/kernel/debug/tracing/tracing_on
# cat /sys/kernel/debug/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 12/12 #P:1
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
<------ missing some entries ---------------->
ping-1200 [000] dNs1 667.603250: netrx: (net_rx_action+0x0/0x1f8)
ping-1200 [000] dNs1 667.604738: <stack trace>
=> net_rx_action
=> do_softirq
=> local_bh_enable
=> ip_finish_output
=> ip_output
=> ip_local_out
=> ip_send_skb
=> ip_push_pending_frames
=> raw_sendmsg
=> inet_sendmsg
=> sock_sendmsg
=> SyS_sendto
=> ret_fast_syscall
Signed-off-by: Lin Yongting <linyongting@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Support for ARM710 CPUs was removed in v3.5. Now remove the last code
depending on its Kconfig macro.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We will reach the fixup handler when one thread (say cpu0) causes an
undefined exception while another thread (say cpu1) is unmapping the page.
The fixup handler returns to the next userspace instruction after the one
which caused the undef exception, rather than going back to the same
instruction.
The ARM ARM says that after an undefined exception, the PC will be
pointing to the next instruction, i.e. a +4 offset in the ARM case and +2
in the Thumb case.
And there is no correction offset passed to vector_stub in the case of an
undef exception.
File: arch/arm/kernel/entry-armv.S +1085
vector_stub und, UND_MODE
During an undefined exception, in the normal scenario (i.e. when the ldrt
instruction does not cause an abort), after restoring the context in the
VFP hardware, the PC is modified as shown below before jumping to
ret_from_exception, which is in r9.
File: arch/arm/vfp/vfphw.S +169
@ The context stored in the VFP hardware is up to date with this thread
vfp_hw_state_valid:
tst r1, #FPEXC_EX
bne process_exception @ might as well handle the pending
@ exception before retrying branch
@ out before setting an FPEXC that
@ stops us reading stuff
VFPFMXR FPEXC, r1 @ Restore FPEXC last
sub r2, r2, #4 @ Retry current instruction - if Thumb
str r2, [sp, #S_PC] @ mode it's two 16-bit instructions,
@ else it's one 32-bit instruction, so
@ always subtract 4 from the following
@ instruction address.
But if ldrt results in an abort, we reach the fixup handler and return
to ret_from_exception without correcting the PC.
This patch modifies the fixup handler to re-execute the same instruction
which caused the undefined exception.
Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Signed-off-by: Arun KS <getarunks@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
asm-generic offers an atomic-add based rwsem implementation, which
can avoid the need for heavier, spinlock-based synchronisation on the
fast path.
This patch makes use of the optimised implementation for ARM CPUs.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
As we have now removed all instances of the L2C-310 having its cache
size "modified" via platform/SoC code, discourage new cases from showing
up by printing a warning.
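The warning can be as simple as comparing the value read from the hardware
with the value the platform asks for (illustrative; the exact message and
placement are up to the driver):
  if (aux != old_aux)
          pr_warn("L2C: platform modifies aux control register: 0x%08x -> 0x%08x\n",
                  old_aux, aux);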
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
We no longer need or require the .set_debug method; we handle everything
it used to do via the .write_sec method instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
L2X0_AUX_CTRL_MASK is not useful for PL310s. It would be better if
people thought about the value they pass here rather than cargo-cult
programming it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
It is beneficial to have the L2 cache up and running earlier in the
system boot. Not only will this allow for simpler code when we come to
enable some features, but it also means that we get a more accurate
bogomips value for the udelay() loop. Calibrating the loop with the
L2 cache off, and then running with the L2 cache on is not the best
idea.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
ux500 can't change the auxiliary control register, so there's no point
passing values to the l2x0 init functions to try and modify it.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
ux500 can't write to any of the secure registers on the L2C controllers,
so provide a dummy handler which ignores all writes.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead.
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead. We can remove the .init_machine as it becomes
the same as the generic version.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead. This also allows us to eliminate the
.init_machine function as this becomes the same as the generic version.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Add better commentary about the L2 cache requirements on these platforms.
Unfortunately, the auxiliary control register is not pre-set to indicate
the correct cache parameters, so we have to manually program these.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Remove the explicit call to l2x0_of_init(), converting to the generic
infrastructure instead. Along with this change, we can delete l2x0.c
from prima2.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Add support for L2 cache controller (PL310) on AM437x SoC.
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Get rid of the init call used to initialize the L2 cache. Instead use the
init_early machine hook. This helps in using the initialization routine
across SoCs without the need for ugly cpu_is_*() checks.
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
L2 cache initialization for OMAP4 redundantly sets the cache policy to
Round-Robin. This is not needed since that's the PL310 default anyway.
Removing this reduces the number of platform-specific aux control
settings.
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Avoid reading directly from the L2 registers in platform code. The L2
code will have already saved the register values itself into the
l2x0_saved_regs structure, so platform code should just move these
values to where they're required.
This is safe because the L2x0 will have been initialised by an early
initcall, whereas the OMAP4 PM code is initialised late.
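In other words, instead of a readl_relaxed() on the live controller, the
OMAP4 PM code can pick the value up from what the L2C driver already saved
(sketch; the structure comes from asm/hardware/cache-l2x0.h):
  #include <asm/hardware/cache-l2x0.h>

  /* Use the aux control value the L2C driver captured at init time. */
  u32 aux_ctrl = l2x0_saved_regs.aux_ctrl;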
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Since we now always enable NS access to the unlock registers, this can
be removed from OMAP4. Remove the NS access bit for the interrupt
registers from OMAP4 as well - nothing in the kernel accesses that yet,
and we can add it in core code when we have the need.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The cache size should already be present in the L2 cache auxiliary
control register: it is part of the integration process to configure
the hardware IP. Most platforms get this right, yet still many
cargo-cult program, and assume that they always need specifying to
the L2 cache code. Remove them so we can find out which really need
this.
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Now that OMAP2 uses the write_sec method, we don't need to enable the L2
cache in OMAP2 specific code; this can be done via the normal mechanisms
in the L2C code. Remove the OMAP2 specific code.
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|