| author | Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> | 2011-07-12 03:33:44 +0800 |
|---|---|---|
| committer | Avi Kivity <avi@redhat.com> | 2011-07-24 11:50:40 +0300 |
| commit | ce88decffd17bf9f373cc233c961ad2054965667 (patch) | |
| tree | 65202d01a10c790eacb4b63bacc5fccfbe5bb050 /arch/x86/kvm/mmu.h | |
| parent | dd3bfd59dbc69fd970394ab354cfca5f959d5755 (diff) | |
KVM: MMU: mmio page fault support
The idea is from Avi:
| We could cache the result of a miss in an spte by using a reserved bit, and
| checking the page fault error code (or seeing if we get an ept violation or
| ept misconfiguration), so if we get repeated mmio on a page, we don't need to
| search the slot list/tree.
| (https://lkml.org/lkml/2011/2/22/221)
When the page fault is caused by mmio, we cache the info in the shadow page
table and also set the reserved bits in the shadow page table, so if the mmio
access happens again, we can quickly identify it and emulate it directly.
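As a rough illustration (not the mmu.c code from this patch), the stand-alone sketch below models the encoding: a reserved-bit mask tags an spte as a cached mmio translation, and the gfn plus access bits are packed into the remaining bits, so a later fault on the same spte can be recognized without another memslot walk. The helper names and bit layout are assumptions for the sketch only.

```c
/*
 * Stand-alone model of the mmio-spte encoding idea; illustrative only,
 * not the mmu.c code added by this patch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

/* Reserved-bit pattern that marks an spte as a cached mmio translation. */
static u64 mmio_spte_mask;

static void set_mmio_spte_mask(u64 mask)
{
	mmio_spte_mask = mask;
}

/* Pack the gfn and access bits into the spte and tag it with the mask. */
static u64 make_mmio_spte(u64 gfn, unsigned int access)
{
	return mmio_spte_mask | (gfn << 12) | (access & 0x7);
}

static bool is_mmio_spte(u64 spte)
{
	return (spte & mmio_spte_mask) == mmio_spte_mask;
}

static u64 mmio_spte_gfn(u64 spte)
{
	return (spte & ~mmio_spte_mask) >> 12;
}

int main(void)
{
	/* Pretend bits 62-63 are reserved; the real mask is arch-specific. */
	set_mmio_spte_mask(3ull << 62);

	/* First (slow) miss: remember that this gfn is mmio. */
	u64 spte = make_mmio_spte(0xfee00, 0x3);

	/* Later fault on the same spte: recognized without a memslot walk. */
	if (is_mmio_spte(spte))
		printf("cached mmio fault at gfn 0x%llx\n",
		       (unsigned long long)mmio_spte_gfn(spte));
	return 0;
}
```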
Searching for an mmio gfn in the memslots is heavy since we need to walk all
the memslots; this feature reduces that cost, and it also avoids walking the
guest page table for the soft mmu.
[jan: fix operator precedence issue]
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu.h')
-rw-r--r-- | arch/x86/kvm/mmu.h | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 05310b105da..e374db9af02 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -49,6 +49,8 @@
 #define PFERR_FETCH_MASK (1U << 4)
 
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
+void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask);
+int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct);
 int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
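For context, here is a stand-alone sketch of how the two declarations added above are meant to be driven: once at setup time to tell the MMU which reserved-bit pattern marks an mmio spte, and once on the page-fault path to ask whether the faulting address hit a cached mmio spte. The function bodies and the return-value convention below are toy stand-ins so the sketch compiles; the real implementations are in the mmu.c side of this patch, outside this diffstat.

```c
/*
 * Stand-alone sketch of how the two new entry points are meant to be
 * driven.  The bodies are toy stand-ins so the example compiles; the real
 * implementations are in the mmu.c side of this patch, not in this hunk.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
struct kvm_vcpu { int dummy; };		/* opaque stand-in for the sketch */

static u64 mmio_mask_in_use;

/* Same signature as the declaration added to mmu.h above; toy body. */
void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
{
	mmio_mask_in_use = mmio_mask;
}

/* Same signature as the declaration added to mmu.h above; toy body. */
int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct)
{
	(void)vcpu;
	(void)direct;
	/* Pretend every low address hit a cached mmio spte. */
	return addr < (1ull << 32);
}

int main(void)
{
	struct kvm_vcpu vcpu = { 0 };

	/*
	 * Setup: pick the reserved-bit pattern that marks an mmio spte
	 * (for EPT, one that raises an EPT misconfiguration) and hand it
	 * to the MMU.  The value here is a placeholder.
	 */
	kvm_mmu_set_mmio_spte_mask(3ull << 62);

	/* Fault path: ask whether the faulting address hit a cached mmio spte. */
	int hit = handle_mmio_page_fault_common(&vcpu, 0xfee00000ull, true);
	printf("fast mmio path hit: %d (mask 0x%llx)\n",
	       hit, (unsigned long long)mmio_mask_in_use);
	return 0;
}
```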