path: root/virt/kvm/kvm_main.c
Age  Commit message  Author
2010-08-02  KVM: Return EFAULT from kvm ioctl when guest accesses bad area  (Gleb Natapov)
Currently, if a guest accesses an address that belongs to a memory slot but is not backed by a page, or the page is read-only, KVM treats it like an MMIO access. Remove that capability. It was never part of the interface and should not be relied upon. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-02  KVM: define hwpoison variables static  (Gleb Natapov)
They are not used outside of the file. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: Remove unnecessary divide operations  (Joerg Roedel)
This patch converts unnecessary divide and modulo operations in the KVM large page related code into logical operations. This allows gfn_t to be converted to u64 without breaking 32 bit builds. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
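A minimal sketch of the conversion in plain C; the constants and helper names here are illustrative, not the kernel's, but they show how a power-of-two group size lets '/' and '%' become '>>' and '&', which avoids 64-bit division helpers on 32 bit hosts once gfn_t is a u64:

    #include <stdint.h>

    typedef uint64_t gfn_t;

    #define HPAGE_GFN_SHIFT 9                         /* example: 512 small pages per large page */
    #define PAGES_PER_HPAGE (1ULL << HPAGE_GFN_SHIFT)

    /* was: (gfn / PAGES_PER_HPAGE) - (base_gfn / PAGES_PER_HPAGE) */
    static unsigned long lpage_index(gfn_t gfn, gfn_t base_gfn)
    {
            return (unsigned long)((gfn >> HPAGE_GFN_SHIFT) - (base_gfn >> HPAGE_GFN_SHIFT));
    }

    /* was: gfn % PAGES_PER_HPAGE */
    static unsigned long lpage_offset(gfn_t gfn)
    {
            return (unsigned long)(gfn & (PAGES_PER_HPAGE - 1));
    }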
2010-08-01  KVM: Fix a race condition for usage of is_hwpoison_address()  (Huang Ying)
is_hwpoison_address accesses the page table, so the caller must hold current->mm->mmap_sem in read mode. Fix its usage in kvm's hva_to_pfn accordingly, and add a comment to is_hwpoison_address to remind other users. Reported-by: Avi Kivity <avi@redhat.com> Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
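Roughly what the fixed call site in hva_to_pfn() looks like (a sketch, not a verbatim excerpt; hwpoison_page is the module-local placeholder page from the hwpoison handling above):

    /* is_hwpoison_address() walks the page tables, so take mmap_sem for read. */
    down_read(&current->mm->mmap_sem);
    if (is_hwpoison_address(addr)) {
            up_read(&current->mm->mmap_sem);
            get_page(hwpoison_page);
            return page_to_pfn(hwpoison_page);
    }
    up_read(&current->mm->mmap_sem);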
2010-08-01  KVM: Keep slot ID in memory slot structure  (Avi Kivity)
May be used for distinguishing between internal and user slots, or for sorting slots in size order. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: Add mini-API for vcpu->requests  (Avi Kivity)
Makes it a little more readable and hackable. Signed-off-by: Avi Kivity <avi@redhat.com>
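The mini-API is essentially a pair of thin wrappers over the request bitmap; a sketch (treat the exact names and placement as approximate):

    static inline void kvm_make_request(int req, struct kvm_vcpu *vcpu)
    {
            set_bit(req, &vcpu->requests);
    }

    /* Returns true, and clears the bit, if the request was pending. */
    static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
    {
            return test_and_clear_bit(req, &vcpu->requests);
    }

Call sites then read kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu) instead of open-coding set_bit()/test_and_clear_bit() on vcpu->requests.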
2010-08-01  KVM: Remove memory alias support  (Avi Kivity)
As advertised in feature-removal-schedule.txt. Equivalent support is provided by overlapping memory regions. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: Fix KVM_SET_SIGNAL_MASK with arg == NULL  (Andi Kleen)
When the user passes in a NULL mask, pass this on from the ioctl handler. Found by gcc 4.6's new warnings. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: cleanup "*new.rmap" type  (Lai Jiangshan)
The type of '*new.rmap' is not 'struct page *'; fix it. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-08-01  KVM: Update Red Hat copyrights  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: Consolidate arch specific vcpu ioctl locking  (Avi Kivity)
Now that all arch specific ioctls have centralized locking, it is easy to move it to the central dispatcher. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  KVM: move vcpu locking to dispatcher for generic vcpu ioctls  (Avi Kivity)
All vcpu ioctls need to be locked, so instead of locking each one specifically we lock at the generic dispatcher. This patch only updates generic ioctls and leaves arch specific ioctls alone. Signed-off-by: Avi Kivity <avi@redhat.com>
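A rough sketch of the intermediate state this leaves the dispatcher in (illustrative only; details of the real kvm_vcpu_ioctl() differ):

    vcpu_load(vcpu);                        /* one lock/load for all generic vcpu ioctls */
    switch (ioctl) {
    case KVM_RUN:
            r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run);
            break;
    /* ... other generic vcpu ioctls, with no per-case locking ... */
    default:
            /* arch specific ioctls still lock themselves at this stage,
             * so the lock is dropped around the arch dispatch */
            vcpu_put(vcpu);
            r = kvm_arch_vcpu_ioctl(filp, ioctl, arg);
            vcpu_load(vcpu);
            break;
    }
    vcpu_put(vcpu);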
2010-08-01  KVM: Avoid killing userspace through guest SRAO MCE on unmapped pages  (Huang Ying)
In common cases, a guest SRAO MCE causes the corresponding poisoned page to be unmapped and a SIGBUS to be sent to QEMU-KVM, which then relays the MCE to the guest OS. But it is reported that if the poisoned page is accessed in the guest after unmapping and before the MCE is relayed to the guest OS, userspace will be killed. The reason is as follows. Because the poisoned page has been unmapped, a guest access causes a guest exit and kvm_mmu_page_fault is called. kvm_mmu_page_fault cannot get the poisoned page for the fault address, so kernel and user space MMIO processing is tried in turn. In user MMIO processing, the poisoned page is accessed again, and userspace is killed by force_sig_info. To fix the bug, kvm_mmu_page_fault sends a HWPOISON signal to QEMU-KVM and does not try kernel and user space MMIO processing for the poisoned page. [xiao: fix warning introduced by avi] Reported-by: Max Asbock <masbock@linux.vnet.ibm.com> Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-05-19  KVM: Let vcpu structure alignment be determined at runtime  (Avi Kivity)
vmx and svm vcpus have different contents and therefore may have different alignment requirements. Let each specify its required alignment. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-05-17  KVM: Remove test-before-set optimization for dirty bits  (Takuya Yoshikawa)
As Avi pointed out, the test-bit part in mark_page_dirty() was important in the days of shadow paging, but EPT and NPT have since become common and the chance of faulting a page more than once per iteration is small. So let's remove the test to avoid the extra access. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
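The change itself is tiny; schematically (a sketch of mark_page_dirty(), with the exact bit-set primitive elided):

    gfn_t rel_gfn = gfn - memslot->base_gfn;

    /* before: avoid the write when the bit is already set (shadow-paging era) */
    if (!test_bit(rel_gfn, memslot->dirty_bitmap))
            set_bit(rel_gfn, memslot->dirty_bitmap);

    /* after: EPT/NPT rarely fault the same page twice per iteration, so just set it */
    set_bit(rel_gfn, memslot->dirty_bitmap);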
2010-05-17  KVM: do not call hardware_disable() on CPU_UP_CANCELED  (Lai Jiangshan)
On CPU_UP_CANCELED, hardware_enable() has not been called on the CPU that is coming up, because raw_notifier_call_chain(CPU_ONLINE) has not been called for this cpu. Drop the handling for CPU_UP_CANCELED. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-05-17  KVM: use the correct RCU API for PROVE_RCU=y  (Lai Jiangshan)
The RCU/SRCU APIs have already changed for proving RCU usage. I got the following dmesg when PROVE_RCU=y because we used the incorrect API. This patch converts rcu_dereference() to srcu_dereference() or a family API. =================================================== [ INFO: suspicious rcu_dereference_check() usage. ] --------------------------------------------------- arch/x86/kvm/mmu.c:3020 invoked rcu_dereference_check() without protection! other info that might help us debug this: rcu_scheduler_active = 1, debug_locks = 0 2 locks held by qemu-system-x86/8550: #0: (&kvm->slots_lock){+.+.+.}, at: [<ffffffffa011a6ac>] kvm_set_memory_region+0x29/0x50 [kvm] #1: (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa012262d>] kvm_arch_commit_memory_region+0xa6/0xe2 [kvm] stack backtrace: Pid: 8550, comm: qemu-system-x86 Not tainted 2.6.34-rc4-tip-01028-g939eab1 #27 Call Trace: [<ffffffff8106c59e>] lockdep_rcu_dereference+0xaa/0xb3 [<ffffffffa012f6c1>] kvm_mmu_calculate_mmu_pages+0x44/0x7d [kvm] [<ffffffffa012263e>] kvm_arch_commit_memory_region+0xb7/0xe2 [kvm] [<ffffffffa011a5d7>] __kvm_set_memory_region+0x636/0x6e2 [kvm] [<ffffffffa011a6ba>] kvm_set_memory_region+0x37/0x50 [kvm] [<ffffffffa015e956>] vmx_set_tss_addr+0x46/0x5a [kvm_intel] [<ffffffffa0126592>] kvm_arch_vm_ioctl+0x17a/0xcf8 [kvm] [<ffffffff810a8692>] ? unlock_page+0x27/0x2c [<ffffffff810bf879>] ? __do_fault+0x3a9/0x3e1 [<ffffffffa011b12f>] kvm_vm_ioctl+0x364/0x38d [kvm] [<ffffffff81060cfa>] ? up_read+0x23/0x3d [<ffffffff810f3587>] vfs_ioctl+0x32/0xa6 [<ffffffff810f3b19>] do_vfs_ioctl+0x495/0x4db [<ffffffff810e6b2f>] ? fget_light+0xc2/0x241 [<ffffffff810e416c>] ? do_sys_open+0x104/0x116 [<ffffffff81382d6d>] ? retint_swapgs+0xe/0x13 [<ffffffff810f3ba6>] sys_ioctl+0x47/0x6a [<ffffffff810021db>] system_call_fastpath+0x16/0x1b Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
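The shape of the conversion, in sketch form (the SRCU read side is usually already held on these paths; shown here for completeness):

    struct kvm_memslots *slots;
    int idx;

    /* before: trips PROVE_RCU, because memslots is SRCU-protected, not RCU-protected */
    slots = rcu_dereference(kvm->memslots);

    /* after: name the srcu_struct the pointer is protected by */
    idx = srcu_read_lock(&kvm->srcu);
    slots = srcu_dereference(kvm->memslots, &kvm->srcu);
    /* ... use slots ... */
    srcu_read_unlock(&kvm->srcu, idx);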
2010-05-17  KVM: limit the number of pages per memory slot  (Takuya Yoshikawa)
This patch limits the number of pages per memory slot so that we are freed from extra care about type issues. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-05-17  KVM: coalesced_mmio: fix kvm_coalesced_mmio_init()'s error handling  (Takuya Yoshikawa)
kvm_coalesced_mmio_init() keeps holding the addresses of a coalesced mmio ring page and dev even after it has freed them. Also, if this function fails, though it might be rare, it suggests the system is in a serious state, so we had better stop the work following kvm_create_vm(). This patch fixes these problems. We move the coalesced mmio's initialization out of kvm_create_vm(). This seems natural because it includes a registration which can be done only when the vm is successfully created. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-05-17  KVM: fix the errno of ioctl KVM_[UN]REGISTER_COALESCED_MMIO failure  (Wei Yongjun)
This patch changes the errno of the ioctls KVM_[UN]REGISTER_COALESCED_MMIO from -EINVAL to -ENXIO when no coalesced mmio dev exists. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
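In sketch form, the check at the top of the (un)register handlers simply reports a missing device as such:

    struct kvm_coalesced_mmio_dev *dev = kvm->coalesced_mmio_dev;

    if (dev == NULL)
            return -ENXIO;          /* was -EINVAL: the argument is fine, the device just isn't there */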
2010-05-17  KVM: cleanup kvm trace  (Xiao Guangrong)
This patch does the following: - no need to call tracepoint_synchronize_unregister() when the kvm module is unloaded, since ftrace can handle it - cleanup of ftrace's macros Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-04-25  KVM: update gfn_to_hva() to use gfn_to_hva_memslot()  (Takuya Yoshikawa)
Marcelo introduced gfn_to_hva_memslot() when he implemented gfn_to_pfn_memslot(). Let's use this for gfn_to_hva() too. Note: also remove parentheses next to return as checkpatch said to do. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-04-21  KVM: Add missing srcu_read_lock() for kvm_mmu_notifier_release()  (Lai Jiangshan)
I got this dmesg because srcu_read_lock() is missing in kvm_mmu_notifier_release(). =================================================== [ INFO: suspicious rcu_dereference_check() usage. ] --------------------------------------------------- arch/x86/kvm/x86.h:72 invoked rcu_dereference_check() without protection! other info that might help us debug this: rcu_scheduler_active = 1, debug_locks = 0 2 locks held by qemu-system-x86/3100: #0: (rcu_read_lock){.+.+..}, at: [<ffffffff810d73dc>] __mmu_notifier_release+0x38/0xdf #1: (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa0130a6a>] kvm_mmu_zap_all+0x21/0x5e [kvm] stack backtrace: Pid: 3100, comm: qemu-system-x86 Not tainted 2.6.34-rc3-22949-gbc8a97a-dirty #2 Call Trace: [<ffffffff8106afd9>] lockdep_rcu_dereference+0xaa/0xb3 [<ffffffffa0123a89>] unalias_gfn+0x56/0xab [kvm] [<ffffffffa0119600>] gfn_to_memslot+0x16/0x25 [kvm] [<ffffffffa012ffca>] gfn_to_rmap+0x17/0x6e [kvm] [<ffffffffa01300c1>] rmap_remove+0xa0/0x19d [kvm] [<ffffffffa0130649>] kvm_mmu_zap_page+0x109/0x34d [kvm] [<ffffffffa0130a7e>] kvm_mmu_zap_all+0x35/0x5e [kvm] [<ffffffffa0122870>] kvm_arch_flush_shadow+0x16/0x22 [kvm] [<ffffffffa01189e0>] kvm_mmu_notifier_release+0x15/0x17 [kvm] [<ffffffff810d742c>] __mmu_notifier_release+0x88/0xdf [<ffffffff810d73dc>] ? __mmu_notifier_release+0x38/0xdf [<ffffffff81040848>] ? exit_mm+0xe0/0x115 [<ffffffff810c2cb0>] exit_mmap+0x2c/0x17e [<ffffffff8103c472>] mmput+0x2d/0xd4 [<ffffffff81040870>] exit_mm+0x108/0x115 [...] Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-04-20  KVM: fix the handling of dirty bitmaps to avoid overflows  (Takuya Yoshikawa)
Int is not long enough to store the size of a dirty bitmap. This patch fixes this problem with the introduction of a wrapper function to calculate the sizes of dirty bitmaps. Note: in mark_page_dirty(), we have to consider the fact that __set_bit() takes the offset as int, not long. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
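A sketch of the wrapper (modelled on kvm_dirty_bitmap_bytes(); treat the exact form as illustrative). Computing the size in unsigned long rather than int keeps very large slots from overflowing:

    static unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
    {
            return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
    }

Callers such as kvm_vm_ioctl_get_dirty_log() then allocate, clear and copy kvm_dirty_bitmap_bytes(memslot) bytes instead of recomputing the size with int arithmetic.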
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  (Tejun Heo)
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies. The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion. http://userweb.kernel.org/~tj/misc/slabh-sweep.py The script does the following. * Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h. * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order. * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file. The conversion was done in the following steps. 1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files. 2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files. 3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind. 4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually. 5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary. 6. percpu.h was updated not to include slab.h. 7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq). * x86 and x86_64 UP and SMP allmodconfig and a custom test config. * powerpc and powerpc64 SMP allmodconfig * sparc and sparc64 SMP allmodconfig * ia64 SMP allmodconfig * s390 SMP allmodconfig * alpha SMP allmodconfig * um on x86_64 SMP allmodconfig 8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point. Given the fact that I had only a couple of failures from tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-01  KVM: Convert kvm->requests_lock to raw_spinlock_t  (Avi Kivity)
The code relies on kvm->requests_lock inhibiting preemption. Noted by Jan Kiszka. Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01  KVM: Introduce kvm_host_page_size  (Joerg Roedel)
This patch introduces a generic function to find out the host page size for a given gfn. This function is needed by the kvm iommu code. This patch also simplifies the x86 host_mapping_level function. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
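Roughly, the generic helper resolves the gfn to a host virtual address and asks the backing VMA for its page size (a sketch; error handling abbreviated):

    unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn)
    {
            struct vm_area_struct *vma;
            unsigned long addr, size = PAGE_SIZE;

            addr = gfn_to_hva(kvm, gfn);
            if (kvm_is_error_hva(addr))
                    return PAGE_SIZE;

            down_read(&current->mm->mmap_sem);
            vma = find_vma(current->mm, addr);
            if (vma)
                    size = vma_kernel_pagesize(vma);
            up_read(&current->mm->mmap_sem);

            return size;
    }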
2010-03-01  KVM: Fix kvm_coalesced_mmio_ring duplicate allocation  (Sheng Yang)
Commit 0953ca73 "KVM: Simplify coalesced mmio initialization" allocates kvm_coalesced_mmio_ring in kvm_coalesced_mmio_init(), but did not discard the original allocation... Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: fix cleanup_srcu_struct on vm destruction  (Marcelo Tosatti)
cleanup_srcu_struct on VM destruction remains broken: BUG: unable to handle kernel paging request at ffffffffffffffff IP: [<ffffffff802533d2>] srcu_read_lock+0x16/0x21 RIP: 0010:[<ffffffff802533d2>] [<ffffffff802533d2>] srcu_read_lock+0x16/0x21 Call Trace: [<ffffffffa05354c4>] kvm_arch_vcpu_uninit+0x1b/0x48 [kvm] [<ffffffffa05339c6>] kvm_vcpu_uninit+0x9/0x15 [kvm] [<ffffffffa0569f7d>] vmx_free_vcpu+0x7f/0x8f [kvm_intel] [<ffffffffa05357b5>] kvm_arch_destroy_vm+0x78/0x111 [kvm] [<ffffffffa053315b>] kvm_put_kvm+0xd4/0xfe [kvm] Move it to kvm_arch_destroy_vm. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
2010-03-01  KVM: convert slots_lock to a mutex  (Marcelo Tosatti)
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: convert io_bus to SRCU  (Marcelo Tosatti)
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: x86: switch kvm_set_memory_alias to SRCU update  (Marcelo Tosatti)
Using a similar two-step procedure as for memslots. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update  (Marcelo Tosatti)
Use two steps for memslot deletion: mark the slot invalid (which stops instantiation of new shadow pages for that slot, but allows destruction), then instantiate the new empty slot. Also simplifies kvm_handle_hva locking. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
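Schematically, deletion now publishes an invalid copy of the slot, waits out SRCU readers, and only then installs the final layout (a sketch of __kvm_set_memory_region(); copy_memslots() is an assumed stand-in for the open-coded copy):

    if (!npages) {                                   /* the slot is being deleted */
            slots = copy_memslots(kvm->memslots);    /* assumed helper: duplicate the slot array */
            slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;

            old_memslots = kvm->memslots;
            rcu_assign_pointer(kvm->memslots, slots);
            synchronize_srcu(&kvm->srcu);            /* all readers now see the invalid slot */

            kvm_arch_flush_shadow(kvm);              /* shadow pages for the slot can be destroyed */
            kfree(old_memslots);
    }
    /* step two: publish the layout with the slot actually removed, using the same pattern */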
2010-03-01  KVM: use gfn_to_pfn_memslot in kvm_iommu_map_pages  (Marcelo Tosatti)
So it's possible to iommu map a memslot before making it visible to kvm. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: introduce gfn_to_pfn_memslot  (Marcelo Tosatti)
Which takes a memslot pointer instead of using kvm->memslots. To be used by the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: split kvm_arch_set_memory_region into prepare and commit  (Marcelo Tosatti)
Required for the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: modify memslots layout in struct kvm  (Marcelo Tosatti)
Have a pointer to an allocated region inside struct kvm. [alex: fix ppc book 3s] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-01  KVM: Simplify coalesced mmio initialization  (Avi Kivity)
- add destructor function - move related allocation into constructor - add stubs for !CONFIG_KVM_MMIO Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01  KVM: Remove ifdefs from mmu notifier initialization  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-03-01  KVM: Disentangle mmu notifiers and coalesced_mmio registration  (Avi Kivity)
They aren't related. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-27  KVM: get rid of kvm_create_vm() unused label warning on s390  (Heiko Carstens)
arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_create_vm': arch/s390/kvm/../../../virt/kvm/kvm_main.c:409: warning: label 'out_err' defined but not used Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-27  KVM: Fix possible circular locking in kvm_vm_ioctl_assign_device()  (Sheng Yang)
One possible order is: KVM_CREATE_IRQCHIP ioctl (takes kvm->lock) -> kvm_iobus_register_dev() -> down_write(kvm->slots_lock). The other one is in kvm_vm_ioctl_assign_device(), which takes kvm->slots_lock first, then kvm->lock. Update the comment on lock order as well. Observed via kernel locking debug warnings. Cc: stable@kernel.org Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-22  anonfd: Allow making anon files read-only  (Roland Dreier)
It seems a couple places such as arch/ia64/kernel/perfmon.c and drivers/infiniband/core/uverbs_main.c could use anon_inode_getfile() instead of a private pseudo-fs + alloc_file(), if only there were a way to get a read-only file. So provide this by having anon_inode_getfile() create a read-only file if we pass O_RDONLY in flags. Signed-off-by: Roland Dreier <rolandd@cisco.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2009-12-09  Merge commit 'origin/master' into next  (Benjamin Herrenschmidt)
Conflicts: include/linux/kvm.h
2009-12-03  KVM: Allow internal errors reported to userspace to carry extra data  (Avi Kivity)
Usually userspace will freeze the guest so we can inspect it, but some internal state is not available. Add extra data to internal error reporting so we can expose it to the debugger. Extra data is specific to the suberror. Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03  KVM: Enable 32bit dirty log pointers on 64bit host  (Arnd Bergmann)
With big endian userspace, we can't quite figure out if a pointer is 32 bit (shifted >> 32) or 64 bit when we read a 64 bit pointer. This is what happens with dirty logging. To get the pointer interpreted correctly, we thus need Arnd's patch to implement a compat layer for the ioctl: A better way to do this is to add a separate compat_ioctl() method that converts this for you. Based on initial patch from Arnd Bergmann. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
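The compat layer boils down to a 32-bit layout of struct kvm_dirty_log plus an explicit pointer widening; a sketch modelled on the real compat_kvm_dirty_log (details illustrative):

    struct compat_kvm_dirty_log {
            __u32 slot;
            __u32 padding1;
            union {
                    compat_uptr_t dirty_bitmap;      /* the 32-bit userspace pointer */
                    __u64 padding2;
            };
    };

    /* in kvm_vm_compat_ioctl(), KVM_GET_DIRTY_LOG case: */
    log.slot         = compat_log.slot;
    log.padding1     = compat_log.padding1;
    log.padding2     = compat_log.padding2;
    log.dirty_bitmap = compat_ptr(compat_log.dirty_bitmap);   /* widen the pointer explicitly */
    r = kvm_vm_ioctl_get_dirty_log(kvm, &log);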
2009-12-03  KVM: introduce kvm_vcpu_on_spin  (Zhai, Edwin)
Introduce kvm_vcpu_on_spin, to be used by VMX/SVM to yield processing once the cpu detects pause-based looping. Signed-off-by: "Zhai, Edwin" <edwin.zhai@intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
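The first version is just a short, bounded sleep on the vcpu wait queue (a sketch of the original helper; directed yield came much later):

    void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu)
    {
            ktime_t expires;
            DEFINE_WAIT(wait);

            prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);

            /* Sleep for 100 us, and hope the lock-holder got scheduled */
            expires = ktime_add_ns(ktime_get(), 100000UL);
            schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);

            finish_wait(&vcpu->wq, &wait);
    }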
2009-12-03  KVM: Activate Virtualization On Demand  (Alexander Graf)
X86 CPUs need to have some magic happening to enable the virtualization extensions on them. This magic can result in unpleasant results for users, like blocking other VMMs from working (vmx) or using invalid TLB entries (svm). Currently KVM activates virtualization when the respective kernel module is loaded. This blocks us from autoloading KVM modules without breaking other VMMs. To circumvent this problem at least a bit, this patch introduces on-demand activation of virtualization. This means that virtualization is instead enabled on creation of the first virtual machine and disabled on destruction of the last one. Using this, KVM can be easily autoloaded while keeping other hypervisors usable. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
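A sketch of the refcounted enable path (names follow hardware_enable_all() in kvm_main.c, but treat the body as illustrative):

    static int hardware_enable_all(void)
    {
            int r = 0;

            spin_lock(&kvm_lock);
            kvm_usage_count++;
            if (kvm_usage_count == 1) {              /* first VM: turn VMX/SVM on everywhere */
                    atomic_set(&hardware_enable_failed, 0);
                    on_each_cpu(hardware_enable, NULL, 1);
                    if (atomic_read(&hardware_enable_failed)) {
                            hardware_disable_all_nolock();
                            r = -EBUSY;
                    }
            }
            spin_unlock(&kvm_lock);

            return r;
    }

kvm_destroy_vm() calls the matching hardware_disable_all(), which turns the extensions back off once the usage count drops to zero.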
2009-12-03  KVM: Move assigned device code to own file  (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
2009-12-03  KVM: Drop kvm->irq_lock lock from irq injection path  (Gleb Natapov)
The only thing it protects now is interrupt injection into the lapic, and this can work locklessly. Even now, with kvm->irq_lock in place, access to the lapic is not entirely serialized, since vcpu access doesn't take kvm->irq_lock. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>