Monolithic and "Micro"-hypervisors
In Xen, the virtual drivers run in a different address space than the hypervisor, which essentially makes it a "micro"-hypervisor. There have been many discussions about monolithic versus micro-kernels, and they often boil down to security/stability versus performance: since switching address spaces (a context switch) takes extra time, micro-kernels are slower, but running many processes in the same address space can be hazardous. The same arguments apply to monolithic and micro-hypervisors.
The virtual drivers that Xen exposes to its domains run in a special VM, dom0. Each time the virtual drivers in dom0 need to, for example, multiplex a packet from the network driver, the VMM scheduler switches to dom0 kernel space, and the packet is multiplexed in dom0 user space. Receiving this packet in domU thus incurs at least six context switches: domU -> VMM scheduler -> dom0 kernel space -> dom0 user space -> dom0 kernel space -> VMM scheduler -> domU
KVM is not really a monolithic hypervisor either, but it is closer: VM -> Linux kernel space -> Linux user space -> Linux kernel space -> VM
A monolithic hypervisor would multiplex drivers in the same address space as the VMM, so only two context switches are incurred: VM -> VMM space -> VM
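As a rough way to observe this cost on a live system, Linux exposes per-process context-switch counters in /proc. The sketch below reads them for the current shell; substituting the PID of a VM's host process is an assumption about your setup:

```shell
# Linux tracks voluntary and involuntary context switches per process
# in /proc/<pid>/status. 'self' means the current shell; substitute a
# VM process's PID to see how many switches its I/O costs on the host.
grep ctxt /proc/self/status
```

Comparing these counters before and after a network-heavy workload in the guest gives a crude measure of the switching overhead discussed above.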
Qemu and, in effect, KVM and HVM Xen use a particular kind of OS image to boot a VM, one which includes the kernel and a boot sector, similar to a bootable CD. This is different from a paravirt Xen OS image, which only includes the userspace parts of the OS and is a raw copy of a volume with a root filesystem.
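To illustrate, a paravirt-style image is nothing more than a filesystem in a file; a minimal empty one can be created with standard tools (the filename and size here are arbitrary examples):

```shell
# Create an empty 64 MB raw image file; a paravirt Xen root image is
# just such a file with a filesystem and a root directory tree inside.
dd if=/dev/zero of=guest-root.img bs=1M count=64

# A filesystem would then be created on it and populated, e.g.:
#   mkfs.ext3 guest-root.img
#   mount -o loop guest-root.img /mnt && cp -a /path/to/rootfs/. /mnt/
```

Note that there is no partition table, boot sector, or kernel inside such an image; with paravirt Xen the kernel is supplied separately by dom0.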
In KVM, a VM runs on top of a normal Linux kernel as a normal process. Killing the KVM process with the 'kill' command, for example, kills the VM.
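For instance, the VM's process can be found and signalled with ordinary tools. The 'qemu' pattern below is an assumption; the actual process name varies by distribution (e.g. qemu-system-x86_64 or qemu-kvm):

```shell
# List any QEMU/KVM VM processes with their command lines; print a
# fallback message if none are running.
pgrep -a -f qemu || echo "no VMs running"

# Sending a signal to such a PID shuts the VM down abruptly, much like
# pulling the power cord on a physical machine:
#   kill <pid>
```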
There is no logical network between the VMs and the VMM's physical network interface. This can be an advantage or a disadvantage:
There is less isolation between the VMs and the VMM, which can affect stability and security. In terms of performance it is hard to say whether one approach is better than the other: having a virtual network driver between VMs carries a performance penalty, and direct access to hardware is always faster. In both KVM and Xen it should be possible to bypass the driver domain (dom0 in Xen, or the KVM VMM) and give VMs direct access to hardware; with Xen paravirt this has already been done. Bypassing the driver domain is, however, harder to implement in full virtualization (Xen HVM and KVM), although newer virtualization hardware extensions (e.g. VT-d) may solve this.

KVM does not yet support SMP in VMs. KVM is included in the vanilla Linux 2.6.20 kernel and can be patched into older kernels; its source code is less intrusive on the kernel source and should be easier to backport to older versions of Linux. KVM relies on the VT or SVM hardware extensions of x86 CPUs, which should be available in most new PC computers.
On Intel CPUs (vmx):

$ cat /proc/cpuinfo | grep vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm tpr_shadow vnmi flexpriority ...
On AMD CPUs (svm), or to check for either flag at once:
$ egrep '(vmx|svm)' --color=always /proc/cpuinfo
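The check above can be wrapped in a small script; a minimal sketch (the message strings are my own):

```shell
# Report which hardware virtualization extension, if any, the CPU has.
# -w matches the flag as a whole word within the flags line.
if grep -qw vmx /proc/cpuinfo; then
    echo "Intel VT (vmx) present"
elif grep -qw svm /proc/cpuinfo; then
    echo "AMD SVM (svm) present"
else
    echo "no hardware virtualization extensions found"
fi
```

Note that the flag can be present but the extension disabled in the BIOS; the flag check alone does not guarantee KVM will load.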
See Virtual Box.