Migrating Windows 10 from VirtualBox to LibVirt


I'd like to run Android emulators on my PC for unit tests of python3-android. However, the Android emulator failed to run because the same host was already running a Windows VM under VirtualBox, and VirtualBox VMs cannot run alongside KVM-based ones, such as recent Android system images for x86/x86_64. After some thought, I decided to move away from VirtualBox and try out LibVirt, which uses qemu under the hood and is compatible with the Android emulator.


Mostly I followed a Chinese blog post. A tricky point is permissions. libvirtd, the daemon for LibVirt, uses PolicyKit to restrict connections to authenticated users only. Either I need to add myself to the libvirt group as described in the aforementioned blog post, or I need to enter my password whenever I want to connect to LibVirt. In the end, I found that the remote-viewer command from the virt-viewer package allows connecting to virtual machines without bothering with authentication. As I seldom modify settings of virtual machines, I didn't add myself to the libvirt group, following the principle of least privilege.
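For example, connecting to the VM's display with remote-viewer looks roughly like this (the SPICE port 5900 is an assumption; the actual display URI of a running domain can be queried with virsh domdisplay):

$ remote-viewer spice://127.0.0.1:5900

This only attaches to the guest's display, which is why no PolicyKit authentication against libvirtd is needed.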

Performance tuning

Like VirtualBox, the default configuration of a LibVirt VM emulates hardware that can boot Windows without extra steps, but the performance is awful, especially for disk I/O and graphics. To get better performance, I set up VirtIO to enjoy the benefits of paravirtualization.
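As a sketch, the relevant part of the domain XML for a VirtIO disk looks roughly like this (the image path and qcow2 format are assumptions; the XML can be edited with virsh edit):

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/win10.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>

The key part is bus='virtio' on the target element; with the default configuration the bus would be an emulated SATA or IDE controller instead.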

Enabling VirtIO-backed disks

This is somewhat tricky: Windows does not seem to install the VirtIO driver for the disk controller if I boot the VM with the default disk controller, and if I switch to the VirtIO-based disk controller, Windows does not boot, showing an INACCESSIBLE BOOT DEVICE error. In the end, I installed the driver for the disk controller in the Windows Recovery Environment (Windows RE). First, mount the VirtIO ISO to the VM before booting. During boot, hit F8 repeatedly until Windows RE is reached. In a command prompt, I first load the VirtIO driver into the running Windows RE system (assuming the VirtIO ISO is mounted on D:\),

C:\>drvload D:\viostor\w10\amd64\viostor.inf

And then I can reach my Windows installation on the VirtIO-based disk and install the driver into the normal system (assuming mounted on E:\),

C:\>dism /image:E: /add-driver /driver:D:\viostor\w10\amd64\viostor.inf

Other hardware components, including the QXL graphics controller and the VirtIO ethernet adapter, are much easier to set up, as they can be configured with GUI tools on a running Windows instance.

CPU usage tuning

Another performance issue I noticed is that the CPU utilization of the qemu process is quite high when the VM is idle. I remember the value being below 10% for a VirtualBox VM. Google brought me to a Reddit post, and after disabling HPET, the CPU usage went down to the normal range.
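For reference, HPET is disabled in the <clock> section of the domain XML; a sketch (the offset value is an assumption, as Windows guests typically use localtime):

  <clock offset='localtime'>
    <timer name='hpet' present='no'/>
  </clock>

Without HPET, the guest falls back to other timer sources, which avoids the frequent timer interrupts that keep the host CPU busy while the guest is idle.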


After some time I noticed that the CPU usage of the qemu process was quite high again, often going beyond 50%. Using the perf command mentioned by /u/tholin in the Reddit post above, I found that the vmx_l1d_flush function was taking a noticeable share of time.

$ sudo perf kvm --host top -p `pidof qemu-system-x86_64`
4.84%  [kernel]                    [k] vmx_l1d_flush
3.33%  [kernel]                    [k] __fget_files
2.75%  [kernel]                    [k] native_write_msr
2.62%  [kernel]                    [k] do_sys_poll
2.30%  [kernel]                    [k] _raw_spin_lock_irqsave

The function with the highest overhead, vmx_l1d_flush, seems to be related to L1TF. From the kernel source code, this function is controlled by a kernel module parameter, vmentry_l1d_flush. According to this Ubuntu wiki page, it can be disabled with the following command:

$ echo never | sudo tee /sys/module/kvm_intel/parameters/vmentry_l1d_flush
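To make the setting persist across reboots, one could instead set the parameter in a modprobe configuration file (the file name here is arbitrary); note that, as described below, this weakens the L1TF mitigation:

# /etc/modprobe.d/kvm-intel.conf
options kvm_intel vmentry_l1d_flush=never

The parameter is read when the kvm_intel module is loaded, so the module has to be reloaded (or the host rebooted) for it to take effect.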

After shutting down the VM, setting the module parameter and restarting the VM, the CPU usage of the qemu process went back to under 10% most of the time. As this is a security mitigation, I reverted to the default and will keep it that way unless I observe real performance bottlenecks.

Shared folder performance

I used the shared folders feature of VirtualBox to let virtual machines access files on the host. The equivalent feature in LibVirt seems to be VirtIO-FS, but it requires the WinFSP driver, and I had some issues using SSHFS-Win with WinFSP. Instead, I chose to deploy a Samba server on the host. A performance bottleneck I noticed is that when the first Office file is opened after booting, there is a delay of several seconds. Using Wireshark, I found connection attempts from the VM to port 80 on the host with User-Agent: DavClnt, and there is already a blog post about it. It seems the WebClient service still starts even if its startup type is set to manual, so I disabled the service altogether, and there is no longer a delay when files are opened.
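For reference, disabling the WebClient service can be done from an elevated command prompt in the guest (the space after start= is required by the sc syntax):

C:\>sc config WebClient start= disabled
C:\>sc stop WebClient

This prevents Windows from probing the Samba host over WebDAV before falling back to SMB, which is what caused the multi-second delay.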

Memories about VirtualBox

My experience with VirtualBox was great. I had used it on Windows, macOS and Linux, with only minor hiccups. The host kernel modules for VirtualBox are out of tree, so upgrading the kernel requires additional care. The VirtualBox Guest Additions follow the same release schedule as the main VirtualBox program, so there were often Windows notifications about upgrading them. Furthermore, the extension pack for VirtualBox is not open source, and I had to package it manually quite often. With LibVirt and qemu, those minor issues are gone: KVM is built into the kernel, the VirtIO Windows drivers are not updated often, and the open source version of LibVirt is already feature-complete for me. To me, LibVirt is not substantially better, but I guess I will not go back to VirtualBox :)