Create a Gaming HVM


Everything needed is referenced here


You have a functional Windows HVM (Windows 7 or Windows 10). The "how to" for this part can be found in the Qubes OS documentation and here: Useful GitHub comment. However, a few tips:


To have a Windows HVM for gaming, you must have:

In my case:


Short list of things to do to make the GPU passthrough work:


Warning: I am far from fully understanding IOMMU groups; check online references on the subject. It seems that a GPU passthrough can only succeed if you also pass through everything that is in the same IOMMU group as the GPU. Also, you can't see your IOMMU groups while running Xen (the information is hidden from dom0). So, what I did: I booted from a Linux Mint live USB, enabled the IOMMU in GRUB (amd_iommu=on iommu=pt), and then displayed the folder structure of /sys/kernel/iommu_groups:

tree /sys/kernel/iommu_groups

My secondary GPU was alone in its IOMMU group.
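If you want the group numbers together with their devices in one listing, the sysfs layout can be walked with a small script. This is a sketch (list_iommu_groups is my own helper name), meant to run from the live session, since dom0 hides the groups:

```shell
#!/bin/sh
# Walk /sys/kernel/iommu_groups and print each group with its PCI devices.
# Takes an optional alternative sysfs root (useful for testing).
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for group in "$base"/*; do
        [ -d "$group" ] || continue
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            # Print the PCI address; for full device names, pipe the
            # address through: lspci -nns "${dev##*/}"
            echo "  ${dev##*/}"
        done
    done
}

list_iommu_groups
```

If your GPU's group contains other devices (besides its own audio function), expect to pass those through as well.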

GRUB modification

You must hide your secondary GPU from dom0. To do that, you have to edit the GRUB configuration. In a dom0 terminal, type:

qvm-pci

Then find the device IDs for your secondary GPU. In my case, they are dom0:0a_00.0 and dom0:0a_00.1. Edit /etc/default/grub and add the PCI hiding option:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0,0a:00.1 "

Then regenerate the GRUB configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg
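After rebooting, the hidden devices should appear in the rd.qubes.hide_pci entry of the kernel command line. A quick sanity check, as a sketch (check_hidden is a name I made up; the cmdline file is overridable only for testing):

```shell
#!/bin/sh
# Report whether a PCI device (BDF such as 0a:00.0) is listed in
# rd.qubes.hide_pci on the kernel command line.
check_hidden() {
    bdf="$1"
    cmdline_file="${2:-/proc/cmdline}"   # second argument only used for testing
    if grep -q "rd\.qubes\.hide_pci=[^ ]*${bdf}" "$cmdline_file"; then
        echo "${bdf}: hidden"
    else
        echo "${bdf}: NOT hidden"
    fi
}

check_hidden 0a:00.0
check_hidden 0a:00.1
```

Another way to confirm the hiding is lspci -k in dom0, which should show the device bound to the pciback driver instead of its normal one.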

Patching stubdom-linux-rootfs.gz

Follow the instructions here:

Copy-paste of the comment:
This is caused by the default TOLUD (Top of Low Usable DRAM) of 3.75G provided by QEMU not being large enough to accommodate the larger BARs that a graphics card typically has. The code to pass a custom max-ram-below-4g value to the QEMU command line does exist in the libxl_dm.c file of Xen, but there is no functionality in libvirt to add this parameter. It is possible to add this parameter to the QEMU command line manually by doing the following in a dom0 terminal:

mkdir stubroot
cp /usr/lib/xen/boot/stubdom-linux-rootfs stubroot/stubdom-linux-rootfs.gz
cd stubroot
gunzip stubdom-linux-rootfs.gz
cpio -i -d -H newc --no-absolute-filenames < stubdom-linux-rootfs
rm stubdom-linux-rootfs
nano init

Before the line "#$dm_args and $kernel are separated with \x1b to allow for spaces in arguments." add:

dm_args=$(echo "$dm_args" \
 | sed "s/-machine\\${SP}xenfv/-machine\\${SP}xenfv,max-ram-below-4g=3.5G/g")

Then execute:

find . -print0 | cpio --null -ov \
--format=newc | gzip -9 > ../stubdom-linux-rootfs
sudo mv ../stubdom-linux-rootfs /usr/lib/xen/boot/
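Before (or after) installing the repacked rootfs, you can sanity-check that it really contains the new option: the archive is just a gzipped cpio whose init script is plain text. A sketch (check_patched is my own name):

```shell
#!/bin/sh
# Check that a stubdom rootfs (a gzipped cpio archive) mentions the
# max-ram-below-4g option somewhere in its contents.
check_patched() {
    if zcat "$1" 2>/dev/null | grep -aq "max-ram-below-4g"; then
        echo "$1: patched"
    else
        echo "$1: not patched"
    fi
}

check_patched /usr/lib/xen/boot/stubdom-linux-rootfs
```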

Note that this will apply the change to all HVMs, so if you have any other HVM with more than 3.5G of RAM assigned, it will not start without the adapter being passed through. Ideally, to fix this, libvirt should be extended to pass the max-ram-below-4g parameter through to Xen, and then a calculation added to determine the correct TOLUD based on the total BAR size of the PCI devices that are being passed through to the VM.
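For reference, the sed substitution in the patch simply appends the max-ram-below-4g option to QEMU's -machine argument. A minimal illustration using a plain space as the separator (the real init script separates arguments with \x1b, hence the ${SP} in the patch):

```shell
#!/bin/sh
# Demonstrate the rewrite the patch performs on the QEMU argument string.
dm_args="-machine xenfv -m 8192"
patched=$(echo "$dm_args" | sed 's/-machine xenfv/-machine xenfv,max-ram-below-4g=3.5G/')
echo "$patched"
# → -machine xenfv,max-ram-below-4g=3.5G -m 8192
```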

Pass the GPU

In the Qubes settings for the Windows HVM, go to the "Devices" tab and pass the IDs corresponding to your AMD GPU (in my case, 0a:00.0 and 0a:00.1). Also enable the "no-strict-reset" option for those two devices. In some cases, you might also need to set the "permissive" flag to true (I didn't need that with the RX 580):

qvm-pci attach windows-hvm dom0:0a_00.0 -o permissive=True -o no-strict-reset=True
qvm-pci attach windows-hvm dom0:0a_00.1 -o permissive=True -o no-strict-reset=True


Don’t forget to install the GPU drivers: you can install the official ones from the AMD website, with no modification or trick needed. Nothing else is required to make it work (in my case at least, once I finished fighting to find this information). If you have issues, you can refer to the links in the first sections. If it doesn’t work and you need to debug further, you can go deeper.

I am able to play games on my Windows HVM with very good performance, and safely.


AMD GPUs have a bug when used in an HVM: each time you reboot your Windows HVM, it will get slower and slower. This is because the AMD GPU is not correctly reset when the Windows HVM restarts. Two solutions for that:

This bug is referenced somewhere, but I lost the link and am too lazy to search for it.