Introducing the Xen hypervisor
The Xen hypervisor runs directly on top of the hardware and sits in between the various virtual machines and the hardware itself. Unlike QEMU or KVM, which provide virtualization from within a running Linux system (as a user-space process and a kernel module, respectively), Xen works more independently. As a result, administrators will not see the running instances as separate processes. Instead, they need to rely on Xen commands and APIs to get more information and to interact with the Xen hypervisor.
As with libvirt, the Xen hypervisor uses the term domain to point to its guests. As we use the term domain frequently in SELinux to mean the SELinux type of a running process, and thus also the SELinux type of a running guest, we will use guest wherever possible. However, there will be some terminology associated with Xen where we will have to keep the domain terminology in place.
Xen always has at least one virtual guest defined, called Domain 0 (dom0). This guest manages the system and runs the Xen daemon (xend). It is through dom0 that administrators will create and operate virtual guests running within Xen. These regular guests are unprivileged, and therefore abbreviated as domU (unprivileged domains).
When administrators boot a Xen host, they boot into Xen’s dom0 instance, through which they then further interact with Xen. The Linux kernel has included support for running both within dom0 as well as domU for quite some time now (with complete support, including backend drivers, since Linux kernel 3.0).
Let’s use an existing Linux deployment to install Xen, and use this existing deployment as Xen’s dom0 guest.
While many Linux distributions offer Xen out of the box, it is very likely that these prebuilt deployments do not support XSM (which we will enable in the Running XSM-enabled Xen section). So, rather than first fiddling with prebuilt Xen environments, we immediately build Xen from the sources released by the Xen Project.
Before we start using Xen, let alone its XSM support, we first need to make sure that we are running with a Xen-enabled Linux kernel.
Running with a Xen-enabled Linux kernel
The Linux kernel on the system must have support for running (at least) inside a dom0 guest. Without this support, not only will the dom0 guest not be able to interact with the Xen hypervisor, it will also not be able to boot the Xen hypervisor itself (the Xen-enabled kernel needs to bootstrap the Xen hypervisor before launching itself as the dom0 guest).
If you build your own Linux kernel, you need to configure the kernel with the settings as documented at https://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs. Some Linux distributions provide more in-depth build instructions (such as Gentoo at https://wiki.gentoo.org/wiki/Xen). On CentOS, however, out-of-the-box Xen support is currently missing from the latest release (as CentOS focuses more on libvirt and related technologies for its virtualization support).
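Whether you build the kernel yourself or use a distribution build, you can check an installed kernel's configuration for dom0 support directly. This is a sketch that assumes the distribution ships the configuration under /boot/config-&lt;version&gt;; kernels built with CONFIG_IKCONFIG_PROC expose /proc/config.gz instead.

```shell
# Inspect the installed kernel's build configuration for Xen support.
# Both CONFIG_XEN=y and CONFIG_XEN_DOM0=y must be set for dom0 operation.
grep -E '^CONFIG_XEN(_DOM0)?=' "/boot/config-$(uname -r)"
```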
Luckily, the community offers well-maintained Linux kernel builds that do include Xen support, through the kernel-ml package. Let's install this kernel package:
# yum install elrepo-release
# yum install --enablerepo=elrepo-kernel kernel-ml
# grub2-mkconfig -o /boot/grub2/grub.cfg
Of course, if your system uses a different boot loader, different instructions apply. Consult your Linux distribution’s documentation for more information on how to configure the boot loader.
- Reboot the system using the newly installed kernel:
# reboot
If all goes well, you will now be running with a Xen-compatible kernel. That, of course, does not mean that Xen is active, but merely that the kernel can support Xen if it is needed. Let’s now move forward with building the Xen hypervisor and related tooling.
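A quick sanity check after the reboot confirms which kernel is active; the kernel-ml package installs a mainline release, so the version reported should be newer than the stock CentOS kernel (the exact version string will of course differ per system):

```shell
# Print the release string of the currently running kernel.
uname -r
```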
Building Xen from source
- Enable the PowerTools repository, as it contains several of the development packages needed to build Xen:
# dnf config-manager --set-enabled PowerTools
- Install the dependencies available from the CentOS repositories:
# yum install gcc xz-devel python36-devel acpica-tools uuid-devel ncurses-devel glib2-devel pixman-devel yajl yajl-devel zlib-devel transfig pandoc perl-Pod-Html git glibc-devel.i686 patch libuuid-devel
- Install the dev86 package. At the time of writing, this package is not yet available for CentOS 8, so we deploy the version from CentOS 7 instead:
# yum install https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/d/dev86-0.16.21-2.el7.x86_64.rpm
With the dependencies now installed, let’s download the latest Xen and build it:
- Go to https://xenproject.org/downloads/ and select the latest Xen Project release.
- At the bottom of the page, download the latest archive.
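As a command-line sketch of these two steps: the Xen Project publishes its release tarballs under a predictable URL scheme, but the version number used here (4.13.1) and the resulting URL should be verified against the downloads page:

```shell
# Download a specific Xen source release. The version is an example;
# check https://xenproject.org/downloads/ for the latest one.
VERSION=4.13.1
curl -LO "https://downloads.xenproject.org/release/xen/${VERSION}/xen-${VERSION}.tar.gz"
```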
- Unpack the downloaded archive on the system:
$ tar xvf xen-4.13.1.tar.gz
- Enter the directory in which the archive was unpacked:
$ cd xen-4.13.1
- Configure the sources for the local system. At this point, no specific arguments need to be passed on:
$ ./configure
- Build the Xen hypervisor and associated tools:
$ make world
- Install the Xen hypervisor and tools on the system:
# make install
- Reconfigure the boot loader. This should automatically detect the Xen binaries and add the necessary boot loader entries:
# grub2-mkconfig -o /boot/grub2/grub.cfg
- Configure the system to support libraries installed in /usr/local/lib:
# echo "/usr/local/lib" > /etc/ld.so.conf.d/local-xen.conf
# ldconfig
- Mark the /usr/local/bin and /usr/local/sbin directories as SELinux equivalents of /usr/bin and /usr/sbin, so that the freshly installed Xen binaries receive the labels the policy expects:
# semanage fcontext -a -e /usr/local/bin /usr/bin
# semanage fcontext -a -e /usr/local/sbin /usr/sbin
- Relabel the files inside /usr/local:
# restorecon -RvF /usr/local
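To verify that the equivalence rules and relabeling worked, compare the labels of the original directories and their /usr/local counterparts; matchpathcon prints the context the loaded policy expects for a path (the exact type names shown depend on your policy):

```shell
# Both directory pairs should now show matching SELinux types.
ls -Zd /usr/bin /usr/local/bin
# Show the contexts the policy assigns to the equivalence targets.
matchpathcon /usr/local/bin /usr/local/sbin
```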
- The result of these steps is that Xen is ready to be booted on the system. The boot loader will not use the Xen-enabled entry by default though, so during reboot it is important to select the right entry; its title contains with Xen hypervisor.
- After rebooting into the Xen-enabled system, all we need to do is to start the Xen daemons:
# systemctl start xencommons
# systemctl start xendomains
# systemctl start xendriverdomain
# systemctl start xen-watchdog
- To verify that everything is working as expected, list the currently running guests:
# xl list
Name                ID   Mem VCPUs      State   Time(s)
Domain-0             0  7836     4     r-----      46.2
The listing should contain a single guest, named Domain-0, which is the guest you just executed the xl list command in.
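Besides listing the guests, xl can also report on the hypervisor itself, which is a quick way to confirm that the system really booted on top of Xen. This only works from within dom0 on a Xen-booted system, and the fields selected here are just a sample of xl info's output:

```shell
# Summarize hypervisor details such as version and capabilities.
xl info | grep -E '^(xen_version|xen_caps|free_memory)'
```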
- Finalize the installation by ensuring that the previously started daemons are started at boot:
# systemctl enable xencommons
# systemctl enable xendomains
# systemctl enable xendriverdomain
# systemctl enable xen-watchdog
Creating an unprivileged guest
When the Xen hypervisor is active, the operating system through which we interact with Xen is called dom0 and is the (only) privileged guest that Xen supports. The other guests are unprivileged, and it is the interaction between these guests and the actions taken by these guests that we want to isolate and protect further with XSM.
Let’s first create a simple, unprivileged guest to run alongside the privileged dom0 one. We use Alpine Linux in this example, but you can easily substitute this with other distributions or operating systems. This example will use the ParaVirtualized (PV) guest approach, but Xen also supports Hardware Virtual Machine (HVM) guests:
- Download the ISO for the Alpine Linux distribution, as this distribution is optimized for low memory consumption and low (virtual) disk size requirements. Of course, you are free to pick other distributions as well if your system can handle it. We pick the release optimized for virtual systems from https://www.alpinelinux.org/downloads/ and store the ISO on the system in /srv/data.
- Mount the ISO on the system so that we can use its bootable kernel when creating an unprivileged guest in our next steps:
# mount -o loop -t iso9660 /srv/data/alpine-virt-3.8.0-x86_64.iso /media/cdrom
- Create an image file, which will be used as the boot disk for the virtual guest:
# dd if=/dev/zero of=/srv/data/a1.img bs=1M count=3000
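Writing three gigabytes of zeros takes a while. As an alternative sketch, the same file can be created as a sparse file, which is instantaneous; Xen's raw disk backend handles sparse backing files, and blocks are only allocated as the guest writes to them:

```shell
# Create a 3000 MiB sparse image: full apparent size, no blocks written yet.
truncate -s 3000M /srv/data/a1.img
```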
- Next, create a configuration file for the virtual guest. We call the file a1.cfg and place it in /etc/xen:
# Alpine Linux PV DomU

# Kernel paths for install
kernel = "/media/cdrom/boot/vmlinuz-virt"
ramdisk = "/media/cdrom/boot/initramfs-virt"
extra = "modules=loop,squashfs console=hvc0"

# Path to HDD and ISO file
disk = [
        'format=raw, vdev=xvda, access=w, target=/srv/data/a1.img',
        'format=raw, vdev=xvdc, access=r, devtype=cdrom, target=/srv/data/alpine-virt-3.8.0-x86_64.iso'
       ]

# DomU settings
memory = 512
name = "alpine-a1"
vcpus = 1
maxvcpus = 1
- Create and start the guest with the xl create command:
# xl create -f /etc/xen/a1.cfg -c
The -c option will immediately show the console to interact with, allowing you to initiate and complete the installation of the operating system in the guest.
- When the guest needs to reboot during the installation, shut it down instead, and edit the configuration file: remove the line referring to the ISO to prevent the guest from booting into the installation environment again.
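A sketch of that sequence, using the guest name from the configuration file; the sed invocation assumes the ISO's cdrom entry sits on its own line in the disk list:

```shell
# Gracefully shut down the guest instead of letting it reboot.
xl shutdown alpine-a1
# Drop the installation ISO entry from the disk list, so that the
# next boot no longer enters the installation environment.
sed -i '/devtype=cdrom/d' /etc/xen/a1.cfg
```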
- To launch the guest again, rerun the xl create command. If the guest installation is finished and you no longer need access to the console, drop the -c option:
# xl create -f /etc/xen/a1.cfg
- We can confirm that the virtual guest is running with xl list:
# xl list
Name                ID   Mem VCPUs      State   Time(s)
Domain-0             0  7836     4     r-----      99.4
alpine-a1            1   128     1     -b----       2.5
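When the guest runs detached, you can reattach to its console at any time; the escape sequence Ctrl + ] returns you to dom0 without shutting the guest down:

```shell
# Attach to the console of the running guest (leave with Ctrl + ]).
xl console alpine-a1
```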
Understanding Xen Security Modules
With Xen Security Modules (XSM), Xen makes it possible to define and control actions between Xen guests, and between a Xen guest and the Xen hypervisor. Unlike the Linux kernel though, where several mandatory access control frameworks exist that can plug into the LSM subsystem, Xen currently only has a single module available for XSM, called XSM-FLASK.
FLASK stands for Flux Advanced Security Kernel and is the security architecture and approach that SELinux also uses for its own access control expressions. With XSM-FLASK, developers and administrators can do the following:
- Define permissions and fine-grained access controls between guests
- Define limited privilege escalation for otherwise unprivileged guests
- Control direct hardware and device access from guests on a policy level
- Restrict and audit activities executed by privileged guests
While XSM-FLASK uses SELinux-like naming conventions (and even SELinux build tools to build the policy), the XSM-FLASK-related settings are independent of SELinux. If dom0 is running with SELinux enabled (and there is no reason why it shouldn’t), its policy has nothing to do with the XSM-FLASK policy.
The labels that XSM-FLASK uses will also not be visible to regular Linux commands running inside the guests (and thus also not inside dom0). As the running guests are not shown as processes within the system, they do not carry an SELinux label at all, only an XSM-FLASK label (if enabled). Hence, Xen cannot benefit from the sVirt approach.
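As a preview of what an XSM-FLASK-enabled build offers, xl gains FLASK-specific subcommands for managing enforcement and the in-hypervisor policy. This is a sketch: the subcommands are only present when Xen was built with XSM-FLASK, and the policy file path is an assumption (the build typically produces an xenpolicy-&lt;version&gt; file):

```shell
# Report whether XSM-FLASK currently enforces its policy.
xl getenforce
# Switch to enforcing mode (0 would switch to permissive).
xl setenforce 1
# Load a freshly built policy into the running hypervisor
# (the path is an example; adjust it to where your build placed the policy).
xl loadpolicy /boot/xenpolicy-4.13.1
```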