Enhancing libvirt with SELinux support

July 02, 2021

The libvirt project offers a virtualization abstraction layer, through which administrators can manage virtual machines without direct knowledge of or expertise in the underlying virtualization platform. As such, administrators can use the libvirt-offered tools to manage virtual machines running on QEMU, QEMU/KVM, Xen, and so on.

To use the sVirt approach, libvirt can be built with SELinux support. When this is the case and the guests are governed (security-wise) through SELinux, the sVirt domains and types are applied and enforced by libvirt. The libvirt code will also perform the category selection to enforce guest isolation and will ensure that the image files are assigned the right label (image files that are in use should get a different label than inactive image files).
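
For example, on a host with an sVirt-protected guest running, the randomly selected category pair shows up directly in the process label. The output below is an illustrative sketch (the category values differ per guest):

# ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c117,c348 7632 ?  00:00:09 qemu-system-x86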

Differentiating between shared and dedicated resources

The different labels for images allow for different use cases. The image used to host the main operating system (of the guest) will generally receive the svirt_image_t label and will be recategorized with the same pair of categories as the guest runtime itself (running as svirt_t). This image is writable by the guest.

When we consider an image that needs to be readable or writable by multiple guests, then libvirt can opt not to assign any categories to the file. Without categories, the MCS constraints no longer restrict access: any set of categories dominates the empty set, and as such, actions against those properly labeled files are allowed.

Images that need to be mounted read-only for a guest (such as bootable media) are assigned the virt_content_t type. If they are dedicated, then categories can be assigned as well. For shared read access, no categories need to be assigned.
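
We can inspect these labels with ls -Z. The following sketch (with illustrative category values) shows a dedicated, writable guest image next to shared, read-only boot media:

# ls -Z /var/lib/libvirt/images/test.qcow2
system_u:object_r:svirt_image_t:s0:c117,c348 /var/lib/libvirt/images/test.qcow2
# ls -Z /var/lib/libvirt/boot/alpine-extended-x86_64.iso
system_u:object_r:virt_content_t:s0 /var/lib/libvirt/boot/alpine-extended-x86_64.iso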

Note that these label differences apply mainly to virtualization technologies and not container technologies.

Assessing the libvirt architecture

The libvirt project has several clients that interact with the libvirtd daemon. This daemon is responsible for managing the local hypervisor software (be it QEMU/KVM, Xen, or any other virtualization software) and is even able to manage remote hypervisors. This latter functionality is often used for proprietary hypervisors that offer the necessary APIs to manage the virtual resources on the host:

(Figure: the libvirt architecture, with clients interacting with the libvirtd daemon, which manages local and remote hypervisors)

Due to the cross-platform and cross-hypervisor nature of the libvirt project, sVirt is a good match. Instead of hypervisor-specific domains, generic (yet confined) domains are used to ensure the security of the environment.

Configuring libvirt for sVirt

Most distributions that support libvirt on SELinux-enabled systems will have its SELinux support automatically enabled. If this is not the case, but SELinux support is possible, then all it takes is to configure libvirt to allow the SELinux security model.

The configuration parameters related to sVirt are generally defined on a per-hypervisor basis. For instance, for the QEMU-based virtualization driver, we need to edit the /etc/libvirt/qemu.conf file. Let’s look at the various parameters related to secure virtualization:

  • The first parameter, which defines whether sVirt is active or not, is the security_driver parameter. While libvirt will by default enable SELinux once it detects SELinux is active, we can explicitly mark sVirt support as enabled by setting the selinux value:
security_driver = "selinux"

SELinux support will by default be enabled without explicitly marking the security_driver variable in the configuration file. If you want to use libvirt without SELinux support (and consequently without sVirt), then you need to explicitly mark the security_driver setting as none:

security_driver = "none"
  • A second sVirt-related setting in libvirt is security_default_confined. This variable defines whether guests are by default confined (and thus associated with the sVirt protections) or not. The default value is 1, which means that the confinement is by default enabled. To disable it, you need to set it to 0:
security_default_confined = 0
  • Users of the libvirt software can also ask to create an unconfined guest (and libvirt allows this by default). If we set security_require_confined to 1, then no unconfined guests can be created:
security_require_confined = 1
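
Keep in mind that changes to /etc/libvirt/qemu.conf only take effect after the libvirt daemon has been restarted. On a systemd-based system, this would be:

# systemctl restart libvirtd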

We can confirm that sVirt is running when we have a guest active on the platform, as we can then consult the label for its processes to verify that it indeed received two random categories.

Let’s create such a guest, using the regular QEMU hypervisor. We use an Alpine Linux ISO to boot the guest with, but that is merely an example—you can substitute it with any ISO you want:

# virt-install --virt-type=qemu --name test \
 --ram 128 --vcpus=1 --graphics none \
 --os-variant=alpinelinux3.8 \
 --cdrom=/var/lib/libvirt/boot/alpine-extended-x86_64.iso \
 --disk path=/var/lib/libvirt/images/test.qcow2,size=1,format=qcow2

The locations mentioned are important, as they will ensure that the files are properly labeled:

  • In /var/lib/libvirt/boot (and /var/lib/libvirt/isos), read-only content should be placed, which will result in the files automatically being labeled with virt_content_t.
  • In /var/lib/libvirt/images, we create the actual guest images. When the guests are shut down, the images will be labeled with virt_image_t, but once started, the labels will be adjusted to match the categories associated with the domain.
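
We can verify that the SELinux policy indeed assigns these default labels using matchpathcon (the output assumes the targeted policy's stock file context definitions):

# matchpathcon /var/lib/libvirt/boot/alpine-extended-x86_64.iso
/var/lib/libvirt/boot/alpine-extended-x86_64.iso system_u:object_r:virt_content_t:s0
# matchpathcon /var/lib/libvirt/images/test.qcow2
/var/lib/libvirt/images/test.qcow2 system_u:object_r:virt_image_t:s0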

The command will create a guest called test, with 128 MB of memory and 1 vCPU. No specific graphics support will be enabled, meaning that the standard console or screen of the virtual machine will not be associated with any graphical service such as Virtual Network Computing (VNC) but will rely on a serial console definition inside the guest. Furthermore, we have the guest use a small, 1 GB disk that uses the QEMU copy-on-write (QCOW2) format.

Once we have created the guest and launched it, we can check its label easily:

# ps -efZ | grep test
system_u:system_r:svirt_tcg_t:s0:c533,c565 /usr/bin/qemu-system-x86_64 -name guest=test,...

To list the currently defined guests, use the virsh command:

# virsh list --all
 Id   Name   State
-----------------------
 1    test   running

The --all argument will ensure that guests that are defined but not currently running are listed as well.

Note:

Within libvirt, guests are actually called domains. As SELinux (and thus this book) also uses the term domain frequently when referring to the context of a process, we will use the term guest when referring to libvirt’s domains to keep possible confusion to a minimum.

The virsh command is the main entry point for interacting with libvirt. For instance, to send a shutdown signal to a guest, you would use the shutdown argument, whereas the destroy argument will force the shutdown of the guest. Finally, to remove a definition, you would use undefine.
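
For instance, to gracefully stop, forcibly stop, and then remove the test guest we created earlier:

# virsh shutdown test
# virsh destroy test
# virsh undefine test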

As shown in the previous example, the guest we defined is running with the svirt_tcg_t domain. Let’s see how we can adjust the labels used by libvirt for guests.

Changing a guest’s SELinux labels

Once a guest has been defined, libvirt allows administrators to modify its parameters by editing an XML file representing the guest. Within this XML file, the SELinux labeling has a place as well.

To view the current definition, you can use the dumpxml argument to virsh:

# virsh dumpxml test

At the end of the XML, the security labels are shown. For SELinux, this could look like so:

<seclabel type='dynamic' model='selinux' relabel='yes'>
 <label>system_u:system_r:svirt_tcg_t:s0:c533,c565</label>
 <imagelabel>system_u:object_r:svirt_image_t:s0:c533,c565</imagelabel>
</seclabel>

If we want to modify these settings, we can use the edit argument to virsh:

# virsh edit test

This will open the XML file in the local editor. However, once we do, we’ll notice that the seclabel entries are nowhere to be found. That is because the default behavior is to use dynamic labels (hence type='dynamic') with default label values, which does not require any explicit definition in the XML.

Let’s instead use a static definition, and have the guest run with the c123,c124 category pair. In the displayed XML, at the end (but still within the <domain>...</domain> definition), place the following XML snippet:

<seclabel type='static' model='selinux' relabel='yes'>
 <label>system_u:system_r:svirt_tcg_t:s0:c123,c124</label>
</seclabel>
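
After saving the definition, restart the guest and verify that it now runs with the chosen category pair (a sketch; the qemu command line is abbreviated):

# virsh destroy test
# virsh start test
# ps -efZ | grep test
system_u:system_r:svirt_tcg_t:s0:c123,c124 /usr/bin/qemu-system-x86_64 -name guest=test,...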

Running a guest with a different type is done in a similar fashion, changing svirt_tcg_t to a different type. However, keep in mind that not all types can be used in all circumstances. For instance, the default svirt_t domain cannot be used with QEMU’s full-system virtualization (as QEMU uses TCG if it cannot use KVM).

Important note

The default types that libvirt uses are declared inside /etc/selinux/targeted/contexts, in the virtual_domain_context and virtual_image_context files. However, it is not recommended to change these files as they will be overwritten when SELinux policy updates are released by the distribution.

The relabel attribute tells libvirt whether to relabel all resources for the guest according to the guest’s currently assigned label (relabel='yes') or to leave the labels untouched (relabel='no'). With dynamic category assignment, this will always be yes, while with static definitions both values are possible.
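
As an illustrative sketch, a static definition with relabel='no' leaves all labeling up to the administrator, who must then label the guest’s resources by hand (for example with chcon):

<seclabel type='static' model='selinux' relabel='no'>
 <label>system_u:system_r:svirt_tcg_t:s0:c123,c124</label>
</seclabel>

# chcon -t svirt_image_t -l s0:c123,c124 /var/lib/libvirt/images/test.qcow2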

Of course, if we want to, we can use dynamic category assignment with custom type definitions as well. For that, we declare type='dynamic' but explicitly define a label within a <baselabel> entity, like so:

<seclabel type='dynamic' model='selinux'>
 <baselabel>system_u:system_r:svirt_t:s0</baselabel>
</seclabel>

This will have the guest run with a dynamically associated category pair, while using a custom label rather than the default selected one.

Customizing resource labels

If the guest definition has relabeling active (either because it uses dynamic category assignment or on explicit request of the administrator), then the resources that the guest uses will be relabeled accordingly.

Administrators can customize the labeling behavior of libvirt through the same interface we used previously: guest definition files. For instance, if we do not want libvirt to relabel the test.qcow2 file that represents the guest’s disk, we can add to the XML like so:

<disk type='file' device='disk'>
 <driver name='qemu' type='qcow2'/>
 <source file='/var/lib/libvirt/images/test.qcow2'>
 <seclabel relabel='no'/>
 </source>
 <target dev='hda' bus='ide'/>
 <address type='drive' controller='0' bus='0'
 target='0' unit='0'/>
</disk>

This is useful when you want to allow the sharing of some resources across different guests, without making them readable by all guests. In such a situation, we could label the file itself with (say) svirt_image_t:s0:c123 and ensure that the category pairs of the sharing guests always contain the category c123.
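
A minimal sketch of this approach, assuming a hypothetical shared.qcow2 image that two guests should both be able to use:

# chcon -t svirt_image_t -l s0:c123 /var/lib/libvirt/images/shared.qcow2

Each sharing guest then gets a static label whose category set includes c123 (for instance s0:c123,c124 for one guest and s0:c123,c125 for another), so that both dominate the file’s label while remaining isolated from each other.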

Controlling available categories

When libvirt selects random categories, it does so based on its own category range. By default, MCS systems will have this range set to c0.c1023. To change the category range, we need to ensure that we launch the libvirt daemon (libvirtd) in the proper context.

  • First, copy over the system-provided libvirtd.service file to /etc/systemd/system:
# cp /usr/lib/systemd/system/libvirtd.service /etc/systemd/system
  • Edit the libvirtd.service file and add the following definition:
SELinuxContext=system_u:system_r:virtd_t:s0-s0:c800.c899
  • Reload the daemon definitions for systemd so that it picks up the new libvirtd.service file:
# systemctl daemon-reload
  • Restart the libvirtd daemon:
# systemctl stop libvirtd
# systemctl start libvirtd
  • We can now start our guests again and verify that each guest is now running with a category pair within the range defined for the libvirtd daemon:
# virsh start test
# ps -efZ | grep virt
system_u:system_r:virtd_t:s0-s0:c800.c899 /usr/sbin/libvirtd
system_u:system_r:svirt_t:s0:c846,c891 /usr/bin/qemu-system-x86_64 -name guest=test...

As we can see, the categories selected by libvirt are now within the defined range.

Systems that do not use systemd can edit the SysV-style init script and use runcon:

runcon -l s0-s0:c800.c899 /usr/sbin/libvirtd \
 --config /etc/libvirt/libvirtd.conf --listen

Every time we launch a new guest, the libvirt code will randomly select two categories. The service will then check whether these categories are part of its own range and whether the category pair is already used or not. If any of these checks fail, libvirt will randomly select a new pair of categories until a free pair matches the requirements.

Changing the storage pool locations

A very common configuration change with libvirt is to reconfigure it to use a different storage pool location. This has a slight impact on SELinux as well, as we do not have proper file context definitions for the new location.

Let’s see how to create a new pool location and change the SELinux configuration for it:

  • List the current storage pools to make sure the new pool name is not already taken:
# virsh pool-list --all
 Name     State    Autostart
-----------------------------------------------
 boot     active   yes
 images   active   yes
 root     active   yes
  • Create the target location:
# mkdir /srv/images
  • Create the new storage pool with pool-define-as. In the following command, we name the pool large_images:
# virsh pool-define-as large_images dir - - - - "/srv/images"
Pool large_images defined
  • Configure SELinux to label the pool properly:
# semanage fcontext -a -t virt_image_t "/srv/images(/.*)?" 
  • Relabel the directory structure:
# restorecon -R /srv/images
  • Have libvirt populate the directory structure:
# virsh pool-build large_images
  • Start the storage pool:
# virsh pool-start large_images
  • Turn on auto-start so that, when libvirtd starts, the pool is immediately usable as well:
# virsh pool-autostart large_images
  • We can verify that everything is functioning properly with the pool-info command:
# virsh pool-info large_images

The output will show the current and available capacity for the new location.
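
For example (the values shown are illustrative):

Name:           large_images
UUID:           ...
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       49.98 GiB
Allocation:     12.41 GiB
Available:      37.57 GiB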

If we host the storage pool on an NFS-mounted location, then we need to enable the virt_use_nfs SELinux boolean as well.
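
This can be done persistently with setsebool:

# setsebool -P virt_use_nfs on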

Now that we’ve fully grasped how to configure libvirt and SELinux for it, let’s see how we can use the popular Vagrant tool with libvirt.
