The libvirt project offers a virtualization abstraction layer, through which administrators can manage virtual machines without direct knowledge of or expertise in the underlying virtualization platform. As such, administrators can use the libvirt-offered tools to manage virtual machines running on QEMU, QEMU/KVM, Xen, and so on.
To use the sVirt approach, libvirt must be built with SELinux support. When this is the case and the guests are governed (security-wise) through SELinux, libvirt uses and enforces the sVirt domains and types. The libvirt code also performs the category selection to enforce guest isolation, and ensures that the image files are assigned the right label (image files that are in use should get a different label than inactive image files).
Differentiating between shared and dedicated resources
The different labels for images allow for different use cases. The image used to host the main operating system (of the guest) will generally receive the svirt_image_t label and will be recategorized with the same pair of categories as the guest runtime itself (running as svirt_t). This image is writable by the guest.
When we consider an image that needs to be readable or writable by multiple guests, libvirt can opt not to assign any categories to the file. Without categories, the MCS constraints no longer block access: any set of categories dominates the empty set, so actions against these properly labeled files are allowed.
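The dominance rule can be sketched in a few lines of plain Python (an illustration of the MCS check, not libvirt or SELinux code): a subject may access an object only if the subject's category set is a superset of the object's, so an object with no categories is accessible to every guest.

```python
# Sketch of the MCS dominance check: a subject (guest) can access an
# object (image file) only if the subject's categories dominate, i.e.
# form a superset of, the object's categories.
def mcs_allows(subject_categories, object_categories):
    return set(subject_categories) >= set(object_categories)

# A guest running with the category pair c533,c565:
guest = {"c533", "c565"}

# An image without categories is accessible to any guest:
print(mcs_allows(guest, set()))             # True

# A dedicated image with the same category pair is accessible:
print(mcs_allows(guest, {"c533", "c565"}))  # True

# An image categorized for another guest is not:
print(mcs_allows(guest, {"c100", "c101"}))  # False
```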
Images that need to be mounted read-only for a guest (such as bootable media) are assigned the virt_content_t type. If they are dedicated to a single guest, then categories can be assigned as well. For shared read access, no categories need to be assigned.
Note that these label differences apply mainly to virtualization technologies and not container technologies.
Assessing the libvirt architecture
The libvirt project has several clients that interact with the libvirtd daemon. This daemon is responsible for managing the local hypervisor software (be it QEMU/KVM, Xen, or any other virtualization software) and is even able to manage remote hypervisors. This latter functionality is often used for proprietary hypervisors that offer the necessary APIs to manage the virtual resources on the host.
Configuring libvirt for sVirt
Most systems that support libvirt on SELinux systems will have SELinux support automatically enabled. If this is not the case, but SELinux support is possible, then all it takes is to configure libvirt to allow the SELinux security model. We map the SELinux security model in libvirt on a per-hypervisor basis.
The configuration parameters related to sVirt are generally defined on a per-hypervisor basis. For instance, for the QEMU-based virtualization driver, we need to edit the /etc/libvirt/qemu.conf file. Let’s look at the various parameters related to secure virtualization:
- The first parameter, which defines whether sVirt is active or not, is the security_driver parameter. While libvirt will by default enable SELinux once it detects that SELinux is active, we can explicitly mark sVirt support as enabled by setting the parameter as follows:

security_driver = "selinux"
SELinux support will by default be enabled without explicitly setting the security_driver variable in the configuration file. If you want to use libvirt without SELinux support (and consequently without sVirt), then you need to explicitly set it as follows:

security_driver = "none"
- A second sVirt-related setting in libvirt is security_default_confined. This variable defines whether guests are by default confined (and thus associated with the sVirt protections) or not. The default value is 1, which means that confinement is enabled by default. To disable it, you need to set it to 0:

security_default_confined = 0
- Users of the libvirt software can also ask to create an unconfined guest (and libvirt allows this by default). If we set security_require_confined to 1, then no unconfined guests can be created:

security_require_confined = 1
We can confirm that sVirt is running when we have a guest active on the platform, as we can then consult the label for its processes to verify that it indeed received two random categories.
Let’s create such a guest, using the regular QEMU hypervisor. We use an Alpine Linux ISO to boot the guest with, but that is merely an example—you can substitute it with any ISO you want:
# virt-install --virt-type=qemu --name test \
    --ram 128 --vcpus=1 --graphics none \
    --os-variant=alpinelinux3.8 \
    --cdrom=/var/lib/libvirt/boot/alpine-extended-x86_64.iso \
    --disk path=/var/lib/libvirt/images/test.qcow2,size=1,format=qcow2
The locations mentioned are important, as they will ensure that the files are properly labeled:
- In the first location (/var/lib/libvirt/isos), read-only content should be placed, which will result in the files automatically being labeled with virt_content_t.
- In /var/lib/libvirt/images, we create the actual guest images. When the guests are shut down, the images will be labeled with virt_image_t, but once started, the labels will be adjusted to match the categories associated with the domain.
The command will create a guest called test, with 128 MB of memory and 1 vCPU. No specific graphics support will be enabled, meaning that the standard console or screen of the virtual machine will not be associated with any graphical service such as Virtual Network Computing (VNC) but will rely on a serial console definition inside the guest. Furthermore, we have the guest use a small, 1 GB disk that uses the QEMU copy-on-write (QCOW2) format.
Once we have created the guest and launched it, we can check its label easily:
# ps -efZ | grep test
system_u:system_r:svirt_tcg_t:s0:c533,c565 /usr/bin/qemu-system-x86_64 -name guest=test,...
To list the currently defined guests, use the list argument. It will give you output similar to the following:

# virsh list --all
 Id   Name   State
------------------------
 1    test   running

The --all argument will ensure that even guests that are defined but are not currently running are listed as well.
The virsh command is the main entry point for interacting with libvirt. For instance, to send a shutdown signal to a guest, you would use the shutdown argument, whereas the destroy argument will force the shutdown of the guest. Finally, to remove a guest definition, you would use the undefine argument.
As shown in the previous example, the guest we defined is running with the svirt_tcg_t domain. Let’s see how we can adjust the labels used by libvirt for guests.
Changing a guest’s SELinux labels
To view the labels that libvirt assigned to a guest, we can dump the guest’s full XML definition:

# virsh dumpxml test
At the end of the XML, the security labels are shown. For SELinux, this could look like so:
<seclabel type='dynamic' model='selinux' relabel='yes'>
  <label>system_u:system_r:svirt_tcg_t:s0:c533,c565</label>
  <imagelabel>system_u:object_r:svirt_image_t:s0:c533,c565</imagelabel>
</seclabel>
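When scripting around libvirt, such a seclabel section can be extracted with standard XML tooling. The following sketch uses Python's xml.etree.ElementTree on a trimmed-down, hypothetical guest definition (not output from a live system):

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical excerpt of a guest definition as produced
# by `virsh dumpxml` (only the elements we need here).
xml = """
<domain type='qemu'>
  <name>test</name>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_tcg_t:s0:c533,c565</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c533,c565</imagelabel>
  </seclabel>
</domain>
"""

root = ET.fromstring(xml)
seclabel = root.find("seclabel")
label = seclabel.find("label").text

# The category pair is the last colon-separated field of the label
categories = label.split(":")[-1]
print(categories)  # c533,c565
```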
If we want to modify these settings, we can use the edit argument to virsh:

# virsh edit test
This will open the XML file in the local editor. However, once we accomplish that, we’ll notice that the seclabel entries are nowhere to be found. That is because the default behavior is to use dynamic labels (hence type='dynamic') with default labels, which does not require any explicit definition in the guest’s XML.
Let’s instead use a static definition, and have the guest run with the c123,c124 category pair. In the displayed XML, at the end (but still within the <domain>...</domain> definition), place the following XML snippet:
<seclabel type='static' model='selinux' relabel='yes'>
  <label>system_u:system_r:svirt_tcg_t:s0:c123,c124</label>
</seclabel>
Running a guest with a different type is done in a similar fashion, changing svirt_tcg_t to a different type. However, keep in mind that not every type can be used. For instance, the default svirt_t domain cannot be used with QEMU’s full-system virtualization (as QEMU uses TCG if it cannot use KVM).
The default types that libvirt uses are declared inside /etc/selinux/targeted/contexts, in the virtual_image_context files. However, it is not recommended to change these files, as they will be overwritten when SELinux policy updates are released by the distribution.
The relabel statement defines whether libvirt relabels all resources for the guest according to the guest’s currently assigned label (relabel='yes') or not (relabel='no'). With dynamic category assignment, this will always be yes, while with static definitions both values are possible.
Of course, if we want to, we can use dynamic category assignment with custom type definitions as well. For that, we declare type='dynamic' but explicitly define a label within a <baselabel> entity, like so:
<seclabel type='dynamic' model='selinux'>
  <baselabel>system_u:system_r:svirt_t:s0</baselabel>
</seclabel>
This will have the guest run with a dynamically associated category pair, while using a custom label rather than the default selected one.
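As a sketch, such a snippet could also be generated programmatically before pasting it into the guest definition; this example only assumes Python's standard xml.etree.ElementTree module:

```python
import xml.etree.ElementTree as ET

# Build a dynamic seclabel entry with a custom base label, matching
# the structure shown above.
seclabel = ET.Element("seclabel", {"type": "dynamic", "model": "selinux"})
baselabel = ET.SubElement(seclabel, "baselabel")
baselabel.text = "system_u:system_r:svirt_t:s0"

snippet = ET.tostring(seclabel, encoding="unicode")
print(snippet)
# <seclabel type="dynamic" model="selinux"><baselabel>system_u:system_r:svirt_t:s0</baselabel></seclabel>
```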
Customizing resource labels
If the guest definition has relabeling active (either because it uses dynamic category assignment or on explicit request of the administrator), then the resources that the guest uses will be relabeled accordingly.
Administrators can customize the labeling behavior of libvirt through the same interface we used previously: guest definition files. For instance, if we do not want libvirt to relabel the test.qcow2 file that represents the guest’s disk, we could add to the XML like so:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/test.qcow2'>
    <seclabel relabel='no'/>
  </source>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
This is useful when you want to allow the sharing of some resources across different guests, without making them readable by all guests. In such a situation, we could label the file itself with (say) svirt_image_t:s0:c123 and have the category pairs of the sharing guests always contain the category c123.
Controlling available categories
To limit the categories that libvirt can assign to its guests, we can run the libvirtd daemon itself with a restricted category range. On systemd systems, this is accomplished as follows:
- First, copy over the system-provided libvirtd.service unit file:

# cp /usr/lib/systemd/system/libvirtd.service /etc/systemd/system

- Edit the libvirtd.service file and add the following definition to the [Service] section:

SELinuxContext=system_u:system_r:virtd_t:s0-s0:c800.c899
- Reload the daemon definitions for systemd so that it picks up the updated unit file:

# systemctl daemon-reload
- Restart the libvirtd daemon:

# systemctl stop libvirtd
# systemctl start libvirtd
- We can now start our guests again and verify that each guest is running with a category pair within the range defined for the libvirtd daemon:

# virsh start test
# ps -efZ | grep virt
system_u:system_r:virtd_t:s0-s0:c800.c899 /usr/sbin/libvirtd
system_u:system_r:svirt_t:s0:c846,c891 /usr/bin/qemu-system-x86_64 -name guest=test...
As we can see, the categories selected by libvirt are now within the defined range.
Systems that do not use systemd can edit the SysV-style init script and use runcon to launch the daemon within the desired range:

runcon -l s0-s0:c800.c899 /usr/sbin/libvirtd \
    --config /etc/libvirt/libvirtd.conf --listen
Every time we launch a new guest, the libvirt code will randomly select two categories. The service will then check whether these categories are part of its own range and whether the category pair is already in use. If any of these checks fail, libvirt will randomly select a new pair of categories until it finds a free pair that matches the requirements.
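The selection logic described above can be sketched as follows (an illustration of the algorithm, not libvirt's actual implementation); the c800-c899 range mirrors the earlier libvirtd example:

```python
import random

def pick_category_pair(low, high, in_use):
    """Randomly pick a pair of distinct, not-yet-used categories in [low, high]."""
    while True:
        # random.sample guarantees two distinct category numbers
        pair = frozenset(random.sample(range(low, high + 1), 2))
        if pair not in in_use:  # retry until a free pair is found
            in_use.add(pair)
            return tuple(sorted(pair))

used = set()
first = pick_category_pair(800, 899, used)
second = pick_category_pair(800, 899, used)
print(first != second)  # True: each guest receives a unique pair
```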
Changing the storage pool locations
A very common configuration change with libvirt is to reconfigure it to use a different storage pool location. This has a slight impact on SELinux as well, as we do not have proper file context definitions for the new location.
Let’s see how to create a new pool location and change the SELinux configuration for it:
- List the current storage pools to make sure the new pool name is not already taken:

# virsh pool-list --all
 Name     State    Autostart
-----------------------------------------------
 boot     active   yes
 images   active   yes
 root     active   yes
- Create the target location:
# mkdir /srv/images
- Create the new storage pool with pool-define-as. In the following command, we name the pool large_images and point it to the /srv/images directory:

# virsh pool-define-as large_images dir - - - - "/srv/images"
Pool large_images defined
- Configure SELinux to label the pool properly:
# semanage fcontext -a -t virt_image_t "/srv/images(/.*)?"
- Relabel the directory structure:
# restorecon -R /srv/images
- Have libvirt populate the directory structure:
# virsh pool-build large_images
- Start the storage pool:
# virsh pool-start large_images
- Turn on auto-start so that, when libvirtd starts, the pool is immediately usable as well:
# virsh pool-autostart large_images
- We can verify that everything is functioning properly with the pool-info argument:

# virsh pool-info large_images
The output will show the current and available capacity for the new location.