Leveraging Kubernetes’ SELinux support

July 03, 2021

When containers are used in a larger environment, they are often managed through container orchestration frameworks that allow scaling container deployment and management across multiple systems. Kubernetes is a popular container orchestration framework with a good community, as well as commercial support.

Under the hood, Kubernetes uses the container software found on the machines. When, for instance, we install Kubernetes on Fedora CoreOS, it will detect that Docker is available and use the Docker engine for managing the containers.

Configuring Kubernetes with SELinux support

Installing Kubernetes can be a daunting task, and several methods exist, ranging from single-node playground deployments up to commercially supported installations. One of the well-documented installation methods on the Kubernetes website is to use kubeadm for bootstrapping Kubernetes clusters.

Note:

The installation of Kubernetes is documented on the Kubernetes website at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm. In this section, we will not go through the individual steps to set up a working Kubernetes instance, but rather give pointers on which changes are needed for proper SELinux support.

The kubeadm command, when initializing the Kubernetes cluster, will download and run the various Kubernetes services as containers. Unfortunately, Kubernetes’ services use several mappings from the host system into the container to facilitate their operations. These mappings are not done using the :Z or :z options—it would even be wrong to do so, as the locations are system-wide locations that should retain their current SELinux labels.

As a result, Kubernetes’ services will be running with the default container_t SELinux domain (as Docker will happily apply the sVirt protections), which does not have access to these locations. The most obvious change we can apply, for now, is to have the services run with the highly privileged spc_t domain. However, applying this change during the installation is hard, as we would need to change the domain quickly enough before the installation fails.

While we can create deployment configuration information for Kubernetes that immediately configures the services with spc_t, another method can be pursued:

  • Mark the container_t type as a permissive domain before the installation starts. While this disables SELinux enforcement for anything running in the container_t domain, we can argue that the installation of Kubernetes is done in a contained and supervised manner:
# semanage permissive -a container_t
  • Run kubeadm init, which will install the services on the system:
# kubeadm init
  • When the services are installed, go to /etc/kubernetes/manifests. Inside this directory, you will find four manifests, each one representing a Kubernetes service:
# cd /etc/kubernetes/manifests
  • Edit each manifest file (etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml) and add a security context definition that configures the service to run with the spc_t domain. This is done as a configuration directive under the containers section:
apiVersion: v1
kind: Pod
metadata:
  name: etcd
spec:
  containers:
  - command: …
    securityContext:
      seLinuxOptions:
        type: spc_t
    image: k8s.gcr.io/etcd:3.4.3-0
    …
  • The Kubernetes installation also installs the kubelet service, which will detect that these files have changed and automatically restart the containers. If it does not, you can stop and remove the container instances within Docker, and kubelet will automatically recreate them:
# docker ps
CONTAINER ID ... NAMES
548f0c3ed18e k8s_POD_etcd-ppubssa3ed_kube…
b7b1df2d0027 k8s_POD_kube-apiserver-…
eecd4d4ad108 k8s_POD_kube-scheduler-…
76da4910b927 k8s_POD_kube-controller-…
# for n in 548f0c3ed18e b7b1df2d0027 eecd4d4ad108 76da4910b927; do docker stop $n; docker rm $n; done
  • Verify that the services are now running with the privileged spc_t domain (a per-container alternative using docker inspect is sketched after this list):
# ps -efZ | grep spc_t
  • Remove the permissive state of container_t so that it is back to enforcing mode:
# semanage permissive -d container_t
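
As an alternative to grepping the process list, docker inspect can show the SELinux label of an individual container's process. This is a quick per-container check; the container ID below is a placeholder that you would replace with one of the IDs from the docker ps output:

# docker inspect --format '{{ .ProcessLabel }}' <container-id>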

With these slight adjustments during the installation, Kubernetes is now running fine with SELinux support enabled.

Setting SELinux contexts for pods

Within Kubernetes, containers are part of pods. A pod is a group of containers that all see the same resources and can interact with each other seamlessly. Previously, in the Configuring podman section, we worked on the container level. The podman utility is also able to use the pods concept (hence the name). For instance, we could put the Nginx container in a pod called webserver like so:

# podman pod create -p 80:80 --name webserver
# podman pull docker.io/library/nginx
# podman run -dit --pod webserver --name nginx-test nginx
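
To confirm that the pod was created and that the Nginx container joined it, podman can list the pods and show, for each container, the pod it belongs to:

# podman pod ps
# podman ps --pod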

Unlike podman, Kubernetes does not rely on command-line interaction to create and manage resources such as pods. Instead, it uses manifest files (as we’ve briefly touched upon in the Configuring Kubernetes with SELinux support section). Kubernetes administrators or DevOps teams will create manifest files and apply those to the environment.

For instance, to have the Nginx containers run on Kubernetes, the following manifest could be used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-deployment
spec:
  selector:
    matchLabels:
      app: nginx-test
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: nginx:latest
        ports:
        - containerPort: 80

This manifest is a Kubernetes deployment, and tells Kubernetes that we want to run two Nginx containers. To apply this to the environment, use kubectl apply:

$ kubectl apply -f simple-nginx.yml
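
To check that the deployment was created and that both replicas are up, the usual kubectl queries can be used; the label selector matches the app: nginx-test label defined in the manifest:

$ kubectl get deployment nginx-test-deployment
$ kubectl get pods -l app=nginx-test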

As with the manifests for the Kubernetes services, we can tell Kubernetes to use a specific SELinux type:

…
spec:
  containers:
  - ...
    securityContext:
      seLinuxOptions:
        type: "container_logreader_t"

The seLinuxOptions block can contain user, role, type, and level fields to define the SELinux user, SELinux role, SELinux type, and SELinux sensitivity level, respectively.
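
As an illustration, a container-level securityContext that sets all four fields could look as follows. The values are purely illustrative (the level in particular is an arbitrary category pair) and must match what your policy and MCS configuration actually allow:

…
spec:
  containers:
  - name: nginx-test
    image: nginx:latest
    securityContext:
      seLinuxOptions:
        user: "system_u"
        role: "system_r"
        type: "container_logreader_t"
        level: "s0:c123,c456"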

Unlike the regular container management services (such as Docker or CRI-O), Kubernetes does not allow changing SELinux labels on mapped volumes (except on single-node deployments): when we map volumes into containers, they retain their current SELinux label on the system. Hence, if you want these resources to be accessible from the regular container_t domain, you need to ensure the locations are labeled with container_file_t.
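
For example, to persistently label a host directory with container_file_t before mapping it into a pod, you could register the context with semanage and apply it with restorecon on the node. The path used here is only an example:

# semanage fcontext -a -t container_file_t "/srv/nginx-data(/.*)?"
# restorecon -Rv /srv/nginx-data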

Kubernetes does offer advanced access controls of its own. Volume support within containers is also handled through a plugin architecture, with several plugins already available. When a plugin supports SELinux labeling, Kubernetes will attempt to relabel the resource and assign the categories (as with sVirt). However, this support is currently only available on single-node deployments (using the local host storage plugin), and for such deployments, using podman is much simpler.
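
When a volume plugin does support SELinux relabeling, the label to apply can be specified in the pod-level securityContext. A minimal sketch, assuming a single-node cluster whose volume plugin supports relabeling; the pod name and the level value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-selinux-test
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"
  containers:
  - name: nginx-test
    image: nginx:latest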
