When containers are used in a larger environment, they are often managed through container orchestration frameworks that allow scaling container deployment and management across multiple systems. Kubernetes is a popular container orchestration framework with a strong community, as well as commercial support.
Kubernetes uses the container software found on the machines under the hood. When, for instance, we install Kubernetes on Fedora’s CoreOS, it will detect that Docker is available and use the Docker engine for managing the containers.
Configuring Kubernetes with SELinux support
The installation of Kubernetes is documented on the Kubernetes website at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm. In this section, we will not go through the individual steps to set up a working Kubernetes instance, but rather give pointers as to which changes are needed for proper SELinux support.
The kubeadm command, when initializing the Kubernetes cluster, will download and run the various Kubernetes services as containers. Unfortunately, the Kubernetes services use several mappings from the host system into the containers to facilitate their operations. These mappings are not done using the :z option, and it would even be wrong to do so, as the locations are system-wide locations that should retain their current SELinux labels.
As a result, Kubernetes’ services will be running with the default
container_t SELinux domain (as Docker will happily apply the sVirt protections), which does not have access to these locations. The most obvious change we can apply is to have the services run with the highly privileged spc_t domain for now. Applying this change during the installation is hard, however, as we would need to change the domain quickly enough before the installation fails.
While we can create deployment configuration information for Kubernetes that immediately configures the services with
spc_t, another method can be pursued:
- Mark the container_t type as a permissive domain before the installation starts. While this disables all SELinux controls on the containers, we can argue that the installation of Kubernetes is done in a contained and supervised manner:
# semanage permissive -a container_t
- Run kubeadm init, which will install the services on the system:
# kubeadm init
- When the services are installed, go to
/etc/kubernetes/manifests. Inside this directory, you will find four manifests, each one representing a Kubernetes service:
# cd /etc/kubernetes/manifests
- Edit each of these manifests so that the containers run with the spc_t domain, by adding a securityContext with the appropriate seLinuxOptions:
apiVersion: v1
kind: Pod
metadata:
  name: etcd
spec:
  containers:
  - command:
    …
    securityContext:
      seLinuxOptions:
        type: spc_t
    image: k8s.gcr.io/etcd:3.4.3-0
…
- During the Kubernetes installation, the kubelet service will be installed, which will detect that these files have changed and will automatically restart the containers. If it does not, you can stop and remove the container instances within Docker, and kubelet will automatically recreate them:
# docker ps
CONTAINER ID ... NAMES
548f0c3ed18e ... k8s_POD_etcd-ppubssa3ed_kube…
b7b1df2d0027 ... k8s_POD_kube-apiserver-…
eecd4d4ad108 ... k8s_POD_kube-scheduler-…
76da4910b927 ... k8s_POD_kube-controller-…
# for n in 548f0c3ed18e b7b1df2d0027 eecd4d4ad108 76da4910b927; do docker stop $n; docker rm $n; done
- Verify that the services are now running with the privileged spc_t domain:
# ps -efZ | grep spc_t
- Finally, remove the permissive mode for the container_t type again:
# semanage permissive -d container_t
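At any point during these steps, we can check which domains are currently marked permissive; a quick sketch, assuming the semanage utility from the policycoreutils packages is available:
# semanage permissive -l
The container_t type should appear in the list of customized permissive types while the installation is ongoing, and should no longer be listed after the final cleanup step.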
With these slight adjustments during the installation, Kubernetes is now running fine with SELinux support enabled.
Setting SELinux contexts for pods
Within Kubernetes, containers are part of pods. A pod is a group of containers that all see the same resources and can interact with each other seamlessly. Previously, in the Configuring podman section, we worked on the container level. The
podman utility is also able to use the pods concept (hence the name). For instance, we could put the Nginx container in a pod called
webserver like so:
# podman pod create -p 80:80 --name webserver
# podman pull docker.io/library/nginx
# podman run -dit --pod webserver --name nginx-test nginx
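To confirm that the container has joined the pod, podman can list pods and their member containers; a short sketch (the webserver and nginx-test names come from the commands above):
# podman pod ps
# podman ps --pod
The first command shows the pod with its number of containers, while the second lists the running containers together with the pod each belongs to.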
Unlike podman, Kubernetes does not rely on command-line interaction to create and manage resources such as pods. Instead, it uses manifest files (as we briefly touched upon in the Configuring Kubernetes with SELinux support section). Kubernetes administrators or DevOps teams create manifest files and apply those to the environment.
For instance, to have the Nginx containers run on Kubernetes, the following manifest could be used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-deployment
spec:
  selector:
    matchLabels:
      app: nginx-test
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: nginx:latest
        ports:
        - containerPort: 80
We save this manifest as simple-nginx.yml and apply it to the environment with the kubectl command:
$ kubectl apply -f simple-nginx.yml
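Once applied, the state of the resources can be queried through kubectl as well; a quick sketch (the deployment name and the app=nginx-test label come from the manifest above):
$ kubectl get deployment nginx-test-deployment
$ kubectl get pods -l app=nginx-test
The deployment should report two ready replicas, matching the replicas: 2 setting in the manifest.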
As with the manifests for the Kubernetes services, we can tell Kubernetes to use a specific SELinux type:
…
spec:
  containers:
    ...
    securityContext:
      seLinuxOptions:
        type: "container_logreader_t"
The seLinuxOptions block can contain user, role, type, and level to define the SELinux user, SELinux role, SELinux type, and SELinux sensitivity level.
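For completeness, a seLinuxOptions block that sets all four fields might look as follows. The values are illustrative only and assume that the given user, role, type, and sensitivity level exist in the loaded policy:
securityContext:
  seLinuxOptions:
    user: "system_u"
    role: "system_r"
    type: "container_t"
    level: "s0:c123,c456"
Any field that is left out keeps the value that the container runtime would assign by default.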
Unlike the regular container management services (such as Docker or CRI-O), Kubernetes does not allow changing SELinux labels on mapped volumes (except on single-node deployments): when we map volumes into containers, they retain their current SELinux label on the system. Hence, if you want to make sure that the resources are accessible from a regular container_t domain, you need to make sure these locations are labeled with a type such as container_file_t.
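Such labeling can be applied on the host before the volume is mapped in; a minimal sketch, assuming a hypothetical /srv/web data directory and the container_file_t type provided by the container-selinux policy:
# semanage fcontext -a -t container_file_t "/srv/web(/.*)?"
# restorecon -Rv /srv/web
Using semanage fcontext rather than a one-off chcon ensures the label is recorded in the policy and survives a filesystem relabel.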
Kubernetes does offer advanced access controls itself. Providing volumes to the containers is also handled by a plugin architecture, with several plugins already available. When a plugin enables SELinux labeling, Kubernetes will attempt to relabel the resource and assign the categories (as with sVirt). However, this support is currently only available on single-node deployments (using the local host storage plugin), and for such deployments, using podman is much simpler.