Utilizing SaltStack to configure SELinux

June 20, 2021

The second orchestration and automation framework we’ll consider is SaltStack, which is commercially backed by the company of the same name. SaltStack uses a declarative language similar to Ansible’s and is also written in Python. In this chapter, we will use the open source SaltStack framework, although an enterprise version is available as well, which adds more features on top of the open source release.

How SaltStack works

SaltStack, often referred to simply as Salt, is an automation framework that uses an agent/server model for its integrations. Unlike Ansible, SaltStack generally requires agent installations on the target nodes (called minions) and activation of the minion daemons to enable communication with the master. This communication is encrypted, and minion authentication uses public-key validation, which must be approved on the master to ensure no rogue minions participate in a SaltStack environment.

While agent-less installations are possible with SaltStack as well, we will focus on agent-based deployments. In such a configuration, the minions regularly check with the master to see whether any updates need to be applied. But administrators do not need to wait until the minion pulls the latest updates: you can also trigger updates from the master, effectively pushing changes to the nodes.

The target state that a minion should be in is written down in a Salt State file, which uses the .sls suffix. These Salt State files can refer to other state files, to allow a modular design and reusability across multiple machines.
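Such a reference between state files uses the include directive. As a minimal sketch (the state names webserver and selinux_baseline are illustrative, not part of the later examples):

```yaml
# webserver/init.sls (hypothetical state): reuse a shared baseline state
# and layer webserver-specific configuration on top of it
include:
  - selinux_baseline

httpd:
  pkg.installed
```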

If we need more elaborate logic, SaltStack supports the creation and distribution of custom modules, called Salt execution modules. However, unlike Ansible’s Galaxy, no central community repository currently exists for finding additional execution modules.

Installing and configuring SaltStack

The installation of SaltStack is similar across the different Linux distributions. Let’s see how the installation is done on a CentOS machine:

  • We first need to enable the SaltStack repository that contains its software. The project maintains the repository definitions through RPM files that can be installed immediately:
# yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el8.noarch.rpm
  • Once we have enabled the repository on all systems, install salt-master on the master, and salt-minion on the remote systems:
master ~# yum install salt-master
remote ~# yum install salt-minion
  • Before we start the daemons on the systems, we first update the minion configuration to point to the master. By default, the minions will attempt to connect to a host with the hostname salt, but this can be easily changed by editing /etc/salt/minion and setting the right hostname:
remote ~# vim /etc/salt/minion
master: ppubssa3ed
  • With the minion configured, we can now launch the SaltStack master (salt-master) and minion (salt-minion) daemons:
master ~# systemctl start salt-master
remote ~# systemctl start salt-minion
  • The minion will connect to the master and present its public key. To list the agents currently connected, use salt-key -L:
master ~# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
rem1.internal.genfic.local
Rejected Keys:

We need to accept the keys for the remote machines:

master ~# salt-key -a rem1.internal.genfic.local
The following keys are going to be accepted:
Unaccepted Keys:
rem1.internal.genfic.local
Proceed? [n/Y] y
Key for minion rem1.internal.genfic.local accepted.
  • Once we have accepted the key, the master will know and control the minion. Let’s see whether we can properly interact with the remote system:
master ~# salt '*' service.get_all

This command will list all system services on the minion.

The salt command is the main command used to query and interact with the remote minions from the master. If the last command successfully returns the list of system services, then SaltStack is correctly configured and ready to manage the remote systems.
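Beyond service.get_all, a couple of other salt invocations are useful as a first sanity check (output abbreviated; the hostname follows the example above):

```
master ~# salt '*' test.ping
rem1.internal.genfic.local:
    True
master ~# salt 'rem1*' cmd.run 'getenforce'
```

The first argument is a target pattern, so glob expressions such as 'rem1*' let us address a subset of minions.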

Creating and testing our SELinux state with SaltStack

Let’s create our SELinux state called packt_selinux, and have it applied to the remote minion:

  • We first need to create the top file. This file is the master file for SaltStack, from which the entire environment is configured:
master ~# mkdir /srv/salt
master ~# vim /srv/salt/top.sls
base:
  '*':
    - packt_selinux
  • Next, we create the state definition for packt_selinux:
master ~# mkdir /srv/salt/packt_selinux
master ~# vim /srv/salt/packt_selinux/init.sls
/usr/share/selinux/custom/test.cil:
  file.managed:
    - source: salt://packt_selinux/test.cil
    - mode: 644
    - user: root
    - group: root
    - makedirs: True

The init.sls file is the main state file for the packt_selinux state. When SaltStack reads the top.sls file, it sees a reference to the packt_selinux state and then looks for the init.sls file inside that state’s directory.

  • Place the SELinux test.cil module, as defined earlier on in this chapter, inside /srv/salt/packt_selinux as we refer to it in the state definition. Once placed, we can apply this state to the environment:
master ~# salt '*' state.apply

The state.apply subcommand of the salt command applies the state across the environment. Each time we modify our state definition, this command can be used to push the update to the minions immediately. Minions can also apply their state periodically on their own; these scheduled state runs are configured on the agents inside /etc/salt/minion.
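Periodic state runs go through the minion’s scheduler. A minimal sketch of the relevant /etc/salt/minion settings (the job name periodic_state is our own choice):

```yaml
# /etc/salt/minion (fragment): run state.apply every 60 minutes
schedule:
  periodic_state:
    function: state.apply
    minutes: 60
```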

Assigning SELinux contexts to filesystem resources with SaltStack

At the time of writing, support for setting SELinux contexts on file resources has not yet reached the stable versions of SaltStack. SaltStack, however, can run arbitrary commands, guarded by a test that must succeed (or fail) first.

Update the init.sls file and add the following code to it:

{%- set path = '/usr/share/selinux/custom/test.cil' %}
{%- set context = 'system_u:object_r:usr_t:s0' %}
set {{ path }} context:
  cmd.run:
    - name: chcon {{ context }} {{ path }}
    - unless: test "$(stat -c %C {{ path }})" = "{{ context }}"

In this code snippet, we declare two variables (path and context) so that we do not need to repeat the path and context multiple times, and then use these variables in a cmd.run call.

The cmd.run approach allows us to easily create custom SELinux support using the commands we’ve seen earlier on in this book. The unless check contains the test to see whether we need to execute the command or not, allowing us to create idempotent state definitions.
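To see why the unless guard makes the state idempotent, here is the comparison it performs, stripped down to plain shell. The observed value is stubbed with a fixed string for illustration; the real state substitutes the output of stat -c %C for it:

```shell
# exit status 0 from the 'unless' test means "the desired state already
# holds", so Salt skips the chcon command entirely
desired='system_u:object_r:usr_t:s0'
observed='system_u:object_r:usr_t:s0'   # real state: $(stat -c %C $path)

if test "$observed" = "$desired"; then
  echo "skip"
else
  echo "run chcon"
fi
```

Because the guard re-checks the live context on every run, applying the state repeatedly only executes chcon when the context has actually drifted.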

Loading custom SELinux policies with SaltStack

Let’s load our custom SELinux module on the remote systems. SaltStack has support for loading SELinux modules through the selinux.module state:

load test.cil:
  selinux.module:
    - name: test
    - source: /usr/share/selinux/custom/test.cil
    - install: True
    - unless: "semodule -l | grep -q ^test$"

As in the previous section, we add an unless statement, as otherwise SaltStack would reload the SELinux module every time the state is applied.
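When the module file itself is deployed through the file.managed state shown earlier, a require relationship ensures the file is in place before semodule runs. A hedged sketch combining both:

```yaml
# fragment of init.sls: load the module only after the file state
# for the .cil source has been applied successfully
load test.cil:
  selinux.module:
    - name: test
    - source: /usr/share/selinux/custom/test.cil
    - install: True
    - unless: "semodule -l | grep -q ^test$"
    - require:
      - file: /usr/share/selinux/custom/test.cil
```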

Using SaltStack’s out-of-the-box SELinux support

SaltStack’s native SELinux support is gradually expanding but still has much room for improvement:

  • With selinux.boolean, the SELinux boolean values can be set on the target machines:
httpd_builtin_scripting:
  selinux.boolean:
    - value: True
  • The file contexts, as managed with semanage fcontext, can be defined using the selinux.fcontext_policy_present state:
"/srv/web(/.*)?":
  selinux.fcontext_policy_present:
    - sel_type: httpd_sys_content_t
  • To remove the definition again, use the selinux.fcontext_policy_absent state.
  • With selinux.mode, we can put the system in enforcing or permissive mode:
enforcing:
  selinux.mode
  • Port mappings are handled using the selinux.port_policy_present state:
tcp/10122:
  selinux.port_policy_present:
    - sel_type: ssh_port_t
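Pulling these built-in states together, a single reusable state file could cover all three settings at once (the file name webserver_selinux.sls is our own; the IDs and types match the examples above):

```yaml
# webserver_selinux.sls (hypothetical file): combine the built-in
# SELinux states into one modular, reusable definition
httpd_builtin_scripting:
  selinux.boolean:
    - value: True

"/srv/web(/.*)?":
  selinux.fcontext_policy_present:
    - sel_type: httpd_sys_content_t

tcp/10122:
  selinux.port_policy_present:
    - sel_type: ssh_port_t
```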

With the cmd.run approach mentioned earlier, we can apply SELinux configuration updates to systems in a repeatable fashion even for settings SaltStack does not yet support natively.
