1. Workshop Introduction

1.1. Presenters/Lab Developers

Matej Tyc, Software Engineer and Tech Lead - Security Compliance in Red Hat® Enterprise Linux® (RHEL) Security, Red Hat®

Marek Haicman, Senior Quality Engineer and Product Owner - Security Compliance in RHEL Security, Red Hat®

Lucy Kerner, Senior Principal Security Global Technical Evangelist and Strategist, Red Hat®

Gabriel Gaspar Becker, Software Engineer - Security Compliance in RHEL Security, Red Hat®

1.2. Additional Lab Developers

Jan Cerny, Software Engineer - Security Compliance in RHEL Security, Red Hat®

Watson Sato, Software Engineer - Security Compliance in RHEL Security, Red Hat®

Matúš Marhefka, Quality Engineer - Security Compliance in RHEL Security, Red Hat®

Vojtěch Polašek, Software Engineer - Security Compliance in RHEL Security, Red Hat®

1.3. Overview and Prerequisites

This lab introduces you to the ComplianceAsCode project, a comprehensive tool that creates content for automated security tools. The project contains over 1,000 rules—elements of security policies. Rules have descriptions, justifications, and references to existing security standards. To varying degrees, they also have Open Vulnerability and Assessment Language (OVAL) checks, bash remediations, and Red Hat® Ansible® Automation content.

ComplianceAsCode enables automated evaluation and fast, efficient remediation against security controls, whether for compliance with regulatory or custom profiles or for automated configuration compliance. It allows you to produce a tailor-made security policy for your company with minimal effort, while the OpenSCAP ecosystem handles the scanning and supports problem resolution. Specifically, OpenSCAP is a National Institute of Standards and Technology (NIST) certified scanner designed to perform configuration and vulnerability scans on a system, validate security compliance content, generate reports and guides based on these scans and evaluations, and automatically remediate systems that have been found in a non-compliant state.
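To give a concrete flavor up front, these are the kinds of oscap commands you run later in this lab (shown here only for orientation; report.html is an arbitrary output file name, and the exact datastream paths and profiles are introduced in the exercises):

oscap info ssg-rhel8-ds.xml
sudo oscap xccdf eval --profile ospp --report report.html ssg-rhel8-ds.xml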

Red Hat® Enterprise Linux® provides tools that allow for fully automated compliance audits. These tools are based on ComplianceAsCode and the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies.

This lab is geared toward system administrators, cloud administrators and operators, architects, and others working on infrastructure operations management who are interested in learning how to automate security compliance using Red Hat® provided tooling for compliance against both industry standard and custom policies.

The prerequisites for this lab include basic Linux skills gained from a Red Hat® Certified System Administrator (RHCSA®) certification or equivalent system administration skills.

1.4. What Attendees Will Learn In This Lab:

  • How to use the OpenSCAP scanner to scan systems and perform security fixes as needed.

  • How to navigate among existing rules and learn how to modify them and take advantage of parameterization.

  • How to create new security profiles and populate them with existing rules.

  • How to create new rules from scratch and add them to security profiles.

  • How to write OVAL checks with minimal effort and ensure correctness.

  • How to create Ansible Automation content for remediations of systems.

1.5. Lab Environment

Your entire lab environment is hosted online and includes Red Hat® Enterprise Linux® and Red Hat® Ansible® Automation.

2. Setup Steps

2.1. Using the Terminal to Access the Remote Shell

  1. To connect to your environment, first execute the following SSH command in a terminal:

    ssh -o "ServerAliveInterval 30" lab-user@<IP_ADDRESS>
Tip
Use Ctrl+Shift+V to paste in the terminal.
  2. Answer yes to accept the server's identity if asked, and then input the following password:

    <PASSWORD>
  3. If everything works correctly, you end up in the lab's system shell. You can confirm this by listing the directory with lab exercises:

    [... ~]$ cd
    [... ~]$ ls labs
    lab1_introduction  lab2_openscap  lab3_profiles  lab4_ansible  lab5_oval

Congratulations, now you are in your text console.

2.2. Accessing the Graphical User Interface of your dedicated environment

  1. This is a Red Hat® Enterprise Linux® 8 system with GUI. It is the machine that you will use throughout all of the exercises in this lab. To access the Graphical User Interface (GUI) you need either a Virtual Network Computing (VNC) client or a Remote Desktop Protocol (RDP) client installed on your system. We recommend using a VNC client since it is faster.

2.2.1. Connecting to the GUI through a VNC Client

  1. We recommend installing TigerVNC. See the installation instructions here: Tiger VNC, or run one of the following:

    1. RHEL:

      yum install tigervnc
    2. Fedora:

      dnf install tigervnc
    3. Ubuntu:

      apt-get install tigervnc-viewer
    4. macOS:

  2. After you install tigervnc, you can run the following commands on a terminal:

    1. First, open an SSH connection using port forwarding. This forwards port 5901 on the remote machine to port 5901 on your localhost:

      ssh -N -L 5901:localhost:5901 lab-user@<IP_ADDRESS>
    2. Answer yes to accept the server's identity if asked, and then input the following password. Note that the terminal appears to hang because of the port forwarding; this is expected. At the end of the workshop, you can terminate the connection by pressing Ctrl+C:

      <PASSWORD>
    3. Open the TigerVNC application (it is called either tigervnc or vncviewer) and type the following in the VNC Server text input:

      localhost:1
    4. Click Connect, answer yes to accept the server's identity if asked, and input the following password in the pop-up window:

      <PASSWORD>

If an alert appears stating that the connection isn’t secure, disregard that alert. Although VNC data is unencrypted by default, you’re accessing the VNC server using an encrypted SSH tunnel.

Congratulations, you are in your graphical console using a VNC connection.

2.3. Various Tips

This section contains various tips that may be useful to keep in mind as you are doing the lab exercises.

2.3.1. Command Listings

Shell session listings obey the following conventions:

[... ~]$ pwd
/home/lab-user
[... ~]$ cd labs
[... labs]$ ls
lab1_introduction  lab2_openscap  lab3_profiles  lab4_ansible  lab5_oval
[... labs]$ cat /etc/passwd
...
lab-user:x:1000:1000:GTPE Student:/home/lab-user:/bin/bash
  • Commands such as pwd and cat /etc/passwd in this example are prefixed by [..., followed by the respective directory name and ]$. For reference, in the actual terminal, commands are also prefixed by the current username and hostname—for example, [lab-user@<hostname> ~]$.

  • Lines that follow commands and are not commands themselves represent the last command’s output. In the example above, the output of the ls command in the labs directory is a list of directories with lab exercises.

  • Ellipses may be used to indicate multiple output lines that have been omitted because they are of no interest. In the example above, most of the output of the cat /etc/passwd command has been replaced by an ellipsis, and only the line containing the lab-user entry is shown.

2.3.2. Copy and Paste Conventions

Normally, when you select text you want to copy in the document, you press Ctrl+C to copy it to the system clipboard, and you paste it from the clipboard to the editor using Ctrl+V.

Keep in mind that when you paste to the terminal console or terminal editor, you have to use Ctrl+Shift+V instead of Ctrl+V. The same applies when copying from the Terminal window—​you have to use Ctrl+Shift+C after selecting the text, not just Ctrl+C.

2.3.3. Browser Searches

When you search for an occurrence of text in the Firefox browser, you have the following options:

  • Pressing Ctrl+F, which brings up the search window.

  • Clicking the "hamburger menu" at the top right corner, and clicking the Find in This Page entry. This is the same as the previous option, but it is useful if you have problems with the keyboard shortcut.


  • If the browser has the Find in Page extension installed, there is a blue icon close to the "hamburger menu" at the top right corner of the browser. You can click it and start typing the text to search for. The extension displays previews of the web page next to occurrences of the expression.


2.4. Read everything!

This lab has been designed for you to learn how things work from top to bottom. This means there are lots of descriptions and reading, not just commands for you to copy and paste! If you just copy and paste all the commands you can be done in 30 minutes…​ but you won’t learn anything!

You have plenty of time to complete the lab, so take it slow and read everything. If you get stuck, do not be afraid to ask for help at any time, but the answer is probably in the lab documentation.

3. Say Hello to ComplianceAsCode

3.1. Introduction

In this lab, you will become familiar with the ComplianceAsCode project. The purpose of this project is to help content authors create security policy content for various platforms. The ComplianceAsCode project enables content authors to efficiently develop and share security content.

Using the powerful build system, you can generate output in various formats such as Ansible® Playbooks or SCAP datastreams that you can use to automate security auditing and hardening. The project contains many useful rules and checks that form various security policies and enables content authors to easily add new rules and checks.

You work with the project source repository at https://github.com/ComplianceAsCode/content.

In Red Hat® Enterprise Linux® (RHEL), the SCAP content generated from ComplianceAsCode data is shipped as the scap-security-guide RPM package.

Goals
  • Learn about the ComplianceAsCode project to understand what is where and what you can use the project for.

  • Learn how to build the content from the source and go through what gets built.

  • Understand how to find the source of a particular part of the built artifact.

  • Learn how to parameterize rules that use variables.

  • Learn where to find additional rule content, such as checks and remediations.

Preconfigured Lab Environment
  • The ComplianceAsCode repository is already cloned to each of the /home/lab-user/labs/* directories. For example, /home/lab-user/labs/lab1_introduction is a clone of the ComplianceAsCode project repository.

  • The following required dependencies for the ComplianceAsCode content build are already installed using yum install:

    • Generic build utilities: cmake and make

    • Utilities for generating SCAP content: openscap-scanner

    • Python dependencies for putting content together: python3-pyyaml and python3-jinja2

Important
Content used in this lab has been altered to increase its educational value, and is therefore different from the content in the ComplianceAsCode upstream repository or the content in the scap-security-guide package shipped in Red Hat® products.

3.2. Hands-on Lab

The ComplianceAsCode project consists of human-readable files that are compiled into standard-compliant files that are difficult to read and edit directly.

For your convenience, the environment is already set up, so the content is built and ready to be used. No worries, though—​you get to rebuild it later in the exercise.

To start the hands-on section, take the following steps:

  1. Log in to the VM using the text console if you have not done so already.

  2. Go to the text console (Terminal window) and navigate to /home/lab-user/labs/lab1_introduction:

    [... ~]$ cd /home/lab-user/labs/lab1_introduction
    [... lab1_introduction v0.1.60|+4]$

3.2.1. Viewing the HTML Guides for the ComplianceAsCode Project

The ComplianceAsCode project provides HTML guides that are a great resource for those interested in the rules that make up a policy. The HTML guides are located in the build/guides subdirectory of each lab exercise directory. Therefore, the full path of the directory for this lab exercise is:

/home/lab-user/labs/lab1_introduction/build/guides/

In the ComplianceAsCode project, policies are referred to as security profiles. The HTML guide filenames have a ssg-<product>-guide-<profile>.html format, so the HTML guide for the RHEL 8 Protection Profile for General Purpose Operating Systems (OSPP profile) is ssg-rhel8-guide-ospp.html.
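If you prefer the terminal, you can also list the built guides to see this naming pattern in action (output abridged; the exact set of files depends on the products and profiles that have been built):

[... lab1_introduction v0.1.60|+4]$ ls build/guides/
...
ssg-rhel8-guide-ospp.html
ssg-rhel8-guide-pci-dss.html
...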

  1. On the remote desktop, you open the guide in a web browser. Click Activities at the top left of your desktop and click the "file cabinet" icon to open the file explorer.

  2. After the window appears, click the Home icon in the top left portion of the file explorer window.

  3. Then, navigate to the location of the exercise by double-clicking the labs folder, followed by double-clicking the lab1_introduction, build, and guides folders.

  4. As a last step, double-click the ssg-rhel8-guide-ospp.html file to open the HTML guide for the RHEL 8 OSPP profile.

    1. Rules are organized in a system of hierarchical groups. Take a look through this HTML guide to see the various rules of the RHEL 8 OSPP profile.

      Figure 1. HTML guide showing all of the rules of the RHEL 8 Protection Profile for General Purpose Operating Systems (OSPP) profile

3.2.2. Updating a Rule Description to Find the Source of a Specific Rule

You will now take a closer look at a specific rule in the HTML guide of the RHEL 8 OSPP profile. For example, take a closer look at the Set Interactive Session Timeout rule entry.

  1. In the HTML guide of the RHEL 8 OSPP profile that you opened in Firefox, press Ctrl+F and search for session timeout.

    Figure 2. The Set Interactive Session Timeout rule in the RHEL 8 OSPP profile HTML guide
  2. Review the description just below the Set Interactive Session Timeout rule:

    Setting the TMOUT option in /etc/profile ensures that Setting the TMOUT option in /etc/profile ensures that all user sessions will terminate based on inactivity. The TMOUT setting in /etc/profile should read as follows:
    
    TMOUT=600

    Note that the leading text Setting the TMOUT option in /etc/profile ensures that incorrectly appears twice in this rule's description. This was done on purpose for you to fix, so you can understand how rule definitions are created and updated.

  3. Locate this duplicated rule-definition text.

    Rule definitions for Linux systems are under the linux_os/guide directory of the ComplianceAsCode project. Remember that the ComplianceAsCode project was already cloned to all of the /home/lab-user/labs/* directories. So, for example, /home/lab-user/labs/lab1_introduction is a clone of the ComplianceAsCode project repository. Because there are about 1,000 rules, it is better to search all of the rules for the text, rather than trying to find a particular rule in the directory hierarchy by browsing it.

    Rule definitions are written as YAML files, which are particularly suited for storing key-value data. Each rule is defined by its rule.yml file, and the name of the parent directory is the rule's ID. The ID of the rule in question is accounts_tmout. Given that, you can search for the directory.

  4. Make sure you are in the /home/lab-user/labs/lab1_introduction directory, then execute the following find command. This command searches for a file or directory with the exact name accounts_tmout in the directory subtree below the linux_os directory. Expect to see the following output after typing the find command:

    [... ~]$ cd /home/lab-user/labs/lab1_introduction
    [... lab1_introduction v0.1.60|+4]$ find linux_os -name accounts_tmout
    linux_os/guide/system/accounts/accounts-session/accounts_tmout

    Note that the linux_os/guide/system/accounts/accounts-session/accounts_tmout directory was reported as the result, and the rule is defined in the rule.yml file in that directory.

  5. Open the rule.yml file so you can remove the duplicate text that you saw earlier: Setting the TMOUT option in /etc/profile ensures that:

    [... ~]$ cd /home/lab-user/labs/lab1_introduction
    [... lab1_introduction v0.1.60|+4]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/rule.yml
  6. Luckily, the rule’s description is right at the beginning of the rule.yml file. Remove the duplicate occurrence of Setting the <tt>TMOUT</tt> option in <tt>/etc/profile</tt> ensures that.

  7. Press Ctrl+X to bring up the "save and exit" option, and confirm that you want to save the changes and exit by entering y.

  8. Recompile the content to check whether your fix worked.

    The ComplianceAsCode/content project uses the CMake build system. The build itself is based on Python, the oscap tool, and XSLT transformations.

    1. Make sure that you are in the /home/lab-user/labs/lab1_introduction directory in the Terminal window of your laptop.

    2. From this directory, run ./build_product rhel8 to compile content for Red Hat® Enterprise Linux® 8:

      [... lab1_introduction v0.1.60|+4]$ ./build_product rhel8

      It is also possible to build content for other products. A product can be an operating system, such as RHEL 8, RHEL 7, or Fedora, or an application, such as Firefox or Java™.

      In general, you can run ./build_product <product> to build only the content for a product you are interested in. The <product> is the lowercase form of the product, so you run ./build_product rhel8 to build content for RHEL 8, ./build_product fedora to build content for Fedora, and so on.

      Figure 3. Completed build of security content for RHEL 8 in the Terminal window
  9. Go back to the HTML guide of the RHEL 8 OSPP profile that you opened earlier, and refresh your web browser.

  10. Review the fix. Expect to now see the fixed description, without the duplicate Setting the TMOUT option in /etc/profile ensures that text, if you scroll down to the Set Interactive Session Timeout rule.

3.2.3. Customizing a Parameterized Rule

In this lab exercise, you will learn about parameterized rules. Parameterization can be used to set timeout durations, password length, umask, and other settings. You will learn about parameterized rules by:

  • Observing where the value comes from

  • Changing the parameterized rule to see how it is applied

  • Observing what happens when the parameterized variable is omitted

  1. Customizing a parameterized rule such as accounts_tmout is very easy, as the rule does not have the timeout duration hard-coded—it is parameterized by a variable. As the description for the Set Interactive Session Timeout rule indicates, the rule uses the var_accounts_tmout variable. This is defined in the var_accounts_tmout.var file. Just as you did in the previous step, you can search for the variable definition:

    [... lab1_introduction v0.1.60|+4]$ find linux_os -name var_accounts_tmout.var
    linux_os/guide/system/accounts/accounts-session/var_accounts_tmout.var
    
    [... lab1_introduction v0.1.60|+4]$ cat linux_os/guide/system/accounts/accounts-session/var_accounts_tmout.var
    ...
    options:
        30_min: 1800
        10_min: 600
        15_min: 900
        5_min: 300
        default: 600
    ...

    Though the var_accounts_tmout.var file contains the variable description—which is helpful—the number 600 alone does not tell you its unit. However, the contents of the file indicate that it corresponds to 10 minutes, so the value is expressed in seconds.

  2. The rule is parameterized per profile. This is because there can be multiple profiles in one datastream file, one rule can exist in multiple profiles, and it can be parameterized differently in different profiles.

    To see how the rule is connected to its variable, you have to review the respective profile definition, products/rhel8/profiles/ospp.profile. Open it in the editor and search for accounts_tmout:

    [... lab1_introduction v0.1.60|+4]$ nano products/rhel8/profiles/ospp.profile
    1. In the editor, press F6 to search for accounts_tmout.

    2. Then press Alt+W to jump to the next occurrence.

          ...
          ### FMT_MOF_EXT.1 / AC-11(a)
          ### Set Screen Lock Timeout Period to 10 Minutes or Less
          - accounts_tmout
          - var_accounts_tmout=10_min
          ...
  3. Modify the var_accounts_tmout variable to 30_min.

    1. Press Ctrl+X, then enter y to save and exit.

    2. Rebuild the content:

      [... lab1_introduction v0.1.60|+4]$ ./build_product rhel8

      After the build finishes, refresh the HTML guide either by reloading it in the browser, or by reopening build/guides/ssg-rhel8-guide-ospp.html. Expect the variable value to be updated to 1800.

      Figure 4. The Firefox Refresh Page button
  4. What happens if you omit the variable definition?

    1. Open the OSPP profile file in an editor.

      [... lab1_introduction v0.1.60|+4]$ nano products/rhel8/profiles/ospp.profile
    2. Again, use F6 in connection with Alt+W in the editor to search for accounts_tmout.

    3. Comment out the line containing - var_accounts_tmout=30_min by inserting # just before the leading dash.

    4. After you are done, press Ctrl+X, then enter y to save and exit.

    5. Rebuild the content again:

      [... lab1_introduction v0.1.60|+4]$ ./build_product rhel8
    6. After the build finishes, re-examine the variable definition—maybe you can predict the result without looking! Open the variable definition in the editor by executing the following command:

      [... lab1_introduction v0.1.60|+4]$ nano linux_os/guide/system/accounts/accounts-session/var_accounts_tmout.var

      In this YAML file, you have the options: key that defines mappings between the supplied and effective values. As the default: 600 line indicates, if you do not specify the timeout duration in a profile, it is going to be 600 seconds (10 minutes).

    7. After you are finished looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

    8. Time to review the HTML guide—when you refresh or reopen build/guides/ssg-rhel8-guide-ospp.html, you can clearly see that the rule's timeout indeed equals 600.

Note
The set of values a variable can have is discrete—​all values have to be defined in the variable file. Therefore, it is possible to specify var_accounts_tmout=20_min in the profile only after adding 20_min: 1200 to the options: key of the variable definition.
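For instance, a minimal sketch of the modified options: mapping in var_accounts_tmout.var could look like this (the 20_min line is the hypothetical addition):

options:
    30_min: 1800
    20_min: 1200
    10_min: 600
    15_min: 900
    5_min: 300
    default: 600

After adding the new option and rebuilding, a profile could then select var_accounts_tmout=20_min.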

3.3. Associated Content

A rule needs more than a description to be of any use. A rule should also make it possible to:

  • check whether the system complies with the rule definition, and

  • bring a noncompliant system into a compliant state.

For these reasons, a rule should contain a check and possibly also remediations. The additional content is placed in subdirectories of the rule, so explore your accounts_tmout rule.

You can browse the associated content if you list the contents of the directory. In the terminal, run the following commands:

[... lab1_introduction v0.1.60|+4]$ cd linux_os/guide/system/accounts/accounts-session/accounts_tmout
[... accounts_tmout v0.1.60|+4]$ ls
ansible  bash  oval  rule.yml  tests

The following sections describe the currently supported associated content types.

3.3.1. Macros

You have probably noticed strange snippets in the project's code, such as {{{ xccdf_value("var_accounts_tmout") }}} in the accounts_tmout rule.yml. Those are jinja2 macros with one minor syntax difference—there is an additional layer of curly brackets compared to regular jinja2 macros. That way, Ansible content that uses regular jinja2 does not interfere with the build system.

Macros allow content authors to avoid writing complex directives such as variable substitution in rules or remediations, and they also prevent copy-pasting of code throughout the content. Rules, remediations, checks, and other definition files are processed by jinja2, so you can define your own local macros there, or you can use the shared macros that are available. Macros are defined in various .jinja files, and they are documented online on the ComplianceAsCode readthedocs website.
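For a flavor of what a shared macro can look like, here is a simplified sketch following the project's delimiter conventions; the macro name and its exact signature are illustrative and may differ from the macros actually shipped upstream:

{{% macro describe_package_install(package) %}}
The {{{ package }}} package can be installed with the following command:
<pre>$ sudo yum install {{{ package }}}</pre>
{{% endmacro %}}

A rule description could then simply contain {{{ describe_package_install(package="hexchat") }}}, and the build system would expand it to the full text.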

Usage of macros in the content is shown in subsequent chapters.

3.3.2. Checks

Checks can be found under the oval directory. They are written in a standardized, declarative, XML-based language called OVAL (Open Vulnerability and Assessment Language). Writing checks in this language is considered cumbersome, but the ComplianceAsCode project helps content authors write them more efficiently.

You do not need to get into the details of OVAL now—just note that the OVAL content can be found in a rule's oval subdirectory. OVAL checks are described in Lab Exercise 5. If you are familiar with the language, you can take this opportunity to examine the oval subdirectory of the accounts_tmout rule's directory, which contains the shared.xml file. The shared.xml file features shorthand OVAL, which is much simpler than the full version of OVAL that you would otherwise have to write.

3.3.3. Remediations

If the system is not set up according to the rule description, the scanner reports that the rule has failed, and the system administrator is supposed to fix it. The ComplianceAsCode content provides users with snippets that they can run to make the system compliant again or at least to provide administrators with hints about what they need to do.

Remediations are expected to work on the clean installation configuration—​if the administrator has made some changes in the meantime, remediations are not guaranteed to work.

The majority of rules present in profiles come with a Bash remediation, and a large number of them also have an Ansible remediation. Anaconda remediations are used to guide the user during system installation. Remediations in the form of Puppet scripts are also supported.

Remediations can be found under bash, ansible, anaconda, and puppet directories.

For example, in the accounts_tmout rule there is a remediation in the form of a Bash script located in the bash subdirectory of the rule's directory. Run ls bash to display the contents of the bash directory—there is a shared.sh file in it. The shared basename has a special meaning—it indicates that the remediation can be used with any product. If the remediation is named rhel8.sh, it is a RHEL 8-only remediation and cannot be used to remediate RHEL 7 systems. This naming convention applies to all types of additional content.
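For illustration, a rule that shipped a product-agnostic Ansible remediation but a RHEL 8-specific Bash remediation could have a directory layout like this (a hypothetical layout, not the actual contents of any particular rule):

rule.yml
ansible/shared.yml
bash/rhel8.sh
oval/shared.xml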

Unlike checks, you can review remediations in the guide—​there is a clickable (show) link to do so. Bring back the browser window with the guide open, and see for yourself.

Figure 5. Bash remediation snippet in the HTML guide
  1. Now you improve the remediation script by adding a comment stating that the numerical value is "number of seconds." Edit the remediation file:

    [... accounts_tmout v0.1.60|+4]$ cd /home/lab-user/labs/lab1_introduction
    [... lab1_introduction v0.1.60|+4]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/bash/shared.sh

    You can see that there are some extra lines, but the script corresponds to the content displayed in the HTML guide.

  2. The {{{ bash_instantiate_variables("var_accounts_tmout") }}} line is the one that gets transformed into the variable assignment statement. Put the explanatory comment just above it:

    # platform = multi_platform_all
    
    # The timeout delay is defined by number of seconds
    {{{ bash_instantiate_variables("var_accounts_tmout") }}}
    
    # if 0, no occurence of tmout found, if 1, occurence found
    tmout_found=0
    
    for f in /etc/profile /etc/profile.d/*.sh; do
        if grep --silent '^\s*TMOUT' $f; then
            sed -i -E "s/^(\s*)TMOUT\s*=\s*(\w|\$)*(.*)$/\1TMOUT=$var_accounts_tmout\3/g" $f
            tmout_found=1
        fi
    done
    
    if [ $tmout_found -eq 0 ]; then
            echo -e "\n# Set TMOUT to $var_accounts_tmout per security requirements" >> /etc/profile.d/tmout.sh
            echo "TMOUT=$var_accounts_tmout" >> /etc/profile.d/tmout.sh
    fi
  3. After you are done, press Ctrl+X, then enter y to save and exit.

  4. Rebuild the guide:

    [... lab1_introduction v0.1.60|+4]$ ./build_product rhel8
  5. Once the build is done, refresh the guide. Expect the remediation to contain the newly added comment.

Congratulations, by completing the lab exercise, you became familiar with a comprehensive content creation tool and one of the largest open source compliance content repositories available.

4. Automated Security Scanning Using ComplianceAsCode

4.1. Introduction

As you already know from Lab Exercise 1, the ComplianceAsCode project provides security content that can be used for automated security scanning of your system.

Red Hat® Enterprise Linux® 8 (RHEL 8) contains OpenSCAP Scanner, which is a security scanner that works with ComplianceAsCode content. The content that you build in ComplianceAsCode can be simply passed to OpenSCAP Scanner and the scan can be started right away.

OpenSCAP Scanner allows you to perform security compliance checks in a fully automated way. It is possible to run the scan using either the oscap command line tool or the SCAP Workbench graphical application. Several integrations for continuous scanning also exist, but in this lab exercise, you focus on one-off scanning.

Goals
  • Learn the basics of automated security scanning in Red Hat® Enterprise Linux® 8

  • Learn how to use ComplianceAsCode for automated security scanning

  • Learn how to do lightweight customization of a predefined security policy using a GUI tool

  • Explore the possibilities for remediations of failing rules

Preconfigured Lab Environment
  • The ComplianceAsCode repository was cloned to the lab2_openscap directory.

  • The dependencies required for the ComplianceAsCode content build were installed using yum install:

    • Generic build utilities: cmake and make

    • Utilities for generating SCAP content: openscap-scanner

    • Python dependencies for putting content together: python3-pyyaml and python3-jinja2

  • The following OpenSCAP ecosystem packages were installed using yum install:

    • The scanner: openscap-scanner

    • Utilities for scanning remote systems: openscap-utils

    • The GUI front end and datastream tool: scap-workbench

Important
Content used in this lab has been altered to increase its educational value, and is therefore different from the content in the ComplianceAsCode upstream repository or the content in the scap-security-guide package shipped in Red Hat® products.

4.2. Introduction to OpenSCAP Command Line Tool

OpenSCAP provides a command line tool called oscap that can be used for automated security scanning.

  1. You can verify a successful installation of oscap by running the following commands:

    [... lab1_introduction v0.1.60|+2]$ cd /home/lab-user/labs/lab2_openscap
    [... lab2_openscap v0.1.60|+2]$ oscap --version
    
    OpenSCAP command line tool (oscap) 1.3.1
    Copyright 2009--2018 Red Hat Inc., Durham, North Carolina.
    
    ==== Supported specifications ====
    XCCDF Version: 1.2
    OVAL Version: 5.11.1
    CPE Version: 2.3
    CVSS Version: 2.0
    CVE Version: 2.0
    Asset Identification Version: 1.1
    Asset Reporting Format Version: 1.1
    CVRF Version: 1.1
    ...

    Note that this command outputs the OpenSCAP version and versions of supported standards.

4.3. Using ComplianceAsCode Content with OpenSCAP Command Line Tool

In this section, you build the security content for Red Hat® Enterprise Linux® 8 from ComplianceAsCode source code and then you use the built content with the OpenSCAP command line tool to scan your machine.

  1. The content has been built, so you can take a look at the generated files in the build directory right away:

    [... lab2_openscap v0.1.60|+2]$ cd build
    [... build v0.1.60|+2]$ ls
    ...
    ssg-rhel8-ds.xml
    ssg-rhel8-ocil.xml
    ssg-rhel8-oval.xml
    ...

    There are multiple files produced by the build. The file that is going to be used with the OpenSCAP scanner is ssg-rhel8-ds.xml. This file is called a SCAP Datastream.

  2. Check which compliance profiles are available for RHEL 8.

    [... build v0.1.60|+2]$ oscap info ssg-rhel8-ds.xml
    ...
        Profiles:
            Title: Criminal Justice Information Services (CJIS) Security Policy
                Id: xccdf_org.ssgproject.content_profile_cjis
            Title: Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171)
                Id: xccdf_org.ssgproject.content_profile_cui
            Title: Red Hat Corporate Profile for Certified Cloud Providers (RH CCP)
                Id: xccdf_org.ssgproject.content_profile_rht-ccp
    ...

    In the "Profiles:" section, you can see a list of profiles contained in the datastream. The datastream contains multiple profiles that cover different security baselines for different purposes. Each profile is identified by a profile ID.

    The built ComplianceAsCode content is available as a RHEL scap-security-guide RPM package. Unlike the upstream repository that you work with now, the package contains only content that is officially tested and supported by Red Hat®. Therefore, the scap-security-guide package in RHEL 8 contains only the OSPP and PCI-DSS profiles at the moment.

  3. Perform your first baseline testing scan with the vanilla OSPP profile.

    Note in the command below that you can skip the profile ID prefix to make the command simpler. The real ID is xccdf_org.ssgproject.content_profile_ospp.

    The scanning command has to be executed by a privileged user using sudo, so the scanner can access parts of the system that are off-limits to common users. The simplest scanner invocation can look like this:

    sudo oscap xccdf eval --profile ospp ssg-rhel8-ds.xml

    However, you also want to store the scan results so you can process them later. Therefore, you have to supply additional arguments:

    • Use --results-arf to get a machine-readable results archive that includes results of the OVAL scan

    • Use --report to get a human-readable report (this can also be generated from ARF after the scan, as you see in the next optional step)

      Now execute the following to run the scan and generate the HTML report as a side-effect:

      [... build v0.1.60|+2]$ sudo oscap xccdf eval --profile ospp --results-arf /tmp/arf.xml --report /home/lab-user/labs/lab2_openscap/lab2_report.html --oval-results ./ssg-rhel8-ds.xml
      ...
      Note

      You can also generate the HTML report later by executing these commands:

      [... build v0.1.60|+2]$ sudo rm -f /home/lab-user/labs/lab2_openscap/lab2_report.html
      [... build v0.1.60|+2]$ oscap xccdf generate report /tmp/arf.xml > /home/lab-user/labs/lab2_openscap/lab2_report.html
  4. Open the file explorer by clicking Activities and then the "file cabinet" icon. Once it opens, click the Home icon in the top left portion of the browser’s window. Click labs then lab2_openscap folders. Expect to see the lab2_report.html file there. Double-click it to open it in the browser.


    You see the compliance scan results for every security control in the OSPP security baseline profile in HTML format.


    Rules can have several types of results, but the most common ones are pass and fail, which indicate whether a particular security control has passed or failed the scan. Other results you frequently encounter are notapplicable for rules that have been skipped as not relevant to the scanned system, and notchecked for rules without an automated check.

  5. Click the rule title in the HTML report to bring up a pop-up dialog that allows you to examine why a particular rule failed or passed.

    For example, if a rule is testing file permissions on a list of files, it specifies which files failed and what their permission bits are.


4.4. Customizing Existing SCAP Security Content Using SCAP Workbench

  1. In the console view, click Activities in the top left corner of the screen, then select the green circle icon for SCAP Workbench.

  2. After Workbench starts, select Other SCAP content in the drop-down list and click Load Content. A file browser window appears.

  3. Locate ssg-rhel8-ds.xml from the /home/lab-user/labs/lab2_openscap/build directory and click Open to open the compliance content for Red Hat® Enterprise Linux® 8 that you built in the previous section.

  4. Customize the PCI-DSS Control baseline.

    1. Select this profile from the Profile drop-down list.

    2. Click Customize.


    3. In the Customize Profile pop-up window, leave the name generated by default for New Profile ID and click OK.


    4. Now you can select and deselect rules according to your organization’s needs, and change values such as minimum password length, to tailor the compliance profile.

    5. IMPORTANT: Search for verify file hash and deselect the following rules; they can take a long time to process and might cause problems on systems with limited resources:

      • Verify File Hashes with RPM

      • Verify and Correct File Permissions with RPM

  5. After you are done customizing, click OK to save the profile. You have now created a new custom profile.

  6. Run a test scan with the new custom profile you just created.

    1. Click Scan and inspect the results.

    2. When prompted for the password for GPTE Student, type <PASSWORD>. This takes a few minutes, so feel free to move on with the lab exercise and not wait for the scan to complete.


    3. Close the Diagnostics window that pops up at the end of the scan (the error messages shown there are fixed in the next minor version of Red Hat® Enterprise Linux® 8).

Tip

You can save the customization to a tailoring file by selecting File→Save Customization Only.


4.5. Security Remediations with OpenSCAP, Red Hat Ansible Automation, and Bash

Putting the machine into compliance (for example, by changing its configuration) is called remediation in the SCAP terminology. Remediation changes the configuration of the machine, and it is possible to lock yourself out or disable important workloads! As a result, it is a best practice to test the remediation changes before deploying.

You use the Terminal on your laptop for the next part—there is no need to use the graphical console.

  1. Generate an Ansible® Playbook that puts your machine into compliance.

    1. Generate a playbook from the scan results. Use the --fix-type ansible option to request an Ansible Playbook with the fixes:

      [... build v0.1.60|+2]$ oscap xccdf generate fix --fix-type ansible --result-id "" /tmp/arf.xml > playbook.yml

      You specified an empty result-id because oscap supports generating fixes from a result file that contains results from multiple scans, and the ID selects which result to use. Because your file contains only one result from a single scan, you do not have to specify the result ID explicitly.

  2. Check the output using a text editor:

    [... build v0.1.60|+2]$ nano playbook.yml
  3. When finished, exit nano by pressing Ctrl+X.

  4. Generate a Bash remediation script from the scan results.

    1. Run the following command, using --fix-type bash to request a bash script with the fixes:

      [... build v0.1.60|+2]$ oscap xccdf generate fix --fix-type bash --result-id "" /tmp/arf.xml > bash-fix.sh
  5. Check the output using a text editor:

    [... build v0.1.60|+2]$ nano bash-fix.sh
  6. When finished, exit nano by pressing Ctrl+X.

The Ansible Playbook can be used to configure a system to meet a compliant state. Using Ansible Playbooks is discussed in Lab Exercise 4. The Bash remediation script can also be used to change the configuration of the system. It is recommended that you review the contents of these scripts and test them in a testing environment first, as they have the potential to make unexpected or harmful changes.
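If you do decide to try them on a disposable test system, the invocations could look roughly like the following sketch, assuming you run the generated remediations locally on the machine you just scanned:

[... build v0.1.60|+2]$ ansible-playbook -i "localhost," -c local -b playbook.yml
[... build v0.1.60|+2]$ sudo bash ./bash-fix.sh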

5. Create Your Own Security Policy From Scratch

5.1. Introduction

Imagine that your company has approved an internal security policy that enforces certain configurations for laptops being used outside the company site. Your task is to implement an automated way of checking the laptop configuration. In this lab exercise, you learn how to solve the task using ComplianceAsCode.

Goals
  • Learn how to represent your company security policy as a security profile in ComplianceAsCode

  • Learn how to operate with basic building blocks (rules) of ComplianceAsCode

  • Learn how to choose between hundreds of existing rules and add them into a profile

  • Learn how to customize the rules for your needs by using variables

  • Learn how to create a new rule

  • Learn how to scan your system against the profile you created

Preconfigured Lab Environment
  • The ComplianceAsCode repository was cloned to the lab3_profiles directory.

  • The following dependencies required for the ComplianceAsCode content build were installed using yum install:

    • Generic build utilities: cmake and make

    • Utilities for generating SCAP content: openscap-scanner

    • The Python dependencies for putting content together: python3-pyyaml and python3-jinja2

Important
Content used in this lab has been altered to increase its educational value, and is therefore different from the content in the ComplianceAsCode upstream repository or the content in the scap-security-guide package shipped in Red Hat® products.

5.2. Creating a New Empty Profile

The basic building block of a security policy in ComplianceAsCode is a rule. The rule represents a single configuration setting—​for example, "Password length is at least 8 characters" or "Logging is enabled."

A set of rules is called a profile. A profile represents a specific security policy. In ComplianceAsCode, there are multiple profiles.

Rules and profiles are also standardized by the XCCDF standard, which is part of SCAP. However, there is also a concept in between: a Security Control. Profiles are usually based on policies published as documents that group requirements by chapters or sections, so it makes sense to introduce Security Controls that correspond to that sectioning. As a result, the profile definition does not have to reference tens or hundreds of rules individually; it can reference Security Controls that point directly to the essence of the policy the profile implements.

A Security Control has the same structure as a profile—it consists of selections and associated metadata. You can use available Security Controls as an additional means of defining profiles, alongside the rules and other selections already mentioned.
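For a rough idea of the shape, a control file (stored under the controls directory of the repository) could look something like the following sketch; the IDs, titles, and rule selections here are purely illustrative:

id: corporate_policy
title: Corporate Security Policy
controls:
    - id: S1
      title: Remote access hardening
      rules:
          - sshd_disable_root_login

A profile can then reference such controls instead of listing every rule individually; the exact selection syntax is described in the ComplianceAsCode developer documentation.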

There are profiles for different products. The term "product" means operating systems or applications—​for example, Red Hat® Enterprise Linux® 8 or Red Hat® OpenShift® Container Platform 4. The products are represented by directories in the products directory of the ComplianceAsCode repository—​for example, rhel8, fedora or ocp4 subdirectories.

Each product has a different set of profiles because some security policies are relevant only for certain operating systems or applications. The profiles are located in the profiles directory in the product directory. Each profile is represented by a simple YAML (YAML Ain't Markup Language) file, such as ospp.profile.

In this lab, you create a new “Travel” profile for Red Hat® Enterprise Linux® 8. The profile represents your company’s new security policy for laptops.

5.2.1. Navigating the Profiles Directory

  1. Go to the profiles directory for Red Hat® Enterprise Linux® 8:

    [... ~]$ cd /home/lab-user/labs/lab3_profiles
    [... lab3_profiles v0.1.60|+2]$ cd products/rhel8/profiles
    [... profiles v0.1.60|+2]$ ls
    cjis.profile  e8.profile     ospp-mls.profile  pci-dss.profile  standard.profile
    cui.profile   hipaa.profile  ospp.profile      rht-ccp.profile

    As you can see, there are already some .profile files in the profiles directory. You can get some inspiration from them.

5.2.2. Creating the New Profile

  1. Create a new travel.profile file in the profiles directory and open it in the editor:

    [... profiles v0.1.60|+2]$ nano travel.profile
    Note

    The profile is a file in YAML format. It is fine to copy and paste the content from the listing in the next step. When creating a new YAML file from scratch, the most common mistake tends to be incorrect indentation. Make sure you use spaces, not tabs. Also check that there is no trailing whitespace.

    The profile consists of four items that are required:

    1. documentation_complete: true means that your profile is not in a draft state, so the build system picks it up.

    2. title is a short profile title.

    3. description consists of a few paragraphs that describe the purpose of the profile.

    4. selections is a list of rules and variables that make up the profile. It cannot be an empty list, so for now you add the sshd_enable_strictmodes rule. You learn how to find and add other rules later in this lab exercise.

  2. Next, create the basic structure and fill in the profile title and description as specified in this listing. You can copy and paste the following text to the editor—​just keep in mind that when pasting to the console, you have to use Ctrl+Shift+V.

    documentation_complete: true
    
    title: Travel profile for corporate laptops
    
    description: This profile represents settings that are required by company security policy for employee laptops.
    
    selections:
        - sshd_enable_strictmodes
  3. When you are finished editing, press Ctrl+X, then enter y to save and exit.

5.2.3. Rebuilding and Reviewing the Content

  1. Go back to the project’s root directory.

    [... profiles v0.1.60|+2]$ cd /home/lab-user/labs/lab3_profiles
  2. Rebuild the content:

    [... lab3_profiles v0.1.60|+2]$ ./build_product rhel8
    ...

    This command rebuilds content for all of the product profiles in Red Hat® Enterprise Linux® 8, including your new “Travel” profile. The command builds the human-readable HTML guide that can be displayed in a web browser and the machine-readable SCAP files that can be consumed by OpenSCAP.

  3. Check the resulting HTML guide to see your new profile.

    1. This is the same thing you did in the first lab—​click Activities and then the "file cabinet" icon to open the file browser:

    2. Just to make sure, click the Home icon in the upper left portion of the file explorer window.

    3. Navigate to the location of the exercise by double-clicking labs, followed by double-clicking the lab3_profiles, build, and guides folders.

    4. Finally, double-click the ssg-rhel8-guide-travel.html file. A Firefox window opens and you can see the guide for your "Travel" profile, which contains just the single sshd_enable_strictmodes rule:

      Figure 6. The header of the HTML Guide generated by OpenSCAP during the build

5.3. Adding Rules to the Profile

Next, imagine that one of the requirements of your company policy is that the root user cannot log in to the machine via SSH. ComplianceAsCode already contains a rule implementing this requirement. You only need to add this rule to your “Travel” profile.

5.3.1. Finding the Relevant Rule

Rules are represented by directories in ComplianceAsCode. Each rule directory contains a file called rule.yml, which contains a rule description and metadata.

  1. In this case, you are looking to see if you have a rule.yml file in your repository that contains “SSH root login.” You can use git grep for this:

    [... lab3_profiles v0.1.60|+2]$ git grep -i "SSH root login" "*rule.yml"
    linux_os/guide/services/ssh/ssh_server/sshd_disable_root_login/rule.yml:title: 'Disable SSH Root Login'
  2. If you want, you can verify that this is the right rule by opening the rule.yml file and reading the description section.

    [... lab3_profiles v0.1.60|+2]$ nano linux_os/guide/services/ssh/ssh_server/sshd_disable_root_login/rule.yml

    It looks like this:

    documentation_complete: true
    
    
    title: 'Disable SSH Root Login'
    
    
    description: |-
        The root user should never be allowed to login to a
        system directly over a network.
        To disable root login via SSH, add or correct the following line
    [ ... snip ... ]
  3. In order to add the rule to your new "Travel" profile, you need to determine the ID of the rule you found. The rule ID is the name of the directory where the rule.yml file is located. In this case, the rule ID is sshd_disable_root_login.

5.3.2. Including the Rule in the New Profile

  1. Add the rule ID to the selections list in your "Travel" profile.

    [... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile
  2. Add sshd_disable_root_login as a new item in the selections list. The selections list is a list of rules that the profile consists of.

    Please make sure that you use spaces for indentation.

  3. After you are finished editing, press Ctrl+X, then enter y to save and exit.

    Expect your travel.profile file to look like this:

    documentation_complete: true
    
    title: Travel profile for corporate laptops
    
    description: This profile represents settings that are required by company security policy for employee laptops.
    
    selections:
        - sshd_enable_strictmodes
        - sshd_disable_root_login

5.3.3. Verifying the Result

  1. To review the result, you need to rebuild the content:

    [... lab3_profiles v0.1.60|+2]$ ./build_product rhel8

    The sshd_disable_root_login rule is included in your profile by the build system.

  2. Check the resulting HTML guide.

    1. Switch to the graphical console in the web browser on your laptop.

    2. Click Activities, and then the "file cabinet" icon to bring up the file browser. Expect to be in the labs/lab3_profiles/build/guides directory from the previous step. If that is not the case, refer to the end of the Rebuilding and Reviewing the Content section for the steps to get there.

    3. Double-click the ssg-rhel8-guide-travel.html file. A Firefox window opens and you can see your "Travel" profile, which contains two rules.

5.4. Adding Customizable Rules to the Profile and Customizing Them

Imagine that one of the requirements set in your company policy is that the user sessions must timeout if the user is inactive for more than 5 minutes.

ComplianceAsCode already contains an implementation of this requirement in the form of a rule. You now need to add this rule to your “Travel” profile.

However, the rule in ComplianceAsCode is generic—​or, in other words, customizable. It can check for an arbitrary period of user inactivity. You need to set the specific value of 5 minutes in the profile.

5.4.1. Adding Another Rule to the List

This is similar to the previous section.

  1. First, use command line tools to search for the correct rule file:

    [... lab3_profiles v0.1.60|+2]$ git grep -i "Interactive Session Timeout" "*rule.yml"
    linux_os/guide/system/accounts/accounts-session/accounts_tmout/rule.yml:title: 'Set Interactive Session Timeout'

    As you already know from the first lab exercise, the rule is located in linux_os/guide/system/accounts/accounts-session/accounts_tmout/rule.yml. It is easy to spot that the rule ID is accounts_tmout because the rule ID is the name of the directory where the rule is located.

  2. Add the rule ID to the selections list in your "Travel" profile.

    [... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile
  3. Add accounts_tmout as a new item in the selections list.

    Make sure your indentation is consistent and use spaces, not tabs. Also make sure there is no trailing whitespace.

  4. Check the rule contents to find out whether there is a variable involved:

    [... lab3_profiles v0.1.60|+2]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/rule.yml

    You can see there are two occurrences of xccdf_value("var_accounts_tmout"). This is the reference we are looking for.

  5. After you are finished looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you do not want that, so enter n.

    From the rule contents you can clearly see that it is parameterized by the var_accounts_tmout variable. Note that the var_accounts_tmout variable is used in the description instead of an exact value. In the HTML guide, you later see that var_accounts_tmout has been assigned a value. The value is also automatically substituted into OVAL checks, Ansible® Playbooks, and the remediation scripts.

5.4.2. Examining the Parameterization

  1. In order to learn more about the parameterization, find and review the variable definition file.

    [... lab3_profiles v0.1.60|+2]$ find . -name 'var_accounts_tmout*'
    linux_os/guide/system/accounts/accounts-session/var_accounts_tmout.var
    [... lab3_profiles v0.1.60|+2]$ nano linux_os/guide/system/accounts/accounts-session/var_accounts_tmout.var
  2. The variable has multiple options, which you can see in the options list:

    options:
        30_min: 1800
        10_min: 600
        15_min: 900
        5_min: 300
        default: 600

    options: is defined as a YAML dictionary that maps keys to values. In ComplianceAsCode, the YAML dictionary keys are used as selectors and the YAML dictionary values are concrete values that are used in the checks. You use the selector to choose the value in the profile. You can add a new key and value to the options dictionary if none of the values suits your needs. Later, you add a new pair—​variable name and selector—​into the profile and you use the 5_min selector to choose 300 seconds.

  3. After you are finished looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

  4. To apply the variable parameterization, the variable and the selector have to be added to the travel profile.

    [... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile

    As with the rule IDs, the variable values also belong to the selections list in the profile. However, the entry for a variable has the format variable=selector. So in this case, the format of the list entry is var_accounts_tmout=5_min.

5.4.3. Modify rule attributes

Some rule properties depend on the context of the particular security policy in question. One obvious such property is the rule severity—different policies may emphasize individual security requirements differently. The SCAP standard allows you to override rule properties in profile definitions through a mechanism referred to as "rule refinement".

The syntax for this is similar to setting of a value:

- rule_id.property=new_value

So let’s change the severity of our accounts_tmout rule to high as it is our favourite rule. Edit products/rhel8/profiles/travel.profile and add the refinement to the selections list:

...

selections:
    ...
    - accounts_tmout
    - accounts_tmout.severity=high
    ...

After you are finished editing, press Ctrl+X, then enter y to save and exit.

5.4.4. Completing the Parameterization

Make sure your travel.profile file matches the following listing:

documentation_complete: true

title: Travel profile for corporate laptops

description: This profile represents settings that are required by company security policy for employee laptops.


selections:
    - sshd_enable_strictmodes
    - sshd_disable_root_login
    - accounts_tmout
    - accounts_tmout.severity=high
    - var_accounts_tmout=5_min

Please make sure that you use spaces for indentation.

You can copy and paste the file contents into the editor; keep in mind that if you paste to the console, you have to use Ctrl+Shift+V instead of just Ctrl+V. After you are finished editing, press Ctrl+X, then enter y to save and exit.

5.4.5. Reviewing the Result

  1. To review the result, rebuild the content again:

    [... lab3_profiles v0.1.60|+2]$ ./build_product rhel8

    The accounts_tmout rule is included in your profile by the build system.

  2. Check the resulting HTML guide.

    1. The file browser already has the corresponding guide loaded, so you just need to refresh it to review the changes. Click the "Refresh" icon in the top left corner of the browser window.

    2. The Travel profile now contains three rules. Scroll down to the Set Interactive Session Timeout rule and note that 300 seconds is substituted there.

5.5. Creating a New Rule from Scratch

Imagine that one of the requirements in your corporate policy is that users have to install the Hexchat application when their laptops are used during travel outside the company site because Hexchat is the preferred way to communicate with the company IT support center.

You want to add a check to your new profile that checks if Hexchat is installed.

ComplianceAsCode does not have a rule ready for installing this application yet. That means you need to add a new rule for that.

5.5.1. Creating the rule definition file

You will now create the rule.yml file for your new rule.

  1. Find a group directory that best fits your new rule.

    The rules are located in the linux_os directory. Rules in the ComplianceAsCode project are organized into groups, which are represented by directories. It is up to you to decide which group the new rule belongs to. You can browse the directory tree to find a suitable group:

    1. You are in the linux_os/guide directory, which has intro, system, and services directories.

    2. You definitely do not want to configure a service setting, so explore system.

    3. There are more subdirectories under system, and as you want a new software package installed, it makes sense to explore the software directory.

    4. Here, you create the directory for your rule.

  2. Create a new rule directory in a group directory.

    The name of the directory is the rule ID. In this case, package_hexchat_installed is a suitable ID. You create the directory using mkdir and use the -p switch to make sure that the directory is created along with its parents if needed.

    [... lab3_profiles v0.1.60|+2]$ cd /home/lab-user/labs/lab3_profiles
    [... lab3_profiles v0.1.60|+2]$ mkdir -p linux_os/guide/system/software/package_hexchat_installed
  3. Create rule.yml in the rule directory.

    The rule.yml file stores the description of the rule; each rule needs to have one. rule.yml is a simple YAML file.

    [... lab3_profiles v0.1.60|+2]$ nano linux_os/guide/system/software/package_hexchat_installed/rule.yml
  4. Add the following content to the rule.yml file:

    Tip
    You can select the text in the laptop’s browser, copy it to the clipboard using Ctrl+C, and paste it to the nano editor using Ctrl+Shift+V.
    documentation_complete: true
    
    title: Install Hexchat Application
    
    description: As per company policy, the traveling laptops have to have the Hexchat application installed.
    
    rationale: The Hexchat application enables IRC communication with the corporate IT support centre.
    
    severity: medium
  5. When you have finished editing, press Ctrl+X, then enter y to save and exit.

    Note
    1. documentation_complete: true again indicates that the rule is picked up by the build system whenever it is applicable.

    2. title is the rule title, which is displayed on the command line and in SCAP Workbench.

    3. description is a section that describes the check.

    4. rationale needs to contain a justification for why the rule exists.

    5. severity can be either low, medium, or high.

  6. Add the rule ID to the profile selections.

    1. As described in the previous section, you need to add the ID of your new rule (package_hexchat_installed) to the selections list in your profile (travel.profile). You do it by editing the travel profile file:

      [... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile
    2. When adding the package_hexchat_installed item, please make sure that you use spaces, not tabs for indentation:

      documentation_complete: true
      
      title: Travel profile for corporate laptops
      
      description: This profile represents settings which are required by company security policy for employee laptops.
      
      selections:
          - sshd_enable_strictmodes
          - sshd_disable_root_login
          - accounts_tmout
          - accounts_tmout.severity=high
          - var_accounts_tmout=5_min
          - package_hexchat_installed
    3. When you have finished editing, press Ctrl+X, then enter y to save and exit.

5.5.2. Use templates to generate checks automatically

You have successfully defined the rule and added it to the profile. However, the rule currently has no check or remediation, which means that OpenSCAP cannot check whether the Hexchat package is installed. Writing OVAL checks is out of the scope of this chapter; it is described in a separate lab. However, in some cases you can use one of the existing templates. You can search the list of templates by keyword to find out whether one of them suits your case. In this case, one does.

Templates are a great way of simplifying the development of new rules and avoiding an unnecessarily large amount of duplicated code. There are sets of rules that perform very similar checks and can be remediated in a similar way. This applies, for example, to checks that a certain package is installed, that a certain systemd service is disabled, and so on. Using templates is recommended whenever possible to avoid code duplication and possible inconsistencies. Another benefit of templates is the ease of creating new rules. As demonstrated below, you do not have to know how to write OVAL checks or Bash remediations to create a fully working rule. The template creates them for you automatically. You only need to append a special block at the end of the particular rule.yml file.

  1. Open the list of templates in your web browser.

  2. You can quickly glance through the list of templates. Notice that every template is accompanied by a description and one or more parameters. Finally, search for the package_installed template. Notice that the template has two parameters:

    pkgname

    Name of the RPM or DEB package, for example tmux.

    evr

    Optional parameter. It can be used to check if the package is of a specific version or newer. Provide the epoch, version, and release in epoch:version-release format, for example 0:2.17-55.0.4.el7_0.3. It is used only in OVAL checks. The OVAL state uses the "greater than or equal" operation to compare the collected package version with the value provided.

  3. Open the rule.yml file for the package_hexchat_installed rule.

    [... lab3_profiles v0.1.60|+2]$ nano linux_os/guide/system/software/package_hexchat_installed/rule.yml
  4. Add the special block at the end of the file, so it looks like this:

    documentation_complete: true
    
    title: Install Hexchat Application
    
    description: As per company policy, the traveling laptops have to have the Hexchat application installed.
    
    rationale: The Hexchat application enables IRC communication with the corporate IT support centre.
    
    severity: medium
    
    template:
        name: package_installed
        vars:
            pkgname: hexchat

    Notice that you used only one of the two possible parameters: pkgname.
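    For illustration only, the optional evr parameter could also be supplied if your policy required a minimum package version; the version string below is made up for this sketch and is not part of the lab:

    template:
        name: package_installed
        vars:
            pkgname: hexchat
            evr: 0:2.14.3-1.el8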

  5. When you have finished editing, press Ctrl+X, then enter y to save and exit.

  6. Build the content.

    [... lab3_profiles v0.1.60|+2]$ ./build_product rhel8
  7. Check the resulting HTML guide. It should still be open as a tab in your browser, which you can refresh by clicking the refresh button in the browser window. Alternatively, you can locate the ssg-rhel8-guide-travel.html file in the /home/lab-user/labs/lab3_profiles/build/guides directory as you did earlier in this exercise.

    Figure 7. The Firefox Refresh Page button

    Either way, you see your "Travel" profile with four rules, including the newly added rule.

    Figure 8. New "Install Hexchat Application" rule displayed in the HTML guide

    Note that the rule uses yum install in the Bash remediation snippet. This template is product aware, so it always uses the recommended way to install packages. For example, if the rule were built into a profile for Fedora, the remediation would use dnf install instead.
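    As a rough sketch, the generated RHEL 8 remediation behaves like the following snippet; the exact code depends on the content version, so treat this as an approximation rather than the verbatim output:

    if ! rpm -q --quiet "hexchat" ; then
        yum install -y "hexchat"
    fi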

For more details about the rule.yml format, please refer to the contributor’s section of the developer guide. For more information about the templating system, including the list of currently available templates, refer to the Templating section of the developer guide.

5.6. Scanning the System Against the New Profile

In the final section, you use the new profile that you just created to scan your machine using OpenSCAP.

You have examined only the HTML guide so far, but for automated scanning, you use a datastream instead. A datastream is an XML file that contains all of the data (rules, checks, remediations, and metadata) in a single file. The datastream that contains your new profile was also built during the content build. It is called ssg-rhel8-ds.xml and is located in the build directory.
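If you want to confirm that your new profile made it into the datastream before scanning, you can optionally list the profiles the datastream contains with the oscap info command; its output includes, among other metadata, the IDs and titles of all profiles, and the travel profile should appear there:

[... lab3_profiles v0.1.60|+2]$ oscap info build/ssg-rhel8-ds.xml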

  1. Run an OpenSCAP scan using the built content.

    oscap is the command line tool that you use to scan the machine. You need to give oscap the name of the profile (travel) and the path to the built datastream (ssg-rhel8-ds.xml) as arguments. You also add arguments to turn on full reporting, which generates XML and HTML results that you can review later.

    1. Use sudo to run the command as the privileged user, to scan the parts of the system that common users are not able to access.

      [... lab3_profiles v0.1.60|+2]$ sudo oscap xccdf eval --results-arf results.xml --report report.html --profile travel build/ssg-rhel8-ds.xml
  2. Check the scan results.

    In your terminal, you see all four rules of your Travel profile, and all of them were evaluated:

    Figure 9. The oscap output from evaluating the "Travel" profile
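    Each evaluated rule appears in the output as a short block, roughly like this sketch for the rule you created earlier (the result shown here is only illustrative):

    Title   Install Hexchat Application
    Rule    xccdf_org.ssgproject.content_rule_package_hexchat_installed
    Result  fail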
  3. Review the details in the HTML report. The report is located in the /home/lab-user/labs/lab3_profiles directory, so you can locate it using the file explorer as you did in the previous exercises:

    1. Open the file explorer by clicking Activities, and then the "file cabinet" icon.

    2. Once it opens, click Home at the top left corner of the browser’s window.

    3. Then, double-click the labs and lab3_profiles folders.

    4. Double-click the report.html file to open it in the browser.

      The structure of the HTML report is similar to the HTML guide, but it contains the evaluation results.

    5. After clicking the rule title, you can see the detailed rule results.

      In the detailed rule results for the Set Interactive Session Timeout rule, you can review the rule description to see which requirement was not met by the scanned system.

    6. Review the OVAL details section to examine the reason why this rule failed. It states that items were missing, which means that objects described by the table shown below the message do not exist on the scanned system. In this specific example, there was no string to match the pattern in /etc/profile, which means there is no TMOUT entry in /etc/profile. To fix this problem, you need to insert TMOUT=300 into /etc/profile and then run the scan again.

      Figure 10. Details of the rule evaluation displayed in the HTML report

5.7. Protecting a Profile

Profiles can grow quite complex: see, for example, products/rhel8/profiles/ospp.profile, which contains groups of rule selections and comments. Such files can receive non-functional changes that regroup selections or modify comments, and this creates noise in the profile’s commit history. As a result, it is not clear how a profile really changed over the course of its history.

The project addresses this problem — it allows you to "freeze" a profile in a certain state, so each substantial change to it has to be confirmed, and the history of changes is easily available.

Build artifacts play the key role here. One of those artifacts is the compiled profile, which is a profile file that doesn’t contain any comments, and whose selections are sorted lexicographically. For instance, let’s take a look at our compiled travel profile by writing its contents to the console using the cat command:

[... lab3_profiles v0.1.60|+2]$ cat build/rhel8/profiles/travel.profile
title: Travel profile for corporate laptops
description: This profile represents settings which are required by company security
    policy for employee laptops.
...
selections:
- accounts_tmout
- package_hexchat_installed
- sshd_disable_root_login
- sshd_enable_strictmodes
- var_accounts_tmout=5_min
- accounts_tmout.severity=high
...

As the exact form of the compiled profile is not relevant to content authors, and is also likely to change in the future, we elide such lines from the listings using an ellipsis (…​). As we can see, rule selections come first, then variable assignments, and finally rule refinements.

Then, we copy the file to the directory where reference compiled profiles are expected, and we also remove all YAML keys except selections and title. Only the selections are taken into account when comparing the reference profile with the actual profile, so the title isn’t needed, but we include it to demonstrate that other keys are allowed. So we make sure that the tests/data/profile_stability/<product> directory exists, we copy the build artifact, and we remove the redundant lines.

[... lab3_profiles v0.1.60|+2]$ mkdir -p tests/data/profile_stability/rhel8
[... lab3_profiles v0.1.60|+2]$ cp build/rhel8/profiles/travel.profile tests/data/profile_stability/rhel8
[... lab3_profiles v0.1.60|+2]$ nano tests/data/profile_stability/rhel8/travel.profile

Note that in the nano editor, the keyboard shortcut Ctrl+K is useful: it removes the current line. If you remove the wrong line by accident, remember that you can undo the change using Alt+U. Edit the file so that it matches the listing below:

title: Travel profile for corporate laptops
selections:
- accounts_tmout
- package_hexchat_installed
- sshd_disable_root_login
- sshd_enable_strictmodes
- var_accounts_tmout=5_min
- accounts_tmout.severity=high

After you are done editing, press Ctrl+X, then enter y to save and exit.

Time to try it out: let’s execute the profile stability test! We do so by running ctest, selecting the tests that deal with profile stability, and using the build directory as the base test directory:

[... lab3_profiles v0.1.60|+2]$ ctest -R profiles --output-on-failure --test-dir build

The test should report that 100% of the tests passed.
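The summary at the end of the ctest output should resemble the following sketch; the number of tests and the timing are illustrative:

100% tests passed, 0 tests failed out of 1

Total Test time (real) =   0.35 sec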

Will the test fail if we change the profile? Let’s try that out by modifying the project’s profile.

[... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile

Let’s change the severity override in the selections section from high to medium in the profile file, then recompile and retest.

...

selections:
    - sshd_enable_strictmodes
    - sshd_disable_root_login
    - accounts_tmout
    - accounts_tmout.severity=medium
    ...
[... lab3_profiles v0.1.60|+2]$ ./build_product rhel8
...
[... lab3_profiles v0.1.60|+2]$ ctest -R profiles --output-on-failure --test-dir build

This time, the test will fail, and thanks to the --output-on-failure option, it will tell us that the changed severity is indeed a problem. We will keep the original profile reference in the tests directory, and we will restore the severity back to high in the upcoming section.

5.8. Defining the Company Policy

We have created a travel profile, but we might also create other company profiles that share the same big-picture concepts. For instance, the travel profile is about session protection, SSH hardening, and about installing a communication tool. Therefore, let’s formally define a company policy file with Security Controls, and use those in the profile definition.

5.8.1. Creating a Control File

Policies are defined in the controls folder of the project, so let’s create a controls/my-company.yml policy file in that directory with basic metadata:

[... lab3_profiles v0.1.60|+2]$ nano controls/my-company.yml

and enter the following metadata to start:

title: 'Security Guidelines of My Company'
id: my-company

Next, we add two security controls: one for the SSH hardening and the other one for the interactive terminal session protection. Add the following contents to the file. You can copy and paste it into the editor; keep in mind that if you paste into the console, you have to use Ctrl+Shift+V instead of just Ctrl+V.

controls:
  - id: ssh-protection
    title: Protection of the SSH session
    rules:
    - sshd_enable_strictmodes
    - sshd_disable_root_login

  - id: session-protection
    title: Protection of the interactive terminal session
    rules:
    - accounts_tmout
    - accounts_tmout.severity=high
    - var_accounts_tmout=5_min
    - package_hexchat_installed

After you are done editing, press Ctrl+X, then enter y to save and exit.

The main purpose of security controls is to assist content authors in interpreting security policies. Therefore, the control file can mirror the policy document, and optional keys such as description or notes can help clarify the choices that were made when the policy was interpreted into a profile. On the policy level, only the id, title, and controls keys are required, and on the control level, only the id key is required.
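For illustration, this is how the optional keys could look on one of the controls; the wording below is made up for this sketch and is not part of the lab:

controls:
  - id: session-protection
    title: Protection of the interactive terminal session
    description: Unattended interactive sessions have to terminate automatically.
    notes: Implemented as a shell TMOUT setting; the concrete timeout is chosen per profile.
    rules:
    - accounts_tmout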

You can add some more optional metadata, so that the link between the real-world policy and its projection into a SCAP profile is strengthened. Adding policy, version, and source allows content authors to quickly find the exact details of the security policy that is being interpreted. That additional metadata isn’t processed by the build system; it only serves as context that can facilitate content creation, completeness assessment, and so on.

As a result, the whole controls/my-company.yml control file will look like this:

title: 'Security Guidelines of My Company'
id: 'my-company'
policy: 'MCSecurity'
version: '0.1'
source: https://my.company/security-policy.pdf

controls:
  - id: ssh-protection
    title: Protection of the SSH session
    rules:
    - sshd_enable_strictmodes
    - sshd_disable_root_login

  - id: session-protection
    title: Protection of the interactive terminal session
    rules:
    - accounts_tmout
    - accounts_tmout.severity=high
    - var_accounts_tmout=5_min
    - package_hexchat_installed

5.8.2. Migrating the Profile to Controls

Let’s use those new controls in our travel profile. We do this by replacing the respective selections with security control invocations. A security control is identified by policy id:control id, and when we select it, all rules that are applicable to the product that we build get selected by the profile.

Security controls can contain rules that are built only for a limited set of products by means of the rule’s prodtype, which is an optional rule attribute. Such rules are skipped when security controls are expanded into profiles, so profiles that are defined using the same security controls can vary between products. This may surprise content authors, but it is a useful way to make security controls reusable despite product differences such as different package names.
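For illustration, a rule is restricted to certain products through the optional prodtype key in its rule.yml; a hypothetical sketch (the product list is arbitrary) could look like this:

prodtype: rhel8,fedora

A rule carrying such a prodtype would be pulled in when the control is expanded for RHEL 8 or Fedora, but silently skipped for other products.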

So let’s modify the profile definition and alter its selections.

[... lab3_profiles v0.1.60|+2]$ nano products/rhel8/profiles/travel.profile

Substitute the three accounts_tmout-related selections with my-company:session-protection, and the two SSH-related selections with my-company:ssh-protection. The updated definition of the profile, which mixes selection of rules with selection of controls, will look like this:

documentation_complete: true

title: Travel profile for corporate laptops

description: This profile represents settings which are required by company security policy for employee laptops.

selections:
    - my-company:ssh-protection
    - my-company:session-protection
    - package_hexchat_installed

After you are done editing, press Ctrl+X, then enter y to save and exit. Then, let’s compile the product:

[... lab3_profiles v0.1.60|+2]$ ./build_product rhel8

The build should complete without issues. This indicates that control files are loaded automatically: as soon as you create them, you can start using them in profile definitions.

Finally, we execute the profile stability test from before. Because we aimed to change only the syntax of the profile definition, not its behavior, we expect the test to pass:

[... lab3_profiles v0.1.60|+2]$ ctest -R profiles --output-on-failure --test-dir build

And indeed, the test passes, which proves that the control-based way of defining profiles is compatible with the literal profile definition in a profile file.

6. Using Ansible in ComplianceAsCode

6.1. Introduction

Red Hat® Ansible® Automation is a powerful tool for automating the configuration of systems. By running a predefined file called an Ansible Playbook, you can quickly configure the system according to your needs.

Ansible Automation can easily be used for security compliance automation, because it allows you to keep your system hardened, and it can operate on a large scale. Using Ansible Automation, you can set everything you need, including minimal password lengths, firewall rules, package installation, or the disabling of services.

ComplianceAsCode works great with Ansible Automation. ComplianceAsCode connects a human-readable security policy with Ansible tasks that implement the settings required by the security policy. The rules in ComplianceAsCode profiles contain Ansible tasks that configure the system to conform to the rule. From these Ansible tasks and rule metadata, ComplianceAsCode generates an Ansible Playbook that you can use to harden your system. After running the playbook, your system meets the requirements of the security policy.

ComplianceAsCode generates a playbook for each profile that conforms to the profile definition. Moreover, it generates separate playbooks for every rule.

Goals
  • Learn how to add an Ansible task for a rule

  • Learn how to leverage ComplianceAsCode structure when creating Ansible content

  • Learn how to generate Ansible Playbooks from a ComplianceAsCode repository

  • Learn how to use the Ansible Playbooks generated by ComplianceAsCode

Important
Content used in this lab has been altered to increase its educative potential, and is therefore different from the content in ComplianceAsCode upstream repository or the content in the scap-security-guide package shipped in Red Hat® products.
Preconfigured Lab Environment
  • The ComplianceAsCode repository was cloned to the lab4_ansible directory.

  • Ansible Automation was installed. It is available from the ansible-2.9-for-rhel-8-x86_64-rpms repository, which is enabled by Red Hat® Subscription Management.

  • The following dependencies required for the ComplianceAsCode content build were installed using yum install:

    • Generic build utilities: cmake and make,

    • Utilities for generating SCAP content: openscap-scanner

    • Python dependencies for putting content together: python3-pyyaml and python3-jinja2

6.2. Adding an Ansible Task for a Rule

Ansible tasks can be attached to every rule in a ComplianceAsCode project. A lot of rules already contain Ansible tasks.

As an example, you add a new Ansible task for the accounts_tmout rule. You have already encountered this rule in Lab Exercise 1. This rule is about the interactive session timeout for inactive users. Terminating an idle session within a short time period reduces the risk of unauthorized operations. The Ansible task that you add configures the session timeout in the respective configuration file.

You work in the context of Red Hat® Enterprise Linux® 8 (RHEL 8) product and the OSPP profile for Red Hat® Enterprise Linux® 8, but the structure is the same for all rules in ComplianceAsCode.

As you already know from Lab Exercise 1, source code for the accounts_tmout rule is located in the linux_os/guide/system/accounts/accounts-session/accounts_tmout directory.

  1. First, navigate to the lab4_ansible directory.

    [... labs]$ cd lab4_ansible/
    [... lab4_ansible v0.1.60|+3]$
  2. Next, switch to the rule directory and examine its content.

    [... lab4_ansible v0.1.60|+3]$ cd linux_os/guide/system/accounts/accounts-session/accounts_tmout
    [... accounts_tmout v0.1.60|+3]$ ls
    bash  oval  rule.yml  tests

    As you learned in previous lab exercises, the rule.yml file contains the rule description, rationale, and metadata.

    Note

    Apart from rule.yml, there are also three subdirectories in the accounts_tmout directory:

    • bash contains source code for a Bash script that can fix the timeout settings, also called "Bash remediation."

    • oval contains source code for an OVAL check that checks if the timeout is set.

    • tests contains test scenarios which are used to test checks and remediations of a rule.

  3. To add Ansible tasks in this rule, you first create a new directory called ansible in the rule directory, at the same level as the bash and oval directories. Create a new ansible directory and change into it:

    [... accounts_tmout v0.1.60|+3]$ mkdir ansible
    [... ansible v0.1.60|+3]$ cd ansible
  4. Next, you create a new file in this directory that contains your Ansible task.

    Note

    In ComplianceAsCode, the file needs to have a specific name. You have two options:

    • shared.yml is a universal name; "shared" in this context means that the task can be applied to any product, that is, any Linux distribution.

    • product_id.yml (for example, fedora.yml) can be used if the task is specific to a single Linux distribution and cannot be extended to other Linux distributions.

  5. Because the interactive session timeout is not a specific feature of RHEL 8, but is handled the same way in most Linux distributions, you can name the file shared.yml. Create a new shared.yml file in the ansible directory and open it in the text editor.

    [... ansible v0.1.60|+3]$ nano shared.yml
  6. Next, you start to write the Ansible content in this file. It is not in the format of an Ansible Playbook—​instead, it uses a special format. It is a simple YAML file.

    The first part of this file must be a header that helps the build system integrate the Ansible tasks with the SCAP content and also with the rule metadata.

    Add the following content to the top of the shared.yml file, including the # characters. If you want to copy and paste the text, you have to use Shift+Ctrl+V to paste into a terminal with nano:

    # platform = multi_platform_all
    # reboot = false
    # strategy = restrict
    # complexity = low
    # disruption = low

    Do not close the file yet.

    Note

    The header contains optional metadata. The platform and reboot fields have well-defined meanings:

    • platform is a comma-separated list of products that the Ansible tasks are applicable to. It can be an operating system name such as Red Hat Enterprise Linux 8, or a wildcard string that matches multiple products—​for example, multi_platform_rhel. Here we use the wildcard string, multi_platform_all, that matches all of the possible platforms.

    • reboot specifies if a reboot is needed to activate the settings. This can be either true or false. Here, we signal that a reboot is not needed. This value is purely informational and setting it to true does not cause Ansible Automation to reboot the system.

    The other fields are optional, and their meanings are fuzzier:

    • strategy is the method or approach for making the described fix. It is typically one of the following: configure, disable, enable, patch, restrict, or unknown.

    • complexity is the estimated complexity or difficulty of applying the fix to the target. It can be unknown, low, medium, or high.

    • disruption is an estimate of the potential for disruption or operational degradation that the application of this fix imposes on the target. It can be unknown, low, medium, or high.

  7. Now, you add an Ansible task or tasks for this rule below the header in shared.yml. Add the following content at the end of the shared.yml file. Again, do not close the file just yet.

    - name: configure timeout
      lineinfile:
          create: yes
          dest: /etc/profile
          regexp: "^#?TMOUT"
          line: "TMOUT=600"

    At this point, expect the entire file to look like this:

    # platform = multi_platform_all
    # reboot = false
    # strategy = restrict
    # complexity = low
    # disruption = low
    
    - name: configure timeout
      lineinfile:
          create: yes
          dest: /etc/profile
          regexp: "^#?TMOUT"
          line: "TMOUT=600"
    Note

    If you are familiar with Ansible Automation, you probably know that you just wrote an Ansible task. Normally, Ansible tasks are low-level components of Ansible Playbooks. The ComplianceAsCode project allows content contributors to focus on tasks, and the playbook that aggregates them is generated by the project. When writing tasks, you can use the standard Ansible syntax and write the Ansible tasks the exact same way as you write in Ansible Playbooks. You can use any Ansible module.

    Using Ansible language, you have defined a new Ansible task with the name "configure timeout". It uses the lineinfile Ansible module, which can add, modify, and remove lines in configuration files. Using the lineinfile module, you insert the line TMOUT=600 to /etc/profile.

    Note that the regexp line defines a regular expression that determines what Ansible Automation is going to do. If the regular expression matches a line, it is substituted with line, so the lines TMOUT=1800 and #TMOUT=600 are replaced by TMOUT=600. If no line matches the regular expression, contents of line are simply appended to dest, which in this case is /etc/profile.

    In this rule, you add only a single Ansible task. If your goals need to be achieved by multiple Ansible tasks, they all go into the same file.

    In ComplianceAsCode, the general rule is that the Ansible tasks must conform to the rule description in rule.yml for the given rule. Tasks must not do anything different than what the rule.yml description requires. Think of the rule description as a natural language specification of what needs to be implemented in Ansible Automation.

6.3. Using Variables in Ansible Tasks

At this point, your task does not fully conform to the rule description in rule.yml. The difference is that rule.yml does not define a specific value for the timeout.

  1. Check that rule.yml does not specify whether the timeout should be 600 seconds or a different amount of time. In fact, the rule is parameterized by a variable, var_accounts_tmout. The specific value for a timeout variable is set by setting var_accounts_tmout in the profile definition. This way, every profile can define a different timeout but still reuse the same source code.

    You need to fix the Ansible task to use the var_accounts_tmout variable instead of explicitly setting 600 seconds in the task. The general format for binding a variable from ComplianceAsCode profiles is - (xccdf-var name_of_the_variable).

  2. Add the following line (including the dash at the beginning of the line) right after the # disruption = low line in the shared.yml file:

    - (xccdf-var var_accounts_tmout)

    Now, you can use the bound variable in the configure timeout Ansible task as an Ansible variable using the standard Ansible syntax. When the shared.yml file is processed by the ComplianceAsCode build system, this variable binding is resolved automatically and a new Ansible variable is created in the vars list in the generated playbook.

  3. Replace line: "TMOUT=600" with line: "TMOUT={{ var_accounts_tmout }}" to use the variable in the task.

    At this point you have completed adding Ansible tasks for the accounts_tmout rule. Expect the contents of the shared.yml file to look like this:

    # platform = multi_platform_all
    # reboot = false
    # strategy = restrict
    # complexity = low
    # disruption = low
    - (xccdf-var var_accounts_tmout)
    
    - name: configure timeout
      lineinfile:
          create: yes
          dest: /etc/profile
          regexp: "^#?TMOUT"
          line: "TMOUT={{ var_accounts_tmout }}"
  4. You can now save the file by pressing Ctrl+X, then entering y to save and exit.

6.4. Generating and Using Ansible Playbooks for a Rule

You now generate a playbook for the accounts_tmout rule you modified. You do this in the context of the Red Hat® Enterprise Linux® 8 product and the OSPP profile for Red Hat® Enterprise Linux® 8.

To generate Ansible Playbooks, a complete build of the content for the product needs to be performed. That means that all of the other playbooks for all of the other rules are generated as well. Moreover, the SCAP content is also generated.

  1. Go back to the project root directory and run the following command to build the RHEL 8 product:

    [... ansible v0.1.60|+3]$ cd /home/lab-user/labs/lab4_ansible
    [... lab4_ansible v0.1.60|+3]$ ./build_product rhel8
  2. The Playbooks are generated in the build/rhel8/playbooks directory. Check the contents of this directory:

    [... lab4_ansible v0.1.60|+3]$ ls build/rhel8/playbooks
    all/  cjis/  cui/  e8/  hipaa/  ospp/  pci-dss/  rht-ccp/  standard/

    Note that there is a directory for each profile in the RHEL8 product. That is because each profile consists of a different set of rules and the rules are parameterized by variables which can have different values in each profile.

  3. The accounts_tmout rule is, for example, a part of the OSPP profile, so take a peek into the ospp directory:

    [... lab4_ansible v0.1.60|+3]$ ls build/rhel8/playbooks/ospp

    There are many playbook files in the ospp directory. One of them is the accounts_tmout.yml file, which is the Ansible Playbook that contains the Ansible tasks you added in the accounts_tmout rule.

  4. Open it in the text editor:

    [... lab4_ansible v0.1.60|+3]$ nano build/rhel8/playbooks/ospp/accounts_tmout.yml

    The contents of the build/rhel8/playbooks/ospp/accounts_tmout.yml file look like this:

    # platform = multi_platform_all
    # reboot = false
    # strategy = restrict
    # complexity = low
    # disruption = low
    - name: Set Interactive Session Timeout
      hosts: '@@HOSTS@@'
      become: true
      vars:
        var_accounts_tmout: '600'
      tags:
        - CCE-80673-7
        - NIST-800-171-3.1.11
        - NIST-800-53-AC-12
        - NIST-800-53-SC-10
        - accounts_tmout
        - low_complexity
        - low_disruption
        - medium_severity
        - no_reboot_needed
        - restrict_strategy
      tasks:
    
        - name: configure timeout
          lineinfile:
            create: true
            dest: /etc/profile
            regexp: ^#?TMOUT
            line: TMOUT={{ var_accounts_tmout }}
    Tip

    If you see a typo in the YAML file, edit the source again and rebuild.

    This is a normal Ansible Playbook that Ansible users are familiar with. The name of the playbook is the same as the title of the rule, which is defined in rule.yml.

  5. The hosts section contains only a placeholder string, '@@HOSTS@@', which needs to be replaced by a list of IP addresses or hosts that the playbook applies to. You have to edit this in order to check the playbook. To use your playbook on your machine (on a local host), replace '@@HOSTS@@' with 'localhost' and press Ctrl+X, then enter y to save and exit.

    [... lab4_ansible v0.1.60|+3]$ nano build/rhel8/playbooks/ospp/accounts_tmout.yml
    ...
    - name: Set Interactive Session Timeout
      hosts: 'localhost'
      become: true
    ...

    Note that the timeout value supplied by the var_accounts_tmout variable was set to a specific value (600 seconds) during the build process, and the variable was added to the vars section of the playbook.

    Note also that the playbook has tags in the tags section that were added based on metadata in rule.yml. At the beginning, it contains the CCE (Common Configuration Enumeration) identifier. Finally, the tasks: section contains the Ansible task that you created.

  6. Run the playbook:

    [... lab4_ansible v0.1.60|+3]$ ansible-playbook build/rhel8/playbooks/ospp/accounts_tmout.yml
  7. Check if it has any effect:

    [... lab4_ansible v0.1.60|+3]$ cat /etc/profile

    Note that TMOUT=600 is at the end of the file!

    The biggest advantage of using Ansible tasks in ComplianceAsCode is that the Ansible content gets integrated with the SCAP content, the HTML report, and the HTML guide as well.

  8. Switch to the console view and open the terminal if it is not yet open.

  9. Run the following command to open the HTML guide for the OSPP profile for Red Hat® Enterprise Linux® 8 in your Firefox web browser, or navigate to the OSPP guide the same way you have in previous exercises:

    [... ~ v0.1.60|+3]$ firefox /home/lab-user/labs/lab4_ansible/build/guides/ssg-rhel8-guide-ospp.html
  10. Check the "Set Interactive Session Timeout" rule. Click the blue (show) link to the right of the green "Remediation Ansible snippet" label and you see your recently added Ansible content.

    Figure 11. The "Set Interactive Session Timeout" rule displayed in an HTML guide and including the expanded Ansible content

You no longer need the console view in this lab.

6.5. Using the Profile Ansible Playbooks

In the previous section, you learned about using a playbook for the accounts_tmout rule. However, security policies are usually complex, which in turn means that profiles consist of many rules. It is not convenient to have a separate Ansible Playbook for each rule, because that means you need to apply many Ansible Playbooks to the system. Fortunately, ComplianceAsCode also generates Ansible Playbooks that contain all of the tasks for a given profile in a single playbook.

The playbooks are located in the build/ansible directory. This directory contains an Ansible Playbook for each profile. The playbook files have the .yml extension.

[... lab4_ansible v0.1.60|+3]$ ls build/ansible
all-profile-playbooks-rhel8  rhel8-playbook-cui.yml      rhel8-playbook-e8.yml     rhel8-playbook-ospp.yml     rhel8-playbook-rht-ccp.yml
rhel8-playbook-cjis.yml      rhel8-playbook-default.yml  rhel8-playbook-hipaa.yml  rhel8-playbook-pci-dss.yml  rhel8-playbook-standard.yml
  1. Check the contents of the OSPP profile playbook in your editor and verify that a task for the accounts_tmout rule is there among all the other tasks.

    [... lab4_ansible v0.1.60|+3]$ nano build/ansible/rhel8-playbook-ospp.yml

    At this point, you have per-rule Ansible Playbooks available, as well as per-profile ones. You can integrate these into your CI/CD pipelines and infrastructure management as needed.
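    A hedged usage sketch: one way to try the whole profile playbook against the local machine is to supply an ad-hoc inventory with a local connection and run it in check mode first, so that no changes are applied. If the generated playbook still contains the '@@HOSTS@@' placeholder, replace it with 'localhost' first, as you did for the per-rule playbook.

    [... lab4_ansible v0.1.60|+3]$ ansible-playbook -i "localhost," -c local --check build/ansible/rhel8-playbook-ospp.yml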

7. The Art of OVAL Checks

7.1. Introduction

OVAL stands for Open Vulnerability and Assessment Language. In a nutshell, it is an XML-based declarative language that is part of the SCAP standard. This lab focuses on its ability to query and evaluate the state of a system. Quoting from the OVAL FAQ:

The language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment.

The ComplianceAsCode project supports OVAL as the language for writing automated configurable checks. It compiles OVAL snippets into checks that are understood by OVAL interpreters—​for example, the OpenSCAP scanner. The scanner evaluates the check, and determines whether the system passes.

In this lab exercise, you go through the OVAL snippet of the accounts_tmout rule. You see how even simple checks can rapidly become complicated, and what you can do about it. Finally, you discover that the check was written incorrectly and you fix it.

Goals
  • Learn about OVAL.

  • Learn how ComplianceAsCode facilitates creation of new OVAL content.

  • Learn how to test OVAL checks.

  • Learn how to use tests and remediations to safely and gradually improve an OVAL check.

Preconfigured Lab Environment
  • The ComplianceAsCode repository was cloned to the lab5_oval directory.

  • The following dependencies required for the content build were installed using yum install:

    • Generic build utilities: cmake and make,

    • Utilities for generating SCAP content: openscap-scanner

    • Python dependencies for putting content together: python3-pyyaml and python3-jinja2

  • A podman ssg_test_suite image was built using the Dockerfiles/test_suite-rhel8 files. The SSH keys for root are authorized by the container’s root user. The steps for how to accomplish this are in the tests/README.md file of the ComplianceAsCode project.

  • The OVAL check for accounts_tmout was modified so you can improve it.

Important
Content used in this lab has been altered to increase its educative potential, and is therefore different from the content in ComplianceAsCode upstream repository or the content in the scap-security-guide package shipped in Red Hat® products.

7.2. Anatomy of an Existing Check-Remediation Pair

There is already a built HTML guide in the build directory.

  1. To examine it, you navigate to it, and then open the guide in the browser:

    1. Click Activities, then click the "file cabinet" icon to open the file explorer.

    2. Just to make sure, click the Home icon in the upper left portion of the file explorer window.

    3. Navigate to the location of the exercise by double-clicking the labs folder, followed by double-clicking the lab5_oval, build, and guides folders.

    4. Finally, double-click the ssg-rhel8-guide-ospp.html file.

  2. In this lab exercise, you focus on the accounts_tmout rule. To find the rule entry in the guide, press Ctrl+F or use the Edit → Find in this page menu item, and search for the Set Interactive Session Timeout string, which is the rule title.

    The description says:

    Setting the TMOUT option in /etc/profile ensures that all user sessions
    will terminate based on inactivity. The TMOUT setting in /etc/profile
    should read as follows:
    
    TMOUT=600

    When dealing with the rule check, there are additional aspects to keep in mind:

    • Because the timeout is supposed to be set to 600 seconds, what is the consequence if the timeout value is set to 100? Is it more or less secure?

      Having a shorter time interval between inactivity and logout is more bothersome for the user, but it is a stricter requirement. Therefore, you need to make sure that if the rule requires TMOUT=600, having TMOUT=100 is also evaluated as correct.

    • The rule description stating that the TMOUT=…​ assignment goes into a config file is accurate, but guides on the Internet often recommend that you have export TMOUT=…​ there instead. The assignment form with the export keyword ensures that the variable is available to other programs. Environment variables such as PATH and HOME are commonly exported, which is probably the source of the confusion that export is needed for TMOUT to work.

      In this case, you want to make sure that the rule’s check allows both forms—​with and without export, even though the export keyword is not required.
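      For illustration, both of the following assignment forms should therefore be treated as compliant by the check (the value shown is just an example):

      TMOUT=600
      export TMOUT=600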

7.2.1. Bash Remediation

  1. Examine the Bash remediation by opening the following file in the text editor:

    [... ~ v0.1.60|+8]$ cd /home/lab-user/labs/lab5_oval
    [... lab5_oval v0.1.60|+8]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/bash/shared.sh

    The remediation body looks like this:

    Note
    The header of the remediation is processed by the build system, so the actual file contents and the remediation displayed in the HTML guide are different.
    if grep --silent ^TMOUT /etc/profile ; then
            sed -i "s/^TMOUT.*/TMOUT=$var_accounts_tmout/g" /etc/profile
    else
            echo -e "\n# Set TMOUT to $var_accounts_tmout per security requirements" >> /etc/profile
            echo "TMOUT=$var_accounts_tmout" >> /etc/profile
    fi

    You do not need to make any changes to the file.

  2. After you are finished looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

    You can see that the remediation is in sync with the description—​it handles the /etc/profile file, and it does one of the following:

    • Adds the TMOUT assignment to the file if it is missing

    • Modifies the TMOUT assignment so that the correct value is used if an assignment already exists

7.2.2. OVAL Check

In this section, you move on to the OVAL check.

  1. In the text editor, open the file that defines the check:

    [... lab5_oval v0.1.60|+8]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml
  2. This file is much more complicated, so examine it piece by piece:

    1. Note the leading definition element:

        <definition class="compliance" id="accounts_tmout" version="2">
          <metadata>
            <title>Set Interactive Session Timeout</title>
            <affected family="unix">
              <platform>multi_platform_rhel</platform>
              <platform>multi_platform_fedora</platform>
              <platform>multi_platform_ol</platform>
            </affected>
            <description>Checks interactive shell timeout</description>
          </metadata>
          <criteria operator="OR">
            <criterion comment="TMOUT value in /etc/profile >= var_accounts_tmout" test_ref="test_etc_profile_tmout" />
            <criterion comment="TMOUT value in /etc/profile.d/*.sh >= var_accounts_tmout" test_ref="test_etc_profiled_tmout" />
          </criteria>
        </definition>
        ...

      The definition specifies a criteria element. Here is a close-up of those criteria:

          ...
          <criteria operator="OR">
            <criterion comment="TMOUT value in /etc/profile >= var_accounts_tmout"
              test_ref="test_etc_profile_tmout" />
            <criterion comment="TMOUT value in /etc/profile.d/*.sh >= var_accounts_tmout"
              test_ref="test_etc_profiled_tmout" />
          </criteria>
        </definition>
        ...

      You can see that each criterion references a test. The first test checks for the TMOUT setting in the /etc/profile file, the other one checks all files in /etc/profile.d/ that have the sh file extension. If either test passes, the whole criteria element is satisfied, as its operator="OR" attribute imposes.

      A test is typically composed of an object definition and a state definition. The object defines what should be gathered from the tested system, and the state defines the expected properties of that object. In order for the test to pass, the object has to exist, and it has to conform to the specified state.

  3. Now examine the test for the /etc/profile criterion and its dependencies:

      ...
      <ind:textfilecontent54_test check="all" check_existence="all_exist"
          comment="TMOUT in /etc/profile" id="test_etc_profile_tmout" version="1">
        <ind:object object_ref="object_etc_profile_tmout" />
        <ind:state state_ref="state_etc_profile_tmout" />
      </ind:textfilecontent54_test>
      ...

    The object definition associates a filename with a regular expression. The file’s contents are checked against the regular expression, and if there is a match, the contents of the regular expression group become the object.

  4. Note the instance element that equals 1. This indicates that it is the first match of the regular expression that defines the object:

      ...
      <ind:textfilecontent54_object id="object_etc_profile_tmout" version="1">
        <ind:filepath>/etc/profile</ind:filepath>
        <ind:pattern operation="pattern match">^[\s]*TMOUT[\s]*=[\s]*(.*)[\s]*$</ind:pattern>
        <ind:instance datatype="int">1</ind:instance>
      </ind:textfilecontent54_object>
  5. The state is a specification that the object (the matched substring) should be an integer that equals the value of the var_accounts_tmout variable:

      <ind:textfilecontent54_state id="state_etc_profile_tmout" version="1">
        <ind:subexpression datatype="int" operation="equals" var_check="all" var_ref="var_accounts_tmout" />
      </ind:textfilecontent54_state>
    
      <external_variable comment="external variable for TMOUT" datatype="int"
          id="var_accounts_tmout" version="1" />
      ...

    There are two regular expressions that check for TMOUT=…​ in the shared.xml file: one for the profile test and one for the profile.d/*.sh test. As there are two types of locations that need to be examined (the single /etc/profile file and the *.sh files in the /etc/profile.d directory), there have to be two objects. The object_etc_profile_tmout and object_etc_profiled_tmout objects have different file/path specifications, but the regular expression is the same. The alternative form of the assignment, export TMOUT=…​, is not handled in either of them.

    Moreover, the equals operation is used to perform the comparison. As stated in the previous section, this looks wrong, because shorter timeouts are more secure and should therefore also be accepted.

  6. Now you can close the file. As a reminder, you do not need to make any changes at this point. Therefore, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

7.3. Tests Introduction

The ComplianceAsCode project features a test suite that is useful for defining which scenarios the check and remediation are supposed to handle. It sets up a system to a certain state and runs the scan and possibly remediations. Results are reported in the form of console output, and detailed reports are saved to a log directory.

Regarding scenarios, consider, for example, the accounts_tmout rule—​the two simplest cases are handled using the following scenarios:

  • TMOUT=600 is present in /etc/profile. This test scenario should pass.

  • TMOUT=600 is not present in /etc/profile or /etc/profile.d/*.sh. This is more complicated because remediations become involved:

    • This test scenario should fail the initial scan.

    • If there is a remediation for the rule, it should apply without errors.

    • The final scan after the remediation should pass.

The test suite has to prepare a system, scan it, and report results. Due to practical considerations, the system under test should be isolated from the system running the test. The test suite supports libvirt VMs, and docker or podman containers that satisfy this isolation requirement. In this exercise, you are going to use a podman container with a Red Hat® Enterprise Linux® 8 (RHEL 8) image.

7.3.1. Tests Hands-on

  1. We need the RHEL 8 content to test the RHEL 8 image. As we have already seen, the initial build of the content, including the build of the guide, has already been done for us.

  2. You test the accounts_tmout rule included in the ospp profile of the RHEL 8 datastream. You need to run the test suite as a superuser, because it involves spinning up a container that exposes an SSH port. With that in mind, execute the test suite:

    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite --datastream build/ssg-rhel8-ds.xml accounts_tmout
    Setting console output to log level INFO
    INFO - The base image option has been specified, choosing Podman-based test environment.
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    Note

    The test suite is a Python script tests/test_suite.py. You supplied the following arguments to it:

    • You want to use the test suite in rule mode—​you want to test a rule under all available rule test scenarios.

      The alternative mode is profile mode, which is simpler—​there are no test scenarios and the system is scanned.

    • You want to use podman with the ssg_test_suite image as the back end, so you supply the --container ssg_test_suite arguments.

    • Of course you have to specify which datastream to use for testing—​you use the built one, so you specify --datastream build/ssg-rhel8-ds.xml arguments.

    • Finally, you specify what to test—​a rule regular expression: accounts_tmout or ^accounts_tmout$.

The output tells you the following:

  • The rule with full ID xccdf_org.ssgproject.content_rule_accounts_tmout was tested in the OSPP profile context.

  • There were four test scenarios: comment.fail.sh, line_not_there.fail.sh, correct_value.pass.sh and wrong_value.fail.sh, all of which passed. These scenarios test whether the rule can handle various situations correctly. You examine these test scenarios later in this lab exercise. For now, it is important to realize that all of the scenarios should still pass after you make any changes in the OVAL.

  • More information about the test run is available in the respective log directory. This is useful when a test breaks unexpectedly or the test suite suffers from internal issues.
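For illustration, one way to peek at the newest entry in the logs directory printed by the test suite is to list it sorted by modification time; this is an optional convenience, not a lab step, and sudo is used only because the logs were created by a sudo invocation:

[... lab5_oval v0.1.60|+8]$ sudo ls -t /home/lab-user/labs/lab5_oval/logs/ | head -n 1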

Now that you have a reasonable amount of certainty about your rules, you can improve the OVAL content.

Tip
You repeat the (re)build of the content and subsequent test suite execution multiple times. Therefore, it may be practical to dedicate a terminal window for this purpose. You can browse the command history using Up and Down keyboard arrow keys, so if you want to rebuild after the test run finishes, tap the Up key until the build_product command shows up (typically you have to tap twice), and confirm the execution of the build command by pressing Enter.

7.4. OVAL Optimization

In this section, you analyze the OVAL check for the accounts_tmout rule and perform the following steps:

  1. Analyze the OVAL and identify duplicated elements.

  2. Design a Jinja2 macro that deduplicates test definitions.

  3. Test changes.

  4. Design a Jinja2 macro that deduplicates test objects.

  5. Test changes again.

7.4.1. Code Duplication Analysis

The OVAL test repeats itself a bit—​there are checks for the /etc/profile file as well as for other /etc/profile.d/*.sh files, but the tests and respective objects are very similar. This makes editing tedious and prone to copy-paste errors. Luckily, ComplianceAsCode supports the Jinja2 macro language that can be used to introduce templating, thus removing the duplication.

  1. Analyze the difference between the two tests:

    There is a difference in name and comment, and test objects are also different.

    1. Compare the following two excerpts:

      <ind:textfilecontent54_test check="all" check_existence="all_exist"
          comment="TMOUT in /etc/profile" id="test_etc_profile_tmout" version="1">
        <ind:object object_ref="object_etc_profile_tmout" />
        <ind:state state_ref="state_etc_profile_tmout" />
      </ind:textfilecontent54_test>
      ...
      
      <ind:textfilecontent54_test check="all" check_existence="all_exist"
          comment="TMOUT in /etc/profile.d/*.sh" id="test_etc_profiled_tmout" version="1">
        <ind:object object_ref="object_etc_profiled_tmout" />
        <ind:state state_ref="state_etc_profile_tmout" />
      </ind:textfilecontent54_test>
      ...

You have etc_profile_tmout and etc_profiled_tmout (note the extra d) in the test ID and in the object reference.

7.4.2. Deduplication of Tests

Luckily, the Jinja2 language enables you to define macros that can help you remove the duplication. You are going to define a macro that accepts the file name (used in the comment) and the test ID stem as arguments.

Therefore, you remove both tests and add the new macro and its new invocations.

Tip
To delete a text section in nano, move the cursor to the start of the text you want to select. Press Ctrl+6 to mark the start, then move the cursor to the end of the section you want to select. Finally, press Ctrl+K to erase the selection. Undo by pressing Alt+U, redo by pressing Alt+E. Also remember that if you paste to the terminal, you have to press Ctrl+Shift+V.
  1. Open the oval/shared.xml file in the editor:

    [... lab5_oval v0.1.60|+8]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml
  2. Now, delete the two textfilecontent54_test XML elements, and then copy and paste the following content to replace it (between the definition and the first of the textfilecontent54_object elements):

      {{% macro test_tmout(test_stem, files) %}}
      <ind:textfilecontent54_test check="all" check_existence="all_exist"
          comment="TMOUT in {{{ files }}}" id="test_{{{ test_stem }}}" version="1">
        <ind:object object_ref="object_{{{ test_stem }}}" />
        <ind:state state_ref="state_etc_profile_tmout" />
      </ind:textfilecontent54_test>
      {{% endmacro %}}
    
      {{{ test_tmout(  test_stem="etc_profile_tmout", files="/etc/profile") }}}
      {{{ test_tmout(  test_stem="etc_profiled_tmout", files="/etc/profile.d/*.sh") }}}
  3. Finish your edits as usual by pressing Ctrl+X and then entering y to save and exit.

    Note
    The delimiters differ from those shown on the Jinja2 website: instead of {% macro …​ %}, you use the {{% macro …​ %}} form, and so on. There is always one more curly bracket than the upstream documentation shows.

7.4.3. Checking That You Are Safe

So, did you do everything correctly?

  1. Rebuild the datastream and execute the test suite again; the result should be exactly the same. An optional sanity check of the edited file is shown after the test output.

    TIP: You can use the Up arrow key to browse the command history so you do not have to retype the commands every time.

    [... lab5_oval v0.1.60|+8]$ ./build_product rhel8
    ...
    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite --datastream build/ssg-rhel8-ds.xml accounts_tmout
    ...
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
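
If the build fails after an edit like this, a common cause is an unbalanced macro: every {{% macro …​ %}} needs a matching {{% endmacro %}}, otherwise the Jinja2 rendering, and therefore the build, fails. As a purely optional convenience (not part of the lab flow), you can count both in the edited file:

    [... lab5_oval v0.1.60|+8]$ grep -Fc '{{% macro' linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml
    [... lab5_oval v0.1.60|+8]$ grep -Fc '{{% endmacro' linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml

At this point, both commands should print 1; after the object deduplication in the next section, both should print 2.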

7.4.4. Deduplication of Objects

Next, the test objects are also very similar; the only things that differ are their names and their path + filename or filepath elements. So you define a macro that accepts the test stem and the path, filename, or filepath attributes.

You use an if-statement here: if, for example, filepath is not supplied, {{% if filepath %}} evaluates as false and the body of the condition is skipped. Conversely, if filepath is supplied, the textfilecontent54_object definition created by the macro includes an ind:filepath child element holding the respective value.

  1. Open the oval/shared.xml file in the editor, if it is not already open:

    [... lab5_oval v0.1.60|+8]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml
  2. Remove the two textfilecontent54_object XML elements and then copy and paste the following block as a replacement (between the test creation and the textfilecontent54_state XML elements):

      {{% macro object_tmout(test_stem, path, filename, filepath) %}}
      <ind:textfilecontent54_object id="object_{{{ test_stem }}}" version="1">
        {{% if path %}}
        <ind:path>{{{ path }}}</ind:path>
        {{% endif %}}
        {{% if filename %}}
        <ind:filename operation="pattern match">{{{ filename }}}</ind:filename>
        {{% endif %}}
        {{% if filepath %}}
        <ind:filepath>{{{ filepath }}}</ind:filepath>
        {{% endif %}}
        <ind:pattern operation="pattern match">^[\s]*TMOUT[\s]*=[\s]*(.*)[\s]*$</ind:pattern>
        <ind:instance datatype="int">1</ind:instance>
      </ind:textfilecontent54_object>
      {{% endmacro %}}
    
      {{{ object_tmout(test_stem="etc_profile_tmout", filepath="/etc/profile") }}}
      {{{ object_tmout(test_stem="etc_profiled_tmout", path="/etc/profile.d", filename="^.*\.sh$") }}}
  3. A macro by itself produces no output; tests and objects are created only when the macro is called, which is why the invocations are part of the pasted content. Keep the macro calls close to each other. Doing this emphasizes that there are two tests: etc_profile_tmout, which examines the single file, and etc_profiled_tmout, which goes through the whole directory.

  4. Finish your edits as usual by pressing Ctrl+X and then entering y.

  5. If you get errors during the build or during the tests and you do not know how to fix them, you are covered. The snippet below represents the OVAL file after performing the deduplication steps described above. To get back on track, copy and paste the text below into the linux_os/guide/system/accounts/accounts-session/accounts_tmout/oval/shared.xml file.

    <def-group>
      <definition class="compliance" id="accounts_tmout" version="2">
        <metadata>
          <title>Set Interactive Session Timeout</title>
          <affected family="unix">
            <platform>multi_platform_rhel</platform>
            <platform>multi_platform_fedora</platform>
            <platform>multi_platform_ol</platform>
          </affected>
          <description>Checks interactive shell timeout</description>
        </metadata>
        <criteria operator="OR">
          <criterion comment="TMOUT value in /etc/profile >= var_accounts_tmout"
            test_ref="test_etc_profile_tmout" />
          <criterion comment="TMOUT value in /etc/profile.d/*.sh >= var_accounts_tmout"
            test_ref="test_etc_profiled_tmout" />
        </criteria>
      </definition>
    
      {{% macro test_tmout(test_stem, files) %}}
      <ind:textfilecontent54_test check="all" check_existence="all_exist"
          comment="TMOUT in {{{ files }}}" id="test_{{{ test_stem }}}" version="1">
        <ind:object object_ref="object_{{{ test_stem }}}" />
        <ind:state state_ref="state_etc_profile_tmout" />
      </ind:textfilecontent54_test>
      {{% endmacro %}}
    
      {{{ test_tmout(  test_stem="etc_profile_tmout", files="/etc/profile") }}}
      {{{ test_tmout(  test_stem="etc_profiled_tmout", files="/etc/profile.d/*.sh") }}}
    
      {{% macro object_tmout(test_stem, path, filename, filepath) %}}
      <ind:textfilecontent54_object id="object_{{{ test_stem }}}" version="1">
        {{% if path %}}
        <ind:path>{{{ path }}}</ind:path>
        {{% endif %}}
        {{% if filename %}}
        <ind:filename operation="pattern match">{{{ filename }}}</ind:filename>
        {{% endif %}}
        {{% if filepath %}}
        <ind:filepath>{{{ filepath }}}</ind:filepath>
        {{% endif %}}
        <ind:pattern operation="pattern match">^[\s]*TMOUT[\s]*=[\s]*(.*)[\s]*$</ind:pattern>
        <ind:instance datatype="int">1</ind:instance>
      </ind:textfilecontent54_object>
      {{% endmacro %}}
    
      {{{ object_tmout(test_stem="etc_profile_tmout", filepath="/etc/profile") }}}
      {{{ object_tmout(test_stem="etc_profiled_tmout", path="/etc/profile.d", filename="^.*\.sh$") }}}
    
      <ind:textfilecontent54_state id="state_etc_profile_tmout" version="1">
        <ind:subexpression datatype="int" operation="equals" var_check="all"
          var_ref="var_accounts_tmout" />
      </ind:textfilecontent54_state>
    
      <external_variable comment="external variable for TMOUT" datatype="int" id="var_accounts_tmout" version="1" />
    </def-group>

    This way, you do not have to worry about copy-paste errors that you may have introduced along the way.

7.4.5. Reassuring Yourself That You Are Safe

  1. Finally, run the rule’s test again—​it may be that a typo was introduced, and the OVAL is not actually correct:

    [... lab5_oval v0.1.60|+8]$ ./build_product rhel8
    ...
    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite accounts_tmout
    ...
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK

    As there are no errors, you can be confident that your check-remediation combination still works as expected.

Tip
You do not need to specify the --datastream parameter when a datastream has been built for only one product, so the command is shorter this time.

7.5. OVAL Development

7.5.1. Correct Handling of Supercompliance

  1. Examine the test scenarios—​for example, the wrong_value.fail.sh scenario.

    1. Open a new terminal window, and change to the directory with the test scenarios. They reside in the tests subdirectory of the rule directory, next to rule.yml:

      [... lab5_oval v0.1.60|+8]$ cd linux_os/guide/system/accounts/accounts-session/accounts_tmout/tests
    2. Open the wrong_value.fail.sh file:

      [... tests v0.1.60|+8]$ nano wrong_value.fail.sh

      As you can see, the test sets the TMOUT value to 1234. The value is correctly considered noncompliant: the timeout should be at most 600 seconds, and 1234 is longer and therefore less secure.

    3. After you finish looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

    4. What about the correct_value.pass.sh scenario? Open it in the editor, as well:

      [... tests v0.1.60|+8]$ nano correct_value.pass.sh

      As you can see, this one sets the TMOUT value to 600, which is the value defined by the profile.

    5. After you finish looking, press Ctrl+X to bring up the "save and exit" option. If you are asked about saving any changes, you probably do not want that, so enter n.

  2. Add another check for a correct value, this time a timeout of 100 seconds. A 100-second timeout is more secure than a 600-second one, so the scenario represents a supercompliant case; that is, the setting is stricter than necessary, but it is still within the range of allowed values.

    1. Copy that one, and make a new test scenario out of it. Run this command in the terminal in the tests directory:

      [... tests v0.1.60|+8]$ cp correct_value.pass.sh supercompliant.pass.sh
    2. Then, open it in the nano editor, and change the value from 600 to 100.

      [... tests v0.1.60|+8]$ nano supercompliant.pass.sh
    3. After you finish editing, press Ctrl+X, then enter y to save and exit. For reference, the supercompliant.pass.sh file now looks like this:

      #!/bin/bash
      #
      # profiles = xccdf_org.ssgproject.content_profile_ospp
      
      if grep -q "TMOUT" /etc/profile; then
              sed -i "s/.*TMOUT.*/TMOUT=100/" /etc/profile
      else
              echo "TMOUT=100" >> /etc/profile
      fi
  3. Now change back to the content root directory, rebuild the product, and run the tests:

    [... tests v0.1.60|+8]$ cd /home/lab-user/labs/lab5_oval
    [... lab5_oval v0.1.60|+8]$ ./build_product rhel8
    ...
    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite accounts_tmout
    ...
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    ERROR - Script supercompliant.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp found issue:
    ERROR - Rule evaluation resulted in fail, instead of expected pass during initial stage
    ERROR - The initial scan failed for rule 'xccdf_org.ssgproject.content_rule_accounts_tmout'.
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK

    The test output tells you that the supercompliant.pass.sh scenario has failed, which was not expected.

  4. Modify the OVAL snippet so that timeouts shorter than the threshold are accepted as well:

    [... lab5_oval v0.1.60|+8]$ cd linux_os/guide/system/accounts/accounts-session/accounts_tmout
    [... accounts_tmout v0.1.60|+8]$ nano oval/shared.xml
  5. The modification should be easy: instead of checking that the timeout value equals the threshold, you use the less than or equal operation defined by the OVAL specification. So just replace equals with less than or equal in the definition of the textfilecontent54_state, like this (a small shell illustration of the same comparison is shown at the end of this section):

      <ind:textfilecontent54_state id="state_etc_profile_tmout" version="1">
        <ind:subexpression datatype="int" operation="less than or equal" var_check="all" var_ref="var_accounts_tmout" />
      </ind:textfilecontent54_state>
  6. After you are finished editing, press Ctrl+X, then enter y to save and exit. This time, when rebuilt and executed again, the tests pass:

    [... accounts_tmout v0.1.60|+8]$ cd /home/lab-user/labs/lab5_oval
    [... lab5_oval v0.1.60|+8]$ ./build_product rhel8
    ...
    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite accounts_tmout
    INFO - The base image option has been specified, choosing Podman-based test environment.
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script supercompliant.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
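
To make the change more tangible, the following is a minimal shell sketch (not part of the lab content) of the same "value is at most the threshold" comparison that the updated OVAL state expresses. It assumes GNU grep with PCRE support (the -P option) and the profile value of 600 seconds:

    # Illustration only: mirror the updated OVAL state comparison in shell.
    threshold=600   # the value of var_accounts_tmout used by the profile
    value=$(grep -oP '^\s*TMOUT\s*=\s*\K[0-9]+' /etc/profile | head -n 1)
    if [ -n "$value" ] && [ "$value" -le "$threshold" ]; then
        echo "compliant: TMOUT=$value is at most $threshold"
    else
        echo "noncompliant: TMOUT is missing or greater than $threshold"
    fi

Values such as 100 or 600 would be reported as compliant, while 1234 or a missing setting would not, which is exactly what the test scenarios now expect.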

7.5.2. Correct Handling of Export

As discussed at the beginning of this exercise, the TMOUT variable can be prefixed by the export keyword—​this is allowed, but not required.
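
For illustration, both of the following lines are valid shell and configure the same timeout; the goal of this section is to make the check accept either form (600 is the value used by the profile):

    # Both lines set a 10-minute timeout; export merely makes the variable
    # visible to child processes as well.
    TMOUT=600
    export TMOUT=600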

  1. Modify the passing correct_value.pass.sh test scenario so that it still sets a correct value, but uses the export keyword:

    [... lab5_oval v0.1.60|+8]$ nano linux_os/guide/system/accounts/accounts-session/accounts_tmout/tests/correct_value.pass.sh
    #!/bin/bash
    #
    # profiles = xccdf_org.ssgproject.content_profile_ospp
    
    if grep -q "TMOUT" /etc/profile; then
            sed -i "s/.*TMOUT.*/export TMOUT=600/" /etc/profile
    else
            echo "export TMOUT=600" >> /etc/profile
    fi
  2. After you are finished editing, press Ctrl+X, then enter y to save and exit.

  3. It is time to rerun the tests. Because you have changed only the test scenario, you do not need to rebuild the product; you can run the test suite right away. Execute it again and expect the Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp found issue: line to appear in the output.

    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite accounts_tmout
    ...

    This confirms the suspicion that the current OVAL check does not accept this configuration even though it is valid. Therefore, to make the tests pass, you have to edit the OVAL so that an occurrence of export is allowed. Thanks to the OVAL optimization that you performed earlier, only one place needs to be changed: the definition of the test object.

  4. Open the OVAL file again:

    [... lab5_oval v0.1.60|+8]$ cd linux_os/guide/system/accounts/accounts-session/accounts_tmout
    [... accounts_tmout v0.1.60|+8]$ nano oval/shared.xml
  5. Note that the current test object specifies the following:

    <ind:pattern operation="pattern match">^[\s]*TMOUT[\s]*=[\s]*(.*)[\s]*$</ind:pattern>
    <ind:instance datatype="int">1</ind:instance>

    It needs to be changed to ignore the export keyword followed by at least one whitespace.

  6. The best approach is to make this an optional group. This means adding (export[\s]+)? to the regular expression, but as you do not want that group to be captured (stored in memory), you have to use some special syntax. Add (?:export[\s]+)? instead, and the section becomes this (a quick way to try the pattern out locally is shown after this procedure):

    <ind:pattern operation="pattern match">^[\s]*(?:export[\s]+)?TMOUT[\s]*=[\s]*(.*)[\s]*$</ind:pattern>
    <ind:instance datatype="int">1</ind:instance>

    The non-capturing group, which consists of export followed by at least one whitespace character, can be either absent or present exactly once.

  7. It is time to save the OVAL. Press Ctrl+X, then enter y to save and exit, and then rebuild the product and run the tests again:

    [... accounts_tmout v0.1.60|+8]$ cd /home/lab-user/labs/lab5_oval
    [... lab5_oval v0.1.60|+8]$ ./build_product rhel8
    ...
    [... lab5_oval v0.1.60|+8]$ sudo python3 tests/test_suite.py rule --container ssg_test_suite accounts_tmout
    INFO - The base image option has been specified, choosing Podman-based test environment.
    INFO - Logging into /home/lab-user/labs/lab5_oval/logs/...
    INFO - xccdf_org.ssgproject.content_rule_accounts_tmout
    INFO - Script comment.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script line_not_there.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script correct_value.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script wrong_value.fail.sh using profile xccdf_org.ssgproject.content_profile_ospp OK
    INFO - Script supercompliant.pass.sh using profile xccdf_org.ssgproject.content_profile_ospp OK

    Everything passes, which means that your check now handles a range of compliant values and does not produce false positives when the export keyword is used.
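
If you ever want to sanity-check the updated pattern outside of the test suite, a quick local experiment could look like the following sketch. It is not part of the lab and assumes GNU grep with PCRE support (the -P option):

    # The first two sample lines match the updated pattern; the commented-out
    # line does not.
    pattern='^[\s]*(?:export[\s]+)?TMOUT[\s]*=[\s]*(.*)[\s]*$'
    printf '%s\n' 'TMOUT=600' 'export TMOUT=600' '# TMOUT=600' | grep -P "$pattern"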

Congratulations—​now you know how to use the ComplianceAsCode project to make OVAL creation less error-prone and how to make sure that OVAL checks are working according to expectations.