Security hardening
Enhancing security of Red Hat Enterprise Linux 9 systems
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting comments on specific passages
- View the documentation in the Multi-page HTML format and ensure that you see the Feedback button in the upper right corner after the page fully loads.
- Use your cursor to highlight the part of the text that you want to comment on.
- Click the Add Feedback button that appears near the highlighted text.
- Add your feedback and click Submit.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialog.
Chapter 1. Securing RHEL during installation
Security begins even before you start the installation of Red Hat Enterprise Linux. Configuring your system securely from the beginning makes it easier to implement additional security settings later.
1.1. BIOS and UEFI security
Password protection for the BIOS (or BIOS equivalent) and the boot loader can prevent unauthorized users who have physical access to systems from booting using removable media or obtaining root privileges through single user mode. The security measures you should take to protect against such attacks depend both on the sensitivity of the information on the workstation and on the location of the machine.
For example, if a machine is used in a trade show and contains no sensitive information, then it may not be critical to prevent such attacks. However, if an employee’s laptop with private, unencrypted SSH keys for the corporate network is left unattended at that same trade show, it could lead to a major security breach with ramifications for the entire company.
If the workstation is located in a place where only authorized or trusted people have access, however, then securing the BIOS or the boot loader may not be necessary.
1.1.1. BIOS passwords
The two primary reasons for password protecting the BIOS of a computer are[1]:
- Preventing changes to BIOS settings — If an intruder has access to the BIOS, they can set it to boot from a CD-ROM or a flash drive. This makes it possible for them to enter rescue mode or single user mode, which in turn allows them to start arbitrary processes on the system or copy sensitive data.
- Preventing system booting — Some BIOSes allow password protection of the boot process. When activated, an attacker is forced to enter a password before the BIOS launches the boot loader.
Because the methods for setting a BIOS password vary between computer manufacturers, consult the computer’s manual for specific instructions.
If you forget the BIOS password, it can be reset either with jumpers on the motherboard or by disconnecting the CMOS battery. For this reason, it is good practice to lock the computer case if possible. However, consult the manual for the computer or motherboard before attempting to disconnect the CMOS battery.
1.1.2. Non-BIOS-based systems security
Other systems and architectures use different programs to perform low-level tasks roughly equivalent to those of the BIOS on x86 systems, for example, the Unified Extensible Firmware Interface (UEFI) shell.
For instructions on password protecting BIOS-like programs, see the manufacturer’s instructions.
1.2. Disk partitioning
Red Hat recommends creating separate partitions for the /boot, /, /home, /tmp, and /var/tmp/ directories.

/boot
- This partition is the first partition that is read by the system during boot up. The boot loader and kernel images that are used to boot your system into Red Hat Enterprise Linux 9 are stored in this partition. This partition should not be encrypted. If this partition is included in / and that partition is encrypted or otherwise becomes unavailable, your system is not able to boot.

/home
- When user data (/home) is stored in / instead of in a separate partition, the partition can fill up, causing the operating system to become unstable. Also, upgrading your system to the next version of Red Hat Enterprise Linux 9 is a lot easier when you can keep your data in the /home partition, because it is not overwritten during installation. If the root partition (/) becomes corrupted, your data could be lost forever. By using a separate partition, there is slightly more protection against data loss. You can also target this partition for frequent backups.

/tmp and /var/tmp/
- Both the /tmp and /var/tmp/ directories are used to store data that does not need to be stored for a long period of time. However, if a lot of data floods one of these directories, it can consume all of your storage space. If this happens and these directories are stored within /, your system could become unstable and crash. For this reason, moving these directories into their own partitions is a good idea.
During the installation process, you have an option to encrypt partitions. You must supply a passphrase. This passphrase serves as a key to unlock the bulk encryption key, which is used to secure the partition’s data.
1.3. Restricting network connectivity during the installation process
When installing Red Hat Enterprise Linux 9, the installation medium represents a snapshot of the system at a particular time. Because of this, it may not be up-to-date with the latest security fixes and may be vulnerable to certain issues that were fixed only after the system provided by the installation medium was released.
When installing a potentially vulnerable operating system, always limit exposure to only the closest necessary network zone. The safest choice is the "no network" zone, which means leaving your machine disconnected during the installation process. In some cases, a LAN or intranet connection is sufficient, while the Internet connection is the riskiest. To follow the best security practices, choose the closest zone with your repository while installing Red Hat Enterprise Linux 9 from a network.
1.4. Installing the minimum amount of packages required
It is best practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. If you are installing from the DVD media, take the opportunity to select exactly what packages you want to install during the installation. If you find you need another package, you can always add it to the system later.
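For example, if you find later that you need an additional package, you can install it at any time; the package name here is illustrative:

# dnf install bash-completion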
1.5. Post-installation procedures
The following steps are the security-related procedures that should be performed immediately after installation of Red Hat Enterprise Linux 9.
Update your system. Enter the following command as root:
# dnf update
Even though the firewall service, firewalld, is automatically enabled with the installation of Red Hat Enterprise Linux, there are scenarios where it might be explicitly disabled, for example, in the kickstart configuration. In such a case, consider re-enabling the firewall.

To start firewalld, enter the following commands as root:

# systemctl start firewalld
# systemctl enable firewalld

To enhance security, disable services you do not need. For example, if there are no printers installed on your computer, disable the cups service using the following command:

# systemctl disable cups
To review active services, enter the following command:
$ systemctl list-units | grep service
Chapter 2. Installing the system in FIPS mode
To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140-3, you must operate RHEL 9 in FIPS mode. Starting the installation in FIPS mode is the recommended method if you aim for FIPS compliance.
The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements.
2.1. Federal Information Processing Standards 140 and FIPS mode
The Federal Information Processing Standards (FIPS) Publication 140 is a series of computer security standards developed by the National Institute of Standards and Technology (NIST) to ensure the quality of cryptographic modules. The FIPS 140 standard ensures that cryptographic tools implement their algorithms correctly. Runtime cryptographic algorithm and integrity self-tests are some of the mechanisms to ensure a system uses cryptography that meets the requirements of the standard.
To ensure that your RHEL system generates and uses all cryptographic keys only with FIPS-approved algorithms, you must switch RHEL to FIPS mode.
You can enable FIPS mode by using one of the following methods:
- Starting the installation in FIPS mode
- Switching the system into FIPS mode after the installation
If you aim for FIPS compliance, start the installation in FIPS mode. This avoids the regeneration of cryptographic key material and the re-evaluation of the resulting system's compliance that are associated with converting already deployed systems.
To operate a FIPS-compliant system, create all cryptographic key material in FIPS mode. Furthermore, the cryptographic key material must never leave the FIPS environment unless it is securely wrapped and never unwrapped in non-FIPS environments.
Switching the system to FIPS mode by using the fips-mode-setup tool does not guarantee compliance with the FIPS 140 standard. Re-generating all cryptographic keys after setting the system to FIPS mode may not be possible. For example, in the case of an existing IdM realm with users' cryptographic keys, you cannot re-generate all the keys. If you cannot start the installation in FIPS mode, always enable FIPS mode as the first step after the installation, before you perform any post-installation configuration steps or install any workloads.
The fips-mode-setup tool also uses the FIPS system-wide cryptographic policy internally. But on top of what the update-crypto-policies --set FIPS command does, fips-mode-setup ensures the installation of the FIPS dracut module by using the fips-finish-install tool, adds the fips=1 boot option to the kernel command line, and regenerates the initial RAM disk.
Furthermore, enforcement of restrictions required in FIPS mode depends on the contents of the /proc/sys/crypto/fips_enabled file. If the file contains 1, RHEL core cryptographic components switch to a mode in which they use only FIPS-approved implementations of cryptographic algorithms. If /proc/sys/crypto/fips_enabled contains 0, the cryptographic components do not enable their FIPS mode.
The FIPS system-wide cryptographic policy helps to configure higher-level restrictions. Therefore, communication protocols supporting cryptographic agility do not announce ciphers that the system would refuse when selected. For example, the ChaCha20 algorithm is not FIPS-approved, and the FIPS cryptographic policy ensures that TLS servers and clients do not announce the TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 cipher suite, because any attempt to use such a cipher fails.
If you operate RHEL in FIPS mode and use an application providing its own FIPS-mode-related configuration options, ignore these options and the corresponding application guidance. A system running in FIPS mode and the system-wide cryptographic policies enforce only FIPS-compliant cryptography. For example, the Node.js configuration option --enable-fips is ignored if the system runs in FIPS mode. If you use the --enable-fips option on a system not running in FIPS mode, you do not meet the FIPS 140 compliance requirements.
The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements by the National Institute of Standards and Technology (NIST) Cryptographic Module Validation Program (CMVP). You can see the validation status of cryptographic modules in the FIPS 140-2 and FIPS 140-3 section of the Compliance Activities and Government Standards Knowledgebase article.
A RHEL 9.2 and later system running in FIPS mode enforces that any TLS 1.2 connection must use the Extended Master Secret (EMS) extension (RFC 7627), as required by the FIPS 140-3 standard. Thus, legacy clients not supporting EMS or TLS 1.3 cannot connect to RHEL 9 servers running in FIPS mode, and RHEL 9 clients in FIPS mode cannot connect to servers that support only TLS 1.2 without EMS. See TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2.
Additional resources
2.2. Installing the system with FIPS mode enabled
To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140, enable FIPS mode during the system installation.
Only enabling FIPS mode during the RHEL installation ensures that the system generates all keys with FIPS-approved algorithms and with continuous monitoring tests in place.
Procedure
- Add the fips=1 option to the kernel command line during the system installation; see the sketch after this list.
- During the software selection stage, do not install any third-party software.
- After the installation, the system starts in FIPS mode automatically.
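A minimal sketch of editing the installer boot entry, assuming a UEFI system where you press e at the boot menu; the stage2 label is illustrative and differs per installation medium:

linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 quiet fips=1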
Verification
After the system starts, check that FIPS mode is enabled:
$ fips-mode-setup --check
FIPS mode is enabled.
Additional resources
- Editing boot options section in the Boot options for RHEL Installer document
2.3. Additional resources
Chapter 3. Using system-wide cryptographic policies
The system-wide cryptographic policies component configures the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. It provides a small set of policies, which the administrator can select.
3.1. System-wide cryptographic policies
When a system-wide policy is set up, applications in RHEL follow it and refuse to use algorithms and protocols that do not meet the policy, unless you explicitly request the application to do so. That is, the policy applies to the default behavior of applications when running with the system-provided configuration, but you can override it if required.
RHEL 9 contains the following predefined policies:
| Policy | Description |
|---|---|
| DEFAULT | The default system-wide cryptographic policy level offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. |
| LEGACY | This policy ensures maximum compatibility with Red Hat Enterprise Linux 6 and earlier; it is less secure due to an increased attack surface. SHA-1 is allowed to be used as TLS hash, signature, and algorithm. CBC-mode ciphers are allowed to be used with SSH. Applications using GnuTLS allow certificates signed with SHA-1. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. |
| FUTURE | A stricter forward-looking security level intended for testing a possible future policy. This policy does not allow the use of SHA-1 in DNSSEC or as an HMAC. SHA2-224 and SHA3-224 hashes are rejected. 128-bit ciphers are disabled. CBC-mode ciphers are disabled except in Kerberos. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 3072 bits long. If your system communicates on the public internet, you might face interoperability problems. |
| FIPS | A policy level that conforms with the FIPS 140 requirements. The fips-mode-setup tool, which switches the RHEL system into FIPS mode, uses this policy internally. |
Red Hat continuously adjusts all policy levels so that all libraries, except when using the LEGACY policy, provide secure defaults. Even though the LEGACY profile does not provide secure defaults, it does not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux.

Such changes reflect new security standards and new security research. If you must ensure interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, you should opt out of the system-wide cryptographic policies for components that interact with that system, or re-enable specific algorithms using custom cryptographic policies.
Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment.

To work around this problem, use the DEFAULT cryptographic policy while connecting to the Customer Portal API.
The specific algorithms and ciphers described in the policy levels as allowed are available only if an application supports them.
Tool for managing the cryptographic policies
To view or change the current system-wide cryptographic policy, use the update-crypto-policies tool, for example:

$ update-crypto-policies --show
DEFAULT
# update-crypto-policies --set FUTURE
Setting system policy to FUTURE
To ensure that the change of the cryptographic policy is applied, restart the system.
Strong cryptographic defaults by removing insecure cipher suites and protocols
The following list contains cipher suites and protocols removed from the core cryptographic libraries in RHEL. They are not present in the sources, or their support is disabled during the build, so applications cannot use them.
- DES (since RHEL 7)
- All export grade cipher suites (since RHEL 7)
- MD5 in signatures (since RHEL 7)
- SSLv2 (since RHEL 7)
- SSLv3 (since RHEL 8)
- All ECC curves < 224 bits (since RHEL 6)
- All binary field ECC curves (since RHEL 6)
Algorithms disabled in all policy levels
The following algorithms are disabled in the LEGACY, DEFAULT, FUTURE, and FIPS cryptographic policies included in RHEL 9. They can be enabled only by applying a custom cryptographic policy or by an explicit configuration of individual applications, but the resulting configuration falls outside of the Production Support Scope of Coverage.
- TLS older than version 1.2 (since RHEL 9, was < 1.0 in RHEL 8)
- DTLS older than version 1.2 (since RHEL 9, was < 1.0 in RHEL 8)
- DH with parameters < 2048 bits (since RHEL 9, was < 1024 bits in RHEL 8)
- RSA with key size < 2048 bits (since RHEL 9, was < 1024 bits in RHEL 8)
- DSA (since RHEL 9, was < 1024 bits in RHEL 8)
- 3DES (since RHEL 9)
- RC4 (since RHEL 9)
- FFDHE-1024 (since RHEL 9)
- DHE-DSS (since RHEL 9)
- Camellia (since RHEL 9)
- ARIA
- IKEv1 (since RHEL 8)
Algorithms enabled in the cryptographic policies
The following table shows the comparison of all four cryptographic policies with regard to selected algorithms.
| | LEGACY | DEFAULT | FIPS | FUTURE |
|---|---|---|---|---|
| IKEv1 | no | no | no | no |
| 3DES | no | no | no | no |
| RC4 | no | no | no | no |
| DH | min. 2048-bit | min. 2048-bit | min. 2048-bit | min. 3072-bit |
| RSA | min. 2048-bit | min. 2048-bit | min. 2048-bit | min. 3072-bit |
| DSA | no | no | no | no |
| TLS v1.1 and older | no | no | no | no |
| TLS v1.2 and newer | yes | yes | yes | yes |
| SHA-1 in digital signatures and certificates | yes | no | no | no |
| CBC mode ciphers | yes | no[a] | no[b] | no[c] |
| Symmetric ciphers with keys < 256 bits | yes | yes | yes | no |

[a] CBC ciphers are disabled for SSH.
[b] CBC ciphers are disabled for all protocols except Kerberos.
[c] CBC ciphers are disabled for all protocols except Kerberos.
Additional resources
- update-crypto-policies(8) man page
3.2. Switching the system-wide cryptographic policy to mode compatible with earlier releases
The default system-wide cryptographic policy in Red Hat Enterprise Linux 9 does not allow communication using older, insecure protocols. For environments that require compatibility with Red Hat Enterprise Linux 6 and, in some cases, also with earlier releases, the less secure LEGACY policy level is available.

Switching to the LEGACY policy level results in a less secure system and applications.
Procedure
To switch the system-wide cryptographic policy to the LEGACY level, enter the following command as root:

# update-crypto-policies --set LEGACY
Setting system policy to LEGACY
Additional resources
- For the list of available cryptographic policy levels, see the update-crypto-policies(8) man page.
- For defining custom cryptographic policies, see the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page.
3.3. Setting up system-wide cryptographic policies in the web console
You can set one of the system-wide cryptographic policies and subpolicies directly in the RHEL web console interface. Besides the four predefined system-wide cryptographic policies, you can also apply the following combinations of policies and subpolicies through the graphical interface:

- DEFAULT:SHA1 is the DEFAULT policy with the SHA-1 algorithm enabled.
- LEGACY:AD-SUPPORT is the LEGACY policy with less secure settings that improve interoperability for Active Directory services.
- FIPS:OSPP is the FIPS policy with further restrictions inspired by the Common Criteria for Information Technology Security Evaluation standard.
Prerequisites
- The RHEL 9 web console has been installed. For details, see Installing and enabling the web console.
- You have root privileges or permissions to enter administrative commands with sudo.
Procedure
- Log in to the web console. For more information, see Logging in to the web console.
- In the Configuration card of the Overview page, click your current policy value next to Crypto policy.
- In the Change crypto policy dialog window, click on the policy you want to start using on your system.
- Click the Apply and reboot button.
Verification
- After the restart, log back in to the web console, and check that the Crypto policy value corresponds to the one you selected. Alternatively, you can enter the update-crypto-policies --show command to display the current system-wide cryptographic policy in your terminal.
3.4. Switching the system to FIPS mode
The system-wide cryptographic policies contain a policy level that enables cryptographic algorithms in accordance with the requirements of the Federal Information Processing Standard (FIPS) Publication 140. The fips-mode-setup tool that enables or disables FIPS mode internally uses the FIPS system-wide cryptographic policy.
Switching the system to FIPS mode by using the FIPS system-wide cryptographic policy does not guarantee compliance with the FIPS 140 standard. Re-generating all cryptographic keys after setting the system to FIPS mode may not be possible. For example, in the case of an existing IdM realm with users' cryptographic keys, you cannot re-generate all the keys.
The fips-mode-setup tool uses the FIPS policy internally. But on top of what the update-crypto-policies command with the --set FIPS option does, fips-mode-setup ensures the installation of the FIPS dracut module by using the fips-finish-install tool, adds the fips=1 boot option to the kernel command line, and regenerates the initial RAM disk.
Only enabling FIPS mode during the RHEL installation ensures that the system generates all keys with FIPS-approved algorithms and with continuous monitoring tests in place.
The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements.
Procedure
To switch the system to FIPS mode:
# fips-mode-setup --enable
Kernel initramdisks are being regenerated. This might take some time.
Setting system policy to FIPS
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.
FIPS mode will be enabled.
Please reboot the system for the setting to take effect.
Restart your system to allow the kernel to switch to FIPS mode:
# reboot
Verification
After the restart, you can check the current state of FIPS mode:
# fips-mode-setup --check
FIPS mode is enabled.
Additional resources
- fips-mode-setup(8) man page
- Installing the system in FIPS mode
- Security Requirements for Cryptographic Modules on the National Institute of Standards and Technology (NIST) web site.
3.5. Enabling FIPS mode in a container
To enable the full set of cryptographic module self-checks mandated by the Federal Information Processing Standard Publication 140-2 (FIPS mode), the host system kernel must be running in FIPS mode. The podman
utility automatically enables FIPS mode on supported containers.
The fips-mode-setup
command does not work correctly in containers, and it cannot be used to enable or check FIPS mode in this scenario.
The cryptographic modules of RHEL 9 are not yet certified for the FIPS 140-3 requirements.
Prerequisites
- The host system must be in FIPS mode.
Procedure
- On systems with FIPS mode enabled, the podman utility automatically enables FIPS mode on supported containers.
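Because containers share the kernel with the host, you can confirm from inside a container that the kernel runs in FIPS mode by reading the /proc interface described earlier. This is a minimal sketch, and the image name is illustrative:

# podman run --rm registry.access.redhat.com/ubi9/ubi cat /proc/sys/crypto/fips_enabled
1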
Additional resources
3.6. List of RHEL applications using cryptography that is not compliant with FIPS 140-3
To pass all relevant cryptographic certifications, such as FIPS 140-3, use libraries from the core cryptographic components set. These libraries, except for libgcrypt, also follow the RHEL system-wide cryptographic policies.

See the RHEL core cryptographic components article for an overview of the core cryptographic components, information on how they are selected, how they are integrated into the operating system, how they support hardware security modules and smart cards, and how cryptographic certifications apply to them.
List of RHEL 9 applications using cryptography that is not compliant with FIPS 140-3
- Bacula
- Implements the CRAM-MD5 authentication protocol.
- Cyrus SASL
- Uses the SCRAM-SHA-1 authentication method.
- Dovecot
- Uses SCRAM-SHA-1.
- Emacs
- Uses SCRAM-SHA-1.
- FreeRADIUS
- Uses MD5 and SHA-1 for authentication protocols.
- Ghostscript
- Custom cryptography implementation (MD5, RC4, SHA-2, AES) to encrypt and decrypt documents.
- GRUB2
- Supports legacy firmware protocols requiring SHA-1 and includes the libgcrypt library.
- iPXE
- Implements TLS stack.
- Kerberos
- Preserves support for SHA-1 (interoperability with Windows).
- Lasso
- The lasso_wsse_username_token_derive_key() key derivation function (KDF) uses SHA-1.
- MariaDB, MariaDB Connector
- The mysql_native_password authentication plugin uses SHA-1.
- MySQL
- mysql_native_password uses SHA-1.
- OpenIPMI
- The RAKP-HMAC-MD5 authentication method is not approved for FIPS usage and does not work in FIPS mode.
- Ovmf (UEFI firmware), Edk2, shim
- Full cryptographic stack (an embedded copy of the OpenSSL library).
- Perl
- Uses HMAC, HMAC-SHA1, HMAC-MD5, SHA-1, SHA-224,….
- Pidgin
- Implements DES and RC4 ciphers.
- PKCS #12 file processing (OpenSSL, GnuTLS, NSS, Firefox, Java)
- All uses of PKCS #12 are not FIPS-compliant, because the Key Derivation Function (KDF) used for calculating the whole-file HMAC is not FIPS-approved. As such, PKCS #12 files are considered to be plain text for the purposes of FIPS compliance. For key-transport purposes, wrap PKCS #12 (.p12) files using a FIPS-approved encryption scheme.
- Poppler
- Can save PDFs with signatures, passwords, and encryption based on non-allowed algorithms if they are present in the original PDF (for example MD5, RC4, and SHA-1).
- PostgreSQL
- Implements Blowfish, DES, and MD5. A KDF uses SHA-1.
- QAT Engine
- Mixed hardware and software implementation of cryptographic primitives (RSA, EC, DH, AES,…)
- Ruby
- Provides insecure MD5 and SHA-1 library functions.
- Samba
- Preserves support for RC4 and DES (interoperability with Windows).
- Syslinux
- BIOS passwords use SHA-1.
- Unbound
- DNS specification requires that DNSSEC resolvers use a SHA-1-based algorithm in DNSKEY records for validation.
- Valgrind
- AES, SHA hashes.[2]
Additional resources
- FIPS 140-2 and FIPS 140-3 section in the Compliance Activities and Government Standards Knowledgebase article
- RHEL core cryptographic components Knowledgebase article
3.7. Excluding an application from following system-wide crypto policies
Preferably, customize the cryptographic settings used by your application by configuring supported cipher suites and protocols directly in the application.

You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends directory and replace it with your customized cryptographic settings. This configuration prevents the use of system-wide cryptographic policies for applications that use the excluded back end. Note that Red Hat does not support this modification.
3.7.1. Examples of opting out of system-wide crypto policies
wget
To customize cryptographic settings used by the wget network downloader, use the --secure-protocol and --ciphers options. For example:
$ wget --secure-protocol=TLSv1_1 --ciphers="SECURE128" https://example.com
See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information.
curl
To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of ciphers as a value. For example:
$ curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'
See the curl(1) man page for more information.
Firefox
Even though you cannot opt out of system-wide cryptographic policies in the Firefox web browser, you can further restrict supported ciphers and TLS versions in Firefox's Configuration Editor. Type about:config in the address bar and change the value of the security.tls.version.min option as required. Setting security.tls.version.min to 1 allows TLS 1.0 as the minimum required version, setting it to 2 enables TLS 1.1, and so on.
OpenSSH
To opt out of the system-wide cryptographic policies for your OpenSSH server, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/sshd_config.d/ directory, with a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and with a .conf suffix, for example, 49-crypto-policy-override.conf.
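A minimal sketch of such a drop-in file; the Ciphers and MACs selections are illustrative examples, not recommendations:

# cat /etc/ssh/sshd_config.d/49-crypto-policy-override.conf
Ciphers aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
MACs hmac-sha2-512,hmac-sha2-256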
See the sshd_config(5) man page for more information.
To opt out of system-wide cryptographic policies for your OpenSSH client, perform one of the following tasks:
- For a given user, override the global ssh_config with a user-specific configuration in the ~/.ssh/config file; see the sketch after this list.
- For the entire system, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/ssh_config.d/ directory, with a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and with a .conf suffix, for example, 49-crypto-policy-override.conf.
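A minimal sketch of the user-specific override mentioned in the first item; the host name and algorithm choices are illustrative:

$ cat ~/.ssh/config
Host legacy-server.example.com
    Ciphers aes256-ctr
    KexAlgorithms diffie-hellman-group14-sha256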
See the ssh_config(5) man page for more information.
Libreswan
See Configuring IPsec connections that opt out of the system-wide crypto policies in the Securing networks document for detailed information.
Additional resources
- update-crypto-policies(8) man page
3.8. Customizing system-wide cryptographic policies with subpolicies
Use this procedure to adjust the set of enabled cryptographic algorithms or protocols.
You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or define such a policy from scratch.
The concept of scoped policies allows enabling different sets of algorithms for different back ends. You can limit each configuration directive to specific protocols, libraries, or services. Furthermore, directives can use asterisks as wildcards to specify multiple values.
The /etc/crypto-policies/state/CURRENT.pol
file lists all settings in the currently applied system-wide cryptographic policy after wildcard expansion. To make your cryptographic policy more strict, consider using values listed in the /usr/share/crypto-policies/policies/FUTURE.pol
file.
You can find example subpolicies in the /usr/share/crypto-policies/policies/modules/ directory. The subpolicy files in this directory also contain descriptions in commented-out lines.
Procedure
Change to the /etc/crypto-policies/policies/modules/ directory:

# cd /etc/crypto-policies/policies/modules/
Create subpolicies for your adjustments, for example:
# touch MYCRYPTO-1.pmod
# touch SCOPES-AND-WILDCARDS.pmod

Important: Use upper-case letters in file names of policy modules.
Open the policy modules in a text editor of your choice and insert options that modify the system-wide cryptographic policy, for example:
# vi MYCRYPTO-1.pmod
min_rsa_size = 3072
hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512
# vi SCOPES-AND-WILDCARDS.pmod
# Disable the AES-128 cipher, all modes
cipher = -AES-128-*
# Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK)
cipher@TLS = -CHACHA20-POLY1305
# Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH)
group@SSH = FFDHE-1024+
# Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH)
cipher@SSH = -*-CBC
# Allow the AES-256-CBC cipher in applications using libssh
cipher@libssh = AES-256-CBC+
- Save the changes in the module files.
Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level:

# update-crypto-policies --set DEFAULT:MYCRYPTO-1:SCOPES-AND-WILDCARDS
To make your cryptographic settings effective for already running services and applications, restart the system:
# reboot
Verification
Check that the /etc/crypto-policies/state/CURRENT.pol file contains your changes, for example:

$ cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size
min_rsa_size = 3072
Additional resources
- Custom Policies section in the update-crypto-policies(8) man page
- Crypto Policy Definition Format section in the crypto-policies(7) man page
- How to customize crypto policies in RHEL 8.2 Red Hat blog article
3.9. Re-enabling SHA-1
The use of the SHA-1 algorithm for creating and verifying signatures is restricted in the DEFAULT
cryptographic policy. If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by applying the SHA1
subpolicy, which RHEL 9 provides by default. Note that it weakens the security of the system.
Prerequisites
- The system uses the DEFAULT system-wide cryptographic policy.
Procedure
Apply the SHA1 subpolicy to the DEFAULT cryptographic policy:

# update-crypto-policies --set DEFAULT:SHA1
Setting system policy to DEFAULT:SHA1
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.
Restart the system:
# reboot
Verification
Display the current cryptographic policy:
# update-crypto-policies --show
DEFAULT:SHA1
Switching to the LEGACY cryptographic policy by using the update-crypto-policies --set LEGACY command also enables SHA-1 for signatures. However, the LEGACY cryptographic policy makes your system much more vulnerable by also enabling other weak cryptographic algorithms. Use this workaround only for scenarios that require the enablement of legacy cryptographic algorithms other than SHA-1 signatures.
Additional resources
3.10. Creating and setting a custom system-wide cryptographic policy
The following steps demonstrate customizing the system-wide cryptographic policies with a complete policy file.
Procedure
Create a policy file for your customizations:
# cd /etc/crypto-policies/policies/
# touch MYPOLICY.pol
Alternatively, start by copying one of the four predefined policy levels:
# cp /usr/share/crypto-policies/policies/DEFAULT.pol /etc/crypto-policies/policies/MYPOLICY.pol
Edit the file with your custom cryptographic policy in a text editor of your choice to fit your requirements, for example:
# vi /etc/crypto-policies/policies/MYPOLICY.pol
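For example, an illustrative adjustment in MYPOLICY.pol might raise the minimum RSA key size by using a directive from the crypto policy definition format:

min_rsa_size = 3072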
Switch the system-wide cryptographic policy to your custom level:
# update-crypto-policies --set MYPOLICY
To make your cryptographic settings effective for already running services and applications, restart the system:
# reboot
Additional resources
- Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page
- How to customize crypto policies in RHEL Red Hat blog article
Chapter 4. Setting a custom cryptographic policy by using the crypto-policies RHEL System Role
As an administrator, you can use the crypto_policies
RHEL System Role to quickly and consistently configure custom cryptographic policies across many different systems using the Ansible Core package.
4.1. crypto_policies System Role variables and facts
In a crypto_policies
System Role playbook, you can define the parameters for the crypto_policies
configuration file according to your preferences and limitations.
If you do not configure any variables, the System Role does not configure the system and only reports the facts.
Selected variables for the crypto_policies System Role

crypto_policies_policy
- Determines the cryptographic policy the System Role applies to the managed nodes. For details about the different crypto policies, see System-wide cryptographic policies.

crypto_policies_reload
- If set to yes, the affected services, currently the ipsec, bind, and sshd services, reload after applying a crypto policy. Defaults to yes.

crypto_policies_reboot_ok
- If set to yes, and a reboot is necessary after the System Role changes the crypto policy, it sets crypto_policies_reboot_required to yes. Defaults to no.

Facts set by the crypto_policies System Role

crypto_policies_active
- Lists the currently selected policy.

crypto_policies_available_policies
- Lists all policies available on the system.

crypto_policies_available_subpolicies
- Lists all subpolicies available on the system.
Additional resources
4.2. Setting a custom cryptographic policy using the crypto_policies System Role
You can use the crypto_policies
System Role to configure a large number of managed nodes consistently from a single control node.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the crypto_policies System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems. On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core
package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Create a new playbook.yml file with the following content:

---
- hosts: all
  tasks:
    - name: Configure crypto policies
      include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        - crypto_policies_policy: FUTURE
        - crypto_policies_reboot_ok: true
You can replace the FUTURE value with your preferred crypto policy, for example: DEFAULT, LEGACY, or FIPS:OSPP.

The crypto_policies_reboot_ok: true variable causes the system to reboot after the System Role changes the cryptographic policy.

For more details, see crypto_policies System Role variables and facts.
Optional: Verify playbook syntax.
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file playbook.yml
Verification
On the control node, create another playbook named, for example, verify_playbook.yml:

- hosts: all
  tasks:
    - name: Verify active crypto policy
      include_role:
        name: rhel-system-roles.crypto_policies

    - debug:
        var: crypto_policies_active
This playbook does not change any configurations on the system, only reports the active policy on the managed nodes.
Run the playbook on the same inventory file:
# ansible-playbook -i inventory_file verify_playbook.yml

TASK [debug] **************************
ok: [host] => {
    "crypto_policies_active": "FUTURE"
}
The "crypto_policies_active" variable shows the policy active on the managed node.
4.3. Additional resources
- /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file
- ansible-playbook(1) man page
- Preparing a control node and managed nodes to use RHEL System Roles
Chapter 5. Configuring applications to use cryptographic hardware through PKCS #11
Separating parts of your secret information onto dedicated cryptographic devices, such as smart cards and cryptographic tokens for end-user authentication and hardware security modules (HSM) for server applications, provides an additional layer of security. In RHEL, support for cryptographic hardware through the PKCS #11 API is consistent across different applications, and the isolation of secrets on cryptographic hardware is not a complicated task.
5.1. Cryptographic hardware support through PKCS #11
PKCS #11 (Public-Key Cryptography Standard) defines an application programming interface (API) to cryptographic devices that hold cryptographic information and perform cryptographic functions. These devices are called tokens, and they can be implemented in a hardware or software form.
A PKCS #11 token can store various object types including a certificate; a data object; and a public, private, or secret key. These objects are uniquely identifiable through the PKCS #11 URI scheme.
A PKCS #11 URI is a standard way to identify a specific object in a PKCS #11 module according to the object attributes. This enables you to configure all libraries and applications with the same configuration string in the form of a URI.
RHEL provides the OpenSC PKCS #11 driver for smart cards by default. However, hardware tokens and HSMs can have their own PKCS #11 modules that do not have their counterpart in the system. You can register such PKCS #11 modules with the p11-kit
tool, which acts as a wrapper over the registered smart-card drivers in the system.
To make your own PKCS #11 module work on the system, add a new text file to the /etc/pkcs11/modules/ directory. For example, the OpenSC configuration file in p11-kit looks as follows:
$ cat /usr/share/p11-kit/modules/opensc.module
module: opensc-pkcs11.so
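Following the same format, a sketch of a configuration file for your own module might look as follows; the file name and module path are hypothetical:

$ cat /etc/pkcs11/modules/my-hsm.module
module: /usr/lib64/pkcs11/my-hsm-pkcs11.so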
Additional resources
5.2. Using SSH keys stored on a smart card
Red Hat Enterprise Linux enables you to use RSA and ECDSA keys stored on a smart card on OpenSSH clients. Use this procedure to enable authentication using a smart card instead of using a password.
Prerequisites
- On the client side, the opensc package is installed and the pcscd service is running.
Procedure
List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save the output to the keys.pub file:
$ ssh-keygen -D pkcs11: > keys.pub
$ ssh-keygen -D pkcs11:
ssh-rsa AAAAB3NzaC1yc2E...KKZMzcQZzx pkcs11:id=%02;object=SIGN%20pubkey;token=SSH%20key;manufacturer=piv_II?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
ecdsa-sha2-nistp256 AAA...J0hkYnnsM= pkcs11:id=%01;object=PIV%20AUTH%20pubkey;token=SSH%20key;manufacturer=piv_II?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
To enable authentication using a smart card on a remote server (example.com), transfer the public key to the remote server. Use the ssh-copy-id command with keys.pub created in the previous step:

$ ssh-copy-id -f -i keys.pub username@example.com
To connect to example.com using the ECDSA key from the output of the ssh-keygen -D command in step 1, you can use just a subset of the URI, which uniquely references your key, for example:

$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" example.com
Enter PIN for 'SSH key':
[example.com] $
You can use the same URI string in the ~/.ssh/config file to make the configuration permanent:

$ cat ~/.ssh/config
IdentityFile "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so"
$ ssh example.com
Enter PIN for 'SSH key':
[example.com] $
Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is registered in p11-kit, you can simplify the previous commands:

$ ssh -i "pkcs11:id=%01" example.com
Enter PIN for 'SSH key':
[example.com] $
If you skip the id=
part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module. This can reduce the amount of typing required:
$ ssh -i pkcs11: example.com
Enter PIN for 'SSH key':
[example.com] $
Additional resources
- Fedora 28: Better smart card support in OpenSSH
- p11-kit(8), opensc.conf(5), pcscd(8), ssh(1), and ssh-keygen(1) man pages
5.3. Configuring applications to authenticate using certificates from smart cards
Authentication using smart cards in applications may increase security and simplify automation.
- The wget network downloader enables you to specify PKCS #11 URIs instead of paths to locally stored private keys, and thus simplifies creating scripts for tasks that require safely stored private keys and certificates. For example:

$ wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/

See the wget(1) man page for more information.

- Specifying a PKCS #11 URI for use by the curl tool is analogous:

$ curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/

See the curl(1) man page for more information.

Note: Because a PIN is a security measure that controls access to keys stored on a smart card and the configuration file contains the PIN in plain-text form, consider additional protection to prevent an attacker from reading the PIN. For example, you can use the pin-source attribute and provide a file: URI for reading the PIN from a file. See RFC 7512: PKCS #11 URI Scheme Query Attribute Semantics for more information. Note that using a command path as a value of the pin-source attribute is not supported.

- The Firefox web browser automatically loads the p11-kit-proxy module. This means that every supported smart card in the system is automatically detected. For using TLS client authentication, no additional setup is required and keys from a smart card are automatically used when a server requests them.
Using PKCS #11 URIs in custom applications
If your application uses the GnuTLS or NSS library, support for PKCS #11 URIs is ensured by their built-in support for PKCS #11. Also, applications relying on the OpenSSL library can access cryptographic hardware modules thanks to the openssl-pkcs11 engine.

With applications that require working with private keys on smart cards and that do not use NSS, GnuTLS, or OpenSSL, use p11-kit to register PKCS #11 modules.
Additional resources
- p11-kit(8) man page.
5.4. Using HSMs protecting private keys in Apache
The Apache
HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers.
For secure communication in the form of the HTTPS protocol, the Apache
HTTP server (httpd
) uses the OpenSSL library. OpenSSL does not support PKCS #11 natively. To use HSMs, you have to install the openssl-pkcs11
package, which provides access to PKCS #11 modules through the engine interface. You can use a PKCS #11 URI instead of a regular file name to specify a server key and a certificate in the /etc/httpd/conf.d/ssl.conf
configuration file, for example:
SSLCertificateFile "pkcs11:id=%01;token=softhsm;type=cert"
SSLCertificateKeyFile "pkcs11:id=%01;token=softhsm;type=private?pin-value=111111"
Install the httpd-manual
package to obtain complete documentation for the Apache
HTTP Server, including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf
configuration file are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html
file.
5.5. Using HSMs protecting private keys in Nginx
The Nginx
HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers.
Because Nginx also uses the OpenSSL library for cryptographic operations, support for PKCS #11 must go through the openssl-pkcs11 engine. Nginx currently supports only loading private keys from an HSM, and a certificate must be provided separately as a regular file. Modify the ssl_certificate and ssl_certificate_key options in the server section of the /etc/nginx/nginx.conf configuration file:
ssl_certificate /path/to/cert.pem;
ssl_certificate_key "engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111";
Note that the engine:pkcs11: prefix is needed for the PKCS #11 URI in the Nginx configuration file. This is because the other pkcs11: prefix refers to the engine name.
5.6. Additional resources
- pkcs11.conf(5) man page.
Chapter 6. Controlling access to smart cards using polkit
To cover possible threats that cannot be prevented by mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, and for more fine-grained control, RHEL uses the polkit framework for controlling access to smart cards.
System administrators can configure polkit
to fit specific scenarios, such as smart-card access for non-privileged or non-local users or services.
6.1. Smart-card access control through polkit
The Personal Computer/Smart Card (PC/SC) protocol specifies a standard for integrating smart cards and their readers into computing systems. In RHEL, the pcsc-lite
package provides middleware to access smart cards that use the PC/SC API. A part of this package, the pcscd
(PC/SC Smart Card) daemon, ensures that the system can access a smart card using the PC/SC protocol.
Because access-control mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, do not cover all possible threats, RHEL uses the polkit
framework for more robust access control. The polkit
authorization manager can grant access to privileged operations. In addition to granting access to disks, you can also use polkit to specify policies for securing smart cards. For example, you can define which users can perform which operations with a smart card.
After installing the pcsc-lite
package and starting the pcscd
daemon, the system enforces policies defined in the /usr/share/polkit-1/actions/
directory. The default system-wide policy is in the /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy
file. Polkit policy files use the XML format and the syntax is described in the polkit(8)
man page.
The polkitd
service monitors the /etc/polkit-1/rules.d/
and /usr/share/polkit-1/rules.d/
directories for any changes in rule files stored in these directories. The files contain authorization rules in JavaScript format. System administrators can add custom rule files in both directories, and polkitd
reads them in lexical order based on their file name. If two files have the same names, then the file in /etc/polkit-1/rules.d/
is read first.
Additional resources
- polkit(8), polkitd(8), and pcscd(8) man pages.
6.2. Troubleshooting problems related to PC/SC and polkit
Polkit policies that are automatically enforced after you install the pcsc-lite
package and start the pcscd
daemon may ask for authentication in the user’s session even if the user does not directly interact with a smart card. In GNOME, you can see the following error message:
Authentication is required to access the PC/SC daemon
Note that the system can install the pcsc-lite
package as a dependency when you install other packages related to smart cards such as opensc
.
If your scenario does not require any interaction with smart cards and you want to prevent displaying authorization requests for the PC/SC daemon, you can remove the pcsc-lite
package. Keeping the minimum of necessary packages is a good security practice anyway.
If you use smart cards, start troubleshooting by checking the rules in the system-provided policy file at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. You can add your custom rule files to the policy in the /etc/polkit-1/rules.d/ directory, for example, 03-allow-pcscd.rules. Note that the rule files use JavaScript syntax, whereas the policy file is in XML format.
To understand what authorization requests the system displays, check the Journal log, for example:
$ journalctl -b | grep pcsc
...
Process 3087 (user: 1001) is NOT authorized for action: access_pcsc
...
The previous log entry means that the user is not authorized to perform an action by the policy. You can solve this denial by adding a corresponding rule to /etc/polkit-1/rules.d/
.
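For example, a minimal sketch of such a rule, granting PC/SC access to members of a hypothetical smartcard-users group:

// /etc/polkit-1/rules.d/03-allow-pcscd.rules
polkit.addRule(function(action, subject) {
    // Allow the pcscd access action for users in the smartcard-users group
    if (action.id == "org.debian.pcsc-lite.access_pcsc" &&
        subject.isInGroup("smartcard-users")) {
        return polkit.Result.YES;
    }
});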
You can also search for log entries related to the polkitd unit, for example:
$ journalctl -u polkit
...
polkitd[NNN]: Error compiling script /etc/polkit-1/rules.d/00-debug-pcscd.rules
...
polkitd[NNN]: Operator of unix-session:c2 FAILED to authenticate to gain authorization for action org.debian.pcsc-lite.access_pcsc for unix-process:4800:14441 [/usr/libexec/gsd-smartcard] (owned by unix-user:group)
...
In the previous output, the first entry means that the rule file contains a syntax error. The second entry means that the user failed to gain access to pcscd.
You can also list all applications that use the PC/SC protocol by using a short script. Create an executable file, for example, pcsc-apps.sh, and insert the following code:
#!/bin/bash
cd /proc
for p in [0-9]*
do
    if grep libpcsclite.so.1.0.0 $p/maps &> /dev/null
    then
        echo -n "process: "
        cat $p/cmdline
        echo " ($p)"
    fi
done
Run the script as root:
# ./pcsc-apps.sh
process: /usr/libexec/gsd-smartcard (3048)
enable-sync --auto-ssl-client-auth --enable-crashpad (4828)
...
Additional resources
- journalctl, polkit(8), polkitd(8), and pcscd(8) man pages.
6.3. Displaying more detailed information about polkit authorization to PC/SC
In the default configuration, the polkit
authorization framework sends only limited information to the Journal log. You can extend polkit
log entries related to the PC/SC protocol by adding new rules.
Prerequisites
- You have installed the pcsc-lite package on your system.
- The pcscd daemon is running.
Procedure
Create a new file in the /etc/polkit-1/rules.d/ directory:

# touch /etc/polkit-1/rules.d/00-test.rules
Edit the file in an editor of your choice, for example:
# vi /etc/polkit-1/rules.d/00-test.rules
Insert the following lines:
polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_pcsc" ||
        action.id == "org.debian.pcsc-lite.access_card") {
        polkit.log("action=" + action);
        polkit.log("subject=" + subject);
    }
});
Save the file, and exit the editor.
Restart the pcscd and polkit services:

# systemctl restart pcscd.service pcscd.socket polkit.service
Verification
- Make an authorization request for pcscd. For example, open the Firefox web browser or use the pkcs11-tool -L command provided by the opensc package.
- Display the extended log entries, for example:
# journalctl -u polkit --since "1 hour ago"
polkitd[1224]: <no filename>:4: action=[Action id='org.debian.pcsc-lite.access_pcsc']
polkitd[1224]: <no filename>:5: subject=[Subject pid=2020481 user=user' groups=user,wheel,mock,wireshark seat=null session=null local=true active=true]
Additional resources
- polkit(8) and polkitd(8) man pages.
6.4. Additional resources
- Controlling access to smart cards Red Hat Blog article.
Chapter 7. Scanning the system for configuration compliance and vulnerabilities
A compliance audit is a process of determining whether a given object follows all the rules specified in a compliance policy. The compliance policy is defined by security professionals who specify the required settings, often in the form of a checklist, that a computing environment should use.
Compliance policies can vary substantially across organizations and even across different systems within the same organization. Differences among these policies are based on the purpose of each system and its importance for the organization. Custom software settings and deployment characteristics also raise a need for custom policy checklists.
7.1. Configuration compliance tools in RHEL
Red Hat Enterprise Linux provides tools that enable you to perform a fully automated compliance audit. These tools are based on the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies.
- SCAP Workbench - The scap-workbench graphical utility is designed to perform configuration and vulnerability scans on a single local or remote system. You can also use it to generate security reports based on these scans and evaluations.
- OpenSCAP - The OpenSCAP library, with the accompanying oscap command-line utility, is designed to perform configuration and vulnerability scans on a local system, to validate configuration compliance content, and to generate reports and guides based on these scans and evaluations.
You can experience memory-consumption problems while using OpenSCAP, which can cause the program to stop prematurely and prevent it from generating any result files. See the OpenSCAP memory-consumption problems Knowledgebase article for details.
- SCAP Security Guide (SSG) - The scap-security-guide package provides the latest collection of security policies for Linux systems. The guidance consists of a catalog of practical hardening advice, linked to government requirements where applicable. The project bridges the gap between generalized policy requirements and specific implementation guidelines.
- Script Check Engine (SCE) - SCE is an extension to the SCAP protocol that enables administrators to write their security content by using a scripting language, such as Bash, Python, or Ruby. The SCE extension is provided in the openscap-engine-sce package. SCE itself is not part of the SCAP standard.
To perform automated compliance audits on multiple systems remotely, you can use the OpenSCAP solution for Red Hat Satellite.
Additional resources
- oscap(8), scap-workbench(8), and scap-security-guide(8) man pages
- Red Hat Security Demos: Creating Customized Security Policy Content to Automate Security Compliance
- Red Hat Security Demos: Defend Yourself with RHEL Security Technologies
- Security Compliance Management in the Administering Red Hat Satellite Guide.
7.2. Vulnerability scanning
7.2.1. Red Hat Security Advisories OVAL feed
Red Hat Enterprise Linux security auditing capabilities are based on the Security Content Automation Protocol (SCAP) standard. SCAP is a multi-purpose framework of specifications that supports automated configuration, vulnerability and patch checking, technical control compliance activities, and security measurement.
SCAP specifications create an ecosystem where the format of security content is well-known and standardized although the implementation of the scanner or policy editor is not mandated. This enables organizations to build their security policy (SCAP content) once, no matter how many security vendors they employ.
The Open Vulnerability Assessment Language (OVAL) is the essential and oldest component of SCAP. Unlike other tools and custom scripts, OVAL describes a required state of resources in a declarative manner. OVAL code is never executed directly; instead, it is interpreted by an OVAL interpreter tool called a scanner. The declarative nature of OVAL ensures that the state of the assessed system is not accidentally modified.
Like all other SCAP components, OVAL is based on XML. The SCAP standard defines several document formats. Each of them includes a different kind of information and serves a different purpose.
Red Hat Product Security helps customers evaluate and manage risk by tracking and investigating all security issues affecting Red Hat customers. It provides timely and concise patches and security advisories on the Red Hat Customer Portal. Red Hat creates and supports OVAL patch definitions, providing machine-readable versions of our security advisories.
Because of differences between platforms, versions, and other factors, Red Hat Product Security qualitative severity ratings of vulnerabilities do not directly align with the Common Vulnerability Scoring System (CVSS) baseline ratings provided by third parties. Therefore, we recommend that you use the RHSA OVAL definitions instead of those provided by third parties.
The RHSA OVAL definitions are available individually and as a complete package, and are updated within an hour of a new security advisory being made available on the Red Hat Customer Portal.
Each OVAL patch definition maps one-to-one to a Red Hat Security Advisory (RHSA). Because an RHSA can contain fixes for multiple vulnerabilities, each vulnerability is listed separately by its Common Vulnerabilities and Exposures (CVE) name and has a link to its entry in our public bug database.
The RHSA OVAL definitions are designed to check for vulnerable versions of RPM packages installed on a system. It is possible to extend these definitions to include further checks, for example, to find out if the packages are being used in a vulnerable configuration. These definitions are designed to cover software and updates shipped by Red Hat. Additional definitions are required to detect the patch status of third-party software.
The Red Hat Insights for Red Hat Enterprise Linux compliance service helps IT security and compliance administrators to assess, monitor, and report on the security policy compliance of Red Hat Enterprise Linux systems. You can also create and manage your SCAP security policies entirely within the compliance service UI.
7.2.2. Scanning the system for vulnerabilities
The oscap
command-line utility enables you to scan local systems, validate configuration compliance content, and generate reports and guides based on these scans and evaluations. This utility serves as a front end to the OpenSCAP library and groups its functionalities to modules (sub-commands) based on the type of SCAP content it processes.
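For orientation, the general invocation pattern is oscap [options] module operation [operation options and arguments]. A few examples, using the same content file names that appear in the procedures below:
$ oscap info ssg-rhel9-ds.xml                                  # inspect SCAP content
$ oscap oval eval --report vulnerability.html rhel-9.oval.xml  # OVAL module: vulnerability scan
$ oscap xccdf eval --profile hipaa --report report.html ssg-rhel9-ds.xml  # XCCDF module: compliance scan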
Prerequisites
- The openscap-scanner and bzip2 packages are installed.
Procedure
Download the latest RHSA OVAL definitions for your system:
# wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml
Scan the system for vulnerabilities and save results to the vulnerability.html file:
# oscap oval eval --report vulnerability.html rhel-9.oval.xml
Verification
Check the results in a browser of your choice, for example:
$ firefox vulnerability.html &
Additional resources
- oscap(8) man page
- Red Hat OVAL definitions
- OpenSCAP memory consumption problems
7.2.3. Scanning remote systems for vulnerabilities
You can also check remote systems for vulnerabilities with the OpenSCAP scanner by using the oscap-ssh tool over the SSH protocol.
Prerequisites
- The openscap-utils and bzip2 packages are installed on the system you use for scanning.
- The openscap-scanner package is installed on the remote systems.
- The SSH server is running on the remote systems.
Procedure
Download the latest RHSA OVAL definitions for your system:
# wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml
Scan a remote system for vulnerabilities with the machine1 host name, SSH running on port 22, and the joesec user name, and save results to the remote-vulnerability.html file:
# oscap-ssh joesec@machine1 22 oval eval --report remote-vulnerability.html rhel-9.oval.xml
Additional resources
7.3. Configuration compliance scanning
7.3.1. Configuration compliance in RHEL
You can use configuration compliance scanning to conform to a baseline defined by a specific organization. For example, if you work with the US government, you might have to align your systems with the Operating System Protection Profile (OSPP), and if you are a payment processor, you might have to align your systems with the Payment Card Industry Data Security Standard (PCI-DSS). You can also perform configuration compliance scanning to harden your system security.
Red Hat recommends you follow the Security Content Automation Protocol (SCAP) content provided in the SCAP Security Guide package because it is in line with Red Hat best practices for affected components.
The SCAP Security Guide package provides content that conforms to the SCAP 1.2 and SCAP 1.3 standards. The OpenSCAP scanner utility is compatible with both SCAP 1.2 and SCAP 1.3 content provided in the SCAP Security Guide package.
Performing a configuration compliance scan does not guarantee that the system is compliant.
The SCAP Security Guide suite provides profiles for several platforms in a form of data stream documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules. Each rule specifies the applicability and requirements for compliance. RHEL provides several profiles for compliance with security policies. In addition to the industry standard, Red Hat data streams also contain information for remediation of failed rules.
Structure of compliance scanning resources
Data stream
├── xccdf
│   ├── benchmark
│   ├── profile
│   │   ├── rule reference
│   │   └── variable
│   └── rule
│       ├── human readable data
│       ├── oval reference
│       ├── ocil reference
│       ├── cpe reference
│       └── remediation
├── oval
├── ocil
└── cpe
A profile is a set of rules based on a security policy, such as OSPP, PCI-DSS, and Health Insurance Portability and Accountability Act (HIPAA). This enables you to audit the system in an automated way for compliance with security standards.
You can modify (tailor) a profile to customize certain rules, for example, password length. For more information about profile tailoring, see Customizing a security profile with SCAP Workbench.
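For example, after you export a tailoring file from SCAP Workbench, you can use it with the oscap command-line utility; the tailoring file name and the tailored profile ID below are placeholders that depend on your customization:
$ oscap xccdf eval --tailoring-file ssg-rhel9-ds-tailoring.xml --profile <tailored_profile_ID> --report report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml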
7.3.2. Possible results of an OpenSCAP scan
Depending on various properties of your system and the data stream and profile applied to an OpenSCAP scan, each rule may produce a specific result. This is a list of possible results with brief explanations of what they mean.
Table 7.1. Possible results of an OpenSCAP scan
Result | Explanation
---|---
Pass | The scan did not find any conflicts with this rule.
Fail | The scan found a conflict with this rule.
Not checked | OpenSCAP does not perform an automatic evaluation of this rule. Check whether your system conforms to this rule manually.
Not applicable | This rule does not apply to the current configuration.
Not selected | This rule is not part of the profile. OpenSCAP does not evaluate this rule and does not display these rules in the results.
Error | The scan encountered an error. For additional information, you can enter the oscap command with the --verbose DEVEL option.
Unknown | The scan encountered an unexpected situation. For additional information, you can enter the oscap command with the --verbose DEVEL option.
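If you also save machine-readable results with the --results option, you can summarize how many rules ended in each state. This is a minimal sketch, assuming a results.xml file from a previous scan:
$ oscap xccdf eval --results results.xml --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
$ grep -o '<result>[a-z]*</result>' results.xml | sort | uniq -c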
7.3.3. Viewing profiles for configuration compliance
Before you decide to use profiles for scanning or remediation, you can list them and check their detailed descriptions using the oscap info
subcommand.
Prerequisites
- The openscap-scanner and scap-security-guide packages are installed.
Procedure
List all available files with security compliance profiles provided by the SCAP Security Guide project:
$ ls /usr/share/xml/scap/ssg/content/
ssg-rhel9-ds.xml
Display detailed information about a selected data stream using the oscap info subcommand. XML files containing data streams are indicated by the -ds string in their names. In the Profiles section, you can find a list of available profiles and their IDs:
$ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Profiles:
...
  Title: Australian Cyber Security Centre (ACSC) Essential Eight
    Id: xccdf_org.ssgproject.content_profile_e8
  Title: Health Insurance Portability and Accountability Act (HIPAA)
    Id: xccdf_org.ssgproject.content_profile_hipaa
  Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9
    Id: xccdf_org.ssgproject.content_profile_pci-dss
...
Select a profile from the data stream file and display additional details about the selected profile. To do so, use oscap info with the --profile option followed by the last section of the ID displayed in the output of the previous command. For example, the ID of the HIPAA profile is xccdf_org.ssgproject.content_profile_hipaa, and the value for the --profile option is hipaa:
$ oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
...
Profile
  Title: [RHEL9 DRAFT] Health Insurance Portability and Accountability Act (HIPAA)
  Id: xccdf_org.ssgproject.content_profile_hipaa
  Description: The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The Security Rule requires appropriate administrative, physical and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information. This profile configures Red Hat Enterprise Linux 9 to the HIPAA Security Rule identified for securing of electronic protected health information. Use of this profile in no way guarantees or makes claims against legal compliance against the HIPAA Security Rule(s).
Additional resources
- scap-security-guide(8) man page
- OpenSCAP memory consumption problems
7.3.4. Assessing configuration compliance with a specific baseline
To determine whether your system conforms to a specific baseline, follow these steps.
Prerequisites
- The openscap-scanner and scap-security-guide packages are installed.
- You know the ID of the profile within the baseline with which the system should comply. To find the ID, see Viewing Profiles for Configuration Compliance.
Procedure
Evaluate the compliance of the system with the selected profile and save the scan results in the report.html HTML file, for example:
$ oscap xccdf eval --report report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Optional: Scan a remote system with the machine1 host name, SSH running on port 22, and the joesec user name for compliance, and save results to the remote-report.html file:
$ oscap-ssh joesec@machine1 22 xccdf eval --report remote-report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Additional resources
- scap-security-guide(8) man page
- SCAP Security Guide documentation in the /usr/share/doc/scap-security-guide/ directory
- /usr/share/doc/scap-security-guide/guides/ssg-rhel9-guide-index.html - Guide to the Secure Configuration of Red Hat Enterprise Linux 9, installed with the scap-security-guide-doc package
- OpenSCAP memory consumption problems
7.4. Remediating the system to align with a specific baseline
You can remediate the RHEL system to align with a specific baseline. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile, but you can remediate to align with any other profile provided by the SCAP Security Guide. For the details on listing the available profiles, see the Viewing profiles for configuration compliance section.
If not used carefully, running the system evaluation with the Remediate
option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile.
Prerequisites
- The scap-security-guide package is installed on your RHEL system.
Procedure
Use the oscap command with the --remediate option:
# oscap xccdf eval --profile hipaa --remediate /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
- Restart your system.
Verification
Evaluate compliance of the system with the HIPAA profile, and save scan results in the hipaa_report.html file:
$ oscap xccdf eval --report hipaa_report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Additional resources
- scap-security-guide(8) and oscap(8) man pages
7.5. Remediating the system to align with a specific baseline using an SSG Ansible playbook
You can remediate your system to align with a specific baseline by using an Ansible playbook file from the SCAP Security Guide project. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile, but you can remediate to align with any other profile provided by the SCAP Security Guide. For the details on listing the available profiles, see the Viewing profiles for configuration compliance section.
If not used carefully, running the system evaluation with the Remediate
option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile.
Prerequisites
- The scap-security-guide package is installed.
- The ansible-core package is installed. See the Ansible Installation Guide for more information.
In RHEL 8.6 and later, Ansible Engine is replaced by the ansible-core package, which contains only built-in modules. Note that many Ansible remediations use modules from the community and Portable Operating System Interface (POSIX) collections, which are not included in the built-in modules. In this case, you can use Bash remediations as a substitute for Ansible remediations. The Red Hat Connector in RHEL 9 includes the necessary Ansible modules to enable the remediation playbooks to function with Ansible Core.
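If a playbook requires modules from those collections, one option is to install them separately; a minimal sketch, assuming the system can reach Ansible Galaxy or another configured content source:
# ansible-galaxy collection install community.general ansible.posix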
Procedure
Remediate your system to align with HIPAA using Ansible:
# ansible-playbook -i localhost, -c local /usr/share/scap-security-guide/ansible/rhel9-playbook-hipaa.yml
- Restart the system.
Verification
Evaluate compliance of the system with the HIPAA profile, and save scan results in the hipaa_report.html file:
# oscap xccdf eval --profile hipaa --report hipaa_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Additional resources
- scap-security-guide(8) and oscap(8) man pages
- Ansible Documentation
7.6. Creating a remediation Ansible playbook to align the system with a specific baseline
You can create an Ansible playbook containing only the remediations that are required to align your system with a specific baseline. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile. With this procedure, you create a smaller playbook that does not cover already satisfied requirements. By following these steps, you do not modify your system in any way; you only prepare a file for later application.
In RHEL 9, Ansible Engine is replaced by the ansible-core
package, which contains only built-in modules. Note that many Ansible remediations use modules from the community and Portable Operating System Interface (POSIX) collections, which are not included in the built-in modules. In this case, you can use Bash remediations as a substitute for Ansible remediations. The Red Hat Connector in RHEL 9.0 includes the necessary Ansible modules to enable the remediation playbooks to function with Ansible Core.
Prerequisites
- The scap-security-guide package is installed.
Procedure
Scan the system and save the results:
# oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Find the value of the result ID in the file with the results:
# oscap info <hipaa-results.xml>
Generate an Ansible playbook based on the file generated in step 1:
# oscap xccdf generate fix --fix-type ansible --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.yml> <hipaa-results.xml>
-
Review the generated file, which contains the Ansible remediations for rules that failed during the scan performed in step 1. After reviewing this generated file, you can apply it by using the
ansible-playbook <hipaa-remediations.yml>
command.
Verification
-
In a text editor of your choice, review that the generated
<hipaa-remediations.yml>
file contains rules that failed in the scan performed in step 1.
Additional resources
- scap-security-guide(8) and oscap(8) man pages
- Ansible Documentation
7.7. Creating a remediation Bash script for a later application
Use this procedure to create a Bash script containing remediations that align your system with a security profile such as HIPAA. With the following steps, you do not modify your system in any way; you only prepare a file for later application.
Prerequisites
- The scap-security-guide package is installed on your RHEL system.
Procedure
Use the oscap command to scan the system and to save the results to an XML file. In the following example, oscap evaluates the system against the hipaa profile:
# oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Find the value of the result ID in the file with the results:
# oscap info <hipaa-results.xml>
Generate a Bash script based on the results file generated in step 1:
# oscap xccdf generate fix --fix-type bash --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.sh> <hipaa-results.xml>
-
The
<hipaa-remediations.sh>
file contains remediations for rules that failed during the scan performed in step 1. After reviewing this generated file, you can apply it with the./<hipaa-remediations.sh>
command when you are in the same directory as this file.
Verification
-
In a text editor of your choice, review that the
<hipaa-remediations.sh>
file contains rules that failed in the scan performed in step 1.
Additional resources
- scap-security-guide(8), oscap(8), and bash(1) man pages
7.8. Scanning the system with a customized profile using SCAP Workbench
SCAP Workbench
, which is contained in the scap-workbench
package, is a graphical utility that enables users to perform configuration and vulnerability scans on a single local or a remote system, perform remediation of the system, and generate reports based on scan evaluations. Note that SCAP Workbench
has limited functionality compared with the oscap
command-line utility. SCAP Workbench
processes security content in the form of data stream files.
7.8.1. Using SCAP Workbench to scan and remediate the system
To evaluate your system against the selected security policy, use the following procedure.
Prerequisites
- The scap-workbench package is installed on your system.
Procedure
To run SCAP Workbench from the GNOME Classic desktop environment, press the Super key to enter the Activities Overview, type scap-workbench, and then press Enter. Alternatively, use:
$ scap-workbench &
Select a security policy using either of the following options:
- Load Content button on the starting window
- Open content from SCAP Security Guide
- Open Other Content in the File menu, and search for the respective XCCDF, SCAP RPM, or data stream file.
- You can allow automatic correction of the system configuration by selecting the Remediate check box. With this option enabled, SCAP Workbench attempts to change the system configuration in accordance with the security rules applied by the policy. This process should fix the related checks that fail during the system scan.
Warning: If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile.
- Scan your system with the selected profile by clicking the Scan button.
- To store the scan results in the form of an XCCDF, ARF, or HTML file, click the Save Results combo box. Choose the HTML Report option to generate the scan report in human-readable format. The XCCDF and ARF (data stream) formats are suitable for further automatic processing. You can repeatedly choose all three options.
- To export results-based remediations to a file, use the Generate remediation role pop-up menu.
7.8.2. Customizing a security profile with SCAP Workbench
You can customize a security profile by changing parameters in certain rules (for example, minimum password length), removing rules that you cover in a different way, and selecting additional rules, to implement internal policies. You cannot define new rules by customizing a profile.
The following procedure demonstrates the use of SCAP Workbench
for customizing (tailoring) a profile. You can also save the tailored profile for use with the oscap
command-line utility.
Prerequisites
- The scap-workbench package is installed on your system.
Procedure
- Run SCAP Workbench, and select the profile to customize by using either Open content from SCAP Security Guide or Open Other Content in the File menu.
- To adjust the selected security profile according to your needs, click the Customize button.
This opens the new Customization window that enables you to modify the currently selected profile without changing the original data stream file.
- Choose a new profile ID.
- Find a rule to modify using either the tree structure with rules organized into logical groups or the Search field.
Include or exclude rules using check boxes in the tree structure, or modify values in rules where applicable.
- Confirm the changes by clicking the OK button.
To store your changes permanently, use one of the following options:
- Save a customization file separately by using Save Customization Only in the File menu.
- Save all security content at once by Save All in the File menu.
If you select the Into a directory option, SCAP Workbench saves both the data stream file and the customization file to the specified location. You can use this as a backup solution.
By selecting the As RPM option, you can instruct SCAP Workbench to create an RPM package containing the data stream file and the customization file. This is useful for distributing the security content to systems that cannot be scanned remotely, and for delivering the content for further processing.
Because SCAP Workbench
does not support results-based remediations for tailored profiles, use the exported remediations with the oscap
command-line utility.
7.8.3. Additional resources
- scap-workbench(8) man page
- /usr/share/doc/scap-workbench/user_manual.html file provided by the scap-workbench package
- Deploy customized SCAP policies with Satellite 6.x KCS article
7.9. Deploying systems that are compliant with a security profile immediately after an installation
You can use the OpenSCAP suite to deploy RHEL systems that are compliant with a security profile, such as OSPP, PCI-DSS, or HIPAA, immediately after the installation process. Using this deployment method, you can apply specific rules that cannot be applied later by using remediation scripts, for example, rules for password strength and partitioning.
7.9.1. Profiles not compatible with Server with GUI
Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles:
Table 7.2. Profiles not compatible with Server with GUI
Profile name | Profile ID | Justification | Notes
---|---|---|---
[DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server | | Packages |
[DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server | | Packages |
[DRAFT] DISA STIG for Red Hat Enterprise Linux 9 | | Packages | To install a RHEL system as a Server with GUI aligned with DISA STIG, you can use the DISA STIG with GUI profile BZ#1648162
7.9.2. Deploying baseline-compliant RHEL systems using the graphical installation
Use this procedure to deploy a RHEL system that is aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP).
Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. For additional details, see Profiles not compatible with a GUI server.
Prerequisites
- You have booted into the graphical installation program. Note that the OSCAP Anaconda Add-on does not support interactive text-only installation.
- You have accessed the Installation Summary window.
Procedure
- From the Installation Summary window, click Software Selection. The Software Selection window opens.
- From the Base Environment pane, select the Server environment. You can select only one base environment.
- Click Done to apply the setting and return to the Installation Summary window.
- Because OSPP has strict partitioning requirements that must be met, create separate partitions for /boot, /home, /var, /tmp, /var/log, /var/tmp, and /var/log/audit.
- Click Security Policy. The Security Policy window opens.
- To enable security policies on the system, toggle the Apply security policy switch to ON.
- Select Protection Profile for General Purpose Operating Systems from the profile pane.
- Click Select Profile to confirm the selection.
- Confirm the changes in the Changes that were done or need to be done pane that is displayed at the bottom of the window. Complete any remaining manual changes.
- Complete the graphical installation process.
Note: The graphical installation program automatically creates a corresponding Kickstart file after a successful installation. You can use the /root/anaconda-ks.cfg file to automatically install OSPP-compliant systems.
Verification
To check the current status of the system after installation is complete, reboot the system and start a new scan:
# oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Additional resources
7.9.3. Deploying baseline-compliant RHEL systems using Kickstart
Use this procedure to deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP).
Prerequisites
- The scap-security-guide package is installed on your RHEL 9 system.
Procedure
- Open the /usr/share/scap-security-guide/kickstart/ssg-rhel9-ospp-ks.cfg Kickstart file in an editor of your choice.
- Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot, /home, /var, /tmp, /var/log, /var/tmp, and /var/log/audit must be preserved, and you can only change the size of the partitions; see the sketch after this list.
- Start a Kickstart installation as described in Performing an automated installation using Kickstart.
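A minimal sketch of what such a partitioning section might look like in the Kickstart file; the sizes are illustrative placeholders, not OSPP requirements:
# Illustrative partition layout preserving the OSPP-required separate mount points.
# Sizes (in MiB) are placeholders - adjust them to your storage and workload.
part /boot            --fstype=xfs --size=512
part pv.01            --grow --size=1
volgroup system pv.01
logvol /              --vgname=system --name=root        --fstype=xfs --size=10240
logvol /home          --vgname=system --name=home        --fstype=xfs --size=2048
logvol /var           --vgname=system --name=var         --fstype=xfs --size=5120
logvol /tmp           --vgname=system --name=tmp         --fstype=xfs --size=1024
logvol /var/log       --vgname=system --name=varlog      --fstype=xfs --size=2048
logvol /var/tmp       --vgname=system --name=vartmp      --fstype=xfs --size=1024
logvol /var/log/audit --vgname=system --name=varlogaudit --fstype=xfs --size=1024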
Passwords in Kickstart files are not checked for OSPP requirements.
Verification
To check the current status of the system after installation is complete, reboot the system and start a new scan:
# oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Additional resources
7.10. Scanning container and container images for vulnerabilities
Use this procedure to find security vulnerabilities in a container or a container image.
Prerequisites
- The openscap-utils and bzip2 packages are installed.
Procedure
Download the latest RHSA OVAL definitions for your system:
# wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml
Get the ID of a container or a container image, for example:
# podman images
REPOSITORY                           TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi9/ubi  latest   096cae65a207   7 weeks ago   239 MB
Scan the container or the container image for vulnerabilities and save results to the vulnerability.html file:
# oscap-podman 096cae65a207 oval eval --report vulnerability.html rhel-9.oval.xml
Note that the
oscap-podman
command requires root privileges, and the ID of a container is the first argument.
Verification
Check the results in a browser of your choice, for example:
$ firefox vulnerability.html &
Additional resources
- For more information, see the oscap-podman(8) and oscap(8) man pages.
7.11. Assessing security compliance of a container or a container image with a specific baseline
Follow these steps to assess compliance of your container or a container image with a specific security baseline, such as Operating System Protection Profile (OSPP), Payment Card Industry Data Security Standard (PCI-DSS), and Health Insurance Portability and Accountability Act (HIPAA).
Prerequisites
- The openscap-utils and scap-security-guide packages are installed.
Procedure
Get the ID of a container or a container image, for example:
# podman images
REPOSITORY                           TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi9/ubi  latest   096cae65a207   7 weeks ago   239 MB
Evaluate the compliance of the container image with the HIPAA profile and save scan results into the report.html HTML file:
# oscap-podman 096cae65a207 xccdf eval --report report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
Replace 096cae65a207 with the ID of your container image and the hipaa value with ospp or pci-dss if you assess security compliance with the OSPP or PCI-DSS baseline. Note that the
oscap-podman
command requires root privileges.
Verification
Check the results in a browser of your choice, for example:
$ firefox report.html &
The rules marked as notapplicable are rules that do not apply to containerized systems. These rules apply only to bare-metal and virtualized systems.
Additional resources
- oscap-podman(8) and scap-security-guide(8) man pages.
- /usr/share/doc/scap-security-guide/ directory.
7.12. SCAP Security Guide profiles supported in RHEL 9
Use only the SCAP content provided in the particular minor release of RHEL. This is because components that participate in hardening are sometimes updated with new capabilities. SCAP content changes to reflect these updates, but it is not always backward compatible.
In the following tables, you can find the profiles provided in RHEL 9, together with the version of the policy with which the profile aligns.
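To verify which version of the security content is installed on a particular system, you can query the scap-security-guide package; the version in the output below is only an illustrative placeholder:
$ rpm -q scap-security-guide
scap-security-guide-0.1.66-1.el9.noarch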
Table 7.3. SCAP Security Guide profiles supported in RHEL 9.2
Profile name | Profile ID | Policy version
---|---|---
French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level | | RHEL 9.2.0 to RHEL 9.2.2: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level | | RHEL 9.2.0 to RHEL 9.2.2: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level | | RHEL 9.2.0 to RHEL 9.2.2: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level | | RHEL 9.2.0 to RHEL 9.2.2: 1.2
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server | | 1.0.0
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server | | 1.0.0
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Workstation | | 1.0.0
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Workstation | | 1.0.0
[DRAFT] Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) | | r2
Australian Cyber Security Centre (ACSC) Essential Eight | | not versioned
Health Insurance Portability and Accountability Act (HIPAA) | | not versioned
Australian Cyber Security Centre (ACSC) ISM Official | | not versioned
Protection Profile for General Purpose Operating Systems | | 4.2.1
PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9 | | 3.2.1
[DRAFT] DISA STIG for Red Hat Enterprise Linux 9 | | DRAFT[a]
[DRAFT] DISA STIG with GUI for Red Hat Enterprise Linux 9 | | DRAFT[a]
CCN Red Hat Enterprise Linux 9 - Basic | | RHEL 9.2.0 and later: 2022-10
CCN Red Hat Enterprise Linux 9 - Intermediate | | RHEL 9.2.0 and later: 2022-10
CCN Red Hat Enterprise Linux 9 - Advanced | | RHEL 9.2.0 and later: 2022-10
[a] DISA has not yet published an official benchmark for RHEL 9
Table 7.4. SCAP Security Guide profiles supported in RHEL 9.1
Profile name | Profile ID | Policy version
---|---|---
French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level | | 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level | | 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level | | 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level | | 1.2
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server | | RHEL 9.1.0 and RHEL 9.1.1: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server | | RHEL 9.1.0 and RHEL 9.1.1: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Workstation | | RHEL 9.1.0 and RHEL 9.1.1: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Workstation | | RHEL 9.1.0 and RHEL 9.1.1: DRAFT[a]
[DRAFT] Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) | | r2
Australian Cyber Security Centre (ACSC) Essential Eight | | not versioned
Health Insurance Portability and Accountability Act (HIPAA) | | not versioned
Australian Cyber Security Centre (ACSC) ISM Official | | not versioned
Protection Profile for General Purpose Operating Systems | | 4.2.1
PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9 | | 3.2.1
[DRAFT] DISA STIG for Red Hat Enterprise Linux 9 | | DRAFT[a]
[DRAFT] DISA STIG with GUI for Red Hat Enterprise Linux 9 | | DRAFT[a]
[a] CIS has not yet published an official benchmark for RHEL 9
Table 7.5. SCAP Security Guide profiles supported in RHEL 9.0
Profile name | Profile ID | Policy version
---|---|---
French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level | | RHEL 9.0.0 to RHEL 9.0.10: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level | | RHEL 9.0.0 to RHEL 9.0.10: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level | | RHEL 9.0.0 to RHEL 9.0.10: 1.2
French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level | | RHEL 9.0.0 to RHEL 9.0.10: 1.2
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server | | RHEL 9.0.0 to RHEL 9.0.6: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server | | RHEL 9.0.0 to RHEL 9.0.6: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Workstation | | RHEL 9.0.0 to RHEL 9.0.6: DRAFT[a]
CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Workstation | | RHEL 9.0.0 to RHEL 9.0.6: DRAFT[a]
[DRAFT] Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) | | r2
Australian Cyber Security Centre (ACSC) Essential Eight | | not versioned
Health Insurance Portability and Accountability Act (HIPAA) | | not versioned
Australian Cyber Security Centre (ACSC) ISM Official | | not versioned
Protection Profile for General Purpose Operating Systems | | RHEL 9.0.0 to RHEL 9.0.2: DRAFT
PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9 | | 3.2.1
[DRAFT] DISA STIG for Red Hat Enterprise Linux 9 | | DRAFT[a]
[DRAFT] DISA STIG with GUI for Red Hat Enterprise Linux 9 | | DRAFT[a]
CCN Red Hat Enterprise Linux 9 - Basic | | RHEL 9.0.11 and later: 2022-10
CCN Red Hat Enterprise Linux 9 - Intermediate | | RHEL 9.0.11 and later: 2022-10
CCN Red Hat Enterprise Linux 9 - Advanced | | RHEL 9.0.11 and later: 2022-10
7.13. Additional resources
- Supported versions of the SCAP Security Guide in RHEL
- The OpenSCAP project page provides detailed information about the oscap utility and other components and projects related to SCAP.
- The SCAP Workbench project page provides detailed information about the scap-workbench application.
- The SCAP Security Guide (SSG) project page provides the latest security content for Red Hat Enterprise Linux.
- Using OpenSCAP for security compliance and vulnerability scanning - A hands-on lab on running tools based on the Security Content Automation Protocol (SCAP) standard for compliance and vulnerability scanning in RHEL.
- Red Hat Security Demos: Creating Customized Security Policy Content to Automate Security Compliance - A hands-on lab to get initial experience in automating security compliance using the tools that are included in RHEL to comply with both industry standard security policies and custom security policies. If you want training or access to these lab exercises for your team, contact your Red Hat account team for additional details.
- Red Hat Security Demos: Defend Yourself with RHEL Security Technologies - A hands-on lab to learn how to implement security at all levels of your RHEL system, using the key security technologies available to you in RHEL, including OpenSCAP. If you want training or access to these lab exercises for your team, contact your Red Hat account team for additional details.
- National Institute of Standards and Technology (NIST) SCAP page has a vast collection of SCAP-related materials, including SCAP publications, specifications, and the SCAP Validation Program.
- National Vulnerability Database (NVD) has the largest repository of SCAP content and other SCAP standards-based vulnerability management data.
- Red Hat OVAL content repository contains OVAL definitions for vulnerabilities of RHEL systems. This is the recommended source of vulnerability content.
- MITRE CVE - This is a database of publicly known security vulnerabilities provided by the MITRE corporation. For RHEL, using OVAL CVE content provided by Red Hat is recommended.
- MITRE OVAL - This is an OVAL-related project provided by the MITRE corporation. Among other OVAL-related information, these pages contain the OVAL language and a repository of OVAL content with thousands of OVAL definitions. Note that for scanning RHEL, using OVAL CVE content provided by Red Hat is recommended.
- Managing security compliance in Red Hat Satellite - This set of guides describes, among other topics, how to maintain system security on multiple systems by using OpenSCAP.
Chapter 8. Ensuring system integrity with Keylime
With Keylime, you can continuously monitor the integrity of remote systems and verify the state of systems at boot. You can also send encrypted files to the monitored systems, and specify automated actions triggered whenever a monitored system fails the integrity test.
8.1. How Keylime works
You can deploy Keylime agents to perform one or more of the following actions:
- Runtime integrity monitoring
- Keylime runtime integrity monitoring continuously monitors the system on which the agent is deployed and measures the integrity of the files included in the allowlist and not included in the excludelist.
- Measured boot
- Keylime measured boot verifies the system state at boot.
Keylime’s concept of trust is based on the Trusted Platform Module (TPM) technology. A TPM is a hardware, firmware, or virtual component with integrated cryptographic keys. By polling TPM quotes and comparing the hashes of objects, Keylime provides initial and runtime monitoring of remote systems.
Keylime running in a virtual machine or using a virtual TPM depends upon the integrity of the underlying host. Ensure you trust the host environment before relying upon Keylime measurements in a virtual environment.
Keylime consists of three main components:
- The verifier initially and continuously verifies the integrity of the systems that run the agent.
- The registrar contains a database of all agents and it hosts the public keys of the TPM vendors.
- The agent is the component deployed to remote systems measured by the verifier.
In addition, Keylime uses the keylime_tenant
utility for many functions, including provisioning the agents on the target systems.
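For orientation, a minimal sketch of provisioning and checking an agent with keylime_tenant; the IP address and UUID are placeholders, and the exact options depend on your Keylime version and deployment (see keylime_tenant -h):
# Register and start monitoring an agent (placeholder address and UUID):
# keylime_tenant -c add -t 192.0.2.15 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000
# Check the agent state later:
# keylime_tenant -c status -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000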
Figure 8.1. Connections between Keylime components through configurations

Keylime ensures the integrity of the monitored systems in a chain of trust by using keys and certificates exchanged between the components and the tenant. For a secure foundation of this chain, use a certificate authority (CA) that you can trust.
If the agent receives no key and certificate, it generates a key and a self-signed certificate with no involvement from the CA.
Figure 8.2. Connections between Keylime components certificates and keys

8.2. Configuring Keylime verifier
The verifier is the most important component in Keylime. It performs initial and periodic checks of system integrity and supports bootstrapping a cryptographic key securely with the agent. The verifier uses mutual Transport Layer Security (TLS) for its control interface.
To maintain the chain of trust, keep the system that runs the verifier secure and under your control.
You can install the verifier on a separate system or on the same system as the Keylime registrar, depending on your requirements. Running the verifier and registrar on separate systems provides better performance.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/verifier.conf.d/00-verifier-ip.conf. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
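For example, with the following two hypothetical drop-in files, the effective value of the ip option comes from the file that sorts last:
# /etc/keylime/verifier.conf.d/00-verifier-ip.conf
[verifier]
ip = 192.0.2.10

# /etc/keylime/verifier.conf.d/10-verifier-ip.conf (read later, so this value wins)
[verifier]
ip = 192.0.2.20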
Prerequisites
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have valid keys and certificates from your certificate authority.
- Optional: You have access to two databases, where Keylime saves data from the registrar and from the verifier. You can use any of the following database management systems:
- SQLite (default)
- PostgreSQL
- MySQL
- MariaDB
Procedure
Install the Keylime verifier:
# dnf install keylime-verifier
Define the IP address and port of the registrar and verifier in the verifier configuration.
Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-verifier-ip.conf, with the following content:
[verifier]
ip = <verifier_IP_address>
- Replace <verifier_IP_address> with the verifier's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the verifier to all available IP addresses.
- Optionally, you can also change the verifier's port from the default value 8881 by using the port = <verifier_port> option.
Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-registrar-ip.conf, with the following content:
[verifier]
registrar_ip = <registrar_IP_address>
- Replace <registrar_IP_address> with the registrar's IP address.
- If the registrar uses a different port than the default value 8891, add the registrar_port = <registrar_port> setting.
Optional: Configure the verifier's database for the list of agents. The default configuration uses an SQLite database stored in the verifier's /var/lib/keylime/cv_data.sqlite file. You can define a different database by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-db-url.conf, with the following content:
[verifier]
database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://verifier:UQ?nRNY9g7GZzN7@198.51.100.1/verifierdb.
Ensure that the credentials you use have the permissions for Keylime to create the database structure.
Add certificates and keys to the verifier. You can either let Keylime generate them, or use existing keys and certificates:
- With the default tls_dir = generate option, Keylime generates new certificates for the verifier, registrar, and tenant in the /var/lib/keylime/cv_ca/ directory.
- To load existing keys and certificates in the configuration, define their location in the verifier configuration.
Note: Certificates must be accessible by the keylime user, under which the Keylime services are running.
Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-keys-and-certs.conf, with the following content:
[verifier]
tls_dir = /var/lib/keylime/cv_ca
server_key = </path/to/server_key>
server_key_password = <passphrase1>
server_cert = </path/to/server_cert>
trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
client_key = </path/to/client_key>
client_key_password = <passphrase2>
client_cert = </path/to/client_cert>
trusted_server_ca = ['</path/to/ca/cert3>', '</path/to/ca/cert4>']
Note: Use absolute paths to define key and certificate locations. Alternatively, relative paths are resolved from the directory defined in the tls_dir option.
Open the port in firewall:
# firewall-cmd --add-port 8881/tcp
# firewall-cmd --runtime-to-permanent
If you use a different port, replace 8881 with the port number defined in the .conf file.
Start the verifier service:
# systemctl enable --now keylime_verifier
Note: In the default configuration, start the keylime_verifier service before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates.
Verification
Check that the keylime_verifier service is active and running:
# systemctl status keylime_verifier
● keylime_verifier.service - The Keylime verifier
     Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago
Next steps
8.3. Configuring Keylime registrar
The registrar is the Keylime component that contains a database of all agents, and it hosts the public keys of the TPM vendors. After the registrar’s HTTPS service accepts trusted platform module (TPM) public keys, it presents an interface to obtain these public keys for checking quotes.
To maintain the chain of trust, keep the system that runs the registrar secure and under your control.
You can install the registrar on a separate system or on the same system as the Keylime verifier, depending on your requirements. Running the verifier and registrar on separate systems provides better performance.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/registrar.conf.d/00-registrar-ip.conf. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
Prerequisites
- You have network access to the systems where the Keylime verifier is installed and running. For more information, see Section 8.2, “Configuring Keylime verifier”.
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have access to two databases, where Keylime saves data from the registrar and from the verifier. You can use any of the following database management systems:
- SQLite (default)
- PostgreSQL
- MySQL
- MariaDB
- You have valid keys and certificates from your certificate authority.
Procedure
Install the Keylime registrar:
# dnf install keylime-registrar
Define the IP address and port of the registrar:
Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-registrar-ip.conf, with the following content:
[registrar]
ip = <registrar_IP_address>
- Replace <registrar_IP_address> with the registrar's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the registrar to all available IP addresses.
- Optionally, change the port to which the Keylime agents connect by using the port = option. The default value is 8890.
- Optionally, change the TLS port to which the Keylime verifier and tenant connect by using the tls_port = option. The default value is 8891.
Optional: Configure the registrar's database for the list of agents. The default configuration uses an SQLite database stored in the registrar's /var/lib/keylime/reg_data.sqlite file. You can create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-db-url.conf, with the following content:
[registrar]
database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://registrar:EKYYX-bqY2?#raXm@198.51.100.1/registrardb.
Ensure that the credentials you use have the permissions for Keylime to create the database structure.
Add certificates and keys to the registrar:
- You can use the default configuration and load the keys and certificates to the /var/lib/keylime/reg_ca/ directory.
- Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-keys-and-certs.conf, with the following content:
Important: Do not use the tls_dir = default setting with custom CA certificates. Instead, specify a path to the location of the certificates. For more information, see RHELPLAN-157337.
[registrar]
tls_dir = /var/lib/keylime/reg_ca
server_key = </path/to/server_key>
server_key_password = <passphrase1>
server_cert = </path/to/server_cert>
trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
Note: Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory.
Open the ports in firewall:
# firewall-cmd --add-port 8890/tcp --add-port 8891/tcp
# firewall-cmd --runtime-to-permanent
If you use a different port, replace 8890 or 8891 with the port number defined in the .conf file.
Start the keylime_registrar service:
# systemctl enable --now keylime_registrar
Note: In the default configuration, start the keylime_verifier service before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates.
Verification
Check that the keylime_registrar service is active and running:
# systemctl status keylime_registrar
● keylime_registrar.service - The Keylime registrar service
     Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago
...
Next steps
8.4. Configuring Keylime tenant
Keylime uses the keylime_tenant
utility for many functions, including provisioning the agents on the target systems. You can install keylime_tenant
on any system, including the systems that run other Keylime components, or on a separate system, depending on your requirements.
Prerequisites
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have network access to the systems where the other Keylime components are configured:
- Verifier
- For more information, see Section 8.2, “Configuring Keylime verifier”.
- Registrar
- For more information, see Section 8.3, “Configuring Keylime registrar”.
Procedure
Install the Keylime tenant:
# dnf install keylime-tenant
Define the tenant's connection to the Keylime verifier by editing the /etc/keylime/tenant.conf.d/00-verifier-ip.conf file:
[tenant]
verifier_ip = <verifier_ip>
- Replace <verifier_ip> with the IP address of the verifier's system.
- If the verifier uses a different port than the default value 8881, add the verifier_port = <verifier_port> setting.
Define the tenant's connection to the Keylime registrar by editing the /etc/keylime/tenant.conf.d/00-registrar-ip.conf file:
[tenant]
registrar_ip = <registrar_ip>
registrar_port = <registrar_port>
- Replace <registrar_ip> with the IP address of the registrar's system.
- If the registrar uses a different port than the default value 8891, add the registrar_port = <registrar_port> setting.
Add certificates and keys to the tenant:
- You can use the default configuration and load the keys and certificates to the /var/lib/keylime/cv_ca directory.
- Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/tenant.conf.d/ directory, for example, /etc/keylime/tenant.conf.d/00-keys-and-certs.conf, with the following content:
[tenant]
tls_dir = /var/lib/keylime/cv_ca
client_key = tenant-key.pem
client_key_password = <passphrase1>
client_cert = tenant-cert.pem
trusted_server_ca = ['/var/lib/keylime/cv_ca/cacert.pem']
The
trusted_server_ca
parameter accepts paths to the verifier and registrar server CA certificate. You can provide multiple comma-separated paths, for example if the verifier and registrar use different CAs.NoteUse absolute paths to define key and certificate locations. Alternatively, you can define a directory in the
tls_dir
option and use paths relative to that directory.
- Optional: If the trusted platform module (TPM) endorsement key (EK) cannot be verified by using certificates in the /var/lib/keylime/tpm_cert_store directory, add the certificate to that directory. This can occur particularly when using virtual machines with emulated TPMs.
Verification
Check the status of the verifier:
# keylime_tenant -c cvstatus
Reading configuration from ['/etc/keylime/logging.conf']
2022-10-14 12:56:08.155 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2
Reading configuration from ['/etc/keylime/tenant.conf']
2022-10-14 12:56:08.157 - keylime.tenant - INFO - Setting up client TLS...
2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_cert option for tenant
2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_key option for tenant
2022-10-14 12:56:08.178 - keylime.tenant - INFO - TLS is enabled.
2022-10-14 12:56:08.178 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000
2022-10-14 12:56:08.221 - keylime.tenant - INFO - Verifier at 127.0.0.1 with Port 8881 does not have agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000.
If correctly set up, and if no agent is configured, the verifier responds that it does not recognize the default agent UUID.
Check the status of the registrar:
# keylime_tenant -c regstatus
Reading configuration from ['/etc/keylime/logging.conf']
2022-10-14 12:56:02.114 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2
Reading configuration from ['/etc/keylime/tenant.conf']
2022-10-14 12:56:02.116 - keylime.tenant - INFO - Setting up client TLS...
2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_cert option for tenant
2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_key option for tenant
2022-10-14 12:56:02.137 - keylime.tenant - INFO - TLS is enabled.
2022-10-14 12:56:02.137 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000
2022-10-14 12:56:02.171 - keylime.registrar_client - CRITICAL - Error: could not get agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 data from Registrar Server: 404
2022-10-14 12:56:02.172 - keylime.registrar_client - CRITICAL - Response code 404: agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 not found
2022-10-14 12:56:02.172 - keylime.tenant - INFO - Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on the registrar. Please register the agent with the registrar.
2022-10-14 12:56:02.172 - keylime.tenant - INFO - {"code": 404, "status": "Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on registrar 127.0.0.1 port 8891.", "results": {}}
If correctly set up, and if no agent is configured, the registrar responds that it does not recognize the default agent UUID.
Additional resources
- For additional advanced options for the keylime_tenant utility, enter the keylime_tenant -h command.
8.5. Configuring Keylime agent
The Keylime agent is the component deployed to all systems to be monitored by Keylime.
By default, the Keylime agent stores all its data in the /var/lib/keylime/ directory of the monitored system.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/agent.conf.d/00-registrar-ip.conf. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
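For example, if two hypothetical drop-in files set the same option, the file that sorts later in lexicographic order wins:
# cat /etc/keylime/agent.conf.d/00-registrar-ip.conf
[agent]
registrar_ip = '192.0.2.10'
# cat /etc/keylime/agent.conf.d/10-registrar-ip.conf
[agent]
registrar_ip = '192.0.2.20'
In this sketch, the agent uses registrar_ip = '192.0.2.20', because 10-registrar-ip.conf is read after 00-registrar-ip.conf.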
Prerequisites
- You have root permissions to the monitored system.
- The monitored system has a Trusted Platform Module (TPM).
Note
To verify, enter the tpm2_pcrread command. If the output returns several hashes, a TPM is available.
You have network access to the systems where the other Keylime components are configured:
- Verifier
- For more information, see Section 8.2, “Configuring Keylime verifier”.
- Registrar
- For more information, see Section 8.3, “Configuring Keylime registrar”.
- Tenant
- For more information, see Section 8.4, “Configuring Keylime tenant”.
- Integrity measurement architecture (IMA) is enabled on the monitored system. For more information, see Enabling integrity measurement architecture and extended verification module.
Procedure
Install the Keylime agent:
# dnf install keylime-agent
This command installs the keylime-agent-rust package.
Define the agent’s IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-ip.conf, with the following content:
[agent]
ip = '<agent_ip>'
NoteBecause the Keylime agent configuration uses the TOML format, which is different from the INI format used for configuration of the other components, the values must be in single quotation marks.
- Replace <agent_ip> with the agent’s IP address. Alternatively, use ip = '*' or ip = '0.0.0.0' to bind the agent to all available IP addresses.
- Optionally, you can also change the agent’s port from the default value 9002 by using the port = '<agent_port>' option. An illustration of the TOML quoting follows this list.
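As an illustration of this quoting difference, the same hypothetical address is written without quotation marks in the INI-style files of the other components, but with single quotation marks in the TOML-style agent files:
# INI style (for example, a verifier drop-in file): no quotation marks
[verifier]
ip = 192.0.2.5
# TOML style (agent drop-in file): single quotation marks required
[agent]
ip = '192.0.2.5'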
Define the registrar’s IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-registrar-ip.conf, with the following content:
[agent]
registrar_ip = '<registrar_IP_address>'
- Replace <registrar_IP_address> with the registrar’s IP address.
- Optionally, you can also change the registrar’s port from the default value 8890 by using the registrar_port = '<registrar_port>' option.
Optional: Define the agent’s universally unique identifier (UUID). If it is not defined, the default UUID is used. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-uuid.conf, with the following content:
[agent]
uuid = '<agent_UUID>'
- Replace <agent_UUID> with the agent’s UUID, for example d432fbb3-d2f1-4a97-9ef7-abcdef012345. You can use the uuidgen utility to generate a UUID, as shown in the sketch after this list.
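For example, a minimal sketch that generates a random UUID with uuidgen and writes it into the drop-in file named above in one step:
# uuid=$(uuidgen)
# cat > /etc/keylime/agent.conf.d/00-agent-uuid.conf << EOF
[agent]
uuid = '$uuid'
EOF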
Optional: Load existing keys and certificates for the agent. If the agent receives no server_key and server_cert, it generates its own key and a self-signed certificate.
Important
Do not use certificate chains. Keylime currently does not correctly use all the provided certificates during signature verification, which results in a TLS handshake failure. For more information, see RHEL-396.
Define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-keys-and-certs.conf, with the following content:
[agent]
server_key = '</path/to/server_key>'
server_key_password = '<passphrase1>'
server_cert = '</path/to/server_cert>'
trusted_client_ca = '</path/to/ca/cert>'
Note
Use absolute paths to define key and certificate locations. The Keylime agent does not accept relative paths.
Open the port in the firewall:
# firewall-cmd --add-port 9002/tcp
# firewall-cmd --runtime-to-permanent
If you use a different port, replace 9002 with the port number defined in the .conf file.
Enable and start the keylime_agent service:
# systemctl enable --now keylime_agent
Optional: From the system where the Keylime tenant is configured, verify that the agent is correctly configured and can connect to the registrar.
# keylime_tenant -c regstatus --uuid <agent_uuid> Reading configuration from ['/etc/keylime/logging.conf'] ... ==\n-----END CERTIFICATE-----\n", "ip": "127.0.0.1", "port": 9002, "regcount": 1, "operational_state": "Registered"}}}
Replace <agent_uuid> with the agent’s UUID.
If the registrar and agent are correctly configured, the output displays the agent’s IP address and port, followed by "operational_state": "Registered".
Create a new IMA policy by entering the following content into the /etc/ima/ima-policy file:
# PROC_SUPER_MAGIC
dont_measure fsmagic=0x9fa0
# SYSFS_MAGIC
dont_measure fsmagic=0x62656572
# DEBUGFS_MAGIC
dont_measure fsmagic=0x64626720
# TMPFS_MAGIC
dont_measure fsmagic=0x01021994
# RAMFS_MAGIC
dont_measure fsmagic=0x858458f6
# SECURITYFS_MAGIC
dont_measure fsmagic=0x73636673
# SELINUX_MAGIC
dont_measure fsmagic=0xf97cff8c
# CGROUP_SUPER_MAGIC
dont_measure fsmagic=0x27e0eb
# OVERLAYFS_MAGIC
dont_measure fsmagic=0x794c7630
# Don't measure log, audit or tmp files
dont_measure obj_type=var_log_t
dont_measure obj_type=auditd_log_t
dont_measure obj_type=tmp_t
# MEASUREMENTS
measure func=BPRM_CHECK
measure func=FILE_MMAP mask=MAY_EXEC
measure func=MODULE_CHECK uid=0
- Reboot the system to apply the new IMA policy.
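Optionally, you can confirm after the reboot that the kernel is recording IMA measurements by counting the entries in the runtime measurement list; this assumes securityfs is mounted under /sys/kernel/security, which is the RHEL default:
# wc -l /sys/kernel/security/ima/ascii_runtime_measurements
A growing, non-zero count indicates that the policy is in effect.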
Verification
Verify that the agent is running:
# systemctl status keylime_agent ● keylime_agent.service - The Keylime compute agent Loaded: loaded (/usr/lib/systemd/system/keylime_agent.service; enabled; preset: disabled) Active: active (running) since ...
Next steps
After the agent is configured on all systems that you want to monitor, you can deploy Keylime to perform one or both of the following functions:
- Runtime monitoring. For more information, see Section 8.6, “Deploying Keylime for runtime monitoring”.
- Measured boot attestation. For more information, see Section 8.7, “Deploying Keylime for measured boot attestation”.
Additional resources
8.6. Deploying Keylime for runtime monitoring
To verify that the state of monitored systems is correct, the Keylime agent must be running on the monitored systems.
Because Keylime runtime monitoring uses integrity measurement architecture (IMA) to measure large numbers of files, it might have a significant impact on the performance of your system.
When provisioning the agent, you can also define a file that Keylime sends to the monitored system. Keylime encrypts the file sent to the agent, and decrypts it only if the agent’s system complies with the TPM policy and with the IMA allowlist.
You can make Keylime ignore changes to specific files or within specific directories by configuring a Keylime excludelist.
Prerequisites
You have network access to the systems where the Keylime components are configured:
- Verifier
- For more information, see Section 8.2, “Configuring Keylime verifier”.
- Registrar
- For more information, see Section 8.3, “Configuring Keylime registrar”.
- Tenant
- For more information, see Section 8.4, “Configuring Keylime tenant”.
- Agent
- For more information, see Section 8.5, “Configuring Keylime agent”.
Procedure
On the monitored system where the Keylime agent is configured and running, generate an allowlist from the current state of the system:
# /usr/share/keylime/scripts/create_allowlist.sh -o <allowlist.txt> -h sha256sum
Replace <allowlist.txt> with the file name of the allowlist.
Important
Use the SHA-256 hash function. SHA-1 is not secure and has been deprecated in RHEL 9. For additional information, see SHA-1 deprecation in Red Hat Enterprise Linux 9.
Copy the generated allowlist to the system where the keylime_tenant utility is configured, for example:
# scp allowlist.txt root@<tenant_ip>:/root/allowlist.txt
Optional: You can define a list of files or directories excluded from Keylime measurements by creating a file on the tenant system and entering the files and directories to exclude. The excludelist accepts Python regular expressions with one regular expression per line. For more information, see Regular expression operations at docs.python.org. For example, to exclude all files in the /tmp/ directory from Keylime measurements, create a /root/excludelist.txt file with the following content:
/tmp/.*
Save the excludelist on the tenant system.
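A slightly larger, hypothetical excludelist that skips several volatile locations could look as follows; each line is a separate Python regular expression:
/tmp/.*
/var/log/.*
/var/cache/.*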
On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility:
# keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --allowlist <allowlist.txt> --exclude <excludelist> --cert default
- Replace <agent_ip> with the agent’s IP address.
- Replace <agent_uuid> with the agent’s UUID.
- Replace <allowlist.txt> with the path to the allowlist file.
- Replace <excludelist> with the path to the excludelist file. The --exclude option is optional; you can provision the agent without it.
- With the --cert option, the tenant generates and signs a certificate for the agent by using the CA certificates and keys located in the specified directory, or the default /var/lib/keylime/ca/ directory. If the directory contains no CA certificates and keys, the tenant generates them automatically according to the configuration in the /etc/keylime/ca.conf file and saves them to the specified directory. The tenant then sends these keys and certificates to the agent.
When generating CA certificates or signing agent certificates, you might be prompted for the password to access the CA private key:
Please enter the password to decrypt your keystore:
- If you do not want to use a certificate, use the -f option instead for delivering a file to the agent. Provisioning an agent requires sending some file, even an empty one.
Note
Keylime encrypts the file sent to the agent, and decrypts it only if the agent’s system complies with the TPM policy and with the IMA allowlist. By default, Keylime decompresses sent .zip files.
As an example, with the following command, keylime_tenant provisions a new Keylime agent at 127.0.0.1 with UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 and loads an allowlist named allowlist.txt. It also generates a certificate into the default directory and sends it to the agent. Keylime decrypts the file only if the TPM policy configured in /etc/keylime/verifier.conf is satisfied:
# keylime_tenant -c add -t 127.0.0.1 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000 --cert default --allowlist allowlist.txt --exclude excludelist.txt
Note
You can stop Keylime from monitoring a node by using the keylime_tenant -c delete -u <agent_uuid> command.
You can modify the configuration of an already registered agent by using the keylime_tenant -c update command.
Verification
- Optional: Reboot the monitored system before verification to verify that the settings are persistent.
Verify a successful attestation of the agent:
# keylime_tenant -c cvstatus -u <agent.uuid> ... {"<agent.uuid>": {"operational_state": "Get Quote"..."attestation_count": 5 ...
Replace <agent.uuid> with the agent’s UUID.
If the value of operational_state is Get Quote and attestation_count is non-zero, the attestation of this agent is successful.
is non-zero, the attestation of this agent is successful.If the value of
operational_state
isInvalid Quote
orFailed
attestation fails, the command displays output similar to the following:{"<agent.uuid>": {"operational_state": "Invalid Quote", ... "ima.validation.ima-ng.not_in_allowlist", "attestation_count": 5, "last_received_quote": 1684150329, "last_successful_attestation": 1684150327}}
If the attestation fails, display more details in the verifier log:
# journalctl -u keylime_verifier
keylime.tpm - INFO - Checking IMA measurement list...
keylime.ima - WARNING - File not found in allowlist: /root/bad-script.sh
keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781
keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling
Additional resources
- For more information about IMA, see Enhancing security with the kernel integrity subsystem.
8.7. Deploying Keylime for measured boot attestation
When you configure Keylime for measured boot attestation, Keylime checks that the boot process on the measured system corresponds to the state you defined.
Prerequisites
You have network access to the systems where the Keylime components are configured:
- Verifier
- For more information, see Section 8.2, “Configuring Keylime verifier”.
- Registrar
- For more information, see Section 8.3, “Configuring Keylime registrar”.
- Tenant
- For more information, see Section 8.4, “Configuring Keylime tenant”.
- Agent
- For more information, see Section 8.5, “Configuring Keylime agent”.
- Unified Extensible Firmware Interface (UEFI) is enabled on the agent system.
Procedure
On the monitored system where the Keylime agent is configured and running, install the python3-keylime package, which contains the create_mb_refstate utility:
# dnf -y install python3-keylime
On the monitored system, generate a policy from the measured boot log of the current state of the system by using the
create_mb_refstate
script:# /usr/share/keylime/scripts/create_mb_refstate /sys/kernel/security/tpm0/binary_bios_measurements <./measured_boot_reference_state.json>
Replace <./measured_boot_reference_state.json> with the path where the script saves the generated policy.
Important
The policy generated with the create_mb_refstate script is based on the current state of the system and is very strict. Any modification of the system, including kernel updates and system updates, changes the boot process, and the system then fails the attestation.
Copy the generated policy to the system where the keylime_tenant utility is configured, for example:
# scp root@<agent_ip>:/root/measured_boot_reference_state.json /root/measured_boot_reference_state.json
On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility:
# keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --mb_refstate <./measured_boot_reference_state.json>
- Replace <agent_ip> with the agent’s IP address.
- Replace <agent_uuid> with the agent’s UUID.
- Replace <./measured_boot_reference_state.json> with the path to the measured boot policy.
If you configure measured boot in combination with runtime monitoring, provide all the options from both use cases when entering the keylime_tenant -c add command.
Note
You can stop Keylime from monitoring a node by using the keylime_tenant -c delete -t <agent_ip> -u <agent_uuid> command.
You can modify the configuration of an already registered agent by using the keylime_tenant -c update command.
Verification
Reboot the monitored system and verify a successful attestation of the agent:
# keylime_tenant -c cvstatus -u <agent_uuid> ... {"<agent.uuid>": {"operational_state": "Get Quote"..."attestation_count": 5 ...
Replace <agent_uuid> with the agent’s UUID.
If the value of operational_state is Get Quote and attestation_count is non-zero, the attestation of this agent is successful.
is non-zero, the attestation of this agent is successful.If the value of
operational_state
isInvalid Quote
orFailed
attestation fails, the command displays output similar to the following:{"<agent.uuid>": {"operational_state": "Invalid Quote", ... "ima.validation.ima-ng.not_in_allowlist", "attestation_count": 5, "last_received_quote": 1684150329, "last_successful_attestation": 1684150327}}
If the attestation fails, display more details in the verifier log:
# journalctl -u keylime_verifier
{"d432fbb3-d2f1-4a97-9ef7-75bd81c00000": {"operational_state": "Tenant Quote Failed", ... "last_event_id": "measured_boot.invalid_pcr_0", "attestation_count": 0, "last_received_quote": 1684487093, "last_successful_attestation": 0}}
Chapter 9. Checking integrity with AIDE
Advanced Intrusion Detection Environment (AIDE) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions.
9.1. Installing AIDE
The following steps are necessary to install AIDE and to initialize its database.
Prerequisites
-
The
AppStream
repository is enabled.
Procedure
To install the aide package:
# dnf install aide
To generate an initial database:
# aide --init
Note
In the default configuration, the aide --init command checks just a set of directories and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE database, and to change their watched parameters, edit /etc/aide.conf accordingly.
To start using the database, remove the .new substring from the initial database file name:
# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
- To change the location of the AIDE database, edit the /etc/aide.conf file and modify the DBDIR value. For additional security, store the database, configuration, and the /usr/sbin/aide binary file in a secure location such as read-only media. An illustration of the relevant configuration lines follows.
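For illustration, the database-related directives in a default-style /etc/aide.conf look similar to the following; the exact directive names can vary between AIDE versions:
@@define DBDIR /var/lib/aide
@@define LOGDIR /var/log/aide
database=file:@@{DBDIR}/aide.db.gz
database_out=file:@@{DBDIR}/aide.db.new.gz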
9.2. Performing integrity checks with AIDE
Prerequisites
-
AIDE
is properly installed and its database is initialized. See Installing AIDE
Procedure
To initiate a manual check:
# aide --check
Start timestamp: 2018-07-11 12:41:20 +0200 (AIDE 0.16)
AIDE found differences between database and filesystem!!
...
[trimmed for clarity]
At a minimum, configure the system to run AIDE weekly. Optimally, run AIDE daily. For example, to schedule a daily execution of AIDE at 04:05 a.m. by using cron, add the following line to the /etc/crontab file:
05 4 * * * root /usr/sbin/aide --check
9.3. Updating an AIDE database
After verifying changes to your system, such as package updates or configuration file adjustments, Red Hat recommends updating your baseline AIDE database.
Prerequisites
-
AIDE
is properly installed and its database is initialized. See Installing AIDE
Procedure
Update your baseline AIDE database:
# aide --update
The aide --update command creates the /var/lib/aide/aide.db.new.gz database file.
- To start using the updated database for integrity checks, remove the .new substring from the file name.
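For example, with the default database location, the rename mirrors the one used during initialization:
# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz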
9.4. File-integrity tools: AIDE and IMA
Red Hat Enterprise Linux provides several tools for checking and preserving the integrity of files and directories on your system. The following table helps you decide which tool better fits your scenario.
Table 9.1. Comparison between AIDE and IMA
Question | Advanced Intrusion Detection Environment (AIDE) | Integrity Measurement Architecture (IMA) |
---|---|---|
What | AIDE is a utility that creates a database of files and directories on the system. This database serves for checking file integrity and detecting intrusions. | IMA detects if a file is altered by comparing file measurements (hash values) to previously stored extended attributes. |
How | AIDE uses rules to compare the integrity state of the files and directories. | IMA uses file hash values to detect the intrusion. |
Why | Detection - AIDE detects if a file is modified by verifying the rules. | Detection and Prevention - IMA detects and prevents an attack by replacing the extended attribute of a file. |
Usage | AIDE detects a threat when the file or directory is modified. | IMA detects a threat when someone tries to alter the entire file. |
Extension | AIDE checks the integrity of files and directories on the local system. | IMA ensures security on the local and remote systems. |
9.5. Additional resources
- aide(1) man page
- Kernel integrity subsystem
Chapter 10. Encrypting block devices using LUKS
By using disk encryption, you can protect the data on a block device by encrypting it. To access the device’s decrypted contents, enter a passphrase or key as authentication. This is important for mobile computers and removable media because it helps to protect the device’s contents even if the device has been physically removed from the system. The LUKS format is the default implementation of block device encryption in Red Hat Enterprise Linux.
10.1. LUKS disk encryption
Linux Unified Key Setup-on-disk-format (LUKS) provides a set of tools that simplifies managing the encrypted devices. With LUKS, you can encrypt block devices and enable multiple user keys to decrypt a master key. For bulk encryption of the partition, use this master key.
Red Hat Enterprise Linux uses LUKS to perform block device encryption. By default, the option to encrypt the block device is unchecked during the installation. If you select the option to encrypt your disk, the system prompts you for a passphrase every time you boot the computer. This passphrase unlocks the bulk encryption key that decrypts your partition. If you want to modify the default partition table, you can select the partitions that you want to encrypt. This is set in the partition table settings.
- Ciphers
The default cipher used for LUKS is aes-xts-plain64. The default key size for LUKS is 512 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. The following ciphers are available:
- Twofish
- Serpent
- LUKS performs the following operations
- LUKS encrypts entire block devices and is therefore well-suited for protecting contents of mobile devices such as removable storage media or laptop disk drives.
- The underlying contents of the encrypted block device are arbitrary, which makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage.
- LUKS uses the existing device mapper kernel subsystem.
- LUKS provides passphrase strengthening, which protects against dictionary attacks.
- LUKS devices contain multiple key slots, which allows users to add backup keys or passphrases.
- LUKS is not recommended for the following scenarios
- Disk-encryption solutions such as LUKS protect the data only when your system is off. After the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who has access to them.
- Scenarios that require multiple users to have distinct access keys to the same device. The LUKS1 format provides eight key slots and LUKS2 provides up to 32 key slots. For adding a passphrase to a spare slot, see the sketch after this list.
- Applications that require file-level encryption.
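For the multi-user key-slot scenario mentioned above, you can add another passphrase to a free slot with the standard cryptsetup command; the device name here is a hypothetical example:
# cryptsetup luksAddKey /dev/sdc2
The command prompts for an existing passphrase first and then for the new one.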
10.2. LUKS versions in RHEL
In Red Hat Enterprise Linux, the default format for LUKS encryption is LUKS2. The old LUKS1 format remains fully supported and it is provided as a format compatible with earlier Red Hat Enterprise Linux releases. LUKS2 re-encryption is considered more robust and safe to use as compared to LUKS1 re-encryption.
The LUKS2 format enables future updates of various parts without a need to modify binary structures. Internally it uses JSON text format for metadata, provides redundancy of metadata, detects metadata corruption, and automatically repairs from a metadata copy.
Do not use LUKS2 in systems that support only LUKS1.
Since Red Hat Enterprise Linux 9.2, you can use the cryptsetup reencrypt
command for both the LUKS versions to encrypt the disk.
- Online re-encryption
The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks:
- Change the volume key
Change the encryption algorithm
When encrypting a non-encrypted device, you must still unmount the file system. You can remount the file system after a short initialization of the encryption.
The LUKS1 format does not support online re-encryption.
- Conversion
In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the following scenarios:
- A LUKS1 device is marked as being used by a Policy-Based Decryption (PBD) Clevis solution. The cryptsetup tool does not convert the device when some luksmeta metadata are detected.
metadata are detected. - A device is active. The device must be in an inactive state before any conversion is possible.
10.3. Options for data protection during LUKS2 re-encryption
LUKS2 provides several options that prioritize performance or data protection during the re-encryption process. It provides the following modes for the resilience
option, and you can select any of these modes by using the cryptsetup reencrypt --resilience resilience-mode /dev/sdx
command:
checksum
The default mode. It balances data protection and performance.
This mode stores individual checksums of the sectors in the re-encryption area, so that the recovery process can detect which sectors LUKS2 already re-encrypted. The mode requires that the block device sector write is atomic.
journal
- The safest mode but also the slowest. Because this mode journals the re-encryption area in the binary area, LUKS2 writes the data twice.
none
- The none mode prioritizes performance and provides no data protection. It protects the data only against safe process termination, such as the SIGTERM signal or the user pressing Ctrl+C. Any unexpected system failure or application failure might result in data corruption.
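For example, to prioritize data protection over speed when re-encrypting a hypothetical device, you could select the journal mode explicitly:
# cryptsetup reencrypt --resilience journal /dev/sdc2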
If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in one of the following ways:
- Automatically
Performing any one of the following actions triggers the automatic recovery action during the next LUKS2 device open action:
-
Executing the
cryptsetup open
command. -
Attaching the device with the
systemd-cryptsetup
command.
-
Executing the
- Manually
-
By using the
cryptsetup repair /dev/sdx
command on the LUKS2 device.
Additional resources
-
cryptsetup-reencrypt(8)
andcryptsetup-repair(8)
man pages
10.4. Encrypting existing data on a block device using LUKS2
You can encrypt the existing data on a not yet encrypted device by using the LUKS2 format. A new LUKS header is stored in the head of the device.
Prerequisites
- The block device has a file system.
You have backed up your data.
WarningYou might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data.
Procedure
Unmount all file systems on the device that you plan to encrypt, for example:
# umount /dev/mapper/vg00-lv00
Make free space for storing a LUKS header. Use one of the following options that suits your scenario:
In the case of encrypting a logical volume, you can extend the logical volume without resizing the file system. For example:
# lvextend -L+32M /dev/mapper/vg00-lv00
-
Extend the partition by using partition management tools, such as
parted
. -
Shrink the file system on the device. You can use the
resize2fs
utility for the ext2, ext3, or ext4 file systems. Note that you cannot shrink the XFS file system.
Initialize the encryption:
# cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/mapper/vg00-lv00 lv00_encrypted /dev/mapper/lv00_encrypted is now active and ready for online encryption.
Mount the device:
# mount /dev/mapper/lv00_encrypted /mnt/lv00_encrypted
Add an entry for a persistent mapping to the /etc/crypttab file:
Find the luksUUID:
# cryptsetup luksUUID /dev/mapper/vg00-lv00
a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325
Open /etc/crypttab in a text editor of your choice and add a device in this file:
$ vi /etc/crypttab
lv00_encrypted UUID=a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none
Replace a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 with your device’s luksUUID.
Refresh the initramfs with dracut:
$ dracut -f --regenerate-all
Add an entry for a persistent mounting to the /etc/fstab file:
Find the file system’s UUID of the active LUKS block device:
$ blkid -p /dev/mapper/lv00_encrypted
/dev/mapper/lv00-encrypted: UUID="37bc2492-d8fa-4969-9d9b-bb64d3685aa9" BLOCK_SIZE="4096" TYPE="xfs" USAGE="filesystem"
Open /etc/fstab in a text editor of your choice and add a device in this file, for example:
$ vi /etc/fstab
UUID=37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0
Replace 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 with your file system’s UUID.
Resume the online encryption:
# cryptsetup reencrypt --resume-only /dev/mapper/vg00-lv00 Enter passphrase for /dev/mapper/vg00-lv00: Auto-detected active dm device 'lv00_encrypted' for data device /dev/mapper/vg00-lv00. Finished, time 00:31.130, 10272 MiB written, speed 330.0 MiB/s
Verification
Verify if the existing data was encrypted:
# cryptsetup luksDump /dev/mapper/vg00-lv00 LUKS header information Version: 2 Epoch: 4 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 [...]
View the status of the encrypted blank block device:
# cryptsetup status lv00_encrypted /dev/mapper/lv00_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/mapper/vg00-lv00
Additional resources
- cryptsetup(8), cryptsetup-reencrypt(8), lvextend(8), resize2fs(8), and parted(8) man pages
10.5. Encrypting existing data on a block device using LUKS2 with a detached header
You can encrypt existing data on a block device without creating free space for storing a LUKS header. The header is stored in a detached location, which also serves as an additional layer of security. The procedure uses the LUKS2 encryption format.
Prerequisites
- The block device has a file system.
You have backed up your data.
WarningYou might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data.
Procedure
Unmount all file systems on the device, for example:
# umount /dev/nvme0n1p1
Initialize the encryption:
# cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/nvme0n1p1 nvme_encrypted WARNING! ======== Header file does not exist, do you want to create it? Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /home/header: Verify passphrase: /dev/mapper/nvme_encrypted is now active and ready for online encryption.
Replace /home/header with a path to the file with a detached LUKS header. The detached LUKS header has to be accessible to unlock the encrypted device later.
Mount the device:
# mount /dev/mapper/nvme_encrypted /mnt/nvme_encrypted
Resume the online encryption:
# cryptsetup reencrypt --resume-only --header /home/header /dev/nvme0n1p1 Enter passphrase for /dev/nvme0n1p1: Auto-detected active dm device 'nvme_encrypted' for data device /dev/nvme0n1p1. Finished, time 00m51s, 10 GiB written, speed 198.2 MiB/s
Verification
Verify if the existing data on a block device using LUKS2 with a detached header is encrypted:
# cryptsetup luksDump /home/header LUKS header information Version: 2 Epoch: 88 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: c4f5d274-f4c0-41e3-ac36-22a917ab0386 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 0 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]
View the status of the encrypted blank block device:
# cryptsetup status nvme_encrypted /dev/mapper/nvme_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/nvme0n1p1
Additional resources
- cryptsetup(8) and cryptsetup-reencrypt(8) man pages
10.6. Encrypting a blank block device using LUKS2
You can encrypt a blank block device, which you can then use for encrypted storage, by using the LUKS2 format.
Prerequisites
- A blank block device. You can use commands such as lsblk to verify that there is no real data, for example a file system, on that device.
Procedure
Set up a partition as an encrypted LUKS partition:
# cryptsetup luksFormat /dev/nvme0n1p1 WARNING! ======== This will overwrite data on /dev/nvme0n1p1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /dev/nvme0n1p1: Verify passphrase:
Open an encrypted LUKS partition:
# cryptsetup open /dev/nvme0n1p1 nvme0n1p1_encrypted
Enter passphrase for /dev/nvme0n1p1:
This unlocks the partition and maps it to a new device by using the device mapper. To avoid overwriting the encrypted data, this command alerts the kernel that the device is an encrypted device and is addressed through LUKS by using the /dev/mapper/device_mapped_name path.
Create a file system to write encrypted data to the partition, which must be accessed through the device mapped name:
# mkfs -t ext4 /dev/mapper/nvme0n1p1_encrypted
Mount the device:
# mount /dev/mapper/nvme0n1p1_encrypted mount-point
Verification
Verify if the blank block device is encrypted:
# cryptsetup luksDump /dev/nvme0n1p1 LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 34ce4870-ffdf-467c-9a9e-345a53ed8a25 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]
View the status of the encrypted blank block device:
# cryptsetup status nvme0n1p1_encrypted /dev/mapper/nvme0n1p1_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/nvme0n1p1 sector size: 512 offset: 32768 sectors size: 20938752 sectors mode: read/write
Additional resources
- cryptsetup(8), cryptsetup-open(8), and cryptsetup-luksFormat(8) man pages
10.7. Creating a LUKS2 encrypted volume using the storage RHEL System Role
You can use the storage
role to create and configure a volume encrypted with LUKS by running an Ansible playbook.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the storage System Role.
-
Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems. On the control node, the
ansible-core
andrhel-system-roles
packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible
, ansible-playbook
, connectors such as docker
and podman
, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core
package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
Procedure
Create a new playbook.yml file with the following content:
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        fs_label: label-name
        mount_point: /mnt/data
        encryption: true
        encryption_password: your-password
  roles:
    - rhel-system-roles.storage
You can also add other encryption parameters, such as encryption_key, encryption_cipher, encryption_key_size, and encryption_luks_version, in the playbook.yml file.
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory.file /path/to/file/playbook.yml
Verification
View the encryption status:
# cryptsetup status sdb /dev/mapper/sdb is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb [...]
Verify the created LUKS encrypted volume:
# cryptsetup luksDump /dev/sdb Version: 2 Epoch: 6 Metadata area: 16384 [bytes] Keyslots area: 33521664 [bytes] UUID: a4c6be82-7347-4a91-a8ad-9479b72c9426 Label: (no label) Subsystem: (no subsystem) Flags: allow-discards Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 4096 [bytes] [...]
View the cryptsetup parameters in the playbook.yml file, which the storage role supports:
# cat ~/playbook.yml
- hosts: all
  vars:
    storage_volumes:
      - name: foo
        type: disk
        disks:
          - nvme0n1
        fs_type: xfs
        fs_label: label-name
        mount_point: /mnt/data
        encryption: true
        #encryption_password: passwdpasswd
        encryption_key: /home/passwd_key
        encryption_cipher: aes-xts-plain64
        encryption_key_size: 512
        encryption_luks_version: luks2
  roles:
    - rhel-system-roles.storage
Additional resources
- Encrypting block devices using LUKS
- /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
Chapter 11. Configuring automated unlocking of encrypted volumes using policy-based decryption
Policy-Based Decryption (PBD) is a collection of technologies that enable unlocking encrypted root and secondary volumes of hard drives on physical and virtual machines. PBD uses a variety of unlocking methods, such as user passwords, a Trusted Platform Module (TPM) device, a PKCS #11 device connected to a system, for example, a smart card, or a special network server.
PBD allows combining different unlocking methods into a policy, which makes it possible to unlock the same volume in different ways. The current implementation of the PBD in RHEL consists of the Clevis framework and plug-ins called pins. Each pin provides a separate unlocking capability. Currently, the following pins are available:
- tang - allows unlocking volumes using a network server
- tpm2 - allows unlocking volumes using a TPM2 policy
- sss - allows deploying high-availability systems using the Shamir’s Secret Sharing (SSS) cryptographic scheme
11.1. Network-bound disk encryption
Network-Bound Disk Encryption (NBDE) is a subcategory of Policy-Based Decryption (PBD) that allows binding encrypted volumes to a special network server. The current implementation of NBDE includes a Clevis pin for the Tang server and the Tang server itself.
In RHEL, NBDE is implemented through the following components and technologies:
Figure 11.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used for LUKS2 volumes.
Tang is a server for binding data to network presence. It makes a system containing your data available when the system is bound to a certain secure network. Tang is stateless and does not require TLS or authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any identifying information from the client.
Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated unlocking of LUKS volumes. The clevis package provides the client side of the feature.
A Clevis pin is a plug-in into the Clevis framework. One of such pins is a plug-in that implements interactions with the NBDE server — Tang.
Clevis and Tang are generic client and server components that provide network-bound encryption. In RHEL, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption.
Both client- and server-side components use the José library to perform encryption and decryption operations.
When you begin provisioning NBDE, the Clevis pin for Tang server gets a list of the Tang server’s advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang’s public keys can be distributed out of band so that clients can operate without access to the Tang server. This mode is called offline provisioning.
The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong encryption key. Once the data is encrypted using this key, the key is discarded. The Clevis client should store the state produced by this provisioning operation in a convenient location. This process of encrypting data is the provisioning step.
LUKS version 2 (LUKS2) is the default disk-encryption format in RHEL; hence, the provisioning state for NBDE is stored as a token in a LUKS2 header. NBDE uses the luksmeta package to store the provisioning state only for volumes encrypted with LUKS1.
The Clevis pin for Tang supports both LUKS1 and LUKS2 without requiring additional configuration. Clevis can encrypt plain-text files, but you have to use the cryptsetup tool for encrypting block devices. See Encrypting block devices using LUKS for more information.
When the client is ready to access its data, it loads the metadata produced in the provisioning step and uses it to recover the encryption key. This process is the recovery step.
In NBDE, Clevis binds a LUKS volume using a pin so that it can be automatically unlocked. After successful completion of the binding process, the disk can be unlocked using the provided Dracut unlocker.
If the kdump kernel crash dumping mechanism is set to save the content of the system memory to a LUKS-encrypted device, you are prompted to enter a password during the second kernel boot.
Additional resources
- NBDE (Network-Bound Disk Encryption) Technology Knowledgebase article
- tang(8), clevis(1), jose(1), and clevis-luks-unlockers(7) man pages
- How to set up Network-Bound Disk Encryption with multiple LUKS devices (Clevis + Tang unlocking) Knowledgebase article
11.2. Installing an encryption client - Clevis
Use this procedure to deploy and start using the Clevis pluggable framework on your system.
Procedure
To install Clevis and its pins on a system with an encrypted volume:
# dnf install clevis
To decrypt data, use the clevis decrypt command and provide a cipher text in the JSON Web Encryption (JWE) format, for example:
$ clevis decrypt < secret.jwe
Additional resources
- clevis(1) man page
- Built-in CLI help after entering the clevis command without any argument:
$ clevis
Usage: clevis COMMAND [OPTIONS]

clevis decrypt      Decrypts using the policy defined at encryption time
clevis encrypt sss  Encrypts using a Shamir's Secret Sharing policy
clevis encrypt tang Encrypts using a Tang binding server policy
clevis encrypt tpm2 Encrypts using a TPM2.0 chip binding policy
clevis luks bind    Binds a LUKS device using the specified policy
clevis luks edit    Edit a binding from a clevis-bound slot in a LUKS device
clevis luks list    Lists pins bound to a LUKSv1 or LUKSv2 device
clevis luks pass    Returns the LUKS passphrase used for binding a particular slot.
clevis luks regen   Regenerate clevis binding
clevis luks report  Report tang keys' rotations
clevis luks unbind  Unbinds a pin bound to a LUKS volume
clevis luks unlock  Unlocks a LUKS volume
11.3. Deploying a Tang server with SELinux in enforcing mode
You can use a Tang server to automatically unlock LUKS-encrypted volumes on Clevis-enabled clients. In the minimalistic scenario, you deploy a Tang server on port 80 by installing the tang
package and entering the systemctl enable tangd.socket --now
command. The following example procedure demonstrates the deployment of a Tang server running on a custom port as a confined service in SELinux enforcing mode.
Prerequisites
-
The
policycoreutils-python-utils
package and its dependencies are installed. -
The
firewalld
service is running.
Procedure
To install the
tang
package and its dependencies, enter the following command asroot
:# dnf install tang
Pick an unoccupied port, for example, 7500/tcp, and allow the tangd service to bind to that port:
# semanage port -a -t tangd_port_t -p tcp 7500
Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port results in the ValueError: Port already defined error message.
# firewall-cmd --add-port=7500/tcp # firewall-cmd --runtime-to-permanent
Enable the
tangd
service:# systemctl enable tangd.socket
Create an override file:
# systemctl edit tangd.socket
In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/tangd.socket.d/ directory, change the default port for the Tang server from 80 to the previously picked number by adding the following lines:
[Socket]
ListenStream=
ListenStream=7500
Important
Insert the previous code snippet between the lines starting with # Anything between here and # Lines below this, otherwise the system discards your changes.
- Save the changes by pressing Ctrl+O and Enter. Exit the editor by pressing Ctrl+X.
Reload the changed configuration:
# systemctl daemon-reload
Check that your configuration is working:
# systemctl show tangd.socket -p Listen
Listen=[::]:7500 (Stream)
Start the tangd service:
# systemctl restart tangd.socket
Because tangd uses the systemd socket activation mechanism, the server starts as soon as the first connection comes in. A new set of cryptographic keys is automatically generated at the first start. To perform cryptographic operations such as manual key generation, use the jose utility.
Additional resources
- tang(8), semanage(8), firewall-cmd(1), jose(1), systemd.unit(5), and systemd.socket(5) man pages
11.4. Rotating Tang server keys and updating bindings on clients
Use the following steps to rotate your Tang server keys and update existing bindings on clients. The precise interval at which you should rotate them depends on your application, key sizes, and institutional policy.
Alternatively, you can rotate Tang keys by using the nbde_server
RHEL system role. See Using the nbde_server system role for setting up multiple Tang servers for more information.
Prerequisites
- A Tang server is running.
- The clevis and clevis-luks packages are installed on your clients.
Procedure
Rename all keys in the /var/db/tang key database directory to have a leading . to hide them from advertisement. Note that the file names in the following example differ from the unique file names in the key database directory of your Tang server:
# cd /var/db/tang
# ls -l
-rw-r--r--. 1 root root 349 Feb  7 14:55 UV6dqXSwe1bRKG3KbJmdiR020hY.jwk
-rw-r--r--. 1 root root 354 Feb  7 14:55 y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk
# mv UV6dqXSwe1bRKG3KbJmdiR020hY.jwk .UV6dqXSwe1bRKG3KbJmdiR020hY.jwk
# mv y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk .y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk
Check that you renamed and therefore hid all keys from the Tang server advertisement:
# ls -l
total 0
Generate new keys using the
/usr/libexec/tangd-keygen
command in/var/db/tang
on the Tang server:# /usr/libexec/tangd-keygen /var/db/tang # ls /var/db/tang 3ZWS6-cDrCG61UPJS2BMmPU4I54.jwk zyLuX6hijUy_PSeUEFDi7hi38.jwk
Check that your Tang server advertises the signing key from the new key pair, for example:
# tang-show-keys 7500 3ZWS6-cDrCG61UPJS2BMmPU4I54
On your NBDE clients, use the clevis luks report command to check whether the keys advertised by the Tang server remain the same. You can identify the slots with the relevant binding by using the clevis luks list command, for example:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv"}'
# clevis luks report -d /dev/sda2 -s 1
...
Report detected that some keys were rotated.
Do you want to regenerate luks metadata with "clevis luks regen -d /dev/sda2 -s 1"? [ynYN]
To regenerate LUKS metadata for the new keys, either press y at the prompt of the previous command, or use the clevis luks regen command:
# clevis luks regen -d /dev/sda2 -s 1
When you are sure that all old clients use the new keys, you can remove the old keys from the Tang server, for example:
# cd /var/db/tang # rm .*.jwk
Removing the old keys while clients are still using them can result in data loss. If you accidentally remove such keys, use the clevis luks regen command on the clients, and provide your LUKS password manually.
Additional resources
- tang-show-keys(1), clevis-luks-list(1), clevis-luks-report(1), and clevis-luks-regen(1) man pages
11.5. Configuring automated unlocking using a Tang key in the web console
Configure automated unlocking of a LUKS-encrypted storage device using a key provided by a Tang server.
Prerequisites
- The RHEL 9 web console has been installed. See Installing the web console for details.
-
The
cockpit-storaged
andclevis-luks
packages are installed on your system. -
The
cockpit.socket
service is running at port 9090. - A Tang server is available. See Deploying a Tang server with SELinux in enforcing mode for details.
Procedure
Open the RHEL web console by entering the following address in a web browser:
https://<localhost>:9090
Replace the <localhost> part by the remote server’s host name or IP address when you connect to a remote system.
- Provide your credentials and click Storage. In the Filesystems section, click the disk that contains an encrypted volume that you plan to unlock automatically.
- In the following window listing partitions and drive details of the selected disk, click > next to the encrypted file system to expand details of the encrypted volume you want to unlock using the Tang server, and click Encryption.
Click + in the Keys section to add a Tang key:
Select Tang keyserver as Key source, provide the address of your Tang server and a password that unlocks the LUKS-encrypted device, and click Add to confirm.
The following dialog window provides a command to verify that the key hash matches.
In a terminal on the Tang server, use the tang-show-keys command to display the key hash for comparison. In this example, the Tang server is running on the port 7500:
# tang-show-keys 7500
fM-EwYeiTxS66X3s1UAywsGKGnxnpll8ig0KOQmr9CM
Click Trust key when the key hashes in the web console and in the output of previously listed commands are the same:
- In RHEL 9.2 and later, after you select an encrypted root file system and a Tang server, you can skip adding the rd.neednet=1 parameter to the kernel command line, installing the clevis-dracut package, and regenerating an initial ramdisk (initrd). For non-root file systems, the web console now enables the remote-cryptsetup.target and clevis-luks-askpass.path systemd units, installs the clevis-systemd package, and adds the _netdev parameter to the fstab and crypttab configuration files.
Verification
Check that the newly added Tang key is now listed in the Keys section with the Keyserver type.
Verify that the bindings are available for the early boot, for example:
# lsinitrd | grep clevis clevis clevis-pin-null clevis-pin-sss clevis-pin-tang clevis-pin-tpm2 lrwxrwxrwx 1 root root 48 Feb 14 17:45 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path… …
Additional resources
11.6. Basic NBDE and TPM2 encryption-client operations
The Clevis framework can encrypt plain-text files and decrypt both ciphertexts in the JSON Web Encryption (JWE) format and LUKS-encrypted block devices. Clevis clients can use either Tang network servers or Trusted Platform Module 2.0 (TPM 2.0) chips for cryptographic operations.
The following commands demonstrate the basic functionality provided by Clevis on examples containing plain-text files. You can also use them for troubleshooting your NBDE or Clevis+TPM deployments.
Encryption client bound to a Tang server
To check that a Clevis encryption client binds to a Tang server, use the
clevis encrypt tang
sub-command:$ clevis encrypt tang '{"url":"http://tang.srv:port"}' < input-plain.txt > secret.jwe The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y
Change the http://tang.srv:port URL in the previous example to match the URL of the server where
tang
is installed. The secret.jwe output file contains your encrypted cipher text in the JWE format. This cipher text is read from the input-plain.txt input file.Alternatively, if your configuration requires a non-interactive communication with a Tang server without SSH access, you can download an advertisement and save it to a file:
$ curl -sfg http://tang.srv:port/adv -o adv.jws
Use the advertisement in the adv.jws file for any following tasks, such as encryption of files or messages:
$ echo 'hello' | clevis encrypt tang '{"url":"http://tang.srv:port","adv":"adv.jws"}'
To decrypt data, use the
clevis decrypt
command and provide the cipher text (JWE):$ clevis decrypt < secret.jwe > output-plain.txt
Encryption client using TPM 2.0
To encrypt using a TPM 2.0 chip, use the clevis encrypt tpm2 sub-command with the only argument in the form of the JSON configuration object:
$ clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe
To choose a different hierarchy, hash, and key algorithms, specify configuration properties, for example:
$ clevis encrypt tpm2 '{"hash":"sha256","key":"rsa"}' < input-plain.txt > secret.jwe
To decrypt the data, provide the ciphertext in the JSON Web Encryption (JWE) format:
$ clevis decrypt < secret.jwe > output-plain.txt
The pin also supports sealing data to a Platform Configuration Register (PCR) state. That way, the data can be unsealed only if the PCR hash values match the policy used when sealing.
For example, to seal the data to the PCR with index 0 and 7 for the SHA-256 bank:
$ clevis encrypt tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}' < input-plain.txt > secret.jwe
Hashes in PCRs can be rewritten, after which you can no longer unlock your encrypted volume. For this reason, add a strong passphrase that enables you to unlock the encrypted volume manually even when a value in a PCR changes.
If the system cannot automatically unlock your encrypted volume after an upgrade of the shim-x64
package, follow the steps in the Clevis TPM2 no longer decrypts LUKS devices after a restart KCS article.
Additional resources
- clevis-encrypt-tang(1), clevis-luks-unlockers(7), clevis(1), and clevis-encrypt-tpm2(1) man pages
- The clevis, clevis decrypt, and clevis encrypt tang commands without any arguments show the built-in CLI help, for example:
$ clevis encrypt tang
Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE
...
11.7. Configuring manual enrollment of LUKS-encrypted volumes
Use the following steps to configure unlocking of LUKS-encrypted volumes with NBDE.
Prerequisites
- A Tang server is running and available.
Procedure
To automatically unlock an existing LUKS-encrypted volume, install the clevis-luks subpackage:
# dnf install clevis-luks
Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
# lsblk
NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                                             8:0    0   12G  0 disk
├─sda1                                          8:1    0    1G  0 part  /boot
└─sda2                                          8:2    0   11G  0 part
  └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0    0   11G  0 crypt
    ├─rhel-root                               253:0    0  9.8G  0 lvm   /
    └─rhel-swap                               253:1    0  1.2G  0 lvm   [SWAP]
Bind the volume to a Tang server using the clevis luks bind command:
# clevis luks bind -d /dev/sda2 tang '{"url":"http://tang.srv"}'
The advertisement contains the following signing keys:
_OsIk0T-E2l6qjfdDiwVmidoZjA
Do you wish to trust these keys? [ynYN] y
You are about to initialize a LUKS device for metadata storage.
Attempting to initialize it may result in data loss if data was
already written into the LUKS header gap in a different format.
A backup is advised before initialization is performed.
Do you wish to initialize /dev/sda2? [yn] y
Enter existing LUKS password:
This command performs four steps:
- Creates a new key with the same entropy as the LUKS master key.
- Encrypts the new key with Clevis.
- Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-default LUKS1 header is used.
- Enables the new key for use with LUKS.
Note: The binding procedure assumes that there is at least one free LUKS password slot. The clevis luks bind command takes one of the slots.
The volume can now be unlocked with your existing password as well as with the Clevis policy.
To enable the early boot system to process the disk binding, use the dracut tool on an already installed system:
# dnf install clevis-dracut
In RHEL, Clevis produces a generic initrd (initial RAM disk) without host-specific configuration options and does not automatically add parameters such as rd.neednet=1 to the kernel command line. If your configuration relies on a Tang pin that requires network during early boot, use the --hostonly-cmdline argument, and dracut adds rd.neednet=1 when it detects a Tang binding:
# dracut -fv --regenerate-all --hostonly-cmdline
Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory, and add the hostonly_cmdline=yes option to the file, for example:
# echo "hostonly_cmdline=yes" > /etc/dracut.conf.d/clevis.conf
Note: You can also ensure that networking for a Tang pin is available during early boot by using the grubby tool on the system where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Then you can use dracut without --hostonly-cmdline:
# dracut -fv --regenerate-all
Verification
To verify that the Clevis JWE object is successfully placed in a LUKS header, use the clevis luks list command:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv:port"}'
To use NBDE for clients with static IP configuration (without DHCP), pass your network configuration to the dracut tool manually, for example:
# dracut -fv --regenerate-all --kernel-cmdline "ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none"
Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory with the static network information. For example:
# cat /etc/dracut.conf.d/static_ip.conf
kernel_cmdline="ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none"
Regenerate the initial RAM disk image:
# dracut -fv --regenerate-all
Additional resources
- clevis-luks-bind(1) and dracut.cmdline(7) man pages
- Kickstart commands for network configuration
11.8. Configuring manual enrollment of LUKS-encrypted volumes using a TPM 2.0 policy
Use the following steps to configure unlocking of LUKS-encrypted volumes by using a Trusted Platform Module 2.0 (TPM 2.0) policy.
Prerequisites
- An accessible TPM 2.0-compatible device.
- A system with the 64-bit Intel or 64-bit AMD architecture.
Procedure
To automatically unlock an existing LUKS-encrypted volume, install the clevis-luks subpackage:
# dnf install clevis-luks
Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
# lsblk
NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                                             8:0    0   12G  0 disk
├─sda1                                          8:1    0    1G  0 part  /boot
└─sda2                                          8:2    0   11G  0 part
  └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0    0   11G  0 crypt
    ├─rhel-root                               253:0    0  9.8G  0 lvm   /
    └─rhel-swap                               253:1    0  1.2G  0 lvm   [SWAP]
Bind the volume to a TPM 2.0 device using the clevis luks bind command, for example:
# clevis luks bind -d /dev/sda2 tpm2 '{"hash":"sha256","key":"rsa"}'
...
Do you wish to initialize /dev/sda2? [yn] y
Enter existing LUKS password:
This command performs four steps:
- Creates a new key with the same entropy as the LUKS master key.
- Encrypts the new key with Clevis.
- Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-default LUKS1 header is used.
- Enables the new key for use with LUKS.
Note: The binding procedure assumes that there is at least one free LUKS password slot. The clevis luks bind command takes one of the slots.
Alternatively, if you want to seal data to specific Platform Configuration Registers (PCR) states, add the pcr_bank and pcr_ids values to the clevis luks bind command, for example:
# clevis luks bind -d /dev/sda2 tpm2 '{"hash":"sha256","key":"rsa","pcr_bank":"sha256","pcr_ids":"0,1"}'
Warning: Because the data can be unsealed only if the PCR hash values match the policy used when sealing, and because the hashes can be rewritten, add a strong passphrase that enables you to unlock the encrypted volume manually when a value in a PCR changes.
If the system cannot automatically unlock your encrypted volume after an upgrade of the
shim-x64
package, follow the steps in the Clevis TPM2 no longer decrypts LUKS devices after a restart KCS article.
- The volume can now be unlocked with your existing password as well as with the Clevis policy.
To enable the early boot system to process the disk binding, use the dracut tool on an already installed system:
# dnf install clevis-dracut
# dracut -fv --regenerate-all
Verification
To verify that the Clevis JWE object is successfully placed in a LUKS header, use the clevis luks list command:
# clevis luks list -d /dev/sda2
1: tpm2 '{"hash":"sha256","key":"rsa"}'
Additional resources
- clevis-luks-bind(1), clevis-encrypt-tpm2(1), and dracut.cmdline(7) man pages
11.9. Removing a Clevis pin from a LUKS-encrypted volume manually
Use the following procedure to manually remove the metadata created by the clevis luks bind command and to wipe a key slot that contains a passphrase added by Clevis.
The recommended way to remove a Clevis pin from a LUKS-encrypted volume is through the clevis luks unbind command. The removal procedure using clevis luks unbind consists of only one step and works for both LUKS1 and LUKS2 volumes. The following example command removes the metadata created by the binding step and wipes the key slot 1 on the /dev/sda2 device:
# clevis luks unbind -d /dev/sda2 -s 1
Prerequisites
- A LUKS-encrypted volume with a Clevis binding.
Procedure
Check which LUKS version the volume, for example /dev/sda2, is encrypted by, and identify the slot and the token that are bound to Clevis:
# cryptsetup luksDump /dev/sda2
LUKS header information
Version:        2
...
Keyslots:
  0: luks2
...
  1: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
...
Tokens:
  0: clevis
        Keyslot:    1
...
In the previous example, the Clevis token is identified by 0 and the associated key slot is 1.
In case of LUKS2 encryption, remove the token:
# cryptsetup token remove --token-id 0 /dev/sda2
If your device is encrypted by LUKS1, which is indicated by the Version: 1 string in the output of the cryptsetup luksDump command, perform this additional step with the luksmeta wipe command:
# luksmeta wipe -d /dev/sda2 -s 1
Wipe the key slot containing the Clevis passphrase:
# cryptsetup luksKillSlot /dev/sda2 1
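To confirm that the removal succeeded, you can list the remaining Clevis bindings with the clevis luks list command described earlier; for the example device, the removed binding must no longer appear in the output:
# clevis luks list -d /dev/sda2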
Additional resources
- clevis-luks-unbind(1), cryptsetup(8), and luksmeta(8) man pages
11.10. Configuring automated enrollment of LUKS-encrypted volumes using Kickstart
Follow the steps in this procedure to configure an automated installation process that uses Clevis for the enrollment of LUKS-encrypted volumes.
Procedure
Instruct Kickstart to partition the disk such that LUKS encryption is enabled for all mount points other than /boot, using a temporary password. The password is temporary for this step of the enrollment process.
part /boot --fstype="xfs" --ondisk=vda --size=256
part / --fstype="xfs" --ondisk=vda --grow --encrypted --passphrase=temppass
Note that OSPP-compliant systems require a more complex configuration, for example:
part /boot --fstype="xfs" --ondisk=vda --size=256
part / --fstype="xfs" --ondisk=vda --size=2048 --encrypted --passphrase=temppass
part /var --fstype="xfs" --ondisk=vda --size=1024 --encrypted --passphrase=temppass
part /tmp --fstype="xfs" --ondisk=vda --size=1024 --encrypted --passphrase=temppass
part /home --fstype="xfs" --ondisk=vda --size=2048 --grow --encrypted --passphrase=temppass
part /var/log --fstype="xfs" --ondisk=vda --size=1024 --encrypted --passphrase=temppass
part /var/log/audit --fstype="xfs" --ondisk=vda --size=1024 --encrypted --passphrase=temppass
Install the related Clevis packages by listing them in the %packages section:
%packages
clevis-dracut
clevis-luks
clevis-systemd
%end
- Optionally, to ensure that you can unlock the encrypted volume manually when required, add a strong passphrase before you remove the temporary passphrase; a minimal example follows this step. See the How to add a passphrase, key, or keyfile to an existing LUKS device article for more information.
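On an interactive system, one way to add such a passphrase is the cryptsetup luksAddKey command, which prompts for an existing passphrase and then for the new one. This is a sketch: the device path follows the Kickstart examples in this section, and in a non-interactive %post script you would need to supply the passphrases on standard input instead:
# cryptsetup luksAddKey /dev/vda2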
Call clevis luks bind to perform binding in the %post section. Afterward, remove the temporary password:
%post
clevis luks bind -y -k - -d /dev/vda2 \
tang '{"url":"http://tang.srv"}' <<< "temppass"
cryptsetup luksRemoveKey /dev/vda2 <<< "temppass"
dracut -fv --regenerate-all
%end
If your configuration relies on a Tang pin that requires network during early boot, or you use NBDE clients with static IP configurations, you have to modify the dracut command as described in Configuring manual enrollment of LUKS-encrypted volumes.
Note that the -y option for the clevis luks bind command is available from RHEL 8.3. In RHEL 8.2 and older, replace -y by -f in the clevis luks bind command and download the advertisement from the Tang server:
%post
curl -sfg http://tang.srv/adv -o adv.jws
clevis luks bind -f -k - -d /dev/vda2 \
tang '{"url":"http://tang.srv","adv":"adv.jws"}' <<< "temppass"
cryptsetup luksRemoveKey /dev/vda2 <<< "temppass"
dracut -fv --regenerate-all
%end
Warning: The cryptsetup luksRemoveKey command prevents any further administration of a LUKS2 device on which you apply it. You can recover a removed master key using the dmsetup command only for LUKS1 devices.
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
Additional resources
- clevis(1), clevis-luks-bind(1), cryptsetup(8), and dmsetup(8) man pages
- Installing Red Hat Enterprise Linux 9 using Kickstart
11.11. Configuring automated unlocking of a LUKS-encrypted removable storage device
Use this procedure to set up an automated unlocking process of a LUKS-encrypted USB storage device.
Procedure
To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, install the clevis-udisks2 package:
# dnf install clevis-udisks2
Reboot the system, and then perform the binding step using the clevis luks bind command as described in Configuring manual enrollment of LUKS-encrypted volumes, for example:
# clevis luks bind -d /dev/sdb1 tang '{"url":"http://tang.srv"}'
The LUKS-encrypted removable device can now be unlocked automatically in your GNOME desktop session. The device bound to a Clevis policy can also be unlocked by the clevis luks unlock command:
# clevis luks unlock -d /dev/sdb1
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
Additional resources
- clevis-luks-unlockers(7) man page
11.12. Deploying high-availability NBDE systems
Tang provides two methods for building a high-availability deployment:
- Client redundancy (recommended)
  Clients should be configured with the ability to bind to multiple Tang servers. In this setup, each Tang server has its own keys, and clients can decrypt by contacting a subset of these servers. Clevis already supports this workflow through its sss plug-in. Red Hat recommends this method for a high-availability deployment.
- Key sharing
  For redundancy purposes, more than one instance of Tang can be deployed. To set up a second or any subsequent instance, install the tang packages and copy the key directory to the new host using rsync over SSH. Note that Red Hat does not recommend this method because sharing keys increases the risk of key compromise and requires additional automation infrastructure.
11.12.1. High-availability NBDE using Shamir’s Secret Sharing
Shamir’s Secret Sharing (SSS) is a cryptographic scheme that divides a secret into several unique parts. To reconstruct the secret, a certain number of parts is required. This number is called the threshold, and SSS is also referred to as a thresholding scheme.
Clevis provides an implementation of SSS. It creates a key and divides it into a number of pieces. Each piece is encrypted by using another pin, possibly including SSS recursively. Additionally, you define the threshold t. If an NBDE deployment decrypts at least t pieces, then it recovers the encryption key and the decryption process succeeds. When Clevis detects a smaller number of parts than specified in the threshold, it prints an error message.
11.12.1.1. Example 1: Redundancy with two Tang servers
The following command decrypts a LUKS-encrypted device when at least one of two Tang servers is available:
# clevis luks bind -d /dev/sda1 sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}'
The previous command used the following configuration scheme:
{ "t":1, "pins":{ "tang":[ { "url":"http://tang1.srv" }, { "url":"http://tang2.srv" } ] } }
In this configuration, the SSS threshold t is set to 1, and the clevis luks bind command successfully reconstructs the secret if at least one of the two listed tang servers is available.
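You can test an SSS configuration on a plain-text message before binding a disk to it. The following sketch reuses the configuration above with the clevis encrypt sss sub-command; the server URLs are the placeholders from the example, and you must answer y to the key-trust prompts as in the earlier examples:
$ echo test | clevis encrypt sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}' | clevis decrypt
test
The round trip succeeds as long as at least one of the two servers answers.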
11.12.1.2. Example 2: Shared secret on a Tang server and a TPM device
The following command successfully decrypts a LUKS-encrypted device when both the tang server and the tpm2 device are available:
# clevis luks bind -d /dev/sda1 sss '{"t":2,"pins":{"tang":[{"url":"http://tang1.srv"}], "tpm2": {"pcr_ids":"0,7"}}}'
The configuration scheme with the SSS threshold t set to 2 is now:
{ "t":2, "pins":{ "tang":[ { "url":"http://tang1.srv" } ], "tpm2":{ "pcr_ids":"0,7" } } }
Additional resources
- tang(8) (section High Availability), clevis(1) (section Shamir’s Secret Sharing), and clevis-encrypt-sss(1) man pages
11.13. Deployment of virtual machines in a NBDE network
The clevis luks bind
command does not change the LUKS master key. This implies that if you create a LUKS-encrypted image for use in a virtual machine or cloud environment, all the instances that run this image share a master key. This is extremely insecure and should be avoided at all times.
This is not a limitation of Clevis but a design principle of LUKS. If your scenario requires having encrypted root volumes in a cloud, perform the installation process (usually using Kickstart) for each instance of Red Hat Enterprise Linux in the cloud as well. The images cannot be shared without also sharing a LUKS master key.
To deploy automated unlocking in a virtualized environment, use systems such as lorax
or virt-install
together with a Kickstart file (see Configuring automated enrollment of LUKS-encrypted volumes using Kickstart) or another automated provisioning tool to ensure that each encrypted VM has a unique master key.
Additional resources
- clevis-luks-bind(1) man page
11.14. Building automatically-enrollable VM images for cloud environments using NBDE
Deploying automatically-enrollable encrypted images in a cloud environment can provide a unique set of challenges. As in other virtualization environments, reducing the number of instances started from a single image is recommended to avoid sharing the LUKS master key.
Therefore, the best practice is to create customized images that are not shared in any public repository and that provide a base for the deployment of a limited number of instances. The exact number of instances to create should be defined by the deployment’s security policies and based on the risk tolerance associated with the LUKS master key attack vector.
To build LUKS-enabled automated deployments, systems such as Lorax or virt-install together with a Kickstart file should be used to ensure master key uniqueness during the image building process.
Cloud environments enable two Tang server deployment options which we consider here. First, the Tang server can be deployed within the cloud environment itself. Second, the Tang server can be deployed outside of the cloud on independent infrastructure with a VPN link between the two infrastructures.
Deploying Tang natively in the cloud does allow for easy deployment. However, given that it shares infrastructure with the data persistence layer of ciphertext of other systems, it may be possible for both the Tang server’s private key and the Clevis metadata to be stored on the same physical disk. Access to this physical disk permits a full compromise of the ciphertext data.
For this reason, Red Hat strongly recommends maintaining a physical separation between the location where the data is stored and the system where Tang is running. This separation between the cloud and the Tang server ensures that the Tang server’s private key cannot be accidentally combined with the Clevis metadata. It also provides local control of the Tang server if the cloud infrastructure is at risk.
11.15. Deploying Tang as a container
The tang
container image provides Tang-server decryption capabilities for Clevis clients that run either in OpenShift Container Platform (OCP) clusters or in separate virtual machines.
Prerequisites
- The podman package and its dependencies are installed on the system.
- You have logged in on the registry.redhat.io container catalog using the podman login registry.redhat.io command. See Red Hat Container Registry Authentication for more information.
- The Clevis client is installed on systems containing LUKS-encrypted volumes that you want to automatically unlock by using a Tang server.
Procedure
Pull the tang container image from the registry.redhat.io registry:
# podman pull registry.redhat.io/rhel9/tang
Run the container, specify its port, and specify the path to the Tang keys. The following example runs the tang container, publishes port 7500, and mounts the tang-keys volume at the /var/db/tang directory:
# podman run -d -p 7500:7500 -v tang-keys:/var/db/tang --name tang registry.redhat.io/rhel9/tang
Note that Tang uses port 80 by default but this may collide with other services such as the Apache HTTP server.
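If firewalld is running on the container host, you may also need to open the chosen port for the Clevis clients. A minimal sketch with the port used above:
# firewall-cmd --add-port=7500/tcp
# firewall-cmd --runtime-to-permanent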
Optional: For increased security, rotate the Tang keys periodically. You can use the tangd-rotate-keys script, for example:
# podman run --rm -v tang-keys:/var/db/tang registry.redhat.io/rhel9/tang tangd-rotate-keys -v -d /var/db/tang
Rotated key 'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk' -> .'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk'
Rotated key 'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk' -> .'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk'
Created new key GrMMX_WfdqomIU_4RyjpcdlXb0E.jwk
Created new key _dTTfn17sZZqVAp80u3ygFDHtjk.jwk
Keys rotated successfully.
Verification
On a system that contains LUKS-encrypted volumes for automated unlocking by the presence of the Tang server, check that the Clevis client can encrypt and decrypt a plain-text message using Tang:
# echo test | clevis encrypt tang '{"url":"http://localhost:7500"}' | clevis decrypt
The advertisement contains the following signing keys:
x1AIpc6WmnCU-CabD8_4q18vDuw
Do you wish to trust these keys? [ynYN] y
test
The previous example command shows the
test
string at the end of its output when a Tang server is available on the localhost URL and communicates through port 7500.
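If you want the container to start automatically at boot, one option on RHEL 9 is to generate a systemd unit for it with podman. This is a sketch, assuming the container created above is named tang; the manually started container is removed first to avoid a name conflict, and the podman-generate-systemd(1) man page describes the details:
# podman generate systemd --new --files --name tang
# podman stop tang && podman rm tang
# cp container-tang.service /etc/systemd/system/
# systemctl enable --now container-tang.service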
Additional resources
- podman(1), clevis(1), and tang(8) man pages
11.16. Introduction to the nbde_client and nbde_server System Roles (Clevis and Tang)
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems.
You can use Ansible roles for automated deployments of Policy-Based Decryption (PBD) solutions using Clevis and Tang. The rhel-system-roles
package contains these system roles, the related examples, and also the reference documentation.
The nbde_client
System Role enables you to deploy multiple Clevis clients in an automated way. Note that the nbde_client
role supports only Tang bindings, and you cannot use it for TPM2 bindings at the moment.
The nbde_client role requires volumes that are already encrypted using LUKS. This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers, that is, Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it. After removing the passphrase, you can unlock the volume only using NBDE. This is useful when a volume is initially encrypted using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not find any of these to be valid, it attempts to retrieve a passphrase from an existing binding.
PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings for the same device. The default slot is slot 1.
The nbde_client role also provides the state variable. Use the present value for either creating a new binding or updating an existing one. In contrast to the clevis luks bind command, you can use state: present also for overwriting an existing binding in its device slot. The absent value removes a specified binding.
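For illustration, a playbook fragment that removes an existing binding might look like the following. This is a sketch: the variable names follow the role’s README files referenced below, and the device path and slot number are assumptions for the example:
---
- hosts: all
  vars:
    nbde_client_bindings:
      - device: /dev/rhel/root
        state: absent
        slot: 1
  roles:
    - rhel-system-roles.nbde_client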
Using the nbde_server System Role, you can deploy and manage a Tang server as part of an automated disk encryption solution. This role supports the following features:
- Rotating Tang keys
- Deploying and backing up Tang keys
Additional resources
- For a detailed reference on Network-Bound Disk Encryption (NBDE) role variables, install the rhel-system-roles package, and see the README.md and README.html files in the /usr/share/doc/rhel-system-roles/nbde_client/ and /usr/share/doc/rhel-system-roles/nbde_server/ directories.
- For example system-roles playbooks, install the rhel-system-roles package, and see the /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directories.
- For more information about RHEL System Roles, see Preparing a control node and managed nodes to use RHEL System Roles.
11.17. Using the nbde_server System Role for setting up multiple Tang servers
Follow the steps to prepare and apply an Ansible playbook containing your Tang server settings.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the nbde_server System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
- On the control node, the ansible-core and rhel-system-roles packages are installed.
Note: RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Prepare your playbook containing settings for Tang servers. You can either start from scratch or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directory.
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/simple_deploy.yml ./my-tang-playbook.yml
Edit the playbook in a text editor of your choice, for example:
# vi my-tang-playbook.yml
Add the required parameters. The following example playbook ensures the deployment of your Tang server and a key rotation:
---
- hosts: all
  vars:
    nbde_server_rotate_keys: yes
    nbde_server_manage_firewall: true
    nbde_server_manage_selinux: true
  roles:
    - rhel-system-roles.nbde_server
Note: Because nbde_server_manage_firewall and nbde_server_manage_selinux are both set to true, the nbde_server role uses the firewall and selinux roles to manage the ports used by the nbde_server role.
Apply the finished playbook:
# ansible-playbook -i inventory-file my-tang-playbook.yml
Where:
- inventory-file is the inventory file.
- my-tang-playbook.yml is the playbook you use.
To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the systems where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Additional resources
- For more information, install the rhel-system-roles package, and see the /usr/share/doc/rhel-system-roles/nbde_server/ and /usr/share/ansible/roles/rhel-system-roles.nbde_server/ directories.
11.18. Using the nbde_client System Role for setting up multiple Clevis clients
Follow the steps to prepare and apply an Ansible playbook containing your Clevis client settings.
The nbde_client
System Role supports only Tang bindings. This means that you cannot use it for TPM2 bindings at the moment.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the nbde_client System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
- The Ansible Core package is installed on the control machine.
- The rhel-system-roles package is installed on the system from which you want to run the playbook.
Procedure
Prepare your playbook containing settings for Clevis clients. You can either start from scratch or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/ directory.
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/high_availability.yml ./my-clevis-playbook.yml
Edit the playbook in a text editor of your choice, for example:
# vi my-clevis-playbook.yml
Add the required parameters. The following example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available:
---
- hosts: all
  vars:
    nbde_client_bindings:
      - device: /dev/rhel/root
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
      - device: /dev/rhel/swap
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
  roles:
    - rhel-system-roles.nbde_client
Apply the finished playbook:
# ansible-playbook -i host1,host2,host3 my-clevis-playbook.yml
To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the system where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Additional resources
- For details about the parameters and additional information about the NBDE Client System Role, install the rhel-system-roles package, and see the /usr/share/doc/rhel-system-roles/nbde_client/ and /usr/share/ansible/roles/rhel-system-roles.nbde_client/ directories.
Chapter 12. Auditing the system
Audit does not provide additional security to your system; rather, it can be used to discover violations of security policies used on your system. These violations can further be prevented by additional security measures such as SELinux.
12.1. Linux Audit
The Linux Audit system provides a way to track security-relevant information about your system. Based on pre-configured rules, Audit generates log entries to record as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine the violator of the security policy and the actions they performed.
The following list summarizes some of the information that Audit is capable of recording in its log files:
- Date and time, type, and outcome of an event.
- Sensitivity labels of subjects and objects.
- Association of an event with the identity of the user who triggered the event.
- All modifications to Audit configuration and attempts to access Audit log files.
- All uses of authentication mechanisms, such as SSH, Kerberos, and others.
- Changes to any trusted database, such as /etc/passwd.
- Attempts to import or export information into or from the system.
- Include or exclude events based on user identity, subject and object labels, and other attributes.
The use of the Audit system is also a requirement for a number of security-related certifications. Audit is designed to meet or exceed the requirements of the following certifications or compliance guides:
- Controlled Access Protection Profile (CAPP)
- Labeled Security Protection Profile (LSPP)
- Rule Set Base Access Control (RSBAC)
- National Industrial Security Program Operating Manual (NISPOM)
- Federal Information Security Management Act (FISMA)
- Payment Card Industry — Data Security Standard (PCI-DSS)
- Security Technical Implementation Guides (STIG)
Audit has also been:
- Evaluated by National Information Assurance Partnership (NIAP) and Best Security Industries (BSI).
- Certified to LSPP/CAPP/RSBAC/EAL4+ on Red Hat Enterprise Linux 5.
- Certified to Operating System Protection Profile / Evaluation Assurance Level 4+ (OSPP/EAL4+) on Red Hat Enterprise Linux 6.
Use Cases
- Watching file access
  Audit can track whether a file or a directory has been accessed, modified, executed, or the file’s attributes have been changed. This is useful, for example, to detect access to important files and have an Audit trail available in case one of these files is corrupted.
- Monitoring system calls
  Audit can be configured to generate a log entry every time a particular system call is used. This can be used, for example, to track changes to the system time by monitoring the settimeofday, clock_adjtime, and other time-related system calls.
- Recording commands run by a user
  Audit can track whether a file has been executed, so rules can be defined to record every execution of a particular command. For example, a rule can be defined for every executable in the /bin directory. The resulting log entries can then be searched by user ID to generate an audit trail of executed commands per user.
- Recording execution of system pathnames
  Aside from watching file access which translates a path to an inode at rule invocation, Audit can now watch the execution of a path even if it does not exist at rule invocation, or if the file is replaced after rule invocation. This allows rules to continue to work after upgrading a program executable or before it is even installed.
- Recording security events
  The pam_faillock authentication module is capable of recording failed login attempts. Audit can be set up to record failed login attempts as well and provides additional information about the user who attempted to log in.
- Searching for events
  Audit provides the ausearch utility, which can be used to filter the log entries and provide a complete audit trail based on several conditions.
- Running summary reports
  The aureport utility can be used to generate, among other things, daily reports of recorded events. A system administrator can then analyze these reports and investigate suspicious activity further.
- Monitoring network access
  The nftables, iptables, and ebtables utilities can be configured to trigger Audit events, allowing system administrators to monitor network access.
System performance may be affected depending on the amount of information that is collected by Audit.
12.2. Audit system architecture
The Audit system consists of two main parts: the user-space applications and utilities, and the kernel-side system call processing. The kernel component receives system calls from user-space applications and filters them through one of the following filters: user, task, fstype, or exit.
Once a system call passes the exclude filter, it is sent through one of the aforementioned filters, which, based on the Audit rule configuration, sends it to the Audit daemon for further processing.
The user-space Audit daemon collects the information from the kernel and creates entries in a log file. Other Audit user-space utilities interact with the Audit daemon, the kernel Audit component, or the Audit log files:
- auditctl — the Audit control utility interacts with the kernel Audit component to manage rules and to control many settings and parameters of the event generation process.
- The remaining Audit utilities take the contents of the Audit log files as input and generate output based on the user’s requirements. For example, the aureport utility generates a report of all recorded events.
In RHEL 9, the Audit dispatcher daemon (audisp
) functionality is integrated in the Audit daemon (auditd
). Configuration files of plugins for the interaction of real-time analytical programs with Audit events are located in the /etc/audit/plugins.d/
directory by default.
12.3. Configuring auditd for a secure environment
The default auditd
configuration should be suitable for most environments. However, if your environment must meet strict security policies, the following settings are suggested for the Audit daemon configuration in the /etc/audit/auditd.conf
file:
- log_file
  The directory that holds the Audit log files (usually /var/log/audit/) should reside on a separate mount point. This prevents other processes from consuming space in this directory and provides accurate detection of the remaining space for the Audit daemon.
- max_log_file
  Specifies the maximum size of a single Audit log file in megabytes. It must be set to make full use of the available space on the partition that holds the Audit log files. The value given must be numeric.
- max_log_file_action
  Decides what action is taken once the limit set in max_log_file is reached. It should be set to keep_logs to prevent Audit log files from being overwritten.
- space_left
  Specifies the amount of free space left on the disk for which an action that is set in the space_left_action parameter is triggered. It must be set to a number that gives the administrator enough time to respond and free up disk space. The space_left value depends on the rate at which the Audit log files are generated. If the value of space_left is specified as a whole number, it is interpreted as an absolute size in megabytes (MiB). If the value is specified as a number between 1 and 99 followed by a percentage sign (for example, 5%), the Audit daemon calculates the absolute size in megabytes based on the size of the file system containing log_file.
- space_left_action
  It is recommended to set the space_left_action parameter to email or exec with an appropriate notification method.
- admin_space_left
  Specifies the absolute minimum amount of free space for which an action that is set in the admin_space_left_action parameter is triggered. It must be set to a value that leaves enough space to log actions performed by the administrator. The numeric value for this parameter should be lower than the number for space_left. You can also append a percent sign (for example, 1%) to the number to have the Audit daemon calculate the number based on the disk partition size.
- admin_space_left_action
  Should be set to single to put the system into single-user mode and allow the administrator to free up some disk space.
- disk_full_action
  Specifies an action that is triggered when no free space is available on the partition that holds the Audit log files. It must be set to halt or single. This ensures that the system is either shut down or operating in single-user mode when Audit can no longer log events.
- disk_error_action
  Specifies an action that is triggered in case an error is detected on the partition that holds the Audit log files. It must be set to syslog, single, or halt, depending on your local security policies regarding the handling of hardware malfunctions.
- flush
  Should be set to incremental_async. It works in combination with the freq parameter, which determines how many records can be sent to the disk before forcing a hard synchronization with the hard drive. The freq parameter should be set to 100. These parameters assure that Audit event data is synchronized with the log files on the disk while keeping good performance for bursts of activity.
The remaining configuration options should be set according to your local security policy.
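For reference, the recommendations above translate into an /etc/audit/auditd.conf fragment similar to the following sketch. The space_left and admin_space_left values are illustrative assumptions and depend on how quickly your systems generate Audit records; action_mail_acct names the account notified by the email action:
log_file = /var/log/audit/audit.log
max_log_file = 8
max_log_file_action = keep_logs
space_left = 25%
space_left_action = email
action_mail_acct = root
admin_space_left = 10%
admin_space_left_action = single
disk_full_action = halt
disk_error_action = syslog
flush = incremental_async
freq = 100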
12.4. Starting and controlling auditd
After auditd
is configured, start the service to collect Audit information and store it in the log files. Use the following command as the root user to start auditd
:
# service auditd start
To configure auditd
to start at boot time:
# systemctl enable auditd
You can temporarily disable auditd
with the # auditctl -e 0
command and re-enable it with # auditctl -e 1
.
A number of other actions can be performed on auditd
using the service auditd action
command, where action can be one of the following:
- stop
  Stops auditd.
- restart
  Restarts auditd.
- reload or force-reload
  Reloads the configuration of auditd from the /etc/audit/auditd.conf file.
- rotate
  Rotates the log files in the /var/log/audit/ directory.
- resume
  Resumes logging of Audit events after it has been previously suspended, for example, when there is not enough free space on the disk partition that holds the Audit log files.
- condrestart or try-restart
  Restarts auditd only if it is already running.
- status
  Displays the running status of auditd.
The service
command is the only way to correctly interact with the auditd
daemon. You need to use the service
command so that the auid
value is properly recorded. You can use the systemctl
command only for two actions: enable
and status
.
12.5. Understanding Audit log files
By default, the Audit system stores log entries in the /var/log/audit/audit.log
file; if log rotation is enabled, rotated audit.log
files are stored in the same directory.
Add the following Audit rule to log every attempt to read or modify the /etc/ssh/sshd_config
file:
# auditctl -w /etc/ssh/sshd_config -p warx -k sshd_config
If the auditd
daemon is running, for example, using the following command creates a new event in the Audit log file:
$ cat /etc/ssh/sshd_config
This event in the audit.log
file looks as follows:
type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="cat" exe="/bin/cat" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sshd_config"
type=CWD msg=audit(1364481363.243:24287): cwd="/home/shadowman"
type=PATH msg=audit(1364481363.243:24287): item=0 name="/etc/ssh/sshd_config" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0
type=PROCTITLE msg=audit(1364481363.243:24287) : proctitle=636174002F6574632F7373682F737368645F636F6E666967
The above event consists of four records, which share the same time stamp and serial number. Records always start with the type=
keyword. Each record consists of several name=value
pairs separated by a white space or a comma. A detailed analysis of the above event follows:
First Record
- type=SYSCALL
  The type field contains the type of the record. In this example, the SYSCALL value specifies that this record was triggered by a system call to the kernel.
- msg=audit(1364481363.243:24287):
  The msg field records:
  - a time stamp and a unique ID of the record in the form audit(time_stamp:ID). Multiple records can share the same time stamp and ID if they were generated as part of the same Audit event. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970.
  - various event-specific name=value pairs provided by the kernel or user-space applications.
- arch=c000003e
  The arch field contains information about the CPU architecture of the system. The value, c000003e, is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The c000003e value is interpreted as x86_64.
- syscall=2
  The syscall field records the type of the system call that was sent to the kernel. The value, 2, can be matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2 is the open system call. Note that the ausyscall utility allows you to convert system call numbers to their human-readable equivalents. Use the ausyscall --dump command to display a listing of all system calls along with their numbers. For more information, see the ausyscall(8) man page.
- success=no
  The success field records whether the system call recorded in that particular event succeeded or failed. In this case, the call did not succeed.
- exit=-13
  The exit field contains a value that specifies the exit code returned by the system call. This value varies for a different system call. You can interpret the value to its human-readable equivalent with the following command:
  # ausearch --interpret --exit -13
  Note that the previous example assumes that your Audit log contains an event that failed with exit code -13.
- a0=7fffd19c5592, a1=0, a2=7fffd19c4b50, a3=a
  The a0 to a3 fields record the first four arguments, encoded in hexadecimal notation, of the system call in this event. These arguments depend on the system call that is used; they can be interpreted by the ausearch utility.
- items=1
  The items field contains the number of PATH auxiliary records that follow the syscall record.
- ppid=2686
  The ppid field records the Parent Process ID (PPID). In this case, 2686 was the PPID of the parent process such as bash.
- pid=3538
  The pid field records the Process ID (PID). In this case, 3538 was the PID of the cat process.
- auid=1000
  The auid field records the Audit user ID, that is the loginuid. This ID is assigned to a user upon login and is inherited by every process even when the user’s identity changes, for example, by switching user accounts with the su - john command.
- uid=1000
  The uid field records the user ID of the user who started the analyzed process. The user ID can be interpreted into user names with the following command: ausearch -i --uid UID.
- gid=1000
  The gid field records the group ID of the user who started the analyzed process.
- euid=1000
  The euid field records the effective user ID of the user who started the analyzed process.
- suid=1000
  The suid field records the set user ID of the user who started the analyzed process.
- fsuid=1000
  The fsuid field records the file system user ID of the user who started the analyzed process.
- egid=1000
  The egid field records the effective group ID of the user who started the analyzed process.
- sgid=1000
  The sgid field records the set group ID of the user who started the analyzed process.
- fsgid=1000
  The fsgid field records the file system group ID of the user who started the analyzed process.
- tty=pts0
  The tty field records the terminal from which the analyzed process was invoked.
- ses=1
  The ses field records the session ID of the session from which the analyzed process was invoked.
- comm="cat"
  The comm field records the command-line name of the command that was used to invoke the analyzed process. In this case, the cat command was used to trigger this Audit event.
- exe="/bin/cat"
  The exe field records the path to the executable that was used to invoke the analyzed process.
- subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
  The subj field records the SELinux context with which the analyzed process was labeled at the time of execution.
- key="sshd_config"
  The key field records the administrator-defined string associated with the rule that generated this event in the Audit log.
Second Record
- type=CWD
  In the second record, the type field value is CWD — current working directory. This type is used to record the working directory from which the process that invoked the system call specified in the first record was executed.
  The purpose of this record is to record the current process’s location in case a relative path winds up being captured in the associated PATH record. This way the absolute path can be reconstructed.
- msg=audit(1364481363.243:24287)
  The msg field holds the same time stamp and ID value as the value in the first record. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970.
- cwd="/home/user_name"
  The cwd field contains the path to the directory in which the system call was invoked.
Third Record
- type=PATH
  In the third record, the type field value is PATH. An Audit event contains a PATH-type record for every path that is passed to the system call as an argument. In this Audit event, only one path (/etc/ssh/sshd_config) was used as an argument.
- msg=audit(1364481363.243:24287):
  The msg field holds the same time stamp and ID value as the value in the first and second record.
- item=0
  The item field indicates which item, of the total number of items referenced in the SYSCALL type record, the current record is. This number is zero-based; a value of 0 means it is the first item.
- name="/etc/ssh/sshd_config"
  The name field records the path of the file or directory that was passed to the system call as an argument. In this case, it was the /etc/ssh/sshd_config file.
- inode=409248
  The inode field contains the inode number associated with the file or directory recorded in this event. The following command displays the file or directory that is associated with the 409248 inode number:
  # find / -inum 409248 -print
  /etc/ssh/sshd_config
- dev=fd:00
  The dev field specifies the major and minor ID, in hexadecimal notation, of the device that contains the file or directory recorded in this event. In this case, fd corresponds to major number 253, which in this example is the device-mapper volume that contains the file system.
- mode=0100600
  The mode field records the file or directory permissions, encoded in numerical notation as returned by the stat command in the st_mode field. See the stat(2) man page for more information. In this case, 0100600 can be interpreted as -rw-------, meaning that only the root user has read and write permissions to the /etc/ssh/sshd_config file.
- ouid=0
  The ouid field records the object owner’s user ID.
- ogid=0
  The ogid field records the object owner’s group ID.
- rdev=00:00
  The rdev field contains a recorded device identifier for special files only. In this case, it is not used as the recorded file is a regular file.
- obj=system_u:object_r:etc_t:s0
  The obj field records the SELinux context with which the recorded file or directory was labeled at the time of execution.
- nametype=NORMAL
  The nametype field records the intent of each path record’s operation in the context of a given syscall.
- cap_fp=none
  The cap_fp field records data related to the setting of a permitted file system-based capability of the file or directory object.
- cap_fi=none
  The cap_fi field records data related to the setting of an inherited file system-based capability of the file or directory object.
- cap_fe=0
  The cap_fe field records the setting of the effective bit of the file system-based capability of the file or directory object.
- cap_fver=0
  The cap_fver field records the version of the file system-based capability of the file or directory object.
Fourth Record
- type=PROCTITLE
  The type field contains the type of the record. In this example, the PROCTITLE value specifies that this record gives the full command line that triggered this Audit event, triggered by a system call to the kernel.
- proctitle=636174002F6574632F7373682F737368645F636F6E666967
  The proctitle field records the full command line of the command that was used to invoke the analyzed process. The field is encoded in hexadecimal notation to not allow the user to influence the Audit log parser. The text decodes to the command that triggered this Audit event. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The 636174002F6574632F7373682F737368645F636F6E666967 value is interpreted as cat /etc/ssh/sshd_config.
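You can also decode such a value manually. The following sketch assumes the xxd utility is available (provided by the vim-common package); the tr command replaces the NUL separators between the command-line arguments with spaces:
$ echo 636174002F6574632F7373682F737368645F636F6E666967 | xxd -r -p | tr '\0' ' '
cat /etc/ssh/sshd_config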
12.6. Using auditctl for defining and executing Audit rules
The Audit system operates on a set of rules that define what is captured in the log files. Audit rules can be set either on the command line using the auditctl
utility or in the /etc/audit/rules.d/
directory.
The auditctl
command enables you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged.
File-system rules examples
To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file:
# auditctl -w /etc/passwd -p wa -k passwd_changes
To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory:
# auditctl -w /etc/selinux/ -p wa -k selinux_changes
System-call rules examples
To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture:
# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change
To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 1000 or larger:
# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete
Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set.
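To check that a rule such as the delete rule above works, you can trigger a matching event and search for it by its key. This is an illustrative test as a user with a login UID of 1000 or larger; the file name is arbitrary:
$ touch /tmp/audit-test && rm /tmp/audit-test
# ausearch -i -k delete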
Executable-file rules
To define a rule that logs all execution of the /bin/id
program, execute the following command:
# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id
Additional resources
- auditctl(8) man page.
12.7. Defining persistent Audit rules
To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/rules.d/audit.rules
file or use the augenrules
program that reads rules located in the /etc/audit/rules.d/
directory.
Note that the /etc/audit/audit.rules
file is generated whenever the auditd
service starts. Files in /etc/audit/rules.d/
use the same auditctl
command-line syntax to specify the rules. Empty lines and text following a hash sign (#) are ignored.
Furthermore, you can use the auditctl
command to read rules from a specified file using the -R
option, for example:
# auditctl -R /usr/share/audit/sample-rules/30-stig.rules
12.8. Using pre-configured rules files
In the /usr/share/audit/sample-rules
directory, the audit
package provides a set of pre-configured rules files according to various certification standards:
- 30-nispom.rules
  Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual.
- 30-ospp-v42*.rules
  Audit rule configuration that meets the requirements defined in the OSPP (Protection Profile for General Purpose Operating Systems) profile version 4.2.
- 30-pci-dss-v31.rules
  Audit rule configuration that meets the requirements set by Payment Card Industry Data Security Standard (PCI DSS) v3.1.
- 30-stig.rules
  Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG).
To use these configuration files, copy them to the /etc/audit/rules.d/
directory and use the augenrules --load
command, for example:
# cd /usr/share/audit/sample-rules/
# cp 10-base-config.rules 30-stig.rules 31-privileged.rules 99-finalize.rules /etc/audit/rules.d/
# augenrules --load
You can order Audit rules using a numbering scheme. See the /usr/share/audit/sample-rules/README-rules
file for more information.
Additional resources
- audit.rules(7) man page.
12.9. Using augenrules to define persistent rules
The augenrules
script reads rules located in the /etc/audit/rules.d/
directory and compiles them into an audit.rules
file. This script processes all files that end with .rules
in a specific order based on their natural sort order. The files in this directory are organized into groups with the following meanings:
- 10 - Kernel and auditctl configuration
- 20 - Rules that could match general rules but you want a different match
- 30 - Main rules
- 40 - Optional rules
- 50 - Server-specific rules
- 70 - System local rules
- 90 - Finalize (immutable)
The rules are not meant to be used all at once. They are pieces of a policy that should be thought out and individual files copied to /etc/audit/rules.d/
. For example, to set a system up in the STIG configuration, copy rules 10-base-config
, 30-stig
, 31-privileged
, and 99-finalize
.
Once you have the rules in the /etc/audit/rules.d/
directory, load them by running the augenrules
script with the --load
directive:
# augenrules --load
/sbin/augenrules: No change
No rules
enabled 1
failure 1
pid 742
rate_limit 0
...
Additional resources
- audit.rules(7) and augenrules(8) man pages.
12.10. Disabling augenrules
Use the following steps to disable the augenrules
utility. This switches Audit to use rules defined in the /etc/audit/audit.rules
file.
Procedure
Copy the /usr/lib/systemd/system/auditd.service file to the /etc/systemd/system/ directory:
# cp -f /usr/lib/systemd/system/auditd.service /etc/systemd/system/
Edit the /etc/systemd/system/auditd.service file in a text editor of your choice, for example:
# vi /etc/systemd/system/auditd.service
Comment out the line containing augenrules, and uncomment the line containing the auditctl -R command:
#ExecStartPost=-/sbin/augenrules --load
ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
Reload the systemd daemon to fetch changes in the auditd.service file:
# systemctl daemon-reload
Restart the auditd service:
# service auditd restart
Additional resources
- augenrules(8) and audit.rules(7) man pages.
- Auditd service restart overrides changes made to /etc/audit/audit.rules.
12.11. Setting up Audit to monitor software updates
You can use the pre-configured rule 44-installers.rules
to configure Audit to monitor the following utilities that install software:
- dnf [3]
- yum
- pip
- npm
- cpan
- gem
- luarocks
To monitor the rpm
utility, install the rpm-plugin-audit
package. Audit will then generate SOFTWARE_UPDATE
events when it installs or updates a package. You can list these events by entering ausearch -m SOFTWARE_UPDATE
on the command line.
Pre-configured rule files cannot be used on systems with the ppc64le
and aarch64
architectures.
Prerequisites
- auditd is configured in accordance with the settings provided in Configuring auditd for a secure environment.
Procedure
Copy the pre-configured rule file 44-installers.rules from the /usr/share/audit/sample-rules/ directory to the /etc/audit/rules.d/ directory:
# cp /usr/share/audit/sample-rules/44-installers.rules /etc/audit/rules.d/
Load the audit rules:
# augenrules --load
Verification
List the loaded rules:
# auditctl -l
-p x-w /usr/bin/dnf-3 -k software-installer
-p x-w /usr/bin/yum -k software-installer
-p x-w /usr/bin/pip -k software-installer
-p x-w /usr/bin/npm -k software-installer
-p x-w /usr/bin/cpan -k software-installer
-p x-w /usr/bin/gem -k software-installer
-p x-w /usr/bin/luarocks -k software-installer
Perform an installation, for example:
# dnf reinstall -y vim-enhanced
Search the Audit log for recent installation events, for example:
# ausearch -ts recent -k software-installer
––––
time->Thu Dec 16 10:33:46 2021
type=PROCTITLE msg=audit(1639668826.074:298): proctitle=2F7573722F6C6962657865632F706C6174666F726D2D707974686F6E002F7573722F62696E2F646E66007265696E7374616C6C002D790076696D2D656E68616E636564
type=PATH msg=audit(1639668826.074:298): item=2 name="/lib64/ld-linux-x86-64.so.2" inode=10092 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:ld_so_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(1639668826.074:298): item=1 name="/usr/libexec/platform-python" inode=4618433 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:bin_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(1639668826.074:298): item=0 name="/usr/bin/dnf" inode=6886099 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:rpm_exec_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=CWD msg=audit(1639668826.074:298): cwd="/root"
type=EXECVE msg=audit(1639668826.074:298): argc=5 a0="/usr/libexec/platform-python" a1="/usr/bin/dnf" a2="reinstall" a3="-y" a4="vim-enhanced"
type=SYSCALL msg=audit(1639668826.074:298): arch=c000003e syscall=59 success=yes exit=0 a0=55c437f22b20 a1=55c437f2c9d0 a2=55c437f2aeb0 a3=8 items=3 ppid=5256 pid=5375 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3 comm="dnf" exe="/usr/libexec/platform-python3.6" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="software-installer"
Because dnf
is a symlink in RHEL, the path in the dnf
Audit rule must include the target of the symlink. To receive correct Audit events, modify the 44-installers.rules
file by changing the path=/usr/bin/dnf
path to /usr/bin/dnf-3
.
12.12. Monitoring user login times with Audit
To monitor which users logged in at specific times, you do not need to configure Audit in any special way. You can use the ausearch
or aureport
tools, which provide different ways of presenting the same information.
Prerequisites
-
auditd
is configured in accordance with the settings provided in Configuring auditd for a secure environment .
Procedure
To display user login times, use any one of the following commands:
Search the audit log for the
USER_LOGIN
message type:# ausearch -m USER_LOGIN -ts '12/02/2020' '18:00:00' -sv no time->Mon Nov 22 07:33:22 2021 type=USER_LOGIN msg=audit(1637584402.416:92): pid=1939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="(unknown)" exe="/usr/sbin/sshd" hostname=? addr=10.37.128.108 terminal=ssh res=failed'
-
You can specify the date and time with the
-ts
option. If you do not use this option,ausearch
provides results from today, and if you omit time,ausearch
provides results from midnight. -
You can use the
-sv yes
option to display only successful login attempts and -sv no
to display only unsuccessful login attempts, as in the example below.
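For example, the following illustrative query lists failed login attempts recorded since yesterday:
# ausearch -m USER_LOGIN -ts yesterday -sv no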
Pipe the raw output of the
ausearch
command into theaulast
utility, which displays the output in a format similar to the output of thelast
command. For example:# ausearch --raw | aulast --stdin root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.22.16.106 Mon Nov 22 07:40 - 07:40 (00:00) reboot system boot 4.18.0-348.6.el8 Mon Nov 22 07:33
Display the list of login events by using the
aureport
command with the--login -i
options.# aureport --login -i Login Report ============================================ # date time auid host term exe success event ============================================ 1. 11/16/2021 13:11:30 root 10.40.192.190 ssh /usr/sbin/sshd yes 6920 2. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6925 3. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6930 4. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6935 5. 11/16/2021 13:11:33 root 10.40.192.190 ssh /usr/sbin/sshd yes 6940 6. 11/16/2021 13:11:33 root 10.40.192.190 /dev/pts/0 /usr/sbin/sshd yes 6945
Additional resources
-
The
ausearch(8)
man page. -
The
aulast(8)
man page. -
The
aureport(8)
man page.
12.13. Additional resources
- The RHEL Audit System Reference Knowledgebase article.
- The Auditd execution options in a container Knowledgebase article.
- The Linux Audit Documentation Project page.
-
The
audit
package provides documentation in the/usr/share/doc/audit/
directory. -
auditd(8)
,auditctl(8)
,ausearch(8)
,audit.rules(7)
,audispd.conf(5)
,audispd(8)
,auditd.conf(5)
,ausearch-expression(5)
,aulast(8)
,aulastlog(8)
,aureport(8)
,ausyscall(8)
,autrace(8)
, andauvirt(8)
man pages.
Chapter 13. Blocking and allowing applications using fapolicyd
Setting and enforcing a policy that either allows or denies application execution based on a rule set efficiently prevents the execution of unknown and potentially malicious software.
13.1. Introduction to fapolicyd
The fapolicyd
software framework controls the execution of applications based on a user-defined policy. This is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system.
The fapolicyd
framework provides the following components:
-
fapolicyd
service -
fapolicyd
command-line utilities -
fapolicyd
RPM plugin -
fapolicyd
rule language -
fagenrules
script
The administrator can define the allow
and deny
execution rules for any application with the possibility of auditing based on a path, hash, MIME type, or trust.
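For illustration, a minimal sketch of such rules might look like the following; the path and MIME type mirror the examples used later in this chapter, and the fapolicyd.rules(5) man page describes the authoritative syntax:
allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0
deny_audit perm=execute all : all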
The fapolicyd
framework introduces the concept of trust. An application is trusted when it is properly installed by the system package manager, and therefore it is registered in the system RPM database. The fapolicyd
daemon uses the RPM database as a list of trusted binaries and scripts. The fapolicyd
RPM plugin registers any system update that is handled by either the DNF package manager or the RPM Package Manager. The plugin notifies the fapolicyd
daemon about changes in this database. Other ways of adding applications require the creation of custom rules and restarting the fapolicyd
service.
The fapolicyd
service configuration is located in the /etc/fapolicyd/
directory with the following structure:
-
The
/etc/fapolicyd/fapolicyd.trust
file contains a list of trusted files. You can also use multiple trust files in the/etc/fapolicyd/trust.d/
directory. -
The
/etc/fapolicyd/rules.d/
directory for files containingallow
anddeny
execution rules. Thefagenrules
script merges these component rules files to the/etc/fapolicyd/compiled.rules
file. -
The
fapolicyd.conf
file contains the daemon’s configuration options. This file is useful primarily for performance-tuning purposes.
Rules in /etc/fapolicyd/rules.d/
are organized in several files, each representing a different policy goal. The numbers at the beginning of the corresponding file names determine the order in /etc/fapolicyd/compiled.rules
:
- 10 - language rules
- 20 - Dracut-related rules
- 21 - rules for updaters
- 30 - patterns
- 40 - ELF rules
- 41 - shared objects rules
- 42 - trusted ELF rules
- 70 - trusted language rules
- 72 - shell rules
- 90 - deny execute rules
- 95 - allow open rules
You can use one of the following methods for fapolicyd
integrity checking:
- file-size checking
- comparing SHA-256 hashes
- Integrity Measurement Architecture (IMA) subsystem
By default, fapolicyd
does not perform integrity checking. Integrity checking based on the file size is fast, but an attacker can replace the content of the file and preserve its byte size. Computing and checking SHA-256 checksums is more secure, but it affects the performance of the system. The integrity = ima
option in fapolicyd.conf
requires support for file extended attributes (also known as xattr) on all file systems containing executable files.
Additional resources
-
fapolicyd(8)
,fapolicyd.rules(5)
,fapolicyd.conf(5)
,fapolicyd.trust(13)
,fagenrules(8)
, andfapolicyd-cli(1)
man pages. - The Enhancing security with the kernel integrity subsystem chapter in the Managing, monitoring, and updating the kernel document.
-
The documentation installed with the
fapolicyd
package in the/usr/share/doc/fapolicyd/
directory and the/usr/share/fapolicyd/sample-rules/README-rules
file.
13.2. Deploying fapolicyd
To deploy the fapolicyd
framework in RHEL:
Procedure
Install the
fapolicyd
package:# dnf install fapolicyd
Enable and start the
fapolicyd
service:# systemctl enable --now fapolicyd
Verification
Verify that the
fapolicyd
service is running correctly:# systemctl status fapolicyd ● fapolicyd.service - File Access Policy Daemon Loaded: loaded (/usr/lib/systemd/system/fapolicyd.service; enabled; vendor p> Active: active (running) since Tue 2019-10-15 18:02:35 CEST; 55s ago Process: 8818 ExecStart=/usr/sbin/fapolicyd (code=exited, status=0/SUCCESS) Main PID: 8819 (fapolicyd) Tasks: 4 (limit: 11500) Memory: 78.2M CGroup: /system.slice/fapolicyd.service └─8819 /usr/sbin/fapolicyd Oct 15 18:02:35 localhost.localdomain systemd[1]: Starting File Access Policy D> Oct 15 18:02:35 localhost.localdomain fapolicyd[8819]: Initialization of the da> Oct 15 18:02:35 localhost.localdomain fapolicyd[8819]: Reading RPMDB into memory Oct 15 18:02:35 localhost.localdomain systemd[1]: Started File Access Policy Da> Oct 15 18:02:36 localhost.localdomain fapolicyd[8819]: Creating database
Log in as a user without root privileges, and check that
fapolicyd
is working, for example:$ cp /bin/ls /tmp $ /tmp/ls bash: /tmp/ls: Operation not permitted
13.3. Marking files as trusted using an additional source of trust
The fapolicyd
framework trusts files contained in the RPM database. You can mark additional files as trusted by adding the corresponding entries to the /etc/fapolicyd/fapolicyd.trust
plain-text file or the /etc/fapolicyd/trust.d/
directory, which supports separating a list of trusted files into more files. You can modify fapolicyd.trust
or the files in /etc/fapolicyd/trust.d
either directly using a text editor or through fapolicyd-cli
commands.
Marking files as trusted using fapolicyd.trust
or trust.d/
is better than writing custom fapolicyd
rules due to performance reasons.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Copy your custom binary to the required directory, for example:
$ cp /bin/ls /tmp $ /tmp/ls bash: /tmp/ls: Operation not permitted
Mark your custom binary as trusted, and store the corresponding entry to the
myapp
file in/etc/fapolicyd/trust.d/
:# fapolicyd-cli --file add /tmp/ls --trust-file myapp
-
If you skip the
--trust-file
option, then the previous command adds the corresponding line to/etc/fapolicyd/fapolicyd.trust
. -
To mark all existing files in a directory as trusted, provide the directory path as an argument of the
--file
option, for example:fapolicyd-cli --file add /tmp/my_bin_dir/ --trust-file myapp
.
Update the
fapolicyd
database:# fapolicyd-cli --update
Changing the content of a trusted file or directory changes their checksum, and therefore fapolicyd
no longer considers them trusted.
To make the new content trusted again, refresh the file trust database by using the fapolicyd-cli --file update
command. If you do not provide any argument, the entire database refreshes. Alternatively, you can specify a path to a specific file or directory. Then, update the database by using fapolicyd-cli --update
.
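For example, after changing the content of a single trusted file, the refresh might look like this; the path is illustrative:
# fapolicyd-cli --file update /tmp/ls
# fapolicyd-cli --update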
Verification
Check that your custom binary can now be executed, for example:
$ /tmp/ls ls
Additional resources
-
fapolicyd.trust(13)
man page.
13.4. Adding custom allow and deny rules for fapolicyd
The default set of rules in the fapolicyd
package does not affect system functions. For custom scenarios, such as storing binaries and scripts in a non-standard directory or adding applications without the dnf
or rpm
installers, you must either mark additional files as trusted or add new custom rules.
For basic scenarios, prefer Marking files as trusted using an additional source of trust. In more advanced scenarios, such as allowing the execution of a custom binary only for specific user and group identifiers, add new custom rules to the /etc/fapolicyd/rules.d/
directory.
The following steps demonstrate adding a new rule to allow a custom binary.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Copy your custom binary to the required directory, for example:
$ cp /bin/ls /tmp $ /tmp/ls bash: /tmp/ls: Operation not permitted
Stop the
fapolicyd
service:# systemctl stop fapolicyd
Use debug mode to identify a corresponding rule. Because the output of the
fapolicyd --debug
command is verbose and you can stop it only by pressing Ctrl+C or killing the corresponding process, redirect the error output to a file. In this case, you can limit the output only to access denials by using the--debug-deny
option instead of--debug
:# fapolicyd --debug-deny 2> fapolicy.output & [1] 51341
Alternatively, you can run
fapolicyd
debug mode in another terminal.Repeat the command that
fapolicyd
denied:$ /tmp/ls bash: /tmp/ls: Operation not permitted
Stop debug mode by resuming it in the foreground and pressing Ctrl+C:
# fg fapolicyd --debug 2> fapolicy.output ^C ...
Alternatively, kill the process of
fapolicyd
debug mode:# kill 51341
Find a rule that denies the execution of your application:
# cat fapolicy.output | grep 'deny_audit' ... rule=13 dec=deny_audit perm=execute auid=0 pid=6855 exe=/usr/bin/bash : path=/tmp/ls ftype=application/x-executable trust=0
Locate the file that contains a rule that prevented the execution of your custom binary. In this case, the
deny_audit perm=execute
rule belongs to the90-deny-execute.rules
file:# ls /etc/fapolicyd/rules.d/ 10-languages.rules 40-bad-elf.rules 72-shell.rules 20-dracut.rules 41-shared-obj.rules 90-deny-execute.rules 21-updaters.rules 42-trusted-elf.rules 95-allow-open.rules 30-patterns.rules 70-trusted-lang.rules # cat /etc/fapolicyd/rules.d/90-deny-execute.rules # Deny execution for anything untrusted deny_audit perm=execute all : all
Add a new
allow
rule to the file that lexically precedes the rule file that contains the rule that denied the execution of your custom binary in the/etc/fapolicyd/rules.d/
directory:# touch /etc/fapolicyd/rules.d/80-myapps.rules # vi /etc/fapolicyd/rules.d/80-myapps.rules
Insert the following rule to the
80-myapps.rules
file:allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0
Alternatively, you can allow executions of all binaries in the
/tmp
directory by adding the following rule to the rule file in/etc/fapolicyd/rules.d/
:allow perm=execute exe=/usr/bin/bash trust=1 : dir=/tmp/ trust=0
To prevent changes in the content of your custom binary, define the required rule using an SHA-256 checksum:
$ sha256sum /tmp/ls 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836 ls
Change the rule to the following definition:
allow perm=execute exe=/usr/bin/bash trust=1 : sha256hash=780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836
Check that the list of compiled rules differs from the rule set in
/etc/fapolicyd/rules.d/
, and update the list, which is stored in the/etc/fapolicyd/compiled.rules
file:# fagenrules --check /usr/sbin/fagenrules: Rules have changed and should be updated # fagenrules --load
Check that your custom rule is in the list of
fapolicyd
rules before the rule that prevented the execution:# fapolicyd-cli --list ... 13. allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0 14. deny_audit perm=execute all : all ...
Start the
fapolicyd
service:# systemctl start fapolicyd
Verification
Check that your custom binary can now be executed, for example:
$ /tmp/ls ls
Additional resources
-
fapolicyd.rules(5)
andfapolicyd-cli(1)
man pages. -
The documentation installed with the
fapolicyd
package in the/usr/share/fapolicyd/sample-rules/README-rules
file.
13.5. Enabling fapolicyd integrity checks
By default, fapolicyd
does not perform integrity checking. You can configure fapolicyd
to perform integrity checks by comparing either file sizes or SHA-256 hashes. You can also set integrity checks by using the Integrity Measurement Architecture (IMA) subsystem.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Open the
/etc/fapolicyd/fapolicyd.conf
file in a text editor of your choice, for example:# vi /etc/fapolicyd/fapolicyd.conf
Change the value of the
integrity
option fromnone
tosha256
, save the file, and exit the editor:integrity = sha256
Restart the
fapolicyd
service:# systemctl restart fapolicyd
Verification
Back up the file used for the verification:
# cp /bin/more /bin/more.bak
Change the content of the
/bin/more
binary:# cat /bin/less > /bin/more
Use the changed binary as a regular user:
# su example.user $ /bin/more /etc/redhat-release bash: /bin/more: Operation not permitted
Revert the changes:
# mv -f /bin/more.bak /bin/more
13.6. Troubleshooting problems related to fapolicyd
The following section provides tips for basic troubleshooting of the fapolicyd
application framework and guidance for adding applications using the rpm
command.
Installing applications using rpm
If you install an application using the
rpm
command, you have to perform a manual refresh of thefapolicyd
RPM database:Install your application:
# rpm -i application.rpm
Refresh the database:
# fapolicyd-cli --update
If you skip this step, the system can freeze and must be restarted.
Service status
If
fapolicyd
does not work correctly, check the service status:# systemctl status fapolicyd
fapolicyd-cli
checks and listings
The
--check-config
,--check-watch_fs
, and--check-trustdb
options help you find syntax errors, not-yet-watched file systems, and file mismatches, for example:# fapolicyd-cli --check-config Daemon config is OK # fapolicyd-cli --check-trustdb /etc/selinux/targeted/contexts/files/file_contexts miscompares: size sha256 /etc/selinux/targeted/policy/policy.31 miscompares: size sha256
Use the
--list
option to check the current list of rules and their order:# fapolicyd-cli --list ... 9. allow perm=execute all : trust=1 10. allow perm=open all : ftype=%languages trust=1 11. deny_audit perm=any all : ftype=%languages 12. allow perm=any all : ftype=text/x-shellscript 13. deny_audit perm=execute all : all ...
Debug mode
Debug mode provides detailed information about matched rules, database status, and more. To switch
fapolicyd
to debug mode:Stop the
fapolicyd
service:# systemctl stop fapolicyd
Use debug mode to identify a corresponding rule:
# fapolicyd --debug
Because the output of the
fapolicyd --debug
command is verbose, you can redirect the error output to a file:# fapolicyd --debug 2> fapolicy.output
Alternatively, to limit the output only to entries when
fapolicyd
denies access, use the--debug-deny
option:# fapolicyd --debug-deny
Removing the fapolicyd
database
To solve problems related to the
fapolicyd
database, try to remove the database file:# systemctl stop fapolicyd # fapolicyd-cli --delete-db
WarningDo not remove the
/var/lib/fapolicyd/
directory. Thefapolicyd
framework automatically restores only the database file in this directory.
Dumping the fapolicyd
database
The
fapolicyd
database contains entries from all enabled trust sources. You can check the entries after dumping the database:# fapolicyd-cli --dump-db
Application pipe
In rare cases, removing the
fapolicyd
pipe file can solve a lockup:# rm -f /var/run/fapolicyd/fapolicyd.fifo
Additional resources
-
fapolicyd-cli(1)
man page.
13.7. Additional resources
-
fapolicyd
-related man pages listed by using theman -k fapolicyd
command. - The FOSDEM 2020 fapolicyd presentation.
Chapter 14. Protecting systems against intrusive USB devices
USB devices can be loaded with spyware, malware, or trojans, which can steal your data or damage your system. As a Red Hat Enterprise Linux administrator, you can prevent such USB attacks with USBGuard.
14.1. USBGuard
With the USBGuard software framework, you can protect your systems against intrusive USB devices by using basic lists of permitted and forbidden devices based on the USB device authorization feature in the kernel.
The USBGuard framework provides the following components:
- The system service component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement
-
The command-line interface to interact with a running
usbguard
system service - The rule language for writing USB device authorization policies
- The C++ API for interacting with the system service component implemented in a shared library
The usbguard
system service configuration file (/etc/usbguard/usbguard-daemon.conf
) includes the options to authorize the users and groups to use the IPC interface.
The system service provides the USBGuard public IPC interface. In Red Hat Enterprise Linux, the access to this interface is limited to the root user only by default.
Consider setting either the IPCAccessControlFiles
option (recommended) or the IPCAllowedUsers
and IPCAllowedGroups
options to limit access to the IPC interface.
Ensure that you do not leave the Access Control List (ACL) unconfigured as this exposes the IPC interface to all local users and allows them to manipulate the authorization state of USB devices and modify the USBGuard policy.
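For example, a minimal sketch of such a restriction in the /etc/usbguard/usbguard-daemon.conf file might look like the following; the group name is illustrative:
IPCAllowedUsers=root
IPCAllowedGroups=wheel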
14.2. Installing USBGuard
Use this procedure to install and initiate the USBGuard
framework.
Procedure
Install the
usbguard
package:# dnf install usbguard
Create an initial rule set:
# usbguard generate-policy > /etc/usbguard/rules.conf
Start the
usbguard
daemon and ensure that it starts automatically on boot:# systemctl enable --now usbguard
Verification
Verify that the
usbguard
service is running:# systemctl status usbguard ● usbguard.service - USBGuard daemon Loaded: loaded (/usr/lib/systemd/system/usbguard.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2019-11-07 09:44:07 CET; 3min 16s ago Docs: man:usbguard-daemon(8) Main PID: 6122 (usbguard-daemon) Tasks: 3 (limit: 11493) Memory: 1.2M CGroup: /system.slice/usbguard.service └─6122 /usr/sbin/usbguard-daemon -f -s -c /etc/usbguard/usbguard-daemon.conf Nov 07 09:44:06 localhost.localdomain systemd[1]: Starting USBGuard daemon... Nov 07 09:44:07 localhost.localdomain systemd[1]: Started USBGuard daemon.
List USB devices recognized by
USBGuard
:# usbguard list-devices 4: allow id 1d6b:0002 serial "0000:02:00.0" name "xHCI Host Controller" hash...
Additional resources
-
usbguard(1)
andusbguard-daemon.conf(5)
man pages.
14.3. Blocking and authorizing a USB device using CLI
This procedure outlines how to authorize and block a USB device using the usbguard
command.
Prerequisites
-
The
usbguard
service is installed and running.
Procedure
List USB devices recognized by
USBGuard
:# usbguard list-devices 1: allow id 1d6b:0002 serial "0000:00:06.7" name "EHCI Host Controller" hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" parent-hash "4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=" via-port "usb1" with-interface 09:00:00 ... 6: block id 1b1c:1ab1 serial "000024937962" name "Voyager" hash "CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=" parent-hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" via-port "1-3" with-interface 08:06:50
Authorize the device 6 to interact with the system:
# usbguard allow-device 6
Deauthorize and remove the device 6:
# usbguard reject-device 6
Deauthorize and retain the device 6:
# usbguard block-device 6
USBGuard
uses the block and reject terms with the following meanings:
- block: do not interact with this device for now.
- reject: ignore this device as if it does not exist.
Additional resources
-
usbguard(1)
man page. -
Built-in help listed by using the
usbguard --help
command.
14.4. Permanently blocking and authorizing a USB device
You can permanently block and authorize a USB device using the -p
option. This adds a device-specific rule to the current policy.
Prerequisites
-
The
usbguard
service is installed and running.
Procedure
Configure SELinux to allow the
usbguard
daemon to write rules.Display the
semanage
Booleans relevant tousbguard
.# semanage boolean -l | grep usbguard usbguard_daemon_write_conf (off , off) Allow usbguard to daemon write conf usbguard_daemon_write_rules (on , on) Allow usbguard to daemon write rules
Optional: If the
usbguard_daemon_write_rules
Boolean is turned off, turn it on.# semanage boolean -m --on usbguard_daemon_write_rules
List USB devices recognized by USBGuard:
# usbguard list-devices 1: allow id 1d6b:0002 serial "0000:00:06.7" name "EHCI Host Controller" hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" parent-hash "4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=" via-port "usb1" with-interface 09:00:00 ... 6: block id 1b1c:1ab1 serial "000024937962" name "Voyager" hash "CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=" parent-hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" via-port "1-3" with-interface 08:06:50
Permanently authorize the device 6 to interact with the system:
# usbguard allow-device 6 -p
Permanently deauthorize and remove the device 6:
# usbguard reject-device 6 -p
Permanently deauthorize and retain the device 6:
# usbguard block-device 6 -p
USBGuard
uses the terms block and reject with the following meanings:
- block: do not interact with this device for now.
- reject: ignore this device as if it does not exist.
Verification
Check that
USBGuard
rules include the changes you made.# usbguard list-rules
Additional resources
-
usbguard(1)
man page. -
Built-in help listed by using the
usbguard --help
command.
14.5. Creating a custom policy for USB devices
The following procedure contains steps for creating a rule set for USB devices that reflects the requirements of your scenario.
Prerequisites
-
The
usbguard
service is installed and running. -
The
/etc/usbguard/rules.conf
file contains an initial rule set generated by theusbguard generate-policy
command.
Procedure
Create a policy which authorizes the currently connected USB devices, and store the generated rules to the
rules.conf
file:# usbguard generate-policy --no-hashes > ./rules.conf
The
--no-hashes
option does not generate hash attributes for devices. Avoid hash attributes in your configuration settings because they might not be persistent.Edit the
rules.conf
file with a text editor of your choice, for example:# vi ./rules.conf
Add, remove, or edit the rules as required. For example, the following rule allows only devices with a single mass storage interface to interact with the system:
allow with-interface equals { 08:*:* }
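As another illustrative sketch, the following rule rejects any device connected through a particular port; the port name is an assumption for this example:
reject via-port "1-2"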
See the
usbguard-rules.conf(5)
man page for a detailed rule-language description and more examples.Install the updated policy:
# install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf
Restart the
usbguard
daemon to apply your changes:# systemctl restart usbguard
Verification
Check that your custom rules are in the active policy, for example:
# usbguard list-rules ... 4: allow with-interface 08:*:* ...
Additional resources
-
usbguard-rules.conf(5)
man page.
14.6. Creating a structured custom policy for USB devices
You can organize your custom USBGuard policy in several .conf
files within the /etc/usbguard/rules.d/
directory. The usbguard-daemon
then combines the main rules.conf
file with the .conf
files within the directory in alphabetical order.
Prerequisites
-
The
usbguard
service is installed and running.
Procedure
Create a policy which authorizes the currently connected USB devices, and store the generated rules to a new
.conf
file, for example,policy.conf
.# usbguard generate-policy --no-hashes > ./policy.conf
The
--no-hashes
option does not generate hash attributes for devices. Avoid hash attributes in your configuration settings because they might not be persistent.Display the
policy.conf
file with a text editor of your choice, for example:# vi ./policy.conf ... allow id 04f2:0833 serial "" name "USB Keyboard" via-port "7-2" with-interface { 03:01:01 03:00:00 } with-connect-type "unknown" ...
Move selected lines into a separate
.conf
file.NoteThe two digits at the beginning of the file name specify the order in which the daemon reads the configuration files.
For example, copy the rules for your keyboards into a new
.conf
file.# grep "USB Keyboard" ./policy.conf > ./10keyboards.conf
Install the new policy to the
/etc/usbguard/rules.d/
directory.# install -m 0600 -o root -g root 10keyboards.conf /etc/usbguard/rules.d/10keyboards.conf
Move the rest of the lines to a main
rules.conf
file.# grep -v "USB Keyboard" ./policy.conf > ./rules.conf
Install the remaining rules.
# install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf
Restart the
usbguard
daemon to apply your changes.# systemctl restart usbguard
Verification
Display all active USBGuard rules.
# usbguard list-rules ... 15: allow id 04f2:0833 serial "" name "USB Keyboard" hash "kxM/iddRe/WSCocgiuQlVs6Dn0VEza7KiHoDeTz0fyg=" parent-hash "2i6ZBJfTl5BakXF7Gba84/Cp1gslnNc1DM6vWQpie3s=" via-port "7-2" with-interface { 03:01:01 03:00:00 } with-connect-type "unknown" ...
Display the contents of the
rules.conf
file and all the.conf
files in the/etc/usbguard/rules.d/
directory.# cat /etc/usbguard/rules.conf /etc/usbguard/rules.d/*.conf
- Verify that the active rules contain all the rules from the files and are in the correct order.
Additional resources
-
usbguard-rules.conf(5)
man page.
14.7. Authorizing users and groups to use the USBGuard IPC interface
Use this procedure to authorize a specific user or a group to use the USBGuard public IPC interface. By default, only the root user can use this interface.
Prerequisites
-
The
usbguard
service is installed and running. -
The
/etc/usbguard/rules.conf
file contains an initial rule set generated by theusbguard generate-policy
command.
Procedure
Edit the
/etc/usbguard/usbguard-daemon.conf
file with a text editor of your choice:# vi /etc/usbguard/usbguard-daemon.conf
For example, add a line with a rule that allows all users in the
wheel
group to use the IPC interface, and save the file:IPCAllowedGroups=wheel
You can add users or groups also with the
usbguard
command. For example, the following command enables the joesec user to have full access to theDevices
andExceptions
sections. Furthermore, joesec can list and modify the current policy:# usbguard add-user joesec --devices ALL --policy modify,list --exceptions ALL
To remove the granted permissions for the joesec user, use the
usbguard remove-user joesec
command.Restart the
usbguard
daemon to apply your changes:# systemctl restart usbguard
Additional resources
-
usbguard(1)
andusbguard-rules.conf(5)
man pages.
14.8. Logging USBGuard authorization events to the Linux Audit log
Use the following steps to integrate logging of USBGuard authorization events to the standard Linux Audit log. By default, the usbguard
daemon logs events to the /var/log/usbguard/usbguard-audit.log
file.
Prerequisites
-
The
usbguard
service is installed and running. -
The
auditd
service is running.
Procedure
Edit the
usbguard-daemon.conf
file with a text editor of your choice:# vi /etc/usbguard/usbguard-daemon.conf
Change the
AuditBackend
option fromFileAudit
toLinuxAudit
:AuditBackend=LinuxAudit
Restart the
usbguard
daemon to apply the configuration change:# systemctl restart usbguard
Verification
Query the
audit
daemon log for a USB authorization event, for example:# ausearch -ts recent -m USER_DEVICE
Additional resources
-
usbguard-daemon.conf(5)
man page.
14.9. Additional resources
-
usbguard(1)
,usbguard-rules.conf(5)
,usbguard-daemon(8)
, andusbguard-daemon.conf(5)
man pages. - USBGuard Homepage.
Chapter 15. Configuring a remote logging solution
To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server.
15.1. The Rsyslog logging service
The Rsyslog application, in combination with the systemd-journald
service, provides local and remote logging support in Red Hat Enterprise Linux. The rsyslogd
daemon continuously reads syslog
messages received by the systemd-journald
service from the Journal. rsyslogd
then filters and processes these syslog
events and records them to rsyslog
log files or forwards them to other services according to its configuration.
The rsyslogd
daemon also provides extended filtering, encryption protected relaying of messages, input and output modules, and support for transportation using the TCP and UDP protocols.
In /etc/rsyslog.conf
, which is the main configuration file for rsyslog
, you can specify the rules according to which rsyslogd
handles the messages. Generally, you can classify messages by their source and topic (facility) and urgency (priority), and then assign an action that should be performed when a message fits these criteria.
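For example, a minimal sketch of such a rule in the traditional selector syntax records all critical kernel messages to a dedicated file; the file path is illustrative:
kern.crit /var/log/kernel-critical.log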
In /etc/rsyslog.conf
, you can also see a list of log files maintained by rsyslogd
. Most log files are located in the /var/log/
directory. Some applications, such as httpd
and samba
, store their log files in a subdirectory within /var/log/
.
Additional resources
-
The
rsyslogd(8)
andrsyslog.conf(5)
man pages. -
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file.
15.2. Installing Rsyslog documentation
The Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/, but you can also install the rsyslog-doc
documentation package locally.
Prerequisites
-
You have activated the
AppStream
repository on your system. -
You are authorized to install new packages using
sudo
.
Procedure
Install the
rsyslog-doc
package:# dnf install rsyslog-doc
Verification
Open the
/usr/share/doc/rsyslog/html/index.html
file in a browser of your choice, for example:$ firefox /usr/share/doc/rsyslog/html/index.html &
15.3. Configuring a server for remote logging over TCP
The Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems.
With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol.
The omfwd
plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, it does not have to be loaded.
By default, rsyslog
uses TCP on port 514
.
Prerequisites
- Rsyslog is installed on the server system.
-
You are logged in as
root
on the server. -
The
policycoreutils-python-utils
package is installed for the optional step using thesemanage
command. -
The
firewalld
service is running.
Procedure
Optional: To use a different port for
rsyslog
traffic, add thesyslogd_port_t
SELinux type to port. For example, enable port30514
:# semanage port -a -t syslogd_port_t -p tcp 30514
Optional: To use a different port for
rsyslog
traffic, configurefirewalld
to allow incomingrsyslog
traffic on that port. For example, allow TCP traffic on port30514
:# firewall-cmd --zone=<zone-name> --permanent --add-port=30514/tcp success # firewall-cmd --reload
Create a new file in the
/etc/rsyslog.d/
directory named, for example,remotelog.conf
, and insert the following content:# Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides TCP syslog reception module(load="imtcp") # Adding this ruleset to process remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imtcp" port="30514" ruleset="remote1")
-
Save the changes to the
/etc/rsyslog.d/remotelog.conf
file. Test the syntax of the
/etc/rsyslog.conf
file:# rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run... rsyslogd: End of config validation run. Bye.
Make sure the
rsyslog
service is running and enabled on the logging server:# systemctl status rsyslog
Restart the
rsyslog
service.# systemctl restart rsyslog
Optional: If
rsyslog
is not enabled, ensure thersyslog
service starts automatically after reboot:# systemctl enable rsyslog
Your log server is now configured to receive and store log files from the other systems in your environment.
Additional resources
-
rsyslogd(8)
,rsyslog.conf(5)
,semanage(8)
, andfirewall-cmd(1)
man pages. -
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file.
15.4. Configuring remote logging to a server over TCP
Follow this procedure to configure a system for forwarding log messages to a server over the TCP protocol. The omfwd
plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it.
Prerequisites
-
The
rsyslog
package is installed on the client systems that should report to the server. - You have configured the server for remote logging.
- The specified port is permitted in SELinux and open in the firewall.
-
The system contains the
policycoreutils-python-utils
package, which provides thesemanage
command for adding a non-standard port to the SELinux configuration.
Procedure
Create a new file in the
/etc/rsyslog.d/
directory named, for example,10-remotelog.conf
, and insert the following content:*.* action(type="omfwd" queue.type="linkedlist" queue.filename="example_fwd" action.resumeRetryCount="-1" queue.saveOnShutdown="on" target="example.com" port="30514" protocol="tcp" )
Where:
-
queue.type="linkedlist"
enables a LinkedList in-memory queue, -
queue.filename
defines a disk storage. The backup files are created with theexample_fwd
prefix in the working directory specified by the preceding globalworkDirectory
directive, -
the
action.resumeRetryCount -1
setting preventsrsyslog
from dropping messages when retrying to connect if the server is not responding,
enabled
queue.saveOnShutdown="on"
saves in-memory data ifrsyslog
shuts down. The last line forwards all received messages to the logging server; the port specification is optional.
With this configuration,
rsyslog
sends messages to the server but keeps messages in memory if the remote server is not reachable. A file on disk is created only ifrsyslog
runs out of the configured memory queue space or needs to shut down, which benefits the system performance.
NoteRsyslog processes configuration files
/etc/rsyslog.d/
in the lexical order.-
Restart the
rsyslog
service.# systemctl restart rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the
/var/log/messages
log, for example:# cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the
logger
command, in this caseroot
.
Additional resources
-
rsyslogd(8)
andrsyslog.conf(5)
man pages. -
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file.
15.5. Configuring TLS-encrypted remote logging
By default, Rsyslog sends remote-logging communication in the plain text format. If your scenario requires to secure this communication channel, you can encrypt it using TLS.
To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems.
You can use either the ossl
network stream driver (OpenSSL) or the gtls
stream driver (GnuTLS).
If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA).
Prerequisites
-
You have
root
access to both the client and server systems. -
The
rsyslog
andrsyslog-openssl
packages are installed on the server and the client systems. -
If you use the
gtls
network stream driver, install thersyslog-gnutls
package instead ofrsyslog-openssl
. -
If you generate certificates using the
certtool
command, install thegnutls-utils
package. On your logging server, the following certificates are in the
/etc/pki/ca-trust/source/anchors/
directory and your system configuration is updated by using theupdate-ca-trust
command:-
ca-cert.pem
- a CA certificate that can verify keys and certificates on logging servers and clients. -
server-cert.pem
- a public key of the logging server. -
server-key.pem
- a private key of the logging server.
-
On your logging clients, the following certificates are in the
/etc/pki/ca-trust/source/anchors/
directory and your system configuration is updated by usingupdate-ca-trust
:-
ca-cert.pem
- a CA certificate that can verify keys and certificates on logging servers and clients. -
client-cert.pem
- a public key of a client. -
client-key.pem
- a private key of a client. - If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the TLS extension "Extended Master Secret" enforced Knowledgebase article.
-
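If you still need to create these files, a minimal self-signed sketch using the certtool utility might look like the following; it is suitable for testing only, and a production CA setup will differ:
# certtool --generate-privkey --outfile ca-key.pem
# certtool --generate-self-signed --load-privkey ca-key.pem --outfile ca-cert.pem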
Procedure
Configure the server for receiving encrypted logs from your client systems:
-
Create a new file in the
/etc/rsyslog.d/
directory named, for example,securelogser.conf
. To encrypt the communication, the configuration file must contain paths to certificate files on your server, a selected authentication method, and a stream driver that supports TLS encryption. Add the following lines to the
/etc/rsyslog.d/securelogser.conf
file:# Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/server-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/server-key.pem" ) # TCP listener module( load="imtcp" PermittedPeer=["client1.example.com", "client2.example.com"] StreamDriver.AuthMode="x509/name" StreamDriver.Mode="1" StreamDriver.Name="ossl" ) # Start up listener at port 514 input( type="imtcp" port="514" )
NoteIf you prefer the GnuTLS driver, use the
StreamDriver.Name="gtls"
configuration option. See the documentation installed with thersyslog-doc
package for more information about less strict authentication modes thanx509/name
.-
Save the changes to the
/etc/rsyslog.d/securelogser.conf
file. Verify the syntax of the
/etc/rsyslog.conf
file and any files in the/etc/rsyslog.d/
directory:# rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1)... rsyslogd: End of config validation run. Bye.
Make sure the
rsyslog
service is running and enabled on the logging server:# systemctl status rsyslog
Restart the
rsyslog
service:# systemctl restart rsyslog
Optional: If Rsyslog is not enabled, ensure the
rsyslog
service starts automatically after reboot:# systemctl enable rsyslog
Configure clients for sending encrypted logs to the server:
-
On a client system, create a new file in the
/etc/rsyslog.d/
directory named, for example,securelogcli.conf
. Add the following lines to the
/etc/rsyslog.d/securelogcli.conf
file:# Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/client-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/client-key.pem" ) # Set up the action for all messages *.* action( type="omfwd" StreamDriver="ossl" StreamDriverMode="1" StreamDriverPermittedPeers="server.example.com" StreamDriverAuthMode="x509/name" target="server.example.com" port="514" protocol="tcp" )
NoteIf you prefer the GnuTLS driver, use the
StreamDriver="gtls"
configuration option.-
Save the changes to the
/etc/rsyslog.d/securelogcli.conf
file. Verify the syntax of the
/etc/rsyslog.conf
file and other files in the/etc/rsyslog.d/
directory:# rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1)... rsyslogd: End of config validation run. Bye.
Make sure the
rsyslog
service is running and enabled on the logging server:# systemctl status rsyslog
Restart the
rsyslog
service:# systemctl restart rsyslog
Optional: If Rsyslog is not enabled, ensure the
rsyslog
service starts automatically after reboot:# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the
/var/log/messages
log, for example:# cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test
Where
hostname
is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this caseroot
.
Additional resources
-
certtool(1)
,openssl(1)
,update-ca-trust(8)
,rsyslogd(8)
, andrsyslog.conf(5)
man pages. -
Documentation installed with the
rsyslog-doc
package at/usr/share/doc/rsyslog/html/index.html
. - Using the logging System Role with TLS.
15.6. Configuring a server for receiving remote logging information over UDP
The Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, rsyslog
uses UDP on port 514
to receive log information from remote systems.
Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol.
Prerequisites
- Rsyslog is installed on the server system.
-
You are logged in as
root
on the server. -
The
policycoreutils-python-utils
package is installed for the optional step using thesemanage
command. -
The
firewalld
service is running.
Procedure
Optional: To use a different port for
rsyslog
traffic than the default port514
:Add the
syslogd_port_t
SELinux type to the SELinux policy configuration, replacingportno
with the port number you wantrsyslog
to use:# semanage port -a -t syslogd_port_t -p udp portno
Configure
firewalld
to allow incomingrsyslog
traffic, replacingportno
with the port number andzone
with the zone you wantrsyslog
to use:# firewall-cmd --zone=zone --permanent --add-port=portno/udp success # firewall-cmd --reload
Create a new
.conf
file in the/etc/rsyslog.d/
directory, for example,remotelogserv.conf
, and insert the following content:# Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides UDP syslog reception module(load="imudp") # This ruleset processes remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imudp" port="514" ruleset="remote1")
Where
514
is the port numberrsyslog
uses by default. You can specify a different port instead.Verify the syntax of the
/etc/rsyslog.conf
file and all.conf
files in the/etc/rsyslog.d/
directory:# rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run...
Restart the
rsyslog
service.# systemctl restart rsyslog
Optional: If
rsyslog
is not enabled, ensure thersyslog
service starts automatically after reboot:# systemctl enable rsyslog
Additional resources
-
rsyslogd(8)
,rsyslog.conf(5)
,semanage(8)
, andfirewall-cmd(1)
man pages. -
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file.
15.7. Configuring remote logging to a server over UDP
Follow this procedure to configure a system for forwarding log messages to a server over the UDP protocol. The omfwd
plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it.
Prerequisites
-
The
rsyslog
package is installed on the client systems that should report to the server. - You have configured the server for remote logging as described in Configuring a server for receiving remote logging information over UDP.
Procedure
Create a new
.conf
file in the/etc/rsyslog.d/
directory, for example,10-remotelogcli.conf
, and insert the following content:*.* action(type="omfwd" queue.type="linkedlist" queue.filename="example_fwd" action.resumeRetryCount="-1" queue.saveOnShutdown="on" target="example.com" port="portno" protocol="udp" )
Where:
-
queue.type="linkedlist"
enables a LinkedList in-memory queue. -
queue.filename
defines a disk storage. The backup files are created with theexample_fwd
prefix in the working directory specified by the preceding globalworkDirectory
directive. -
The
action.resumeRetryCount -1
setting preventsrsyslog
from dropping messages when retrying to connect if the server is not responding. -
enabled queue.saveOnShutdown="on"
saves in-memory data ifrsyslog
shuts down. -
portno
is the port number you wantrsyslog
to use. The default value is514
. The last line forwards all received messages to the logging server; the port specification is optional.
With this configuration,
rsyslog
sends messages to the server but keeps messages in memory if the remote server is not reachable. A file on disk is created only ifrsyslog
runs out of the configured memory queue space or needs to shut down, which benefits the system performance.
NoteRsyslog processes configuration files
/etc/rsyslog.d/
in the lexical order.-
Restart the
rsyslog
service.# systemctl restart rsyslog
Optional: If
rsyslog
is not enabled, ensure thersyslog
service starts automatically after reboot:# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the
/var/log/remote/msg/hostname/root.log
log, for example:# cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test
Where
hostname
is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this caseroot
.
Additional resources
-
rsyslogd(8)
andrsyslog.conf(5)
man pages. -
Documentation installed with the
rsyslog-doc
package at/usr/share/doc/rsyslog/html/index.html
.
15.8. Load balancing helper in Rsyslog
The RebindInterval
setting specifies an interval at which the current connection is broken and is re-established. This setting applies to TCP, UDP, and RELP traffic. The load balancers perceive it as a new connection and forward the messages to another physical target system.
The RebindInterval
setting is helpful in scenarios where a target system has changed its IP address. The Rsyslog application caches the IP address when the connection is established, and therefore the messages are sent to the same server. If the IP address changes, the UDP packets are lost until the Rsyslog service restarts. Re-establishing the connection ensures that DNS resolves the IP address again.
action(type="omfwd" protocol="tcp" RebindInterval="250" target="example.com" port="514" …) action(type="omfwd" protocol="udp" RebindInterval="250" target="example.com" port="514" …) action(type="omrelp" RebindInterval="250" target="example.com" port="6514" …)
15.9. Configuring reliable remote logging
With the Reliable Event Logging Protocol (RELP), you can send and receive syslog
messages over TCP with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To use RELP, configure the imrelp
input module, which runs on the server and receives the logs, and the omrelp
output module, which runs on the client and sends logs to the logging server.
Prerequisites
-
You have installed the
rsyslog
,librelp
, andrsyslog-relp
packages on the server and the client systems. - The specified port is permitted in SELinux and open in the firewall.
Procedure
Configure the client system for reliable remote logging:
On the client system, create a new
.conf
file in the/etc/rsyslog.d/
directory named, for example,relpclient.conf
, and insert the following content:module(load="omrelp") *.* action(type="omrelp" target="target_IP" port="target_port")
Where:
-
target_IP
is the IP address of the logging server. -
target_port
is the port of the logging server.
-
-
Save the changes to the
/etc/rsyslog.d/relpclient.conf
file. Restart the
rsyslog
service.# systemctl restart rsyslog
Optional: If
rsyslog
is not enabled, ensure thersyslog
service starts automatically after reboot:# systemctl enable rsyslog
Configure the server system for reliable remote logging:
On the server system, create a new
.conf
file in the/etc/rsyslog.d/
directory named, for example,relpserv.conf
, and insert the following content:ruleset(name="relp"){ *.* action(type="omfile" file="log_path") } module(load="imrelp") input(type="imrelp" port="target_port" ruleset="relp")
Where:
-
log_path
specifies the path for storing messages. -
target_port
is the port of the logging server. Use the same value as in the client configuration file.
-
-
Save the changes to the
/etc/rsyslog.d/relpserv.conf
file. Restart the
rsyslog
service.# systemctl restart rsyslog
Optional: If
rsyslog
is not enabled, ensure thersyslog
service starts automatically after reboot:# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the log at the specified
log_path
, for example:# cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test
Where
hostname
is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this caseroot
.
Additional resources
-
rsyslogd(8)
andrsyslog.conf(5)
man pages. -
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file.
15.10. Supported Rsyslog modules
To expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module.
You can list the input and output modules installed on your system by entering the following command:
# ls /usr/lib64/rsyslog/{i,o}m*
You can view the list of all available rsyslog
modules in the /usr/share/doc/rsyslog/html/configuration/modules/idx_output.html
file after you install the rsyslog-doc
package.
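For example, the following sketch loads the imfile input module and uses the directives it provides to read messages from an application log file; the file path and tag are illustrative:
module(load="imfile")
input(type="imfile" File="/var/log/myapp.log" Tag="myapp:")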
15.11. Configuring the netconsole service to log kernel messages to a remote host
When logging to disk or using a serial console is not possible, you can use the netconsole
kernel module and the same-named service to log kernel messages over a network to a remote rsyslog
service.
Prerequisites
-
A system log service, such as
rsyslog
, is installed on the remote host.
Procedure
Install the
netconsole-service
package:# dnf install netconsole-service
Edit the
/etc/sysconfig/netconsole
file and set theSYSLOGADDR
parameter to the IP address of the remote host:# SYSLOGADDR=192.0.2.1
Enable and start the
netconsole
service:# systemctl enable --now netconsole
Verification steps
-
Display the
/var/log/messages
file on the remote system log server.
15.12. Additional resources
-
Documentation installed with the
rsyslog-doc
package in the/usr/share/doc/rsyslog/html/index.html
file -
rsyslog.conf(5)
andrsyslogd(8)
man pages - Configuring system logging without journald or with minimized journald usage Knowledgebase article
- Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article
- The Using the Logging System Role chapter
Chapter 16. Using the logging
System Role
As a system administrator, you can use the logging
System Role to configure a RHEL host as a logging server to collect logs from many client systems.
16.1. The logging
System Role
With the logging
System Role, you can deploy logging configurations on local and remote hosts.
To apply a logging
System Role on one or more systems, you define the logging configuration in a playbook. A playbook is a list of one or more plays. Playbooks are human-readable, and they are written in the YAML format. For more information about playbooks, see Working with playbooks in Ansible documentation.
The set of systems that you want to configure according to the playbook is defined in an inventory file. For more information about creating and using inventories, see How to build your inventory in Ansible documentation.
Logging solutions provide multiple ways of reading logs and multiple logging outputs.
For example, a logging system can receive the following inputs:
- local files,
- systemd/journal,
- another logging system over the network.
In addition, a logging system can have the following outputs:
- logs stored in the local files in the /var/log directory,
- logs sent to Elasticsearch,
- logs forwarded to another logging system.
With the logging System Role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, while inputs read from files are both forwarded to another logging system and stored in local log files, as sketched in the example after this paragraph.
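For instance, a variables sketch for the combined scenario in the previous paragraph might look as follows; the input, output, and flow names, the file path, and the target host are illustrative only, not role defaults:
logging_inputs:
  - name: journal_input
    type: basics                          # reads from the systemd journal
  - name: files_input
    type: files
    input_log_path: /var/log/myapp/*.log  # illustrative local files input
logging_outputs:
  - name: local_files_output
    type: files                           # store in local files under /var/log
  - name: forward_output
    type: forwards
    target: logging.example.com           # illustrative remote logging system
    tcp_port: 514
logging_flows:
  - name: journal_to_local
    inputs: [journal_input]
    outputs: [local_files_output]
  - name: files_to_both
    inputs: [files_input]
    outputs: [forward_output, local_files_output]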
16.2. logging System Role parameters
In a logging System Role playbook, you define the inputs in the logging_inputs parameter, outputs in the logging_outputs parameter, and the relationships between the inputs and outputs in the logging_flows parameter. The logging System Role processes these variables with additional options to configure the logging system. You can also enable encryption or automatic port management. A minimal example that combines these variables follows the list.
Currently, the only available logging system in the logging System Role is Rsyslog.
- logging_inputs: List of inputs for the logging solution.
  - name: Unique name of the input. Used in the logging_flows inputs list and as a part of the generated config file name.
  - type: Type of the input element. The type specifies a task type which corresponds to a directory name in roles/rsyslog/{tasks,vars}/inputs/.
    - basics: Inputs configuring inputs from the systemd journal or unix socket.
      - kernel_message: Load imklog if set to true. Defaults to false.
      - use_imuxsock: Use imuxsock instead of imjournal. Defaults to false.
      - ratelimit_burst: Maximum number of messages that can be emitted within ratelimit_interval. Defaults to 20000 if use_imuxsock is false. Defaults to 200 if use_imuxsock is true.
      - ratelimit_interval: Interval to evaluate ratelimit_burst. Defaults to 600 seconds if use_imuxsock is false. Defaults to 0 if use_imuxsock is true. 0 indicates that rate limiting is turned off.
      - persist_state_interval: Journal state is persisted every value messages. Defaults to 10. Effective only when use_imuxsock is false.
    - files: Inputs configuring inputs from local files.
    - remote: Inputs configuring inputs from another logging system over the network.
  - state: State of the configuration file. present or absent. Defaults to present.
- logging_outputs: List of outputs for the logging solution.
  - files: Outputs configuring outputs to local files.
  - forwards: Outputs configuring outputs to another logging system.
  - remote_files: Outputs configuring outputs from another logging system to local files.
- logging_flows: List of flows that define relationships between logging_inputs and logging_outputs. The logging_flows variable has the following keys:
  - name: Unique name of the flow
  - inputs: List of logging_inputs name values
  - outputs: List of logging_outputs name values
- logging_manage_firewall: If set to true, the logging role uses the firewall role to automatically manage port access.
- logging_manage_selinux: If set to true, the logging role uses the selinux role to automatically manage port access.
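As an illustration of how these variables fit together, the following fragment tunes the basics input rate limiting and lets the role manage firewall and SELinux port access. The input and output names are illustrative, and the numeric values simply restate the documented defaults:
logging_inputs:
  - name: tuned_journal_input
    type: basics
    use_imuxsock: false          # keep the default imjournal input
    ratelimit_burst: 20000       # maximum messages within ratelimit_interval
    ratelimit_interval: 600      # seconds over which the burst is evaluated
    persist_state_interval: 10   # persist journal state every 10 messages
logging_outputs:
  - name: local_output
    type: files
logging_flows:
  - name: journal_flow
    inputs: [tuned_journal_input]
    outputs: [local_output]
logging_manage_firewall: true    # let the firewall role open the required ports
logging_manage_selinux: true     # let the selinux role manage port labels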
Additional resources
- Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html
16.3. Applying a local logging System Role
Prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine records logs locally.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the logging System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
Note
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Note
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
Create a new YAML file and open it in a text editor, for example:
# vi logging-playbook.yml
Insert the following content:
---
- name: Deploying basics input and implicit files output
  hosts: all
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: system_input
        type: basics
    logging_outputs:
      - name: files_output
        type: files
    logging_flows:
      - name: flow1
        inputs: [system_input]
        outputs: [files_output]
Run the playbook on a specific inventory:
# ansible-playbook -i </path/to/file/inventory.ini> </path/to/file/logging-playbook.yml>
Where:
- <inventory.ini> is the inventory file; a minimal example sketch follows this procedure.
- <logging-playbook.yml> is the playbook you use.
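The inventory can be written in INI or YAML format. A minimal YAML inventory sketch, with placeholder host names, could look like this:
all:
  hosts:
    host1.example.com:   # placeholder managed node
    host2.example.com:   # placeholder managed node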
Verification
Test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
Verify that the system sends messages to the log:
Send a test message:
# logger test
View the /var/log/messages log, for example:
# cat /var/log/messages
Aug 5 13:48:31 <hostname> root[6778]: test
Where <hostname> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
16.4. Filtering logs in a local logging System Role
You can deploy a logging solution which filters the logs based on the rsyslog property-based filter.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the logging System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - Red Hat Ansible Core is installed
  - The rhel-system-roles package is installed
  - An inventory file which lists the managed nodes.
Note
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a new playbook.yml file with the following content:
---
- name: Deploying files input and configured files output
  hosts: all
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: files_input
        type: basics
    logging_outputs:
      - name: files_output0
        type: files
        property: msg
        property_op: contains
        property_value: error
        path: /var/log/errors.log
      - name: files_output1
        type: files
        property: msg
        property_op: "!contains"
        property_value: error
        path: /var/log/others.log
    logging_flows:
      - name: flow0
        inputs: [files_input]
        outputs: [files_output0, files_output1]
Using this configuration, all messages that contain the error string are logged in /var/log/errors.log, and all other messages are logged in /var/log/others.log.
You can replace the error property value with the string by which you want to filter. You can modify the variables according to your preferences, for example as in the sketch after this step.
Optional: Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
Verification
Test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
Verify that the system sends messages that contain the error string to the log:
Send a test message:
# logger error
View the /var/log/errors.log log, for example:
# cat /var/log/errors.log
Aug 5 13:48:31 hostname root[6778]: error
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html
16.5. Applying a remote logging solution using the logging System Role
Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a remote server. The server receives remote input from remote_rsyslog and remote_files and outputs the logs to local files in directories named by remote host names.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the logging System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
  - An inventory file which lists the managed nodes.
Note
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
Create a new YAML file and open it in a text editor, for example:
# vi logging-playbook.yml
Insert the following content into the file:
---
- name: Deploying remote input and remote_files output
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: remote_udp_input
        type: remote
        udp_ports: [601]
      - name: remote_tcp_input
        type: remote
        tcp_ports: [601]
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_0
        inputs: [remote_udp_input, remote_tcp_input]
        outputs: [remote_files_output]

- name: Deploying basics input and forwards output
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: forward_output0
        type: forwards
        severity: info
        target: <host1.example.com>
        udp_port: 601
      - name: forward_output1
        type: forwards
        facility: mail
        target: <host1.example.com>
        tcp_port: 601
    logging_flows:
      - name: flows0
        inputs: [basic_input]
        outputs: [forward_output0, forward_output1]
Where <host1.example.com> is the logging server.
Note
You can modify the parameters in the playbook to fit your needs.
Warning
The logging solution works only with the ports defined in the SELinux policy of the server or client system and open in the firewall. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems, for example as shown below.
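For example, to permit rsyslog traffic on an alternative port such as 30514, you can typically add the port to the syslogd_port_t SELinux port type on both systems. This assumes the semanage utility, provided by the policycoreutils-python-utils package, is installed:
# semanage port -a -t syslogd_port_t -p tcp 30514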
Create an inventory file that lists your servers and clients:
Create a new file and open it in a text editor, for example:
# vi <inventory.ini>
Insert the following content into the inventory file:
[servers]
server ansible_host=<host1.example.com>
[clients]
client ansible_host=<host2.example.com>
Where:
- <host1.example.com> is the logging server.
- <host2.example.com> is the logging client.
Run the playbook on your inventory:
# ansible-playbook -i </path/to/file/inventory.ini> </path/to/file/logging-playbook.yml>
Where:
- <inventory.ini> is the inventory file.
- <logging-playbook.yml> is the playbook you created.
Verification
On both the client and the server system, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Verify that the client system sends messages to the server:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/<host2.example.com>/messages log, for example:
# cat /var/log/<host2.example.com>/messages
Aug 5 13:48:31 <host2.example.com> root[6778]: test
Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- Preparing a control node and managed nodes to use RHEL System Roles
- Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html
- RHEL System Roles KB article
16.6. Using the logging System Role with TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to enable secure communication over a computer network.
As an administrator, you can use the logging RHEL System Role to configure secure transfer of logs using Red Hat Ansible Automation Platform.
16.6.1. Configuring client logging with TLS
You can use an Ansible playbook with the logging System Role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
Note
You do not have to call the certificate System Role in the playbook to create the certificate. The logging System Role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
- The managed nodes are enrolled in an IdM domain.
- If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the TLS extension "Extended Master Secret" enforced Knowledgebase article.
Procedure
Create a playbook.yml file with the following content:
---
- name: Deploying files input and forwards output with certs
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_certificates:
      - name: logging_cert
        dns: ['localhost', 'www.example.com']
        ca: ipa
    logging_pki_files:
      - ca_cert: /local/path/to/ca_cert.pem
        cert: /local/path/to/logging_cert.pem
        private_key: /local/path/to/logging_cert.pem
    logging_inputs:
      - name: input_name
        type: files
        input_log_path: /var/log/containers/*.log
    logging_outputs:
      - name: output_name
        type: forwards
        target: your_target_host
        tcp_port: 514
        tls: true
        pki_authmode: x509/name
        permitted_server: 'server.example.com'
    logging_flows:
      - name: flow_name
        inputs: [input_name]
        outputs: [output_name]
The playbook uses the following parameters:
logging_certificates
  The value of this parameter is passed on to certificate_requests in the certificate role and used to create a private key and certificate.
logging_pki_files
  Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
  Note
  If you are using logging_certificates to create the files on the target node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
  Represents the path to the CA certificate file on the target node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
  Represents the path to the certificate file on the target node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
  Represents the path to the private key file on the target node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
  Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
cert_src
  Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
  Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
tls
  Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
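If you distribute existing certificate files from the control node instead of creating them with logging_certificates, a hypothetical logging_pki_files entry using the *_src sub-parameters could look like the following; the local paths and file names are placeholders:
logging_pki_files:
  - ca_cert_src: /local/path/to/ca_cert.pem         # copied to the default ca_cert path on the managed node
    cert_src: /local/path/to/client-cert.pem        # copied to the default cert path
    private_key_src: /local/path/to/client-key.pem  # copied to the default private_key path
    tls: true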
Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file playbook.yml
16.6.2. Configuring server logging with TLS
You can use an Ansible playbook with the logging System Role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the server group in the Ansible inventory.
Note
You do not have to call the certificate System Role in the playbook to create the certificate. The logging System Role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
- The managed nodes are enrolled in an IdM domain.
- If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the TLS extension "Extended Master Secret" enforced Knowledgebase article.
Procedure
Create a playbook.yml file with the following content:
---
- name: Deploying remote input and remote_files output with certs
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_certificates:
      - name: logging_cert
        dns: ['localhost', 'www.example.com']
        ca: ipa
    logging_pki_files:
      - ca_cert: /local/path/to/ca_cert.pem
        cert: /local/path/to/logging_cert.pem
        private_key: /local/path/to/logging_cert.pem
    logging_inputs:
      - name: input_name
        type: remote
        tcp_ports: 514
        tls: true
        permitted_clients: ['clients.example.com']
    logging_outputs:
      - name: output_name
        type: remote_files
        remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
        async_writing: true
        client_count: 20
        io_buffer_size: 8192
    logging_flows:
      - name: flow_name
        inputs: [input_name]
        outputs: [output_name]
The playbook uses the following parameters:
logging_certificates
  The value of this parameter is passed on to certificate_requests in the certificate role and used to create a private key and certificate.
logging_pki_files
  Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
  Note
  If you are using logging_certificates to create the files on the target node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
  Represents the path to the CA certificate file on the target node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
  Represents the path to the certificate file on the target node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
  Represents the path to the private key file on the target node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
  Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
cert_src
  Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
  Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
tls
  Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file playbook.yml
16.7. Using the logging System Role with RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages, and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command, which enables message recovery of any kind.
You can consider a remote logging system in between the RELP client and the RELP server: the RELP client transfers the logs to the remote logging system, and the RELP server receives all the logs sent by the remote logging system.
Administrators can use the logging System Role to configure the logging system to reliably send and receive log entries.
16.7.1. Configuring client logging with RELP
You can use the logging System Role to configure RHEL systems that log to the local machine so that they transfer their logs to a remote logging system with RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
Procedure
Create a playbook.yml file with the following content:
---
- name: Deploying basic input and relp output
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: relp_client
        type: relp
        target: logging.server.com
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/client-cert.pem
        private_key: /etc/pki/tls/private/client-key.pem
        pki_authmode: name
        permitted_servers:
          - '*.server.example.com'
    logging_flows:
      - name: example_flow
        inputs: [basic_input]
        outputs: [relp_client]
The playbook uses the following settings:
- target: This is a required parameter that specifies the host name where the remote logging system is running.
- port: Port number the remote logging system is listening on.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate files, provided as the triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
  - If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
- ca_cert: Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path which is copied to the target host. If ca_cert is specified, it is copied to that location.
- cert_src: Represents the local certificate file path which is copied to the target host. If cert is specified, it is copied to that location.
- private_key_src: Represents the local key file path which is copied to the target host. If private_key is specified, it is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_servers: List of servers that will be allowed by the logging client to connect and send logs over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
Optional: Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook:
# ansible-playbook -i inventory_file playbook.yml
16.7.2. Configuring server logging with RELP
You can use the logging System Role to configure RHEL systems as a server that receives logs from a remote logging system with RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
Procedure
Create a playbook.yml file with the following content:
---
- name: Deploying remote input and remote_files output
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: relp_server
        type: relp
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/server-cert.pem
        private_key: /etc/pki/tls/private/server-key.pem
        pki_authmode: name
        permitted_clients:
          - '*example.client.com'
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: example_flow
        inputs: [relp_server]
        outputs: [remote_files_output]
The playbook uses the following settings:
- port: Port number the remote logging system is listening on.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate files, provided as the triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
  - If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
- ca_cert: Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path which is copied to the target host. If ca_cert is specified, it is copied to that location.
- cert_src: Represents the local certificate file path which is copied to the target host. If cert is specified, it is copied to that location.
- private_key_src: Represents the local key file path which is copied to the target host. If private_key is specified, it is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_clients: List of clients that will be allowed by the logging server to connect and send logs over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
Optional: Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook:
# ansible-playbook -i inventory_file playbook.yml
16.8. Additional resources
- Preparing a control node and managed nodes to use RHEL System Roles
- Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html
- RHEL System Roles
- ansible-playbook(1) man page