Chapter 35. Using the logging System Role
As a system administrator, you can use the logging System Role to configure a RHEL host as a logging server to collect logs from many client systems.
35.1. The logging System Role
With the logging System Role, you can deploy logging configurations on local and remote hosts.
To apply a logging System Role on one or more systems, you define the logging configuration in a playbook. A playbook is a list of one or more plays. Playbooks are human-readable, and they are written in the YAML format. For more information about playbooks, see Working with playbooks in Ansible documentation.
The set of systems that you want to configure according to the playbook is defined in an inventory file. For more information about creating and using inventories, see How to build your inventory in Ansible documentation.
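For illustration, a minimal inventory in INI format might look like the following sketch; the group names, host names, and addresses here are hypothetical:

```ini
; Hypothetical inventory sketch: group and host names are examples only
[servers]
server1 ansible_host=192.0.2.10

[clients]
client1 ansible_host=192.0.2.20
client2 ansible_host=192.0.2.21
```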
Logging solutions provide multiple ways of reading logs and multiple logging outputs.
For example, a logging system can receive the following inputs:
- local files
- systemd/journal
- another logging system over the network

In addition, a logging system can have the following outputs:

- logs stored in local files in the `/var/log` directory
- logs sent to Elasticsearch
- logs forwarded to another logging system
With the logging System Role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files.
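As a sketch of such a combination (the input, output, and flow names below are illustrative, not taken from this chapter's examples), the role variables could wire a journal input to a local file while file inputs are both stored locally and forwarded:

```yaml
# Illustrative sketch only: names are hypothetical
logging_inputs:
  - name: journal_in
    type: basics            # reads from the systemd journal
  - name: files_in
    type: files             # reads from local files
logging_outputs:
  - name: local_store
    type: files             # store in local log files
  - name: central_fwd
    type: forwards          # forward to another logging system
logging_flows:
  - name: journal_to_local
    inputs: [journal_in]
    outputs: [local_store]
  - name: files_to_both
    inputs: [files_in]
    outputs: [local_store, central_fwd]
```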
35.2. logging System Role parameters
In a logging System Role playbook, you define the inputs in the logging_inputs parameter, outputs in the logging_outputs parameter, and the relationships between the inputs and outputs in the logging_flows parameter. The logging System Role processes these variables with additional options to configure the logging system. You can also enable encryption or automatic port management.
Currently, the only available logging system in the logging System Role is Rsyslog.
- `logging_inputs`: List of inputs for the logging solution.
  - `name`: Unique name of the input. Used in the `logging_flows` inputs list and as a part of the generated config file name.
  - `type`: Type of the input element. The type specifies a task type which corresponds to a directory name in `roles/rsyslog/{tasks,vars}/inputs/`.
    - `basics`: Inputs configuring inputs from the systemd journal or unix socket.
      - `kernel_message`: Load `imklog` if set to `true`. Default to `false`.
      - `use_imuxsock`: Use `imuxsock` instead of `imjournal`. Default to `false`.
      - `ratelimit_burst`: Maximum number of messages that can be emitted within `ratelimit_interval`. Default to `20000` if `use_imuxsock` is false. Default to `200` if `use_imuxsock` is true.
      - `ratelimit_interval`: Interval to evaluate `ratelimit_burst`. Default to 600 seconds if `use_imuxsock` is false. Default to 0 if `use_imuxsock` is true. 0 indicates rate limiting is turned off.
      - `persist_state_interval`: Journal state is persisted every `value` messages. Default to `10`. Effective only when `use_imuxsock` is false.
    - `files`: Inputs configuring inputs from local files.
    - `remote`: Inputs configuring inputs from the other logging system over the network.
  - `state`: State of the configuration file. `present` or `absent`. Default to `present`.
- `logging_outputs`: List of outputs for the logging solution.
  - `files`: Outputs configuring outputs to local files.
  - `forwards`: Outputs configuring outputs to another logging system.
  - `remote_files`: Outputs configuring outputs from another logging system to local files.
- `logging_flows`: List of flows that define relationships between `logging_inputs` and `logging_outputs`. The `logging_flows` variable has the following keys:
  - `name`: Unique name of the flow.
  - `inputs`: List of `logging_inputs` name values.
  - `outputs`: List of `logging_outputs` name values.
- `logging_manage_firewall`: If set to `true`, the `logging` role uses the `firewall` role to automatically manage port access.
- `logging_manage_selinux`: If set to `true`, the `logging` role uses the `selinux` role to automatically manage port access.
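For instance, a minimal sketch (the input name and port here are hypothetical) that lets the role handle port access automatically could combine these variables as follows:

```yaml
# Sketch: automatic port management; names and ports are illustrative
logging_manage_firewall: true   # delegate firewall openings to the firewall role
logging_manage_selinux: true    # delegate SELinux port labeling to the selinux role
logging_inputs:
  - name: remote_in
    type: remote
    tcp_ports: [601]
```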
Additional resources
- Documentation installed with the `rhel-system-roles` package in `/usr/share/ansible/roles/rhel-system-roles.logging/README.html`
35.3. Applying a local logging System Role
Prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine records logs locally.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the `logging` System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.

  On the control node:
  - The `ansible-core` and `rhel-system-roles` packages are installed.

    Note: RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.

    RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
  - An inventory file which lists the managed nodes.
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
1. Create a new YAML file and open it in a text editor, for example:

   ```
   # vi logging-playbook.yml
   ```

2. Insert the following content:

   ```yaml
   ---
   - name: Deploying basics input and implicit files output
     hosts: all
     roles:
       - rhel-system-roles.logging
     vars:
       logging_inputs:
         - name: system_input
           type: basics
       logging_outputs:
         - name: files_output
           type: files
       logging_flows:
         - name: flow1
           inputs: [system_input]
           outputs: [files_output]
   ```

3. Run the playbook on a specific inventory:

   ```
   # ansible-playbook -i inventory-file /path/to/file/logging-playbook.yml
   ```

   Where:
   - inventory-file is the inventory file.
   - logging-playbook.yml is the playbook you use.
Verification
1. Test the syntax of the `/etc/rsyslog.conf` file:

   ```
   # rsyslogd -N 1
   rsyslogd: version 8.1911.0-6.el8, config validation run...
   rsyslogd: End of config validation run. Bye.
   ```

2. Verify that the system sends messages to the log:
   a. Send a test message:

      ```
      # logger test
      ```

   b. View the `/var/log/messages` log, for example:

      ```
      # cat /var/log/messages
      Aug  5 13:48:31 hostname root[6778]: test
      ```

      Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
35.4. Filtering logs in a local logging System Role
You can deploy a logging solution which filters the logs based on the rsyslog property-based filter.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the `logging` System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.

  On the control node:
  - Red Hat Ansible Core is installed.
  - The `rhel-system-roles` package is installed.
  - An inventory file which lists the managed nodes.
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
1. Create a new `playbook.yml` file with the following content:

   ```yaml
   ---
   - name: Deploying files input and configured files output
     hosts: all
     roles:
       - linux-system-roles.logging
     vars:
       logging_inputs:
         - name: files_input
           type: basics
       logging_outputs:
         - name: files_output0
           type: files
           property: msg
           property_op: contains
           property_value: error
           path: /var/log/errors.log
         - name: files_output1
           type: files
           property: msg
           property_op: "!contains"
           property_value: error
           path: /var/log/others.log
       logging_flows:
         - name: flow0
           inputs: [files_input]
           outputs: [files_output0, files_output1]
   ```

   Using this configuration, all messages that contain the error string are logged in `/var/log/errors.log`, and all other messages are logged in `/var/log/others.log`.

   You can replace the error property value with the string by which you want to filter. You can modify the variables according to your preferences.

2. Optional: Verify the playbook syntax:

   ```
   # ansible-playbook --syntax-check playbook.yml
   ```

3. Run the playbook on your inventory file:

   ```
   # ansible-playbook -i inventory_file /path/to/file/playbook.yml
   ```
Verification
1. Test the syntax of the `/etc/rsyslog.conf` file:

   ```
   # rsyslogd -N 1
   rsyslogd: version 8.1911.0-6.el8, config validation run...
   rsyslogd: End of config validation run. Bye.
   ```

2. Verify that the system sends messages that contain the error string to the log:
   a. Send a test message:

      ```
      # logger error
      ```

   b. View the `/var/log/errors.log` log, for example:

      ```
      # cat /var/log/errors.log
      Aug  5 13:48:31 hostname root[6778]: error
      ```

      Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- Documentation installed with the `rhel-system-roles` package in `/usr/share/ansible/roles/rhel-system-roles.logging/README.html`
35.5. Applying a remote logging solution using the logging System Role
Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a remote server. The server receives remote input from remote_rsyslog and remote_files and outputs the logs to local files in directories named by remote host names.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the `logging` System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.

  On the control node:
  - The `ansible-core` and `rhel-system-roles` packages are installed.
  - An inventory file which lists the managed nodes.

You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
1. Create a new YAML file and open it in a text editor, for example:

   ```
   # vi logging-playbook.yml
   ```

2. Insert the following content into the file:

   ```yaml
   ---
   - name: Deploying remote input and remote_files output
     hosts: server
     roles:
       - rhel-system-roles.logging
     vars:
       logging_inputs:
         - name: remote_udp_input
           type: remote
           udp_ports: [ 601 ]
         - name: remote_tcp_input
           type: remote
           tcp_ports: [ 601 ]
       logging_outputs:
         - name: remote_files_output
           type: remote_files
       logging_flows:
         - name: flow_0
           inputs: [remote_udp_input, remote_tcp_input]
           outputs: [remote_files_output]

   - name: Deploying basics input and forwards output
     hosts: clients
     roles:
       - rhel-system-roles.logging
     vars:
       logging_inputs:
         - name: basic_input
           type: basics
       logging_outputs:
         - name: forward_output0
           type: forwards
           severity: info
           target: host1.example.com
           udp_port: 601
         - name: forward_output1
           type: forwards
           facility: mail
           target: host1.example.com
           tcp_port: 601
       logging_flows:
         - name: flows0
           inputs: [basic_input]
           outputs: [forward_output0, forward_output1]
   ```

   Where host1.example.com is the logging server.

   Note: You can modify the parameters in the playbook to fit your needs.
Warning: The logging solution works only with the ports defined in the SELinux policy of the server or client system and opened in the firewall. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems.
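As a sketch of the SELinux change described in the warning (the port number 30514 here is only an example), you can label a custom port with the semanage utility on both systems and then confirm the labeling:

```
# semanage port -a -t syslogd_port_t -p tcp 30514
# semanage port -l | grep syslogd
```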
Create an inventory file that lists your servers and clients:
1. Create a new file and open it in a text editor, for example:

   ```
   # vi inventory.ini
   ```

2. Insert the following content into the inventory file:

   ```ini
   [servers]
   server ansible_host=host1.example.com

   [clients]
   client ansible_host=host2.example.com
   ```

   Where:
   - host1.example.com is the logging server.
   - host2.example.com is the logging client.
Run the playbook on your inventory.
```
# ansible-playbook -i /path/to/file/inventory.ini /path/to/file/logging-playbook.yml
```

Where:
- inventory.ini is the inventory file.
- logging-playbook.yml is the playbook you created.
Verification
1. On both the client and the server system, test the syntax of the `/etc/rsyslog.conf` file:

   ```
   # rsyslogd -N 1
   rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
   rsyslogd: End of config validation run. Bye.
   ```

2. Verify that the client system sends messages to the server:
   a. On the client system, send a test message:

      ```
      # logger test
      ```

   b. On the server system, view the `/var/log/messages` log, for example:

      ```
      # cat /var/log/messages
      Aug  5 13:48:31 host2.example.com root[6778]: test
      ```

      Where host2.example.com is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- Preparing a control node and managed nodes to use RHEL System Roles
- Documentation installed with the `rhel-system-roles` package in `/usr/share/ansible/roles/rhel-system-roles.logging/README.html`
- RHEL System Roles KB article
35.6. Using the logging System Role with TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over the computer network.
As an administrator, you can use the logging RHEL System Role to configure a secure transfer of logs using Red Hat Ansible Automation Platform.
35.6.1. Configuring client logging with TLS
You can use an Ansible playbook with the logging System Role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
You do not have to call the certificate System Role in the playbook to create the certificate. The logging System Role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
- The `ansible` and `rhel-system-roles` packages are installed on the control node.
- The managed nodes are enrolled in an IdM domain.
Procedure
1. Create a `playbook.yml` file with the following content:

   ```yaml
   ---
   - name: Deploying files input and forwards output with certs
     hosts: clients
     roles:
       - rhel-system-roles.logging
     vars:
       logging_certificates:
         - name: logging_cert
           dns: ['localhost', 'www.example.com']
           ca: ipa
       logging_pki_files:
         - ca_cert: /local/path/to/ca_cert.pem
           cert: /local/path/to/logging_cert.pem
           private_key: /local/path/to/logging_cert.pem
       logging_inputs:
         - name: input_name
           type: files
           input_log_path: /var/log/containers/*.log
       logging_outputs:
         - name: output_name
           type: forwards
           target: your_target_host
           tcp_port: 514
           tls: true
           pki_authmode: x509/name
           permitted_server: 'server.example.com'
       logging_flows:
         - name: flow_name
           inputs: [input_name]
           outputs: [output_name]
   ```

   The playbook uses the following parameters:

   - `logging_certificates`: The value of this parameter is passed on to `certificate_requests` in the `certificate` role and used to create a private key and certificate.
   - `logging_pki_files`: Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: `ca_cert`, `ca_cert_src`, `cert`, `cert_src`, `private_key`, `private_key_src`, and `tls`.

     Note: If you are using `logging_certificates` to create the files on the target node, do not use `ca_cert_src`, `cert_src`, and `private_key_src`, which are used to copy files not created by `logging_certificates`.

     - `ca_cert`: Represents the path to the CA certificate file on the target node. The default path is `/etc/pki/tls/certs/ca.pem` and the file name is set by the user.
     - `cert`: Represents the path to the certificate file on the target node. The default path is `/etc/pki/tls/certs/server-cert.pem` and the file name is set by the user.
     - `private_key`: Represents the path to the private key file on the target node. The default path is `/etc/pki/tls/private/server-key.pem` and the file name is set by the user.
     - `ca_cert_src`: Represents the path to the CA certificate file on the control node, which is copied to the target host to the location specified by `ca_cert`. Do not use this if using `logging_certificates`.
     - `cert_src`: Represents the path to a certificate file on the control node, which is copied to the target host to the location specified by `cert`. Do not use this if using `logging_certificates`.
     - `private_key_src`: Represents the path to a private key file on the control node, which is copied to the target host to the location specified by `private_key`. Do not use this if using `logging_certificates`.
     - `tls`: Setting this parameter to `true` ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set `tls: false`.

2. Verify the playbook syntax:

   ```
   # ansible-playbook --syntax-check playbook.yml
   ```

3. Run the playbook on your inventory file:

   ```
   # ansible-playbook -i inventory_file playbook.yml
   ```
35.6.2. Configuring server logging with TLS
You can use an Ansible playbook with the logging System Role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the server group in the Ansible inventory.
You do not have to call the certificate System Role in the playbook to create the certificate. The logging System Role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
- The `ansible` and `rhel-system-roles` packages are installed on the control node.
- The managed nodes are enrolled in an IdM domain.
Procedure
1. Create a `playbook.yml` file with the following content:

   ```yaml
   ---
   - name: Deploying remote input and remote_files output with certs
     hosts: server
     roles:
       - rhel-system-roles.logging
     vars:
       logging_certificates:
         - name: logging_cert
           dns: ['localhost', 'www.example.com']
           ca: ipa
       logging_pki_files:
         - ca_cert: /local/path/to/ca_cert.pem
           cert: /local/path/to/logging_cert.pem
           private_key: /local/path/to/logging_cert.pem
       logging_inputs:
         - name: input_name
           type: remote
           tcp_ports: 514
           tls: true
           permitted_clients: ['clients.example.com']
       logging_outputs:
         - name: output_name
           type: remote_files
           remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
           async_writing: true
           client_count: 20
           io_buffer_size: 8192
       logging_flows:
         - name: flow_name
           inputs: [input_name]
           outputs: [output_name]
   ```

   The playbook uses the following parameters:

   - `logging_certificates`: The value of this parameter is passed on to `certificate_requests` in the `certificate` role and used to create a private key and certificate.
   - `logging_pki_files`: Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: `ca_cert`, `ca_cert_src`, `cert`, `cert_src`, `private_key`, `private_key_src`, and `tls`.

     Note: If you are using `logging_certificates` to create the files on the target node, do not use `ca_cert_src`, `cert_src`, and `private_key_src`, which are used to copy files not created by `logging_certificates`.

     - `ca_cert`: Represents the path to the CA certificate file on the target node. The default path is `/etc/pki/tls/certs/ca.pem` and the file name is set by the user.
     - `cert`: Represents the path to the certificate file on the target node. The default path is `/etc/pki/tls/certs/server-cert.pem` and the file name is set by the user.
     - `private_key`: Represents the path to the private key file on the target node. The default path is `/etc/pki/tls/private/server-key.pem` and the file name is set by the user.
     - `ca_cert_src`: Represents the path to the CA certificate file on the control node, which is copied to the target host to the location specified by `ca_cert`. Do not use this if using `logging_certificates`.
     - `cert_src`: Represents the path to a certificate file on the control node, which is copied to the target host to the location specified by `cert`. Do not use this if using `logging_certificates`.
     - `private_key_src`: Represents the path to a private key file on the control node, which is copied to the target host to the location specified by `private_key`. Do not use this if using `logging_certificates`.
     - `tls`: Setting this parameter to `true` ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set `tls: false`.

2. Verify the playbook syntax:

   ```
   # ansible-playbook --syntax-check playbook.yml
   ```

3. Run the playbook on your inventory file:

   ```
   # ansible-playbook -i inventory_file playbook.yml
   ```
35.7. Using the logging System Roles with RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command, which enables recovery of any kind of lost message.
You can consider a remote logging system in between the RELP client and the RELP server: the RELP client transfers the logs to the remote logging system, and the RELP server receives all the logs sent by the remote logging system.
Administrators can use the logging System Role to configure the logging system to reliably send and receive log entries.
35.7.1. Configuring client logging with RELP
You can use the logging System Role to configure RHEL systems that log to a local machine so that they also transfer their logs to a remote logging system with RELP, by running an Ansible playbook.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The `ansible` and `rhel-system-roles` packages are installed on the control node.
Procedure
1. Create a `playbook.yml` file with the following content:

   ```yaml
   ---
   - name: Deploying basic input and relp output
     hosts: clients
     roles:
       - rhel-system-roles.logging
     vars:
       logging_inputs:
         - name: basic_input
           type: basics
       logging_outputs:
         - name: relp_client
           type: relp
           target: logging.server.com
           port: 20514
           tls: true
           ca_cert: /etc/pki/tls/certs/ca.pem
           cert: /etc/pki/tls/certs/client-cert.pem
           private_key: /etc/pki/tls/private/client-key.pem
           pki_authmode: name
           permitted_servers:
             - '*.server.example.com'
       logging_flows:
         - name: example_flow
           inputs: [basic_input]
           outputs: [relp_client]
   ```

   The playbook uses the following settings:

   - `target`: A required parameter that specifies the host name where the remote logging system is running.
   - `port`: Port number on which the remote logging system is listening.
   - `tls`: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the `tls` variable to `false`. By default, the `tls` parameter is set to `true` while working with RELP and requires keys and certificates as the triplets {`ca_cert`, `cert`, `private_key`} and/or {`ca_cert_src`, `cert_src`, `private_key_src`}:
     - If the {`ca_cert_src`, `cert_src`, `private_key_src`} triplet is set, the default locations `/etc/pki/tls/certs` and `/etc/pki/tls/private` are used as the destination on the managed node to transfer the files from the control node. In this case, the file names are identical to the original ones in the triplet.
     - If the {`ca_cert`, `cert`, `private_key`} triplet is set, the files are expected to be on the default path before the logging configuration.
     - If both triplets are set, the files are transferred from the local path on the control node to the specific path on the managed node.
   - `ca_cert`: Represents the path to the CA certificate. The default path is `/etc/pki/tls/certs/ca.pem` and the file name is set by the user.
   - `cert`: Represents the path to the certificate. The default path is `/etc/pki/tls/certs/server-cert.pem` and the file name is set by the user.
   - `private_key`: Represents the path to the private key. The default path is `/etc/pki/tls/private/server-key.pem` and the file name is set by the user.
   - `ca_cert_src`: Represents the local CA certificate file path, which is copied to the target host. If `ca_cert` is specified, it is copied to that location.
   - `cert_src`: Represents the local certificate file path, which is copied to the target host. If `cert` is specified, it is copied to that location.
   - `private_key_src`: Represents the local key file path, which is copied to the target host. If `private_key` is specified, it is copied to that location.
   - `pki_authmode`: Accepts the authentication mode as `name` or `fingerprint`.
   - `permitted_servers`: List of servers that the logging client allows to connect and send logs to over TLS.
   - `inputs`: List of logging input dictionaries.
   - `outputs`: List of logging output dictionaries.

2. Optional: Verify the playbook syntax:

   ```
   # ansible-playbook --syntax-check playbook.yml
   ```

3. Run the playbook:

   ```
   # ansible-playbook -i inventory_file playbook.yml
   ```
35.7.2. Configuring server logging with RELP
You can use the logging System Role to configure RHEL systems as a server that receives logs from a remote logging system with RELP, by running an Ansible playbook.
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The `ansible` and `rhel-system-roles` packages are installed on the control node.
Procedure
1. Create a `playbook.yml` file with the following content:

   ```yaml
   ---
   - name: Deploying remote input and remote_files output
     hosts: server
     roles:
       - rhel-system-roles.logging
     vars:
       logging_inputs:
         - name: relp_server
           type: relp
           port: 20514
           tls: true
           ca_cert: /etc/pki/tls/certs/ca.pem
           cert: /etc/pki/tls/certs/server-cert.pem
           private_key: /etc/pki/tls/private/server-key.pem
           pki_authmode: name
           permitted_clients:
             - '*example.client.com'
       logging_outputs:
         - name: remote_files_output
           type: remote_files
       logging_flows:
         - name: example_flow
           inputs: relp_server
           outputs: remote_files_output
   ```

   The playbook uses the following settings:

   - `port`: Port number on which the remote logging system is listening.
   - `tls`: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the `tls` variable to `false`. By default, the `tls` parameter is set to `true` while working with RELP and requires keys and certificates as the triplets {`ca_cert`, `cert`, `private_key`} and/or {`ca_cert_src`, `cert_src`, `private_key_src`}:
     - If the {`ca_cert_src`, `cert_src`, `private_key_src`} triplet is set, the default locations `/etc/pki/tls/certs` and `/etc/pki/tls/private` are used as the destination on the managed node to transfer the files from the control node. In this case, the file names are identical to the original ones in the triplet.
     - If the {`ca_cert`, `cert`, `private_key`} triplet is set, the files are expected to be on the default path before the logging configuration.
     - If both triplets are set, the files are transferred from the local path on the control node to the specific path on the managed node.
   - `ca_cert`: Represents the path to the CA certificate. The default path is `/etc/pki/tls/certs/ca.pem` and the file name is set by the user.
   - `cert`: Represents the path to the certificate. The default path is `/etc/pki/tls/certs/server-cert.pem` and the file name is set by the user.
   - `private_key`: Represents the path to the private key. The default path is `/etc/pki/tls/private/server-key.pem` and the file name is set by the user.
   - `ca_cert_src`: Represents the local CA certificate file path, which is copied to the target host. If `ca_cert` is specified, it is copied to that location.
   - `cert_src`: Represents the local certificate file path, which is copied to the target host. If `cert` is specified, it is copied to that location.
   - `private_key_src`: Represents the local key file path, which is copied to the target host. If `private_key` is specified, it is copied to that location.
   - `pki_authmode`: Accepts the authentication mode as `name` or `fingerprint`.
   - `permitted_clients`: List of clients that the logging server allows to connect and send logs over TLS.
   - `inputs`: List of logging input dictionaries.
   - `outputs`: List of logging output dictionaries.

2. Optional: Verify the playbook syntax:

   ```
   # ansible-playbook --syntax-check playbook.yml
   ```

3. Run the playbook:

   ```
   # ansible-playbook -i inventory_file playbook.yml
   ```
35.8. Additional resources
- Preparing a control node and managed nodes to use RHEL System Roles
- Documentation installed with the `rhel-system-roles` package in `/usr/share/ansible/roles/rhel-system-roles.logging/README.html`
- RHEL System Roles
- `ansible-playbook(1)` man page