System Administrator's Guide
Deployment, Configuration, and Administration of Red Hat Enterprise Linux 7
Abstract
Note
Part I. Basic System Configuration
Chapter 1. Getting Started
Note
Commands that can be performed only by the root user have # in the prompt, while commands that can be performed by a regular user have $ in their prompt.
What is Cockpit and Which Tasks it Can Be Used for
- Monitoring basic system features, such as hardware, internet connection, or performance characteristics
- Analyzing the content of the system log files
- Configuring basic networking features, such as interfaces, network logs, packet sizes
- Managing user accounts
- Monitoring and configuring system services
- Creating diagnostic reports
- Setting kernel dump configuration
- Configuring SELinux
- Managing system subscriptions
- Accessing the terminal
1.1. Basic Configuration of the Environment
- Date and Time
- System Locales
- Keyboard Layout
- When installing with the Anaconda installer, see: Date & Time, Language Support, and Keyboard Configuration in the Red Hat Enterprise Linux 7 Installation Guide.
- When installing with the Kickstart file, consult: Kickstart Commands and Options in the Red Hat Enterprise Linux 7 Installation Guide.
1.1.1. Introduction to Configuring the Date and Time
The system clock is synchronized using the NTP protocol, which is implemented by a daemon running in user space. The user space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources.
Red Hat Enterprise Linux 7 offers two implementations of NTP:
- chronyd — the chronyd daemon is used by default. It is available from the chrony package. For more information on configuring and using NTP with chronyd, see Chapter 17, Configuring NTP Using the chrony Suite.
- ntpd — the ntpd daemon is available from the ntp package. For more information on configuring and using NTP with ntpd, see Chapter 18, Configuring NTP Using ntpd.

If you want to use ntpd instead of the default chronyd, you need to disable chronyd, and install, enable, and configure ntpd, as shown in Chapter 18, Configuring NTP Using ntpd.
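For illustration, a minimal sketch of that switch, assuming the system can reach its package repositories; editing /etc/ntp.conf as described in Chapter 18 is still required:
~]# systemctl stop chronyd
~]# systemctl disable chronyd
~]# yum install ntp
~]# systemctl enable ntpd
~]# systemctl start ntpd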
Displaying the Current Date and Time
~]$ date
~]$ timedatectl
Note that the timedatectl command provides more verbose output, including universal time, the currently used time zone, the status of the Network Time Protocol (NTP) configuration, and some additional information.
1.1.2. Introduction to Configuring the System Locale
/etc/locale.conf file, which is read at early boot by the systemd daemon. The locale settings configured in /etc/locale.conf are inherited by every service or user, unless individual programs or individual users override them.
- Listing available system locale settings:
  ~]$ localectl list-locales
- Displaying the current status of the system locale settings:
  ~]$ localectl status
- Setting or changing the default system locale settings:
  ~]# localectl set-locale LANG=locale
1.1.3. Introduction to Configuring the Keyboard Layout
- Listing available keymaps:
  ~]$ localectl list-keymaps
- Displaying the current status of keymap settings:
  ~]$ localectl status
- Setting or changing the default system keymap:
  ~]# localectl set-keymap map
1.2. Configuring and Inspecting Network Access
1.2.1. Configuring Network Access During the Installation Process
- The menu at the Installation Summary screen in the graphical user interface of the Anaconda installation program
- The option in the text mode of the Anaconda installation program
- The Kickstart file
1.2.2. Managing Network Connections After the Installation Process Using nmcli
You must be the root user to manage network connections using the nmcli utility.
To create a new connection:
~]# nmcli con add type connection-type con-name "connection-name" ifname interface-name ip4 ip-address gw4 gateway-address
To modify a connection:
~]# nmcli con mod "connection-name"
To display all connections:
~]# nmcli con show
To display only the active connections:
~]# nmcli con show --active
To display a particular connection:
~]# nmcli con show "connection-name"
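For illustration, a hedged example of creating and activating a static Ethernet connection; the connection name "office", the interface enp0s3, and the addresses are placeholders, not values from this guide:
~]# nmcli con add type ethernet con-name "office" ifname enp0s3 ip4 192.168.1.10/24 gw4 192.168.1.1
~]# nmcli con mod "office" ipv4.dns 192.168.1.1
~]# nmcli con up "office"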
1.2.3. Managing Network Connections After the Installation Process Using nmtui
1.2.4. Managing Networking in Cockpit
- To display currently received and sent packets
- To display the most important characteristics of available network interfaces
- To display the content of the networking logs
- To add various types of network interfaces (bond, team, bridge, VLAN)

Figure 1.1. Managing Networking in Cockpit
1.3. The Basics of Registering the System and Managing Subscriptions
1.3.1. What are Red Hat Subscriptions and Which Tasks They Can Be Used for
- Registered systems
- Products installed on those systems
- Subscriptions attached to those products
1.3.2. Registering the System During the Installation
- Normally, registration is a part of the Initial Setup configuration process. For more information, see Red Hat Enterprise Linux 7 Installation Guide.
- Another option is to run Subscription Manager as a post-installation script, which performs the automatic registration at the moment when the installation is complete and before the system is rebooted for the first time. To ensure this, modify the %post section of the Kickstart file. For more detailed information on running Subscription Manager as a post-installation script, see the Red Hat Enterprise Linux 7 Installation Guide.
1.3.3. Registering the System After the Installation
The following procedure registers the system and attaches subscriptions using commands run as the root user.
Procedure 1.1. Registering and subscribing your system
- Register your system:
  ~]# subscription-manager register
  The command will prompt you to enter your Red Hat Customer Portal user name and password.
- Determine the pool ID of a subscription that you require:
  ~]# subscription-manager list --available
  This command displays all available subscriptions for your Red Hat account. For every subscription, various characteristics are displayed, including the pool ID.
- Attach the appropriate subscription to your system by replacing pool_id with the pool ID determined in the previous step:
  ~]# subscription-manager attach --pool=pool_id
1.4. Installing Software
1.4.1. Prerequisites for Software Installation
1.4.2. Introduction to the System of Software Packaging and Software Repositories
The configuration of software repositories is stored in the /etc/yum.repos.d/ directory.
Use the yum utility to manage package operations:
- Searching information about packages
- Installing packages
- Updating packages
- Removing packages
- Checking the list of currently available repositories
- Adding or removing a repository
- Enabling or disabling a repository
For more information on the yum utility, see Chapter 9, Yum.
1.4.3. Managing Basic Software-Installation Tasks with Subscription Manager and Yum
- Listing all available repositories:
  ~]# subscription-manager repos --list
- Listing all currently enabled repositories:
  ~]$ yum repolist
- Enabling or disabling a repository:
  ~]# subscription-manager repos --enable repository
  ~]# subscription-manager repos --disable repository
- Searching for packages matching a specific string:
  ~]$ yum search string
- Installing a package:
  ~]# yum install package_name
- Updating all packages and their dependencies:
  ~]# yum update
- Updating a package:
  ~]# yum update package_name
- Uninstalling a package and any packages that depend on it:
  ~]# yum remove package_name
- Listing information on all installed and available packages:
  ~]$ yum list all
- Listing information on all installed packages:
  ~]$ yum list installed
1.5. Making systemd Services Start at Boot Time
1.5.1. Enabling or Disabling the Services
To configure which services are enabled during the Kickstart installation, use the services option in the Kickstart file:
services [--disabled=list] [--enabled=list]
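For example, a line of the following form (the service names are illustrative) disables the listed services and enables sshd and chronyd:
services --disabled=rpcbind,postfix --enabled=sshd,chronyd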
Note
You can enable or disable a service after the installation by using the systemctl utility:
~]# systemctl enable service_name
~]# systemctl disable service_name

1.5.2. Managing Services in Cockpit

Figure 1.2. Managing Services in Cockpit
1.5.3. Additional Resources on systemd Services
1.6. Enhancing System Security with a Firewall, SELinux and SSH Logins
1.6.1. Ensuring the Firewall is Enabled and Running
1.6.1.1. What is a Firewall and How it Enhances System Security
The firewall on Red Hat Enterprise Linux 7 is provided by the firewalld service, which is automatically enabled during the installation of Red Hat Enterprise Linux. However, if you explicitly disabled the service, for example in the Kickstart configuration, you can re-enable it, as described in Section 1.6.1.2, “Re-enabling the firewalld Service”. For an overview of firewall setting options in the Kickstart file, see the Red Hat Enterprise Linux 7 Installation Guide.
1.6.1.2. Re-enabling the firewalld Service
If the firewalld service is disabled after the installation, Red Hat recommends considering re-enabling it.
You can display the current status of firewalld even as a regular user:
~]$ systemctl status firewalld
If firewalld is not enabled and running, switch to the root user, and change its status:
~]# systemctl start firewalld
~]# systemctl enable firewalld
For more information on firewalld, and for detailed information on configuring and using the firewall, see the Red Hat Enterprise Linux 7 Security Guide.
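To verify the result, you can query the state of the firewall with the firewall-cmd utility, which ships with firewalld; the output shown is the expected one when the service is running:
~]$ firewall-cmd --state
running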
1.6.2. Ensuring the Appropriate State of SELinux
1.6.2.1. What is SELinux and How it Enhances System Security
SELinux states
- Enabled
- Disabled
SELinux modes
- Enforcing
- Permissive
1.6.2.2. Ensuring the Required State of SELinux
Important
Procedure 1.2. Ensuring the required state of SELinux
- Display the current SELinux mode in effect:
  ~]$ getenforce
- If needed, switch between the SELinux modes. The switch can be either temporary or permanent. A temporary switch is not persistent across reboots, while a permanent switch is.
  - To temporarily switch to enforcing or permissive mode:
    ~]# setenforce Enforcing
    ~]# setenforce Permissive
  - To permanently set the SELinux mode, modify the SELINUX variable in the /etc/selinux/config configuration file. For example, to switch SELinux to enforcing mode:
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
1.6.2.3. Managing SELinux in Cockpit

Figure 1.3. Managing SELinux in Cockpit
1.6.3. Using SSH-based authentication
1.6.3.1. What is SSH-based Authentication and How it Enhances System Security
1.6.3.2. Establishing an SSH Connection
Procedure 1.3. Creating the Key Files and Copying Them to the Server
- Generate a public and a private key:
  ~]$ ssh-keygen
  Both keys are stored in the ~/.ssh/ directory:
  - ~/.ssh/id_rsa.pub — public key
  - ~/.ssh/id_rsa — private key
  The public key does not need to be secret. It is used to verify the private key. The private key is secret. You can choose to protect the private key with a passphrase that you specify during the key generation process. With a passphrase, authentication is even more secure, but it is no longer password-less. You can avoid this using the ssh-agent command. In this case, you will enter the passphrase only once, at the beginning of a session. For more information on ssh-agent configuration, see Section 12.2.4, “Using Key-based Authentication”.
- Copy the most recently modified public key to a remote machine you want to log into:
  ~]# ssh-copy-id USER@hostname
  As a result, you are now able to enter the system in a secure way, but without entering a password.
1.6.3.3. Disabling SSH Root Login
For an additional layer of security, you can disable SSH login for the root user, which is enabled by default.
Procedure 1.4. Disabling SSH root login
- Access the /etc/ssh/sshd_config file:
  ~]# vi /etc/ssh/sshd_config
- Change the line that reads #PermitRootLogin yes to:
  PermitRootLogin no
- Restart the sshd service:
  ~]# systemctl restart sshd
1.7. The Basics of Managing User Accounts
What are Groups and Which Purposes They Can Be Used for
1.7.1. The Most Basic Command-Line Tools to Manage User Accounts and Groups
- Displaying user and group IDs:
  ~]$ id
- Creating a new user account:
  ~]# useradd [options] user_name
- Assigning a new password to the user account belonging to user_name:
  ~]# passwd user_name
- Adding a user to a group:
  ~]# usermod -a -G group_name user_name
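Putting these commands together, a brief illustrative session; the user name sarah and the assigned UID are placeholders, not values from this guide:
~]# useradd sarah
~]# passwd sarah
Changing password for user sarah.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
~]# usermod -a -G wheel sarah
~]# id sarah
uid=1002(sarah) gid=1002(sarah) groups=1002(sarah),10(wheel)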
1.7.2. Managing User Accounts in Cockpit

Figure 1.4. Managing User Accounts in Cockpit
1.8. Dumping the Crashed Kernel Using the kdump Mechanism
Installation of the kdump service is a part of the installation process, and by default, kdump is enabled during the installation. This section summarizes how to activate kdump during the installation in Section 1.8.2, “Enabling and Activating kdump During the Installation Process”, and how to manually enable the kdump service if it is disabled after the installation in Section 1.8.3, “Ensuring that kdump is Installed and Enabled After the Installation Process”.
1.8.1. What is kdump and Which Tasks it Can Be Used for
1.8.2. Enabling and Activating kdump During the Installation Process
During the installation, you can enable kdump in the Anaconda installer or with the %addon com_redhat_kdump command in the Kickstart file. For more information, see:
- When installing with the Anaconda installer: Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide.
- When installing with the Kickstart file: Kickstart Commands and Options in the Red Hat Enterprise Linux 7 Installation Guide.
1.8.3. Ensuring that kdump is Installed and Enabled After the Installation Process
Procedure 1.5. Checking whether kdump is Installed and Configuring kdump
- To check whether kdump is installed on your system:
  ~]$ rpm -q kexec-tools
- If it is not installed, to install kdump, enter as the root user:
  ~]# yum install kexec-tools
- To configure kdump, use either the command line or the graphical user interface. Both options are described in detail in the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide. If you need to install the graphical configuration tool:
  ~]# yum install system-config-kdump
1.8.4. Configuring kdump in Cockpit
- the kdump status
- the amount of memory reserved for kdump
- the location of the crash dump files

Figure 1.5. Configuring kdump in Cockpit
1.8.5. Additional Resources on kdump
1.9. Performing System Rescue and Creating System Backup with ReaR
1.9.1. What is ReaR and Which Tasks it Can Be Used for
- Booting a rescue system on the new hardware
- Replicating the original storage layout
- Restoring user and system files
1.9.2. Quickstart to Installation and Configuration of ReaR
To install ReaR, enter the following command as the root user:
~]# yum install rear genisoimage syslinux
Use the /etc/rear/local.conf file to configure ReaR.
1.9.3. Quickstart to Creation of the Rescue System with ReaR
To create the rescue system, enter the following command as the root user:
~]# rear mkrescue

1.9.4. Quickstart to Configuration of ReaR with the Backup Software
ReaR contains a fully-integrated built-in backup method, NETFS. To make ReaR store the file backup, add these lines to the /etc/rear/local.conf file:
BACKUP=NETFS
BACKUP_URL=backup location
To make ReaR keep the previous backup archive when a new one is created, add this line to /etc/rear/local.conf:
NETFS_KEEP_OLD_BACKUP_COPY=y
To make the backups incremental, meaning that only the changed files are backed up on each run, add this line to /etc/rear/local.conf:
BACKUP_TYPE=incremental
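For illustration, a hedged example of a complete /etc/rear/local.conf combining these settings; the NFS server name and path are placeholders, and the OUTPUT=ISO line is an assumption (a commonly used ReaR output method) rather than a value from this section:
# Create the rescue image as an ISO (assumed output method)
OUTPUT=ISO
# Use the built-in NETFS backup method
BACKUP=NETFS
# Store the backup on an NFS share (placeholder server and path)
BACKUP_URL=nfs://backup.example.com/exports/rear
# Keep the previous backup archive
NETFS_KEEP_OLD_BACKUP_COPY=y
# Back up only files changed since the last full backup
BACKUP_TYPE=incremental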
1.10. Using the Log Files to Troubleshoot Problems
1.10.1. Services Handling the syslog Messages
- the systemd-journald daemon — Collects messages from the kernel, the early stages of the boot process, standard output and error of daemons as they start up and run, and syslog, and forwards the messages to the rsyslog service for further processing.
- the rsyslog service — Sorts the syslog messages by type and priority, and writes them to the files in the /var/log directory, where the logs are persistently stored.
1.10.2. Subdirectories Storing the syslog Messages
The syslog messages are stored in various subdirectories under the /var/log directory according to what kind of messages and logs they contain:
- /var/log/messages — all syslog messages except those mentioned below
- /var/log/secure — security and authentication-related messages and errors
- /var/log/maillog — mail server-related messages and errors
- /var/log/cron — log files related to periodically executed tasks
- /var/log/boot.log — log files related to system startup
1.11. Accessing Red Hat Support
- Obtaining Red Hat support, see Section 1.11.1, “Obtaining Red Hat Support Through Red Hat Customer Portal”
- Using the SOS report to troubleshoot problems, see Section 1.11.2, “Using the SOS Report to Troubleshoot Problems”
1.11.1. Obtaining Red Hat Support Through Red Hat Customer Portal
- Open a new support case
- Initiate a live chat with a Red Hat expert
- Contact a Red Hat expert by making a call or sending an email
- Web browser
- Red Hat Support Tool
1.11.1.1. What is the Red Hat Support Tool and Which Tasks It Can Be Used For
- Opening or updating support cases
- Searching in the Red Hat knowledge base solutions
- Analyzing Python and Java errors
~]$ redhat-support-tool
Welcome to the Red Hat Support Tool.
Command (? for help):
Command (? for help): ?
1.11.2. Using the SOS Report to Troubleshoot Problems
To install the sos package:
~]# yum install sos
To generate the report:
~]# sosreport

Chapter 2. System Locale and Keyboard Configuration
The system locale and keyboard layout can be set by modifying the /etc/locale.conf configuration file or by using the localectl utility. Also, you can use the graphical user interface to perform the task; for a description of this method, see the Red Hat Enterprise Linux 7 Installation Guide.
2.1. Setting the System Locale
/etc/locale.conf file, which is read at early boot by the systemd daemon. The locale settings configured in /etc/locale.conf are inherited by every service or user, unless individual programs or individual users override them.
The basic file format of /etc/locale.conf is a newline-separated list of variable assignments. For example, a German locale with English messages in /etc/locale.conf looks as follows:
LANG=de_DE.UTF-8
LC_MESSAGES=C
Besides these two variables, you can use several other options in /etc/locale.conf; the most relevant are summarized in Table 2.1, “Options configurable in /etc/locale.conf”. See the locale(7) manual page for detailed information on these options. Note that the LC_ALL option, which represents all possible options, should not be configured in /etc/locale.conf.
Table 2.1. Options configurable in /etc/locale.conf
| Option | Description |
|---|---|
| LANG | Provides a default value for the system locale. |
| LC_COLLATE | Changes the behavior of functions which compare strings in the local alphabet. |
| LC_CTYPE | Changes the behavior of the character handling and classification functions and the multibyte character functions. |
| LC_NUMERIC | Describes the way numbers are usually printed, with details such as decimal point versus decimal comma. |
| LC_TIME | Changes the display of the current time, 24-hour versus 12-hour clock. |
| LC_MESSAGES | Determines the locale used for diagnostic messages written to the standard error output. |
2.1.1. Displaying the Current Status
The localectl command can be used to query and change the system locale and keyboard layout settings. To show the current settings, use the status option:
localectl status
Example 2.1. Displaying the Current Status
~]$ localectl status
   System Locale: LANG=en_US.UTF-8
       VC Keymap: us
      X11 Layout: n/a
2.1.2. Listing Available Locales
To list all locales available for your system, type:
localectl list-locales
Example 2.2. Listing Locales
~]$ localectl list-locales | grep en_
en_AG
en_AG.utf8
en_AU
en_AU.iso88591
en_AU.utf8
en_BW
en_BW.iso88591
en_BW.utf8
output truncated
2.1.3. Setting the Locale
To set the default system locale, use the following command as root:
localectl set-locale LANG=locale
Replace locale with the locale name found with the localectl list-locales command. The above syntax can also be used to configure parameters from Table 2.1, “Options configurable in /etc/locale.conf”.
Example 2.3. Changing the Default Locale
First find the name of the locale you want to use with list-locales. Then, as root, type the command in the following form:
~]# localectl set-locale LANG=en_GB.utf8
2.1.4. Making System Locale Settings Permanent when Installing with Kickstart
When the %packages section of the Kickstart file includes the --instLangs option, the _install_langs RPM macro is set to the particular value for this installation, and the set of installed locales is adjusted accordingly. However, this adjustment affects only this installation, not subsequent upgrades. If an upgrade reinstalls the glibc package, the entire set of locales is upgraded instead of only the locales you requested during the installation.
- If you have not started the Kickstart installation, modify the Kickstart file to include instructions for setting RPM macros globally by applying this procedure: Procedure 2.1, “Setting RPM macros during the Kickstart installation”
- If you have already installed the system, set RPM macros globally on the system by applying this procedure: Procedure 2.2, “Setting RPM macros globally”
Procedure 2.1. Setting RPM macros during the Kickstart installation
- Modify the %post section of the Kickstart file:
  LANG=en_US
  echo "%_install_langs $LANG" > /etc/rpm/macros.language-conf
  awk '(NF==0&&!done){print "override_install_langs='$LANG'";done=1}{print}' \
      < /etc/yum.conf > /etc/yum.conf.new
  mv /etc/yum.conf.new /etc/yum.conf
Procedure 2.2. Setting RPM macros globally
- Create the RPM configuration file at /etc/rpm/macros.language-conf with the following contents:
  %_install_langs LANG
  where LANG is the value of the --instLangs option.
- Update the /etc/yum.conf file with:
  override_install_langs=LANG
2.2. Changing the Keyboard Layout
2.2.1. Displaying the Current Settings
localectl status
Example 2.4. Displaying the Keyboard Settings
~]$ localectl status
   System Locale: LANG=en_US.utf8
       VC Keymap: us
      X11 Layout: us
2.2.2. Listing Available Keymaps
To list all available keymaps, type:
localectl list-keymaps
Example 2.5. Searching for a Particular Keymap
You can use grep to search the output of the previous command for a specific keymap name. There are often multiple keymaps compatible with your currently set locale. For example, to find available Czech keyboard layouts, type:
~]$ localectl list-keymaps | grep cz
cz
cz-cp1250
cz-lat2
cz-lat2-prog
cz-qwerty
cz-us-qwertz
sunt5-cz-us
sunt5-us-cz
2.2.3. Setting the Keymap
To set the default keymap for your system, use the following command as root:
localectl set-keymap map
Replace map with the name of the keymap taken from the output of the localectl list-keymaps command. Unless the --no-convert option is passed, the selected setting is also applied to the default keyboard mapping of the X11 window system, after converting it to the closest matching X11 keyboard mapping. This also applies in reverse; you can specify both keymaps with the following command as root:
localectl set-x11-keymap map
If you want your X11 layout to differ from the console layout, use the --no-convert option:
localectl --no-convert set-x11-keymap map
Example 2.6. Setting the X11 Keymap Separately
Imagine you want to use the German layout in the graphical interface, but retain the US keymap for console operations. To do so, type as root:
~]# localectl --no-convert set-x11-keymap de
Then you can verify that your setting was successful:
~]$ localectl status
   System Locale: LANG=de_DE.UTF-8
       VC Keymap: us
      X11 Layout: de
Apart from the keyboard layout (map), three other options can be specified:
localectl set-x11-keymap map model variant options
Replace model with the keyboard model name, and variant and options with keyboard variant and option components, which can be used to enhance the keyboard behavior. These options are not set by default. For more information on the X11 model, variant, and options, see the kbd(4) man page.
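For instance, to set a German X11 layout on a 105-key PC keyboard with the nodeadkeys variant (the model and variant values here are illustrative):
~]# localectl set-x11-keymap de pc105 nodeadkeys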
2.3. Additional Resources
Installed Documentation
- localectl(1) — The manual page for the localectl command line utility documents how to use this tool to configure the system locale and keyboard layout.
- loadkeys(1) — The manual page for the loadkeys command provides more information on how to use this tool to change the keyboard layout in a virtual console.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
- Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
Chapter 3. Configuring the Date and Time
- A real-time clock (RTC), commonly referred to as a hardware clock, (typically an integrated circuit on the system board) that is completely independent of the current state of the operating system and runs even when the computer is shut down.
- A system clock, also known as a software clock, that is maintained by the kernel and its initial value is based on the real-time clock. Once the system is booted and the system clock is initialized, the system clock is completely independent of the real-time clock.
Red Hat Enterprise Linux 7 offers three command line tools that can be used to configure the date and time: the timedatectl utility, which is new in Red Hat Enterprise Linux 7 and is part of systemd; the traditional date command; and the hwclock utility for accessing the hardware clock.
3.1. Using the timedatectl Command
The timedatectl utility is distributed as part of the systemd system and service manager and allows you to review and change the configuration of the system clock. You can use this tool to change the current date and time, set the time zone, or enable automatic synchronization of the system clock with a remote server.
3.1.1. Displaying the Current Date and Time
To display the current date and time along with detailed information about the configuration of the system and hardware clock, run the timedatectl command with no additional command line options:
timedatectl
This displays the local and universal time, the currently used time zone, the status of the Network Time Protocol (NTP) configuration, and additional information related to DST.
Example 3.1. Displaying the Current Date and Time
The following is an example output of the timedatectl command on a system that does not use NTP to synchronize the system clock with a remote server:
~]$ timedatectl
      Local time: Mon 2013-09-16 19:30:24 CEST
  Universal time: Mon 2013-09-16 17:30:24 UTC
        Timezone: Europe/Prague (CEST, +0200)
     NTP enabled: no
NTP synchronized: no
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2013-03-31 01:59:59 CET
                  Sun 2013-03-31 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2013-10-27 02:59:59 CEST
                  Sun 2013-10-27 02:00:00 CET

Important
Changes to the status of chrony or ntpd will not be immediately noticed by timedatectl. If changes to the configuration or status of these tools are made, enter the following command:
~]# systemctl restart systemd-timedated.service
3.1.2. Changing the Current Time
To change the current time, type the following at a shell prompt as root:
timedatectl set-time HH:MM:SS
Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form.
This command updates both the system time and the hardware clock. The result is similar to using both the date --set and hwclock --systohc commands.
The command will fail if an NTP service is enabled. See Section 3.1.5, “Synchronizing the System Clock with a Remote Server” to temporarily disable the service.
Example 3.2. Changing the Current Time
For example, to change the current time to 11:26 p.m., run the following command as root:
~]# timedatectl set-time 23:26:00
By default, the system is configured to use UTC. To configure your system to maintain the clock in the local time, run the timedatectl command with the set-local-rtc option as root:
timedatectl set-local-rtc boolean
To configure your system to maintain the clock in the local time, replace boolean with yes (or, alternatively, y, true, t, or 1). To configure the system to use UTC, replace boolean with no (or, alternatively, n, false, f, or 0). The default option is no.
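For example, to keep the real-time clock in UTC, which is the default and generally recommended setting, you would run as root:
~]# timedatectl set-local-rtc no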
3.1.3. Changing the Current Date
To change the current date, type the following at a shell prompt as root:
timedatectl set-time YYYY-MM-DD
Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month.
Example 3.3. Changing the Current Date
For example, to change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root:
~]# timedatectl set-time "2017-06-02 23:26:00"

3.1.4. Changing the Time Zone
To list all available time zones, type the following at a shell prompt:
timedatectl list-timezones
To change the currently used time zone, type as root:
timedatectl set-timezone time_zone
Replace time_zone with any of the values listed by the timedatectl list-timezones command.
Example 3.4. Changing the Time Zone
You can list all available time zones using the timedatectl command with the list-timezones command line option. For example, to list all available time zones in Europe, type:
~]# timedatectl list-timezones | grep Europe
Europe/Amsterdam
Europe/Andorra
Europe/Athens
Europe/Belgrade
Europe/Berlin
Europe/Bratislava
…
To change the time zone to Europe/Prague, type as root:
~]# timedatectl set-timezone Europe/Prague

3.1.5. Synchronizing the System Clock with a Remote Server
timedatectl command also allows you to enable automatic synchronization of your system clock with a group of remote servers using the NTP protocol. Enabling NTP enables the chronyd or ntpd service, depending on which of them is installed.
The automatic synchronization of the system clock with a remote NTP server can be enabled and disabled using a command as follows:
timedatectl set-ntp boolean
To enable your system to synchronize the system clock with a remote NTP server, replace boolean with yes (the default option). To disable this feature, replace boolean with no.
Example 3.5. Synchronizing the System Clock with a Remote Server
To enable the synchronization, type:
~]# timedatectl set-ntp yes
The command will fail if an NTP service is not installed. See Section 17.3.1, “Installing chrony” for more information.
3.2. Using the date Command
The date utility is available on all Linux systems and allows you to display and configure the current date and time. It is frequently used in scripts to display detailed information about the system clock in a custom format.
For information on how to perform these tasks with a more powerful utility, see Section 3.1, “Using the timedatectl Command”.
3.2.1. Displaying the Current Date and Time
To display the current date and time, run the date command with no additional command line options:
date
This displays the day of the week followed by the current date, local time, time zone abbreviation, and year.
By default, the date command displays the local time. To display the time in UTC, run the command with the --utc or -u command line option:
date --utc
You can also customize the format of the displayed information by providing the +"format" option on the command line:
date +"format"
Replace format with one or more supported control sequences. See Table 3.1, “Commonly Used Control Sequences” for the most frequently used formatting options, or the date(1) manual page for a complete list of these options.
Table 3.1. Commonly Used Control Sequences
| Control Sequence | Description |
|---|---|
| %H | The hour in the HH format (for example, 17). |
| %M | The minute in the MM format (for example, 30). |
| %S | The second in the SS format (for example, 24). |
| %d | The day of the month in the DD format (for example, 16). |
| %m | The month in the MM format (for example, 09). |
| %Y | The year in the YYYY format (for example, 2016). |
| %Z | The time zone abbreviation (for example, CEST). |
| %F | The full date in the YYYY-MM-DD format (for example, 2016-09-16). This option is equal to %Y-%m-%d. |
| %T | The full time in the HH:MM:SS format (for example, 17:30:24). This option is equal to %H:%M:%S. |
Example 3.6. Displaying the Current Date and Time
To display the current date and local time, type:
~]$ date
Fri Sep 16 17:30:24 CEST 2016

To display the current date and time in UTC, type:
~]$ date --utc
Fri Sep 16 15:30:34 UTC 2016

To customize the output of the date command, type:
~]$ date +"%Y-%m-%d %H:%M"
2016-09-16 17:30

3.2.2. Changing the Current Time
To change the current time, run the date command with the --set or -s option as root:
date --set HH:MM:SS
By default, the date command sets the system clock to the local time. To set the system clock in UTC, run the command with the --utc or -u command line option:
date --set HH:MM:SS --utc
Example 3.7. Changing the Current Time
For example, to change the current time to 11:26 p.m., run the following command as root:
~]# date --set 23:26:00

3.2.3. Changing the Current Date
To change the current date, run the date command with the --set or -s option as root:
date --set YYYY-MM-DD
Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month.
Example 3.8. Changing the Current Date
For example, to change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root:
~]# date --set "2017-06-02 23:26:00"

3.3. Using the hwclock Command
hwclock is a utility for accessing the hardware clock, also referred to as the Real Time Clock (RTC). The hardware clock is independent of the operating system you use and works even when the machine is shut down. This utility is used for displaying the time from the hardware clock. hwclock also contains facilities for compensating for systematic drift in the hardware clock.
hwclock utility saves its settings in the /etc/adjtime file, which is created with the first change you make, for example, when you set the time manually or synchronize the hardware clock with the system time.
Note
In Red Hat Enterprise Linux 6, the hwclock command was run automatically on every system shutdown or reboot, but it is not in Red Hat Enterprise Linux 7. When the system clock is synchronized by the Network Time Protocol (NTP) or Precision Time Protocol (PTP), the kernel automatically synchronizes the hardware clock to the system clock every 11 minutes.
3.3.1. Displaying the Current Date and Time
Running hwclock with no command line options as the root user returns the date and time in local time to standard output.
hwclock
Note that using the --utc or --localtime options with the hwclock command does not mean you are displaying the hardware clock time in UTC or local time. These options are used for setting the hardware clock to keep time in either of them. The time is always displayed in local time. Additionally, using the hwclock --utc or hwclock --local commands does not change the record in the /etc/adjtime file. This command can be useful when you know that the setting saved in /etc/adjtime is incorrect but you do not want to change the setting. On the other hand, you may receive misleading information if you use the command in an incorrect way. See the hwclock(8) manual page for more details.
Example 3.9. Displaying the Current Date and Time
To display the current date and time from the hardware clock, run hwclock as root:
~]# hwclock
Tue 15 Apr 2014 04:23:46 PM CEST  -0.329272 seconds

3.3.2. Setting the Date and Time
To set the hardware clock to a specific date and time, use the --set and --date options along with your specification:
hwclock --set --date "dd mmm yyyy HH:MM"
Replace dd with a day (a two-digit number), mmm with a month (a three-letter abbreviation), yyyy with a year (a four-digit number), HH with an hour, and MM with a minute (two-digit numbers). You can also set the hardware clock to keep UTC or local time by adding the --utc or --localtime options, respectively. In this case, UTC or LOCAL is recorded in the /etc/adjtime file.
Example 3.10. Setting the Hardware Clock to a Specific Date and Time
For example, to set the hardware clock to 21:17 on 21 October 2016 and keep it in UTC, run the command as root in the following format:
~]# hwclock --set --date "21 Oct 2016 21:17" --utc

3.3.3. Synchronizing the Date and Time
- Either you can set the hardware clock to the current system time by using this command:
  hwclock --systohc
  Note that if you use NTP, the hardware clock is automatically synchronized to the system clock every 11 minutes, and this command is useful only at boot time to get a reasonable initial system time.
- Or, you can set the system time from the hardware clock by using the following command:
  hwclock --hctosys
When synchronizing, you can specify whether to keep the hardware clock in UTC or local time by using the --utc or --localtime option. Similarly to using --set, UTC or LOCAL is recorded in the /etc/adjtime file.
The hwclock --systohc --utc command is functionally similar to timedatectl set-local-rtc false and the hwclock --systohc --local command is an alternative to timedatectl set-local-rtc true.
Example 3.11. Synchronizing the Hardware Clock with System Time
For example, to set the hardware clock to the current system time and keep it in local time, run the following command as root:
~]# hwclock --systohc --localtime

3.4. Additional Resources
Installed Documentation
- timedatectl(1) — The manual page for the timedatectl command line utility documents how to use this tool to query and change the system clock and its settings.
- date(1) — The manual page for the date command provides a complete list of supported command line options.
- hwclock(8) — The manual page for the hwclock command provides a complete list of supported command line options.
See Also
- Chapter 2, System Locale and Keyboard Configuration documents how to configure the keyboard layout.
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
- Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
Chapter 4. Managing Users and Groups
4.1. Introduction to Users and Groups
Ownership of files can be changed only by the root user, and access permissions can be changed by both the root user and the file owner.
Reserved User and Group IDs
Red Hat Enterprise Linux reserves user and group IDs below 1000 for system users and groups. The reserved IDs are documented in the setup package; to view them:
cat /usr/share/doc/setup*/uidgid
The recommended practice is to assign IDs starting at 5,000 that were not already reserved, as the reserved range can increase in the future. To make the IDs assigned to new users by default start at 5,000, change the UID_MIN and GID_MIN directives in the /etc/login.defs file:
[file contents truncated] UID_MIN 5000 [file contents truncated] GID_MIN 5000 [file contents truncated]
Note
For users created before you changed the UID_MIN and GID_MIN directives, UIDs will still start at the default 1000.
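To verify the change, you can check the directives and create a test user; the output below is illustrative:
~]# grep -E '^UID_MIN|^GID_MIN' /etc/login.defs
UID_MIN                  5000
GID_MIN                  5000
~]# useradd testuser
~]# id testuser
uid=5000(testuser) gid=5000(testuser) groups=5000(testuser)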
4.1.1. User Private Groups
A umask for the shell is set in the /etc/bashrc file. Traditionally on UNIX-based systems, the umask is set to 022, which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator's group, are not allowed to make any modifications. However, under the UPG scheme, this “group protection” is not necessary since every user has their own private group. See Section 4.3.5, “Setting Default Permissions for New Files Using umask” for more information.
A list of all groups is stored in the /etc/group configuration file.
4.1.2. Shadow Passwords
- Shadow passwords improve system security by moving encrypted password hashes from the world-readable /etc/passwd file to /etc/shadow, which is readable only by the root user.
- Shadow passwords store information about password aging.
- Shadow passwords allow enforcing some of the security policies set in the /etc/login.defs file.
Since most of the information on passwords is stored in the /etc/shadow file, some utilities and commands do not work without first enabling shadow passwords:
- The chage utility for setting password aging parameters. For details, see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide.
- The gpasswd utility for administrating the /etc/group file.
- The usermod command with the -e, --expiredate or -f, --inactive option.
- The useradd command with the -e, --expiredate or -f, --inactive option.
4.2. Managing Users in a Graphical Environment
4.2.1. Using the Users Settings Tool
Users and then press Enter. The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Space bar. Alternatively, you can open the Users utility from the menu after clicking your user name in the top right corner of the screen.
To make changes, you first need to unlock the settings by clicking the Unlock button and authenticating as root. To add and remove users, select the + and - buttons respectively. To add a user to the administrative group wheel, change the Account Type from Standard to Administrator. To edit a user's language setting, select the language and a drop-down menu appears.

Figure 4.1. The Users Settings Tool

Figure 4.2. The Password Menu
4.3. Using Command-Line Tools
Table 4.1. Command line utilities for managing users and groups
| Utilities | Description |
|---|---|
| id | Displays user and group IDs. |
| useradd, usermod, userdel | Standard utilities for adding, modifying, and deleting user accounts. |
| groupadd, groupmod, groupdel | Standard utilities for adding, modifying, and deleting groups. |
| gpasswd | Utility primarily used for modification of group password in the /etc/gshadow file which is used by the newgrp command. |
| pwck, grpck | Utilities that can be used for verification of the password, group, and associated shadow files. |
| pwconv, pwunconv | Utilities that can be used for the conversion of passwords to shadow passwords, or back from shadow passwords to standard passwords. |
| grpconv, grpunconv | Similar to the previous, these utilities can be used for conversion of shadowed information for group accounts. |
4.3.1. Adding a New User
To add a new user to the system, type the following at a shell prompt as root:
useradd [options] username
By default, the useradd command creates a locked user account. To unlock the account, run the following command as root to assign a password:
passwd username

Table 4.2. Common useradd command-line options
| Option | Description |
|---|---|
| -c 'comment' | comment can be replaced with any string. This option is generally used to specify the full name of a user. |
| -d home_directory | Home directory to be used instead of default /home/username/. |
| -e date | Date for the account to be disabled in the format YYYY-MM-DD. |
| -f days | Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires. |
| -g group_name | Group name or group number for the user's default (primary) group. The group must exist prior to being specified here. |
| -G group_list | List of additional (supplementary, other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here. |
| -m | Create the home directory if it does not exist. |
| -M | Do not create the home directory. |
| -N | Do not create a user private group for the user. |
| -p password | The password encrypted with crypt. |
| -r | Create a system account with a UID less than 1000 and without a home directory. |
| -s | User's login shell, which defaults to /bin/bash. |
| -u uid | User ID for the user, which must be unique and greater than 999. |
Important
The default ID ranges mentioned above can differ on your system, as they are specified in the /etc/login.defs file.
Explaining the Process
The following steps illustrate what happens if the command useradd juan is issued on a system that has shadow passwords enabled:
- A new line for juan is created in /etc/passwd:
  juan:x:1001:1001::/home/juan:/bin/bash
  The line has the following characteristics:
  - It begins with the user name juan.
  - There is an x for the password field indicating that the system is using shadow passwords.
  - A UID greater than 999 is created. Under Red Hat Enterprise Linux 7, UIDs below 1000 are reserved for system use and should not be assigned to users.
  - A GID greater than 999 is created. Under Red Hat Enterprise Linux 7, GIDs below 1000 are reserved for system use and should not be assigned to users.
  - The optional GECOS information is left blank. The GECOS field can be used to provide additional information about the user, such as their full name or phone number.
  - The home directory for juan is set to /home/juan/.
  - The default shell is set to /bin/bash.
- A new line for juan is created in /etc/shadow:
  juan:!!:14798:0:99999:7:::
  The line has the following characteristics:
  - It begins with the user name juan.
  - Two exclamation marks (!!) appear in the password field of the /etc/shadow file, which locks the account.
    Note: If an encrypted password is passed using the -p flag, it is placed in the /etc/shadow file on the new line for the user.
  - The password is set to never expire.
- A new line for a group named juan is created in /etc/group:
  juan:x:1001:
  A group with the same name as a user is called a user private group. For more information on user private groups, see Section 4.1.1, “User Private Groups”.
  The line created in /etc/group has the following characteristics:
  - It begins with the group name juan.
  - An x appears in the password field indicating that the system is using shadow group passwords.
  - The GID matches the one listed for juan's primary group in /etc/passwd.
- A new line for a group named juan is created in /etc/gshadow:
  juan:!::
  The line has the following characteristics:
  - It begins with the group name juan.
  - An exclamation mark (!) appears in the password field of the /etc/gshadow file, which locks the group.
  - All other fields are blank.
- A directory for user juan is created in the /home directory:
  ~]# ls -ld /home/juan
  drwx------. 4 juan juan 4096 Mar  3 18:23 /home/juan
  This directory is owned by user juan and group juan. It has read, write, and execute privileges only for the user juan. All other permissions are denied.
/etc/skel/directory (which contain default user settings) are copied into the new/home/juan/directory:~]#
ls -la /home/juantotal 28 drwx------. 4 juan juan 4096 Mar 3 18:23 . drwxr-xr-x. 5 root root 4096 Mar 3 18:23 .. -rw-r--r--. 1 juan juan 18 Jun 22 2010 .bash_logout -rw-r--r--. 1 juan juan 176 Jun 22 2010 .bash_profile -rw-r--r--. 1 juan juan 124 Jun 22 2010 .bashrc drwxr-xr-x. 4 juan juan 4096 Nov 23 15:09 .mozilla
At this point, a locked account called juan exists on the system. To activate it, the administrator must next assign a password to the account using the passwd command and, optionally, set password aging guidelines (see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide for details).
4.3.2. Adding a New Group
To add a new group to the system, type the following at a shell prompt as root:
groupadd [options] group_name

Table 4.3. Common groupadd command-line options
| Option | Description |
|---|---|
| -f, --force | When used with -g gid and gid already exists, groupadd will choose another unique gid for the group. |
| -g gid | Group ID for the group, which must be unique and greater than 999. |
| -K, --key key=value | Override /etc/login.defs defaults. |
| -o, --non-unique | Allows creating groups with duplicate GID. |
| -p, --password password | Use this encrypted password for the new group. |
| -r | Create a system group with a GID less than 1000. |
4.3.3. Adding an Existing User to an Existing Group
You can use the usermod utility to add an already existing user to an already existing group.
Different options of usermod have a different impact on the user's primary group and on his or her supplementary groups.
To override the user's primary group, run the following command as root:
~]# usermod -g group_name user_name
To override the user's supplementary groups, run the following command as root:
~]# usermod -G group_name1,group_name2,... user_name
Note that in this case all previous supplementary groups of the user are replaced by the new list of groups.
To add the user to the currently set supplementary groups, run one of the following commands as root:
~]# usermod -aG group_name1,group_name2,... user_name
~]# usermod --append -G group_name1,group_name2,... user_name

4.3.4. Creating Group Directories
Imagine a group of people who need to work on files in the /opt/myproject/ directory. Some people are trusted to modify the contents of this directory, but not everyone.
- As root, create the /opt/myproject/ directory by typing the following at a shell prompt:
  mkdir /opt/myproject
- Add the myproject group to the system:
  groupadd myproject
- Associate the contents of the /opt/myproject/ directory with the myproject group:
  chown root:myproject /opt/myproject
- Allow users in the group to create files within the directory and set the setgid bit:
  chmod 2775 /opt/myproject
  At this point, all members of the myproject group can create and edit files in the /opt/myproject/ directory without the administrator having to change file permissions every time users write new files. To verify that the permissions have been set correctly, run the following command:
  ~]# ls -ld /opt/myproject
  drwxrwsr-x. 3 root myproject 4096 Mar  3 18:31 /opt/myproject
- Add users to the myproject group:
  usermod -aG myproject username
4.3.5. Setting Default Permissions for New Files Using umask
When a file or directory is created, it is assigned initial permissions, for example -rw-rw-r--. These initial permissions are partially defined by the file mode creation mask, also called file permission mask or umask. Every process has its own umask; for example, bash has umask 0022 by default. The process umask can be changed.
What umask consists of
The umask consists of bits corresponding to standard file permissions. For example, for umask 0137, the digits mean that:
- 0 = no meaning, it is always 0 (umask does not affect special bits)
- 1 = for owner permissions, the execute bit is set
- 3 = for group permissions, the execute and write bits are set
- 7 = for others permissions, the execute, write, and read bits are set
Umask 0137 equals the symbolic representation u=rw-,g=r--,o=---. Symbolic notation specification is the reverse of the octal notation specification: it shows the allowed permissions, not the prohibited permissions.
How umask works
- When a bit is set in umask, it is unset in the file.
- When a bit is not set in umask, it can be set in the file, depending on other factors.
The following figure shows how umask with a value of 0137 affects creating a new file.

Figure 4.3. Applying umask when creating a file
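The same behavior can be observed in a shell: with umask 0137, the default file mode 0666 is reduced to 0640 (rw-r-----). The user name and timestamp below are illustrative:
~]$ umask 0137
~]$ touch newfile
~]$ ls -l newfile
-rw-r-----. 1 john john 0 Nov  2 13:20 newfile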
Important
For security reasons, a regular file cannot be created with execute permissions by default. Therefore, even if umask is 0000, which does not prohibit any permissions, a new regular file still does not have execute permissions. However, directories can be created with execute permissions:
[john@server tmp]$ umask 0000
[john@server tmp]$ touch file
[john@server tmp]$ mkdir directory
[john@server tmp]$ ls -lh .
total 0
drwxrwxrwx. 2 john john 40 Nov  2 13:17 directory
-rw-rw-rw-. 1 john john  0 Nov  2 13:17 file
4.3.5.1. Managing umask in Shells
In shells such as bash, ksh, zsh and tcsh, umask is managed using the umask shell builtin. Processes started from a shell inherit its umask.
Displaying the current mask
To show the current mask in octal notation:
~]$ umask
0022
To show the current mask in symbolic notation:
~]$ umask -S
u=rwx,g=rx,o=rx

Setting mask in shell using umask
To set umask for the current shell session using octal notation:
~]$ umask octal_mask
Substitute octal_mask with four or fewer digits from 0 to 7. When three or fewer digits are provided, permissions are set as if the command contained leading zeros. For example, umask 7 translates to 0007.
Example 4.1. Setting umask Using Octal Notation
To set umask of the current shell session to 0337, type:
~]$ umask 0337
Or, because leading zeros can be omitted:
~]$ umask 337

To set umask for the current shell session using symbolic notation:
~]$ umask -S symbolic_mask

Example 4.2. Setting umask Using Symbolic Notation
To set umask 0337 using symbolic notation:
~]$ umask -S u=r,g=r,o=
The default shell umask of all users is defined in the shell's system-wide configuration file; for bash, it is /etc/bashrc. To show the default bash umask:
~]$ grep -i -B 1 umask /etc/bashrc
The output shows whether umask is set using the umask command or the UMASK variable. In the following example, umask is set to 022 using the umask command:
~]$ grep -i -B 1 umask /etc/bashrc
# By default, we want umask to get set. This sets it for non-login shell.
--
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
umask 002
else
    umask 022
To change the default bash umask, change the umask command call or the UMASK variable assignment in /etc/bashrc. This example changes the default umask to 0227:
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
umask 002
else
    umask 227

Working with the default shell umask of a specific user
By default, the bash umask of a new user defaults to the one defined in /etc/bashrc.
To change the default bash umask for a particular user, add a call to the umask command to the $HOME/.bashrc file of that user. For example, to change the bash umask of user john to 0227:
john@server ~]$ echo 'umask 227' >> /home/john/.bashrc
When creating a home directory for a new user, useradd sets the permissions of that directory based on the UMASK variable in the /etc/login.defs file:
# The permission mask is initialized to this value. If not specified,
# the permission mask will be initialized to 022.
UMASK 077
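With UMASK 077, a home directory created by useradd receives mode 0700; an illustrative check (the user name and output details are placeholders):
~]# useradd newuser
~]# ls -ld /home/newuser
drwx------. 3 newuser newuser 74 Nov  2 13:25 /home/newuser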
4.4. Additional Resources
Installed Documentation
- useradd(8) — The manual page for the useradd command documents how to use it to create new users.
- userdel(8) — The manual page for the userdel command documents how to use it to delete users.
- usermod(8) — The manual page for the usermod command documents how to use it to modify users.
- groupadd(8) — The manual page for the groupadd command documents how to use it to create new groups.
- groupdel(8) — The manual page for the groupdel command documents how to use it to delete groups.
- groupmod(8) — The manual page for the groupmod command documents how to use it to modify group membership.
- gpasswd(1) — The manual page for the gpasswd command documents how to manage the /etc/group file.
- grpck(8) — The manual page for the grpck command documents how to use it to verify the integrity of the /etc/group file.
- pwck(8) — The manual page for the pwck command documents how to use it to verify the integrity of the /etc/passwd and /etc/shadow files.
- pwconv(8) — The manual page for the pwconv, pwunconv, grpconv, and grpunconv commands documents how to convert shadowed information for passwords and groups.
- id(1) — The manual page for the id command documents how to display user and group IDs.
- umask(2) — The manual page for the umask command documents how to work with the file mode creation mask.
- group(5) — The manual page for the /etc/group file documents how to use this file to define system groups.
- passwd(5) — The manual page for the /etc/passwd file documents how to use this file to define user information.
- shadow(5) — The manual page for the /etc/shadow file documents how to use this file to set passwords and account expiration information for the system.
Online Documentation
- Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7 provides additional information on how to ensure password security and secure the workstation by enabling password aging and user account locking.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
Chapter 5. Access Control Lists
The acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information.
The cp and mv commands copy or move any ACLs associated with files and directories.
5.1. Mounting File Systems
Before using ACLs for a file or directory, the partition containing the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command:
mount -t ext3 -o acl device-name partition
For example:
mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work
Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option:
LABEL=/work /work ext3 acl 1 2
If an ext3 file system is accessed via Samba, ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share.
5.1.1. NFS
By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file.
5.2. Setting Access ACLs
- Per user
- Per group
- Via the effective rights mask
- For users not in the user group for the file
The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory:
# setfacl -m rules files
- u:uid:perms — Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system.
- g:gid:perms — Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system.
- m:perms — Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries.
- o:perms — Sets the access ACL for users other than the ones in the group for the file.
Permissions (perms) must be a combination of the characters r, w, and x for read, write, and execute.
If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified.
Example 5.1. Give read and write permissions
# setfacl -m u:andrius:rw /project/somefile
To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions:
# setfacl -x rules files
Example 5.2. Remove all permissions
For example, to remove all permissions from the user with UID 500:
# setfacl -x u:500 /project/somefile
5.3. Setting Default ACLs
To set a default ACL, add d: before the rule and specify a directory instead of a file name.
Example 5.3. Setting default ACLs
For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it):
# setfacl -m d:o:rx /share
5.4. Retrieving ACLs
To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, getfacl is used to determine the existing ACLs for a file.
Example 5.4. Retrieving ACLs
# getfacl home/john/picture.png
# file: home/john/picture.png
# owner: john
# group: john
user::rw-
group::r--
other::r--
If a directory with a default ACL is specified, running getfacl home/sales/ will display similar output, including the default ACL entries:
# file: home/sales/
# owner: john
# group: john
user::rw-
user:barryg:r--
group::r--
mask::r--
other::r--
default:user::rwx
default:user:john:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
5.5. Archiving File Systems With ACLs
dump command now preserves ACLs during a backup operation. When archiving a file or file system with tar, use the --acls option to preserve ACLs. Similarly, when using cp to copy files with ACLs, include the --preserve=mode option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR --preserve=all) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump, tar, or cp, refer to their respective man pages.
star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to Table 5.1, “Command Line Options for star” for a listing of more commonly used options. For all available options, refer to man star. The star package is required to use this utility.
Table 5.1. Command Line Options for star
| Option | Description |
|---|---|
| -c | Creates an archive file. |
| -n | Do not extract the files; use in conjunction with -x to show what extracting the files does. |
| -r | Replaces files in the archive. The files are written to the end of the archive file, replacing any files with the same path and file name. |
| -t | Displays the contents of the archive file. |
| -u | Updates the archive file. The files are written to the end of the archive if they do not exist in the archive, or if the files are newer than the files of the same name in the archive. This option only works if the archive is a file or an unblocked tape that may backspace. |
| -x | Extracts the files from the archive. If used with -U and a file in the archive is older than the corresponding file on the file system, the file is not extracted. |
| -help | Displays the most important options. |
| -xhelp | Displays the least important options. |
| -/ | Do not strip leading slashes from file names when extracting the files from an archive. By default, they are stripped when files are extracted. |
| -acl | When creating or extracting, archives or restores any ACLs associated with the files and directories. |
5.6. Compatibility with Older Systems
If ACLs have been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command:
# tune2fs -l filesystem-device
A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set.
The e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with the ext_attr attribute. Older versions refuse to check it.
Chapter 6. Gaining Privileges
Running the system as the root user is potentially dangerous and can lead to widespread damage to the system and data. This chapter covers ways to gain administrative privileges using setuid programs such as su and sudo. These programs allow specific users to perform tasks which would normally be available only to the root user while maintaining a higher level of control and system security.
6.1. Configuring Administrative Access Using the su Utility
When a user executes the su command, they are prompted for the root password and, after authentication, are given a root shell prompt.
Once logged in using the su command, the user is the root user and has absolute administrative access to the system. Note that this access is still subject to the restrictions imposed by SELinux, if it is enabled. In addition, once a user has become root, it is possible for them to use the su command to change to any other user on the system without being prompted for a password.
To give users membership in the wheel group, type the following command as root:
~]# usermod -a -G wheel username
You can also use the Users settings tool to modify group membership and add a user to the wheel group, as follows:
- Press the Super key to enter the Activities Overview, type Users and then press Enter. The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.
- To enable making changes, click the Unlock button, and enter a valid administrator password.
- Click a user icon in the left column to display the user's properties in the right pane.
- Change the Account Type from Standard to Administrator. This will add the user to the wheel group.
After you add the desired users to the wheel group, it is advisable to only allow these specific users to use the su command. To do this, edit the Pluggable Authentication Module (PAM) configuration file for su, /etc/pam.d/su. Open this file in a text editor and uncomment the following line by removing the # character:
#auth required pam_wheel.so use_uid
This change means that only members of the administrative group wheel can switch to another user using the su command.
Note
The root user is part of the wheel group by default.
6.2. Configuring Administrative Access Using the sudo Utility
sudo command offers another approach to giving users administrative access. When trusted users precede an administrative command with sudo, they are prompted for their own password. Then, when they have been authenticated and assuming that the command is permitted, the administrative command is executed as if they were the root user.
The basic format of the sudo command is as follows:
sudo command
In the above example, command would be replaced by a command normally reserved for the root user, such as mount.
The sudo command allows for a high degree of flexibility. For instance, only users listed in the /etc/sudoers configuration file are allowed to use the sudo command and the command is executed in the user's shell, not a root shell. This means the root shell can be completely disabled as shown in the Red Hat Enterprise Linux 7 Security Guide.
Each successful authentication using the sudo command is logged to the file /var/log/messages and the command issued along with the issuer's user name is logged to the file /var/log/secure. If additional logging is required, use the pam_tty_audit module to enable TTY auditing for specified users by adding the following line to your /etc/pam.d/system-auth file:
session required pam_tty_audit.so disable=pattern enable=pattern
For example, the following configuration will enable TTY auditing for the root user and disable it for all other users:
session required pam_tty_audit.so disable=* enable=root
Important
Configuring the pam_tty_audit PAM module for TTY auditing records only TTY input. This means that, when the audited user logs in, pam_tty_audit records the exact keystrokes the user makes into the /var/log/audit/audit.log file. For more information, see the pam_tty_audit(8) manual page.
Another advantage of the sudo command is that an administrator can allow different users access to specific commands based on their needs.
Administrators wanting to edit the sudo configuration file, /etc/sudoers, should use the visudo command.
visudo and add a line similar to the following in the user privilege specification section:
juan ALL=(ALL) ALL
This example states that the user, juan, can use sudo from any host and execute any command.
The following example illustrates the granularity possible when configuring sudo:
%users localhost=/usr/sbin/shutdown -h now
This line states that any member of the users system group can issue the command /usr/sbin/shutdown -h now as long as it is issued from the console.
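As a further illustration, the following hypothetical entry (the user name and command are examples only) would allow the user webadmin to restart the Apache web server on any host, and nothing else:
webadmin ALL=/usr/bin/systemctl restart httpd.service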
The manual page for sudoers has a detailed listing of options for this file.
Important
There are several potential risks to keep in mind when using the sudo command. You can avoid them by editing the /etc/sudoers configuration file using visudo as described above. Leaving the /etc/sudoers file in its default state gives every user in the wheel group unlimited root access.
- By default, sudo stores the sudoer's password for a five minute timeout period. Any subsequent uses of the command during this period will not prompt the user for a password. This could be exploited by an attacker if the user leaves their workstation unattended and unlocked while still being logged in. This behavior can be changed by adding the following line to the /etc/sudoers file:
Defaults timestamp_timeout=value
where value is the desired timeout length in minutes. Setting the value to 0 causes sudo to require a password every time.
- If an account is compromised, an attacker can use sudo to open a new shell with administrative privileges:
sudo /bin/bash
Opening a new shell as root in this or similar fashion gives the attacker administrative access for a theoretically unlimited amount of time, bypassing the timeout period specified in the /etc/sudoers file and never requiring the attacker to input a password for sudo again until the newly opened session is closed.
6.3. Additional Resources
Installed Documentation
- su(1) — The manual page for su provides information regarding the options available with this command.
- sudo(8) — The manual page for sudo includes a detailed description of this command and lists options available for customizing its behavior.
- pam(8) — The manual page describing the use of Pluggable Authentication Modules (PAM) for Linux.
Online Documentation
- Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7 provides a more detailed look at potential security issues pertaining to the setuid programs as well as techniques used to alleviate these risks.
See Also
- Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line.
Part II. Subscription and Support
Chapter 7. Registering the System and Managing Subscriptions
Note
7.1. Registering the System and Attaching Subscriptions
Note that all subscription-manager commands are supposed to be run as root.
- Run the following command to register your system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for Red Hat Customer Portal.
subscription-manager register - Determine the pool ID of a subscription that you require. To do so, type the following at a shell prompt to display a list of all subscriptions that are available for your system:
subscription-manager list --available
For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to your subscription. To list subscriptions for all architectures, add the --all option. The pool ID is listed on a line beginning with Pool ID.
- Attach the appropriate subscription to your system by entering a command as follows:
subscription-manager attach --pool=pool_id
Replace pool_id with the pool ID you determined in the previous step.
To verify the list of subscriptions your system has currently attached, at any time, run:
subscription-manager list --consumed
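For example, a complete registration session might look as follows; the pool ID shown here is purely illustrative:
~]# subscription-manager register
~]# subscription-manager list --available
~]# subscription-manager attach --pool=8a85f9894bba16dc014bccdd905a5e23
~]# subscription-manager list --consumed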
7.2. Managing Software Repositories
When a subscription is attached, the entitled repositories are configured in a file in the /etc/yum.repos.d/ directory. To verify that, use yum to list all enabled repositories:
yum repolist
subscription-manager can also be used to list all repositories available for your system:
subscription-manager repos --list
The repository IDs have the following form:
rhel-version-variant-rpms
rhel-version-variant-debug-rpms
rhel-version-variant-source-rpms
where version is the Red Hat Enterprise Linux system version (6 or 7), and variant is the Red Hat Enterprise Linux system variant (server or workstation), for example:
rhel-7-server-rpms
rhel-7-server-debug-rpms
rhel-7-server-source-rpms
To enable a repository, enter a command as follows:
subscription-manager repos --enable repository
Replace repository with the ID of the repository to enable; a concrete example follows below. Similarly, to disable a repository, use:
subscription-manager repos --disable repository
Red Hat Enterprise Linux 7 also offers the yum-cron service for automatically checking for and downloading package updates. For more information, see Section 9.7, “Automatically Refreshing Package Database and Downloading Updates with Yum-cron”.
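For example, assuming your subscription provides it, the optional repository for Red Hat Enterprise Linux 7 Server could be enabled with:
~]# subscription-manager repos --enable rhel-7-server-optional-rpms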
7.3. Removing Subscriptions
- Determine the serial number of the subscription you want to remove by listing information about already attached subscriptions:
subscription-manager list --consumed
The serial number is the number listed as Serial. For instance, 744993814251016831 in the example below:
SKU:               ES0113909
Contract:          01234567
Account:           1234567
Serial:            744993814251016831
Pool ID:           8a85f9894bba16dc014bccdd905a5e23
Active:            False
Quantity Used:     1
Service Level:     SELF-SUPPORT
Service Type:      L1-L3
Status Details:
Subscription Type: Standard
Starts:            02/27/2015
Ends:              02/27/2016
System Type:       Virtual
- Enter a command as follows to remove the selected subscription:
subscription-manager remove --serial=serial_number
Replace serial_number with the serial number you determined in the previous step.
To remove all subscriptions attached to the system, run:
subscription-manager remove --all
7.4. Additional Resources
Installed Documentation
subscription-manager(8) — the manual page for Red Hat Subscription Management provides a complete list of supported options and commands.
Related Books
- Red Hat Subscription Management collection of guides — These guides contain detailed information on how to use Red Hat Subscription Management.
- Installation Guide — see the Initial Setup chapter for detailed information on how to register during the initial setup process.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
- Chapter 9, Yum provides information about using the yum package manager to install and update software.
Chapter 8. Accessing Support Using the Red Hat Support Tool
The Red Hat Support Tool, redhat-support-tool, can function as both an interactive shell and as a single-execution program. It can be run over SSH or from any terminal. It enables, for example, searching the Red Hat Knowledgebase from the command line, copying solutions directly on the command line, opening and updating support cases, and sending files to Red Hat for analysis.
8.1. Installing the Red Hat Support Tool
To install the Red Hat Support Tool, or to verify that it is already installed, enter the following command as root:
~]# yum install redhat-support-tool
8.2. Registering the Red Hat Support Tool Using the Command Line
~]# redhat-support-tool config user username
Where username is the user name of the Red Hat Customer Portal account.
~]# redhat-support-tool config password
Please enter the password for username:
8.3. Using the Red Hat Support Tool in Interactive Shell Mode
~]$ redhat-support-tool
Welcome to the Red Hat Support Tool.
Command (? for help):
The tool can be run as an unprivileged user, with a consequently reduced set of commands, or as root.
A list of commands is displayed by entering the ? character. The program or menu selection can be exited by entering the q or e character. You will be prompted for your Red Hat Customer Portal user name and password when you first search the Knowledgebase or support cases. Alternately, set the user name and password for your Red Hat Customer Portal account using interactive mode, and optionally save it to the configuration file.
8.4. Configuring the Red Hat Support Tool
The Red Hat Support Tool can be configured using interactive mode. To see the available configuration options, enter config --help:
~]# redhat-support-tool
Welcome to the Red Hat Support Tool.
Command (? for help): config --help

Usage: config [options] config.option <new option value>

Use the 'config' command to set or get configuration file values.

Options:
  -h, --help    show this help message and exit
  -g, --global  Save configuration option in /etc/redhat-support-tool.conf.
  -u, --unset   Unset configuration option.

The configuration file options which can be set are:
 user           : The Red Hat Customer Portal user.
 password       : The Red Hat Customer Portal password.
 debug          : CRITICAL, ERROR, WARNING, INFO, or DEBUG
 url            : The support services URL. Default=https://api.access.redhat.com
 proxy_url      : A proxy server URL.
 proxy_user     : A proxy server user.
 proxy_password : A password for the proxy server user.
 ssl_ca         : Path to certificate authorities to trust during communication.
 kern_debug_dir : Path to the directory where kernel debug symbols should be downloaded and cached. Default=/var/lib/redhat-support-tool/debugkernels

Examples:
- config user
- config user my-rhn-username
- config --unset user
Procedure 8.1. Registering the Red Hat Support Tool Using Interactive Mode
- Start the tool by entering the following command:
~]# redhat-support-tool
- Enter your Red Hat Customer Portal user name:
Command (? for help): config user username
To save your user name to the global configuration file, add the -g option.
- Enter your Red Hat Customer Portal password:
Command (? for help): config password
Please enter the password for username:
8.4.1. Saving Settings to the Configuration Files
Settings are saved to the ~/.redhat-support-tool/redhat-support-tool.conf configuration file. If required, it is recommended to save passwords to this file because it is only readable by that particular user. When the tool starts, it will read values from the global configuration file /etc/redhat-support-tool.conf and from the local configuration file. Locally stored values and options take precedence over globally stored settings.
Warning
It is unsafe to save passwords in the global /etc/redhat-support-tool.conf configuration file because the password is just base64 encoded and can easily be decoded. In addition, the file is world readable.
To save a value or option to the global configuration file, use the -g, --global option as follows:
Command (? for help): config setting -g value
Note
To save settings globally, using the -g, --global option, the Red Hat Support Tool must be run as root because normal users do not have the permissions required to write to /etc/redhat-support-tool.conf.
To remove a value or option from the local configuration file, use the -u, --unset option as follows:
Command (? for help): config setting -u value
This will clear, unset, the parameter from the tool and fall back to the equivalent setting in the global configuration file, if available.
Note
Previously set and globally saved values and options cannot be cleared from a running instance of the tool by only using the -u, --unset option, but they can be cleared, unset, from the current running instance of the tool by using the -g, --global option simultaneously with the -u, --unset option. If running as root, values and options can be removed from the global configuration file using -g, --global simultaneously with the -u, --unset option.
8.5. Opening and Updating Support Cases Using Interactive Mode
Procedure 8.2. Opening a New Support Case Using Interactive Mode
- Start the tool by entering the following command:
~]# redhat-support-tool
- Enter the opencase command:
Command (? for help): opencase
- Enter a summary of the case.
- Enter a description of the case and press Ctrl+D on an empty line when complete.
- Select a severity of the case.
- Optionally choose to see if there is a solution to this problem before opening a support case.
- Confirm you would still like to open the support case.
Support case 0123456789 has successfully been opened
- Optionally choose to attach an SOS report.
- Optionally choose to attach a file.
Procedure 8.3. Viewing and Updating an Existing Support Case Using Interactive Mode
- Start the tool by entering the following command:
~]# redhat-support-tool
- Enter the getcase command:
Command (? for help): getcase case-number
Where case-number is the number of the case you want to view and update.
- Follow the on screen prompts to view the case, modify or add comments, and get or add attachments.
Procedure 8.4. Modifying an Existing Support Case Using Interactive Mode
- Start the tool by entering the following command:
~]# redhat-support-tool
- Enter the modifycase command:
Command (? for help): modifycase case-number
Where case-number is the number of the case you want to view and update.
- The modify selection list appears:
Type the number of the attribute to modify or 'e' to return to the previous menu. 1 Modify Type 2 Modify Severity 3 Modify Status 4 Modify Alternative-ID 5 Modify Product 6 Modify Version End of options.
Follow the on screen prompts to modify one or more of the options. - For example, to modify the status, enter
3:Selection: 3 1 Waiting on Customer 2 Waiting on Red Hat 3 Closed Please select a status (or 'q' to exit):
8.6. Viewing Support Cases on the Command Line
~]# redhat-support-tool getcase case-number
Where case-number is the number of the case you want to download.
8.7. Additional Resources
Part III. Installing and Managing Software
Chapter 9. Yum
Important
Note
All examples in this chapter assume that you have already obtained superuser privileges by using either the su or sudo command.
9.1. Checking For and Updating Packages
9.1.1. Checking For Updates
To see which installed packages on your system have updates available, use the following command:
yum check-update
Example 9.1. Example output of the yum check-update command
The output of yum check-update can look as follows:
~]# yum check-update
Loaded plugins: product-id, search-disabled-repos, subscription-manager
dracut.x86_64 033-360.el7_2 rhel-7-server-rpms
dracut-config-rescue.x86_64 033-360.el7_2 rhel-7-server-rpms
kernel.x86_64 3.10.0-327.el7 rhel-7-server-rpms
rpm.x86_64 4.11.3-17.el7 rhel-7-server-rpms
rpm-libs.x86_64 4.11.3-17.el7 rhel-7-server-rpms
rpm-python.x86_64 4.11.3-17.el7 rhel-7-server-rpms
yum.noarch 3.4.3-132.el7 rhel-7-server-rpms
The packages in the above output are listed as having updates available. The first package in the list is dracut. Each line in the example output consists of several rows, in case of dracut:
- dracut — the name of the package,
- x86_64 — the CPU architecture the package was built for,
- 033 — the version of the updated package to be installed,
- 360.el7 — the release of the updated package,
- _2 — a build version, added as part of a z-stream update,
- rhel-7-server-rpms — the repository in which the updated package is located.
Packages that are listed as having updates available can then be updated with the yum update command, as described below.
9.1.2. Updating Packages
Updating a Single Package
To update a single package, run the following command as root:
yum update package_name
Example 9.2. Updating the rpm package
~]# yum update rpm
Loaded plugins: langpacks, product-id, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package rpm.x86_64 0:4.11.1-3.el7 will be updated
--> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-libs-4.11.1-3.el7.x86_64
--> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-python-4.11.1-3.el7.x86_64
--> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-build-4.11.1-3.el7.x86_64
---> Package rpm.x86_64 0:4.11.2-2.el7 will be an update
--> Running transaction check
...
--> Finished Dependency Resolution
Dependencies Resolved
=============================================================================
Package Arch Version Repository Size
=============================================================================
Updating:
rpm x86_64 4.11.2-2.el7 rhel 1.1 M
Updating for dependencies:
rpm-build x86_64 4.11.2-2.el7 rhel 139 k
rpm-build-libs x86_64 4.11.2-2.el7 rhel 98 k
rpm-libs x86_64 4.11.2-2.el7 rhel 261 k
rpm-python x86_64 4.11.2-2.el7 rhel 74 k
Transaction Summary
=============================================================================
Upgrade 1 Package (+4 Dependent packages)
Total size: 1.7 M
Is this ok [y/d/N]:
In the above output, notice the following items:
- Loaded plugins: langpacks, product-id, subscription-manager — Yum always informs you which yum plug-ins are installed and enabled. See Section 9.6, “Yum Plug-ins” for general information on yum plug-ins, or Section 9.6.3, “Working with Yum Plug-ins” for descriptions of specific plug-ins.
- rpm.x86_64 — you can download and install a new rpm package as well as its dependencies. A transaction check is performed for each of these packages.
- Yum presents the update information and then prompts you for confirmation of the update; yum runs interactively by default. If you already know which transactions the yum command plans to perform, you can use the -y option to automatically answer yes to any questions that yum asks (in which case it runs non-interactively). However, you should always examine which changes yum plans to make to the system so that you can easily troubleshoot any problems that might arise. You can also choose to download the package without installing it. To do so, select the d option at the download prompt. This launches a background download of the selected package.
If a transaction fails, you can view yum transaction history by using the yum history command as described in Section 9.4, “Working with Transaction History”.
Important
yum always installs a new kernel regardless of whether you are using the yum update or yum install command.
When using RPM, on the other hand, it is important to use the rpm -i kernel command, which installs a new kernel, instead of the rpm -U kernel command, which replaces the current kernel.
Similarly, it is possible to update a package group. Type as root:
yum group update group_name
Yum also offers the upgrade command that is equal to update with the obsoletes configuration option enabled (see Section 9.5.1, “Setting [main] Options”). By default, obsoletes is turned on in /etc/yum.conf, which makes these two commands equivalent.
Updating All Packages and Their Dependencies
To update all packages and their dependencies, use the yum update command without any arguments:
yum update
Updating Security-Related Packages
If packages have security errata available, you can update only these packages to their latest versions. Type as root:
yum update --security
You can also update packages only to versions containing the latest security updates. Type as root:
yum update-minimal --security
For example, assume that:
yum update-minimal --security- the kernel-3.10.0-1 package is installed on your system;
- the kernel-3.10.0-2 package was released as a security update;
- the kernel-3.10.0-3 package was released as a bug fix update.
Then yum update-minimal --security updates the package to kernel-3.10.0-2, and yum update --security updates the package to kernel-3.10.0-3.
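If you first want to review which security errata are available before updating, you can list them with the updateinfo subcommand; for example:
~]# yum updateinfo list security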
Automating Package Updating
Package updating can also be automated with the yum-cron service. For more information, see Section 9.7, “Automatically Refreshing Package Database and Downloading Updates with Yum-cron”.
9.1.3. Upgrading the System Off-line with ISO and Yum
Using the yum update command together with the Red Hat Enterprise Linux installation ISO image is an easy and quick way to upgrade systems to the latest minor version. The following steps illustrate the upgrading process:
- Create a target directory to mount your ISO image. This directory is not automatically created when mounting, so create it before proceeding to the next step. As root, type:
mkdir mount_dir
Replace mount_dir with a path to the mount directory. Typically, users create it as a subdirectory in the /media directory.
- Mount the Red Hat Enterprise Linux 7 installation ISO image to the previously created target directory. As root, type:
mount -o loop iso_name mount_dir
Replace iso_name with a path to your ISO image and mount_dir with a path to the target directory. Here, the -o loop option is required to mount the file as a block device.
- Copy the media.repo file from the mount directory to the /etc/yum.repos.d/ directory. Note that configuration files in this directory must have the .repo extension to function properly.
cp mount_dir/media.repo /etc/yum.repos.d/new.repo
This creates a configuration file for the yum repository. Replace new.repo with the filename, for example rhel7.repo.
- Edit the new configuration file so that it points to the Red Hat Enterprise Linux installation ISO. Add the following line into the /etc/yum.repos.d/new.repo file:
baseurl=file:///mount_dir
Replace mount_dir with a path to the mount point.
- Update all yum repositories including /etc/yum.repos.d/new.repo created in previous steps. As root, type:
yum update
This upgrades your system to the version provided by the mounted ISO image.
- After successful upgrade, you can unmount the ISO image. As root, type:
umount mount_dir
where mount_dir is a path to your mount directory. Also, you can remove the mount directory created in the first step. As root, type:
rmdir mount_dir
- If you will not use the previously created configuration file for another installation or update, you can remove it. As root, type:
rm /etc/yum.repos.d/new.repo
Example 9.3. Upgrading from Red Hat Enterprise Linux 7.0 to 7.1
Assume you have the installation ISO image rhel-server-7.1-x86_64-dvd.iso and have created a target directory for mounting, such as /media/rhel7/. As root, change into the directory with your ISO image and type:
~]# mount -o loop rhel-server-7.1-x86_64-dvd.iso /media/rhel7/
Then copy the media.repo file from the mount directory:
~]# cp /media/rhel7/media.repo /etc/yum.repos.d/rhel7.repo
Add the following line to the /etc/yum.repos.d/rhel7.repo file copied in the previous step:
baseurl=file:///media/rhel7/
Now, update all yum repositories to upgrade your system to the version provided by rhel-server-7.1-x86_64-dvd.iso. As root, execute:
~]# yum update
When your system is successfully upgraded, you can unmount the image, remove the target directory, and delete the configuration file:
~]# umount /media/rhel7/
~]# rmdir /media/rhel7/
~]# rm /etc/yum.repos.d/rhel7.repo
9.2. Working with Packages
9.2.1. Searching Packages
To search all RPM package names, descriptions and summaries, use the following command:
yum search term…
Replace term with a package name you want to search.
Example 9.4. Searching for packages matching a specific string
~]$ yum search vim gvim emacs
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
============================= N/S matched: vim ==============================
vim-X11.x86_64 : The VIM version of the vi editor for the X Window System
vim-common.x86_64 : The common files needed by any version of the VIM editor
[output truncated]
============================ N/S matched: emacs =============================
emacs.x86_64 : GNU Emacs text editor
emacs-auctex.noarch : Enhanced TeX modes for Emacs
[output truncated]
Name and summary matches mostly, use "search all" for everything.
Warning: No matches found for: gvim
The yum search command is useful for searching for packages you do not know the name of, but for which you know a related term. Note that by default, yum search returns matches in package name and summary, which makes the search faster. Use the yum search all command for a more exhaustive but slower search.
Filtering the Results
All of yum's list commands allow you to filter the results by appending one or more glob expressions as arguments. Glob expressions are normal strings of characters which contain one or more of the wildcard characters * (which expands to match any character subset) and ? (which expands to match any single character).
Be careful to escape the glob expressions when passing them as arguments to a yum command, otherwise the Bash shell will interpret these expressions as pathname expansions, and potentially pass all files in the current directory that match the glob expressions to yum. To make sure the glob expressions are passed to yum as intended, use one of the following methods:
- escape the wildcard characters by preceding them with a backslash character
- double-quote or single-quote the entire glob expression.
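For example, both of the following commands pass the glob expression kernel* to yum intact rather than letting the shell expand it:
~]$ yum list available kernel\*
~]$ yum list available "kernel*"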
9.2.2. Listing Packages
To list information on all installed and available packages type the following at a shell prompt:
yum list all
To list installed and available packages that match inserted glob expressions use the following command:
yum list glob_expression…
Example 9.5. Listing ABRT-related packages
~]$ yum list abrt-addon\* abrt-plugin\*
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Installed Packages
abrt-addon-ccpp.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
abrt-addon-kerneloops.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
abrt-addon-pstoreoops.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
abrt-addon-python.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
abrt-addon-vmcore.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
abrt-addon-xorg.x86_64 2.1.11-35.el7 @rhel-7-server-rpms
To list all packages installed on your system, use the installed keyword. The rightmost column in the output lists the repository from which the package was retrieved.
yum list installed glob_expression…
Example 9.6. Listing all installed versions of the krb package
~]$ yum list installed "krb?-*"
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Installed Packages
krb5-libs.x86_64 1.13.2-10.el7 @rhel-7-server-rpms
To list all packages in all enabled repositories that are available to install, use the command in the following form:
yum list available glob_expression…
Example 9.7. Listing available gstreamer plug-ins
~]$ yum list available gstreamer\*plugin\*
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Available Packages
gstreamer-plugins-bad-free.i686 0.10.23-20.el7 rhel-7-server-rpms
gstreamer-plugins-base.i686 0.10.36-10.el7 rhel-7-server-rpms
gstreamer-plugins-good.i686 0.10.31-11.el7 rhel-7-server-rpms
gstreamer1-plugins-bad-free.i686 1.4.5-3.el7 rhel-7-server-rpms
gstreamer1-plugins-base.i686 1.4.5-2.el7 rhel-7-server-rpms
gstreamer1-plugins-base-devel.i686 1.4.5-2.el7 rhel-7-server-rpms
gstreamer1-plugins-base-devel.x86_64 1.4.5-2.el7 rhel-7-server-rpms
gstreamer1-plugins-good.i686 1.4.5-2.el7 rhel-7-server-rpms
Listing Repositories
To list the repository ID, name, and number of packages for each enabled repository, use the following command:
yum repolist
To list more information about these repositories, add the -v option. With this option enabled, information including the file name, overall size, date of the last update, and base URL are displayed for each listed repository. As an alternative, you can use the repoinfo command that produces the same output.
yum repolist -v
yum repoinfo
To list both enabled and disabled repositories, use the following command:
yum repolist all
By passing disabled as a first argument, you can reduce the command output to disabled repositories. For further specification you can pass the ID or name of repositories or related glob_expressions as arguments. Note that if there is an exact match between the repository ID or name and the inserted argument, this repository is listed even if it does not pass the enabled or disabled filter.
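For example, to list only the disabled repositories whose ID begins with rhel, you might run:
~]$ yum repolist disabled "rhel*"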
9.2.3. Displaying Package Information
To display information about one or more packages, use the following command (glob expressions are valid here as well):
yum info package_name…
Example 9.8. Displaying information on the abrt package
~]$ yum info abrt
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Installed Packages
Name : abrt
Arch : x86_64
Version : 2.1.11
Release : 35.el7
Size : 2.3 M
Repo : installed
From repo : rhel-7-server-rpms
Summary : Automatic bug detection and reporting tool
URL : https://fedorahosted.org/abrt/
License : GPLv2+
Description : abrt is a tool to help users to detect defects in applications and
: to create a bug report with all information needed by maintainer to fix
            : it. It uses plugin system to extend its functionality.
The yum info package_name command is similar to the rpm -q --info package_name command, but provides as additional information the name of the yum repository the RPM package was installed from (look for the From repo: line in the output).
Using yumdb
You can also query the yum database for alternative and useful information about a package by using the following command:
yumdb info package_name
This command provides additional information about a package, including the checksum of the package (and the algorithm used to produce it, such as SHA-256), the command given on the command line that was invoked to install the package (if any), and the reason why the package is installed on the system (where user indicates it was installed by the user, and dep means it was brought in as a dependency).
Example 9.9. Querying yumdb for information on the yum package
~]$ yumdb info yum
Loaded plugins: langpacks, product-id
yum-3.4.3-132.el7.noarch
changed_by = 1000
checksum_data = a9d0510e2ff0d04d04476c693c0313a11379053928efd29561f9a837b3d9eb02
checksum_type = sha256
command_line = upgrade
from_repo = rhel-7-server-rpms
from_repo_revision = 1449144806
from_repo_timestamp = 1449144805
installed_by = 4294967295
origin_url = https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os/Packages/yum-3.4.3-132.el7.noarch.rpm
reason = user
releasever = 7Server
var_uuid = 147a7d49-b60a-429f-8d8f-3edb6ce6f4a1
For more information on the yumdb command, see the yumdb(8) manual page.
9.2.4. Installing Packages
To install a single package and all of its non-installed dependencies, enter a command in the following form as root:
yum install package_name
You can also install multiple packages simultaneously by appending their names as arguments. To do so, type as root:
yum install package_name package_name…
If you are installing packages on a multilib system, such as an AMD64 or Intel 64 machine, you can specify the architecture of the package (as long as it is available in an enabled repository) by appending it to the package name:
yum install package_name.arch
Example 9.10. Installing packages on multilib system
To install the sqlite package for the i686 architecture, type:
~]# yum install sqlite.i686
You can use glob expressions to quickly install multiple similarly named packages. Execute as root:
yum install glob_expression…
Example 9.11. Installing all audacious plugins
~]# yum install audacious-plugins-\*
In addition to package names and glob expressions, you can also provide file names to yum install. If you know the name of the binary you want to install, but not its package name, you can give yum install the path name. As root, type:
yum install /usr/sbin/named
yum then searches through its package lists, finds the package which provides /usr/sbin/named, if any, and prompts you as to whether you want to install it.
As you can see in the above examples, the yum install command does not require strictly defined arguments. It can process various formats of package names and glob expressions, which makes installation easier for users. On the other hand, it takes some time until yum parses the input correctly, especially if you specify a large number of packages. To optimize the package search, you can use the following commands to explicitly define how to parse the arguments:
yum install-n name
yum install-na name.architecture
yum install-nevra name-epoch:version-release.architecture
With install-n, yum interprets name as the exact name of the package. The install-na command tells yum that the subsequent argument contains the package name and architecture divided by the dot character. With install-nevra, yum will expect an argument in the form name-epoch:version-release.architecture. Similarly, you can use yum remove-n, yum remove-na, and yum remove-nevra when searching for packages to be removed.
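For instance, the following commands all point yum at the same httpd package with increasing strictness; the version, release, and epoch numbers here are illustrative:
~]# yum install-n httpd
~]# yum install-na httpd.x86_64
~]# yum install-nevra httpd-0:2.4.6-45.el7.x86_64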
Note
If you know you want to install the package that contains the named binary, but you do not know in which bin/ or sbin/ directory the file is installed, use the yum provides command with a glob expression:
~]# yum provides "*bin/named"
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-
: manager
32:bind-9.9.4-14.el7.x86_64 : The Berkeley Internet Name Domain (BIND) DNS
: (Domain Name System) server
Repo : rhel-7-server-rpms
Matched from:
Filename : /usr/sbin/named
yum provides "*/file_name" is a useful way to find the packages that contain file_name.
Example 9.12. Installation Process
The following example provides an overview of installation with use of yum. To download and install the latest version of the httpd package, execute as root:
~]# yum install httpd
Loaded plugins: langpacks, product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd.x86_64 0:2.4.6-13.el7 will be an update
--> Processing Dependency: 2.4.6-13.el7 for package: httpd-2.4.6-13.el7.x86_64
--> Running transaction check
---> Package httpd-tools.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd-tools.x86_64 0:2.4.6-13.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package        Arch      Version         Repository               Size
================================================================================
Updating:
 httpd          x86_64    2.4.6-13.el7    rhel-x86_64-server-7    1.2 M
Updating for dependencies:
 httpd-tools    x86_64    2.4.6-13.el7    rhel-x86_64-server-7     77 k

Transaction Summary
================================================================================
Upgrade  1 Package (+1 Dependent package)

Total size: 1.2 M
Is this ok [y/d/N]:
Apart from the y (yes) and N (no) options, you can choose d (download only) to download the packages but not to install them directly. If you choose y, the installation proceeds with the following messages until it is finished successfully.
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : httpd-tools-2.4.6-13.el7.x86_64    1/4
  Updating   : httpd-2.4.6-13.el7.x86_64          2/4
  Cleanup    : httpd-2.4.6-12.el7.x86_64          3/4
  Cleanup    : httpd-tools-2.4.6-12.el7.x86_64    4/4
  Verifying  : httpd-2.4.6-13.el7.x86_64          1/4
  Verifying  : httpd-tools-2.4.6-13.el7.x86_64    2/4
  Verifying  : httpd-tools-2.4.6-12.el7.x86_64    3/4
  Verifying  : httpd-2.4.6-12.el7.x86_64          4/4

Updated:
  httpd.x86_64 0:2.4.6-13.el7

Dependency Updated:
  httpd-tools.x86_64 0:2.4.6-13.el7

Complete!
To install a previously downloaded package from a local directory on your system, use the following command:
yum localinstall path
Replace path with the path to the package you want to install.
9.2.5. Downloading Packages
... Total size: 1.2 M Is this ok [y/d/N]: ...
With the d option, yum downloads the packages without installing them immediately. You can install these packages later offline with the yum localinstall command or you can share them with a different device. Downloaded packages are saved in one of the subdirectories of the cache directory, by default /var/cache/yum/$basearch/$releasever/packages/. The downloading proceeds in background mode so that you can use yum for other operations in parallel.
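For example, a package downloaded with the d option could later be installed offline like this; the exact path under the cache directory depends on your architecture, release, repository, and package version:
~]# yum localinstall /var/cache/yum/x86_64/7Server/rhel-7-server-rpms/packages/httpd-2.4.6-45.el7.x86_64.rpm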
9.2.6. Removing Packages
To uninstall a particular package, as well as any packages that depend on it, run the following command as root:
yum remove package_name…
Example 9.13. Removing several packages
To remove the totem package, type the following at a shell prompt:
~]# yum remove totem
Similar to install, remove can take these arguments:
- package names
- glob expressions
- file lists
- package provides
Warning
9.3. Working with Package Groups
yum groups command is a top-level command that covers all the operations that act on package groups in yum.
9.3.1. Listing Package Groups
The summary option is used to view the number of installed groups, available groups, available environment groups, and both installed and available language groups:
yum groups summary
Example 9.14. Example output of yum groups summary
~]$ yum groups summary
Loaded plugins: langpacks, product-id, subscription-manager
Available Environment Groups: 12
Installed Groups: 10
Available Groups: 12
To list all package groups from yum repositories, add the list option. You can filter the command output by group names.
yum group list glob_expression…
Several optional arguments can be passed to this command, including hidden to list also groups not marked as user visible, and ids to list group IDs. You can add language, environment, installed, or available options to reduce the command output to a specific group type.
To list mandatory and optional packages contained in a particular group, use the following command:
yum group info glob_expression…
Example 9.15. Viewing information on the LibreOffice package group
~]$ yum group info LibreOffice
Loaded plugins: langpacks, product-id, subscription-manager

Group: LibreOffice
 Group-Id: libreoffice
 Description: LibreOffice Productivity Suite
 Mandatory Packages:
   =libreoffice-calc
   libreoffice-draw
   -libreoffice-emailmerge
   libreoffice-graphicfilter
   =libreoffice-impress
   =libreoffice-math
   =libreoffice-writer
   +libreoffice-xsltfilter
 Optional Packages:
   libreoffice-base
   libreoffice-pyuno
- "-" — Package is not installed and it will not be installed as a part of the package group.
- "+" — Package is not installed but it will be installed on the next yum upgrade or yum group upgrade.
- "=" — Package is installed and it was installed as a part of the package group.
- no symbol — Package is installed but it was installed outside of the package group. This means that the yum group remove will not remove these packages.
These distinctions take place only when the group_command configuration parameter is set to objects, which is the default setting. Set this parameter to a different value if you do not want yum to track if a package was installed as a part of the group or separately, which will make "no symbol" packages equivalent to "=" packages.
You can alter the default package-group tracking with the yum group mark command. For example, yum group mark packages marks any given installed packages as members of a specified group. To avoid installation of new packages on group update, use yum group mark blacklist. See the yum(8) man page for more information on capabilities of yum group mark.
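For example, to prevent a group update from pulling new packages into the kde-desktop group, you might blacklist it as follows:
~]# yum group mark blacklist kde-desktop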
Note
When using yum group list, info, install, or remove, pass @group_name to specify a package group, @^group_name to specify an environmental group, or group_name to include both.
9.3.2. Installing a Package Group
Every package group has a name and a group ID (groupid). To list the names of all package groups, and their groupids, which are displayed in parentheses, type:
yum group list ids
Example 9.16. Finding name and groupid of a package group
~]$ yum group list ids kde\*
Available environment groups:
KDE Plasma Workspaces (kde-desktop-environment)
Done
Use the hidden command option to list hidden groups too:
~]$ yum group list hidden ids kde\*
Loaded plugins: product-id, subscription-manager
Available Groups:
KDE (kde-desktop)
Done
You can install a package group by passing its full group name (without the groupid part) to the group install command. As root, type:
yum group install "group name"
You can also install by groupid. As root, execute the following command:
yum group install groupid
You can pass the groupid or quoted group name to the install command if you prepend it with an @ symbol, which tells yum that you want to perform group install. As root, type:
yum install @group
Replace group with the groupid or quoted group name. The same logic applies to environmental groups:
yum install @^group
Example 9.17. Four equivalent ways of installing the KDE Desktop group
~]# yum group install "KDE Desktop"
~]# yum group install kde-desktop
~]# yum install @"KDE Desktop"
~]# yum install @kde-desktop
9.3.3. Removing a Package Group
You can remove a package group using syntax similar to the install syntax, with use of either name of the package group or its id. As root, type:
yum group remove group_name
yum group remove groupid
Also, you can pass the groupid or quoted name to the remove command if you prepend it with an @-symbol, which tells yum that you want to perform group remove. As root, type:
yum remove @group
Replace group with the groupid or quoted group name. The same logic applies to environmental groups:
yum remove @^group
Example 9.18. Four equivalent ways of removing the KDE Desktop group
~]# yum group remove "KDE Desktop"
~]# yum group remove kde-desktop
~]# yum remove @"KDE Desktop"
~]# yum remove @kde-desktop
9.4. Working with Transaction History
yum history command enables users to review information about a timeline of yum transactions, the dates and times they occurred, the number of packages affected, whether these transactions succeeded or were aborted, and if the RPM database was changed between transactions. Additionally, this command can be used to undo or redo certain transactions. All history data is stored in the history DB in the /var/lib/yum/history/ directory.
9.4.1. Listing Transactions
To display a list of the twenty most recent transactions, as root, either run yum history with no additional arguments, or type the following at a shell prompt:
yum history list
To display all transactions, add the all keyword:
yum history list all
To display only transactions in a given range, use the command in the following form:
yum history list start_id..end_id
You can also list only transactions regarding a particular package or packages. To do so, use the command with a package name or a glob expression:
yum history list glob_expression…
Example 9.19. Listing the five oldest transactions
In the output of yum history list, the most recent transaction is displayed at the top of the list. To display information about the five oldest transactions stored in the history data base, type:
~]# yum history list 1..5
Loaded plugins: langpacks, product-id, subscription-manager
ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
5 | User <user> | 2013-07-29 15:33 | Install | 1
4 | User <user> | 2013-07-21 15:10 | Install | 1
3 | User <user> | 2013-07-16 15:27 | I, U | 73
2 | System <unset> | 2013-07-16 15:19 | Update | 1
1 | System <unset> | 2013-07-16 14:38 | Install | 1106
All forms of the yum history list command produce tabular output with each row consisting of the following columns:
ID— an integer value that identifies a particular transaction.Login user— the name of the user whose login session was used to initiate a transaction. This information is typically presented in theFull Name <username>form. For transactions that were not issued by a user (such as an automatic system update),System <unset>is used instead.Date and time— the date and time when a transaction was issued.Action(s)— a list of actions that were performed during a transaction as described in Table 9.1, “Possible values of the Action(s) field”.Altered— the number of packages that were affected by a transaction, possibly followed by additional information as described in Table 9.2, “Possible values of the Altered field”.
Table 9.1. Possible values of the Action(s) field
| Action | Abbreviation | Description |
|---|---|---|
Downgrade | D | At least one package has been downgraded to an older version. |
Erase | E | At least one package has been removed. |
Install | I | At least one new package has been installed. |
Obsoleting | O | At least one package has been marked as obsolete. |
Reinstall | R | At least one package has been reinstalled. |
Update | U | At least one package has been updated to a newer version. |
Table 9.2. Possible values of the Altered field
| Symbol | Description |
|---|---|
< | Before the transaction finished, the rpmdb database was changed outside yum. |
> | After the transaction finished, the rpmdb database was changed outside yum. |
* | The transaction failed to finish. |
# | The transaction finished successfully, but yum returned a non-zero exit code. |
E | The transaction finished successfully, but an error or a warning was displayed. |
P | The transaction finished successfully, but problems already existed in the rpmdb database. |
s | The transaction finished successfully, but the --skip-broken command-line option was used and certain packages were skipped. |
To synchronize the rpmdb or yumdb database contents for any installed package with the currently used rpmdb or yumdb database, type the following:
yum history sync
To display some overall statistics about the currently used history database, use the following command:
yum history stats
Example 9.20. Example output of yum history stats
~]# yum history stats
Loaded plugins: langpacks, product-id, subscription-manager
File : //var/lib/yum/history/history-2012-08-15.sqlite
Size : 2,766,848
Transactions: 41
Begin time : Wed Aug 15 16:18:25 2012
End time : Wed Feb 27 14:52:30 2013
Counts :
NEVRAC : 2,204
NEVRA : 2,204
NA : 1,759
NEVR : 2,204
rpm DB : 2,204
yum DB : 2,204
Yum also enables you to display a summary of all past transactions. To do so, run the following command as root:
yum history summary
To display only transactions in a given range, type:
yum history summary start_id..end_id
Similarly to the yum history list command, you can also display a summary of transactions regarding a certain package or packages by supplying a package name or a glob expression:
yum history summary glob_expression…
Example 9.21. Summary of the five latest transactions
~]# yum history summary 1..5
Loaded plugins: langpacks, product-id, subscription-manager
Login user | Time | Action(s) | Altered
-------------------------------------------------------------------------------
Jaromir ... <jhradilek> | Last day | Install | 1
Jaromir ... <jhradilek> | Last week | Install | 1
Jaromir ... <jhradilek> | Last 2 weeks | I, U | 73
System <unset> | Last 2 weeks | I, U | 1107
All forms of the yum history summary command produce simplified tabular output similar to the output of yum history list.
Note that both yum history list and yum history summary are oriented towards transactions, and although they allow you to display only transactions related to a given package or packages, they lack important details, such as package versions. To list transactions from the perspective of a package, run the following command as root:
yum history package-list glob_expression…
Example 9.22. Tracing the history of a package
~]# yum history package-list subscription-manager\*
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
ID | Action(s) | Package
-------------------------------------------------------------------------------
2 | Updated | subscription-manager-1.13.22-1.el7.x86_64 EE
2 | Update | 1.15.9-15.el7.x86_64 EE
2 | Obsoleted | subscription-manager-firstboot-1.13.22-1.el7.x86_64 EE
2 | Updated | subscription-manager-gui-1.13.22-1.el7.x86_64 EE
2 | Update | 1.15.9-15.el7.x86_64 EE
2 | Obsoleting | subscription-manager-initial-setup-addon-1.15.9-15.el7.x86_64 EE
1 | Install | subscription-manager-1.13.22-1.el7.x86_64
1 | Install | subscription-manager-firstboot-1.13.22-1.el7.x86_64
1 | Install | subscription-manager-gui-1.13.22-1.el7.x86_64
9.4.2. Examining Transactions
To display a summary of a single transaction, as root, use the yum history summary command in the following form:
yum history summary id
To examine a particular transaction or transactions in more detail, run the following command as root:
yum history info id…
Here, the id argument is optional; when you omit it, yum automatically uses the last transaction. Note that when specifying more than one transaction, you can also use a range:
yum history info start_id..end_id
Example 9.23. Example output of yum history info
~]# yum history info 4..5
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Transaction ID : 4..5
Begin time : Mon Dec 7 16:51:07 2015
Begin rpmdb : 1252:d2b62b7b5768e855723954852fd7e55f641fbad9
End time : 17:18:49 2015 (27 minutes)
End rpmdb : 1253:cf8449dc4c53fc0cbc0a4c48e496a6c50f3d43c5
User : Maxim Svistunov <msvistun>
Return-Code : Success
Command Line : install tigervnc-server.x86_64
Command Line : reinstall tigervnc-server
Transaction performed with:
Installed rpm-4.11.3-17.el7.x86_64 @rhel-7-server-rpms
Installed subscription-manager-1.15.9-15.el7.x86_64 @rhel-7-server-rpms
Installed yum-3.4.3-132.el7.noarch @rhel-7-server-rpms
Packages Altered:
Reinstall tigervnc-server-1.3.1-3.el7.x86_64 @rhel-7-server-rpms
You can also view additional information recorded for a single transaction. To do so, type the following command as root:
yum history addon-info id
Similarly to yum history info, when no id is provided, yum automatically uses the latest transaction. Another way to refer to the latest transaction is to use the last keyword:
yum history addon-info last
Example 9.24. Example output of yum history addon-info
yum history addon-info command provides the following output:
~]# yum history addon-info 4
Loaded plugins: langpacks, product-id, subscription-manager
Transaction ID: 4
Available additional history information:
config-main
config-repos
saved_tx
In the output of the yum history addon-info command, three types of information are available:
config-main— global yum options that were in use during the transaction. See Section 9.5.1, “Setting [main] Options” for information on how to change global options.config-repos— options for individual yum repositories. See Section 9.5.2, “Setting [repository] Options” for information on how to change options for individual repositories.saved_tx— the data that can be used by theyum load-transactioncommand in order to repeat the transaction on another machine (see below).
To display a selected type of additional information, run the following command as root:
yum history addon-info id information
9.4.3. Reverting and Repeating Transactions
The yum history command provides means to revert or repeat a selected transaction. To revert a transaction, type the following at a shell prompt as root:
yum history undo id
To repeat a particular transaction, as root, run the following command:
yum history redo id
Both commands also accept the last keyword to undo or repeat the latest transaction.
Note that both yum history undo and yum history redo commands only revert or repeat the steps that were performed during a transaction. If the transaction installed a new package, the yum history undo command will uninstall it, and if the transaction uninstalled a package the command will again install it. This command also attempts to downgrade all updated packages to their previous version, if these older packages are still available.
When managing several identical systems, yum also enables you to perform a transaction on one of them, store the transaction details in a file, and after a period of testing, repeat the same transaction on the remaining systems as well. To store the transaction details in a file, type the following at a shell prompt as root:
yum -q history addon-info id saved_tx > file_name
Once you transfer this file to the target system, you can repeat the transaction by using the following command as root:
yum load-transaction file_name
You can configure load-transaction to ignore missing packages or rpmdb version. For more information on these configuration options see the yum.conf(5) man page.
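Putting the two commands together, a transaction could be replayed on a second system as follows; the file name is arbitrary, and the file must be transferred to the target machine between the two steps:
~]# yum -q history addon-info last saved_tx > my_transaction.txt
~]# yum load-transaction my_transaction.txt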
9.4.4. Starting New Transaction History
To start a new transaction history, run the following command as root:
yum history new
This will create a new, empty database file in the /var/lib/yum/history/ directory. The old transaction history will be kept, but will not be accessible as long as a newer database file is present in the directory.
9.5. Configuring Yum and Yum Repositories
Note
The configuration information for yum and related utilities is located at /etc/yum.conf. This file contains one mandatory [main] section, which enables you to set yum options that have global effect, and can also contain one or more [repository] sections, which allow you to set repository-specific options. However, it is recommended to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory. The values you define in individual [repository] sections of the /etc/yum.conf file override values set in the [main] section.
- set global yum options by editing the
[main]section of the/etc/yum.confconfiguration file; - set options for individual repositories by editing the
[repository]sections in/etc/yum.confand.repofiles in the/etc/yum.repos.d/directory; - use yum variables in
/etc/yum.confand files in the/etc/yum.repos.d/directory so that dynamic version and architecture values are handled correctly; - add, enable, and disable yum repositories on the command line; and
- set up your own custom yum repository.
9.5.1. Setting [main] Options
The /etc/yum.conf configuration file contains exactly one [main] section, and while some of the key-value pairs in this section affect how yum operates, others affect how yum treats repositories. You can add many additional options under the [main] section heading in /etc/yum.conf.
A sample /etc/yum.conf configuration file can look like this:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
[comments abridged]
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
The following are the most commonly used options in the [main] section:
assumeyes=value- The
assumeyesoption determines whether or not yum prompts for confirmation of critical actions. Replace value with one of:0(default) — yum prompts for confirmation of critical actions it performs.1— Do not prompt for confirmation of criticalyumactions. Ifassumeyes=1is set, yum behaves in the same way as the command-line options-yand--assumeyes. cachedir=directory- Use this option to set the directory where yum stores its cache and database files. Replace directory with an absolute path to the directory. By default, yum's cache directory is
/var/cache/yum/$basearch/$releasever/.See Section 9.5.3, “Using Yum Variables” for descriptions of the$basearchand$releaseveryum variables. debuglevel=value- This option specifies the detail of debugging output produced by yum. Here, value is an integer between
1and10. Setting a higherdebuglevelvalue causes yum to display more detailed debugging output.debuglevel=2is the default, whiledebuglevel=0disables debugging output. exactarch=value- With this option, you can set yum to consider the exact architecture when updating already installed packages. Replace value with:
0— Do not take into account the exact architecture when updating packages.1(default) — Consider the exact architecture when updating packages. With this setting, yum does not install a package for 32-bit architecture to update a package already installed on the system with 64-bit architecture. exclude=package_name [more_package_names]- The
excludeoption enables you to exclude packages by keyword during installation or system update. Listing multiple packages for exclusion can be accomplished by quoting a space-delimited list of packages. Shell glob expressions using wildcards (for example,*and?) are allowed. gpgcheck=value- Use the
gpgcheckoption to specify if yum should perform a GPG signature check on packages. Replace value with:0— Disable GPG signature-checking on packages in all repositories, including local package installation.1(default) — Enable checking of GPG signature on all packages in all repositories, including local package installation. Withgpgcheckenabled, all packages' signatures are checked.If this option is set in the[main]section of the/etc/yum.conffile, it sets the GPG-checking rule for all repositories. However, you can also setgpgcheck=valuefor individual repositories instead; that is, you can enable GPG-checking on one repository while disabling it on another. Settinggpgcheck=valuefor an individual repository in its corresponding.repofile overrides the default if it is present in/etc/yum.conf. group_command=value- Use the
group_command option to specify how the yum group install, yum group upgrade, and yum group remove commands handle a package group. Replace value with one of: simple — Install all members of a package group. Upgrade only previously installed packages, but do not install packages that have been added to the group in the meantime. compat — Similar to simple but yum upgrade also installs packages that were added to the group since the previous upgrade. objects — (default.) With this option, yum keeps track of the previously installed groups and distinguishes between packages installed as a part of the group and packages installed separately. See Example 9.15, “Viewing information on the LibreOffice package group” group_package_types=package_type [more_package_types]- Here you can specify which type of packages (optional, default or mandatory) is installed when the
yumgroupinstallcommand is called. The default and mandatory package types are chosen by default. history_record=value- With this option, you can set yum to record transaction history. Replace value with one of:
0— yum should not record history entries for transactions.1(default) — yum should record history entries for transactions. This operation takes certain amount of disk space, and some extra time in the transactions, but it provides a lot of information about past operations, which can be displayed with theyumhistorycommand.history_record=1is the default.For more information on theyumhistorycommand, see Section 9.4, “Working with Transaction History”.Note
Yum uses history records to detect modifications to therpmdbdata base that have been done outside of yum. In such case, yum displays a warning and automatically searches for possible problems caused by alteringrpmdb. Withhistory_recordturned off, yum is not able to detect these changes and no automatic checks are performed. installonlypkgs=space separated list of packages- Here you can provide a space-separated list of packages which yum can install, but will never update. See the
yum.conf(5) manual page for the list of packages which are install-only by default.If you add theinstallonlypkgsdirective to/etc/yum.conf, you should ensure that you list all of the packages that should be install-only, including any of those listed under theinstallonlypkgssection ofyum.conf(5). In particular, kernel packages should always be listed ininstallonlypkgs(as they are by default), andinstallonly_limitshould always be set to a value greater than2so that a backup kernel is always available in case the default one fails to boot. installonly_limit=value- This option sets how many packages listed in the
installonlypkgsdirective can be installed at the same time. Replace value with an integer representing the maximum number of versions that can be installed simultaneously for any single package listed ininstallonlypkgs.The defaults for theinstallonlypkgsdirective include several different kernel packages, so be aware that changing the value ofinstallonly_limitalso affects the maximum number of installed versions of any single kernel package. The default value listed in/etc/yum.confisinstallonly_limit=3, and it is not recommended to decrease this value, particularly below2. keepcache=value- The
keepcacheoption determines whether yum keeps the cache of headers and packages after successful installation. Here, value is one of:0(default) — Do not retain the cache of headers and packages after a successful installation.1— Retain the cache after a successful installation. logfile=file_name- To specify the location for logging output, replace file_name with an absolute path to the file in which yum should write its logging output. By default, yum logs to
/var/log/yum.log. max_connections=number- Here value stands for the maximum number of simultaneous connections, default is 5.
multilib_policy=value- The
multilib_policyoption sets the installation behavior if several architecture versions are available for package install. Here, value stands for:best— install the best-choice architecture for this system. For example, settingmultilib_policy=beston an AMD64 system causes yum to install the 64-bit versions of all packages.all— always install every possible architecture for every package. For example, withmultilib_policyset toallon an AMD64 system, yum would install both the i686 and AMD64 versions of a package, if both were available. obsoletes=value- The
obsoletesoption enables the obsoletes process logic during updates.When one package declares in its spec file that it obsoletes another package, the latter package is replaced by the former package when the former package is installed. Obsoletes are declared, for example, when a package is renamed. Replace value with one of:0— Disable yum's obsoletes processing logic when performing updates.1(default) — Enable yum's obsoletes processing logic when performing updates. plugins=value- This is a global switch to enable or disable yum plug-ins, value is one of:
0— Disable all yum plug-ins globally.Important
Disabling all plug-ins is not advised because certain plug-ins provide important yum services. In particular, product-id and subscription-manager plug-ins provide support for the certificate-basedContent Delivery Network(CDN). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with yum.1(default) — Enable all yum plug-ins globally. Withplugins=1, you can still disable a specific yum plug-in by settingenabled=0in that plug-in's configuration file.For more information about various yum plug-ins, see Section 9.6, “Yum Plug-ins”. For further information on controlling plug-ins, see Section 9.6.1, “Enabling, Configuring, and Disabling Yum Plug-ins”. reposdir=directory- Here, directory is an absolute path to the directory where
.repofiles are located. All.repofiles contain repository information (similar to the[repository]sections of/etc/yum.conf). Yum collects all repository information from.repofiles and the[repository]section of the/etc/yum.conffile to create a master list of repositories to use for transactions. Ifreposdiris not set, yum uses the default directory/etc/yum.repos.d/. retries=value- This option sets the number of times yum should attempt to retrieve a file before returning an error. value is an integer
0or greater. Setting value to0makes yum retry forever. The default value is10.
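As an illustration, a [main] section combining several of the options described above might look like the following sketch; the values are examples only, not recommendations:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
assumeyes=0
exclude=emacs*
multilib_policy=best
retries=5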
For a complete list of available [main] options, see the [main] OPTIONS section of the yum.conf(5) manual page.
9.5.2. Setting [repository] Options
[repository] sections, where repository is a unique repository ID such as my_personal_repo (spaces are not permitted), allow you to define individual yum repositories. To avoid conflicts, custom repositories should not use names used by Red Hat repositories.
The following is a bare minimum example of the form a [repository] section takes:
[repository]
name=repository_name
baseurl=repository_url
Each [repository] section must contain the following directives:
name=repository_name- Here, repository_name is a human-readable string describing the repository.
baseurl=repository_url- Replace repository_url with a URL to the directory where the repodata directory of a repository is located:
- If the repository is available over HTTP, use:
http://path/to/repo - If the repository is available over FTP, use:
ftp://path/to/repo - If the repository is local to the machine, use:
file:///path/to/local/repo - If a specific online repository requires basic HTTP authentication, you can specify your user name and password by prepending it to the URL as
username:password@link. For example, if a repository on http://www.example.com/repo/ requires a user name of “user” and a password of “password”, then thebaseurllink could be specified ashttp://.user:password@www.example.com/repo/
Usually this URL is an HTTP link, such as:baseurl=http://path/to/repo/releases/$releasever/server/$basearch/os/
Note that yum always expands the$releasever,$arch, and$basearchvariables in URLs. For more information about yum variables, see Section 9.5.3, “Using Yum Variables”.
Other useful [repository] directives are:
enabled=value- This is a simple way to tell yum to use or ignore a particular repository, value is one of:
0— Do not include this repository as a package source when performing updates and installs. This is an easy way of quickly turning repositories on and off, which is useful when you desire a single package from a repository that you do not want to enable for updates or installs.1— Include this repository as a package source.Turning repositories on and off can also be performed by passing either the--enablerepo=repo_nameor--disablerepo=repo_nameoption toyum, or through the Add/Remove Software window of the PackageKit utility. async=value- Controls parallel downloading of repository packages. Here, value is one of:
auto(default) — parallel downloading is used if possible, which means that yum automatically disables it for repositories created by plug-ins to avoid failures.on— parallel downloading is enabled for the repository.off— parallel downloading is disabled for the repository.
Many more [repository] options exist, and some of them have the same form and function as certain [main] options. For a complete list, see the [repository] OPTIONS section of the yum.conf(5) manual page.
Example 9.25. A sample /etc/yum.repos.d/redhat.repo file
The following is a sample /etc/yum.repos.d/redhat.repo file:
#
# Red Hat Repositories
# Managed by (rhsm) subscription-manager
#

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/$releasever/$basearch/scalablefilesystem/os
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-source-rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Source RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/$releasever/$basearch/scalablefilesystem/source/SRPMS
enabled = 0
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-debug-rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Debug RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/$releasever/$basearch/scalablefilesystem/debug
enabled = 0
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem
9.5.3. Using Yum Variables
Variables can be used and referenced in all yum commands and in all yum configuration files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory):
$releasever- You can use this variable to reference the release version of Red Hat Enterprise Linux. Yum obtains the value of
$releasever from the distroverpkg=value line in the /etc/yum.conf configuration file. If there is no such line in /etc/yum.conf, then yum infers the correct value by deriving the version number from the redhat-release-product package (for example, redhat-release-server) that provides the redhat-release file.
$arch- You can use this variable to refer to the system's CPU architecture as returned when calling Python's os.uname() function. Valid values for $arch include: i586, i686, and x86_64.
$basearch- You can use $basearch to reference the base architecture of the system. For example, i686 and i586 machines both have a base architecture of i386, and AMD64 and Intel 64 machines have a base architecture of x86_64.
$YUM0-9- These ten variables are each replaced with the value of any shell environment variables with the same name. If one of these variables is referenced (in /etc/yum.conf for example) and a shell environment variable with the same name does not exist, then the configuration file variable is not replaced.
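For example, assuming the shell environment variable YUM0 is exported before running yum, a .repo file could reference it in a baseurl as follows (a sketch; the URL is illustrative):

~]# export YUM0=7Server

baseurl=http://www.example.com/repo/$YUM0/$basearch/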
To define a custom variable or to override the value of an existing one, create a file with the same name as the variable (without the “$” sign) in the /etc/yum/vars/ directory, and add the desired value on its first line.
For example, to define a variable named $osname, create a new file with “Red Hat Enterprise Linux 7” on the first line and save it as /etc/yum/vars/osname:

~]# echo "Red Hat Enterprise Linux 7" > /etc/yum/vars/osname

Instead of “Red Hat Enterprise Linux 7”, you can now use the following in the .repo files:
name=$osname $releasever
9.5.4. Viewing the Current Configuration
To display the current values of global yum options (that is, the options specified in the [main] section of the /etc/yum.conf file), execute the yum-config-manager command with no command-line options:
yum-config-manager

To list the content of a different configuration section or sections, use the command in the following form:

yum-config-manager section…

You can also use a glob expression to display the configuration of all matching sections:

yum-config-manager glob_expression…

Example 9.26. Viewing configuration of the main section
~]$ yum-config-manager main \*
Loaded plugins: langpacks, product-id, subscription-manager
================================== main ===================================
[main]
alwaysprompt = True
assumeyes = False
bandwidth = 0
bugtracker_url = https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%206&component=yum
cache = 0
[output truncated]

9.5.5. Adding, Enabling, and Disabling a Yum Repository
Note
The yum-config-manager command used throughout this section is provided by the yum-utils package.
Important
When the system is registered with Red Hat Subscription Management to the certificate-based Content Delivery Network (CDN), the Red Hat Subscription Manager tools are used to manage repositories in the /etc/yum.repos.d/redhat.repo file.
Adding a Yum Repository
To define a new repository, you can either add a [repository] section to the /etc/yum.conf file, or to a .repo file in the /etc/yum.repos.d/ directory. All files with the .repo file extension in this directory are read by yum, and it is recommended to define your repositories here instead of in /etc/yum.conf.
Warning
Obtaining and installing software packages from unverified or untrusted software sources other than Red Hat's certificate-based Content Delivery Network (CDN) constitutes a potential security risk, and could lead to security, stability, compatibility, and maintainability issues.
Yum repositories commonly provide their own .repo file. To add such a repository to your system and enable it, run the following command as root:

yum-config-manager --add-repo repository_url

where repository_url is a link to the .repo file.
Example 9.27. Adding example.repo
~]# yum-config-manager --add-repo http://www.example.com/example.repo
Loaded plugins: langpacks, product-id, subscription-manager
adding repo from: http://www.example.com/example.repo
grabbing file http://www.example.com/example.repo to /etc/yum.repos.d/example.repo
example.repo | 413 B 00:00
repo saved to /etc/yum.repos.d/example.repo

Enabling a Yum Repository
To enable a particular repository or repositories, type the following at a shell prompt as root:

yum-config-manager --enable repository…

where repository is the unique repository ID (use yum repolist all to list available repository IDs). Alternatively, you can use a glob expression to enable all matching repositories:

yum-config-manager --enable glob_expression…
Example 9.28. Enabling repositories defined in custom sections of /etc/yum.conf
To enable repositories defined in the [example], [example-debuginfo], and [example-source] sections, type:
~]# yum-config-manager --enable example\*
Loaded plugins: langpacks, product-id, subscription-manager
============================== repo: example ==============================
[example]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7Server
baseurl = http://www.example.com/repo/7Server/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7Server/example
[output truncated]

Example 9.29. Enabling all repositories
To enable all repositories defined both in the /etc/yum.conf file and in the /etc/yum.repos.d/ directory, type:
~]# yum-config-manager --enable \*
Loaded plugins: langpacks, product-id, subscription-manager
============================== repo: example ==============================
[example]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7Server
baseurl = http://www.example.com/repo/7Server/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7Server/example
[output truncated]

When successful, the yum-config-manager --enable command displays the current repository configuration.
Disabling a Yum Repository
To disable a yum repository, run the following command as root:

yum-config-manager --disable repository…

where repository is the unique repository ID (use yum repolist all to list available repository IDs). Similarly to yum-config-manager --enable, you can use a glob expression to disable all matching repositories at the same time:

yum-config-manager --disable glob_expression…
Example 9.30. Disabling all repositories
To disable all repositories defined both in the /etc/yum.conf file and in the /etc/yum.repos.d/ directory, type:
~]# yum-config-manager --disable \*
Loaded plugins: langpacks, product-id, subscription-manager
============================== repo: example ==============================
[example]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7Server
baseurl = http://www.example.com/repo/7Server/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7Server/example
[output truncated]

When successful, the yum-config-manager --disable command displays the current configuration.
9.5.6. Creating a Yum Repository
To set up a yum repository:
- Install the createrepo package:
# yum install createrepo
- Copy all packages for your new repository into one directory, such as
/tmp/local_repo/:
cp /your/packages/*.rpm /tmp/local_repo/
- To create the repository run:
createrepo /tmp/local_repo/
This creates the necessary metadata for the yum repository and places the metadata in a newly created subdirectory repodata.
The repository is now ready to be consumed by yum. This repository can be shared over the HTTP or FTP protocol, or referenced directly from the local machine. See Section 9.5.2, “Setting [repository] Options” for more details on how to configure a yum repository.
Note
When constructing the URL for the repository, refer to /tmp/local_repo, not to /tmp/local_repo/repodata, as this directory contains only metadata. Actual yum packages are in /tmp/local_repo.
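For example, a .repo file consuming the local repository created above might look as follows (a sketch; the repository ID is arbitrary, and gpgcheck=0 is used because locally built packages are typically unsigned):

[local_repo]
name=Local Repository
# local file:// URL pointing at the directory that contains repodata/
baseurl=file:///tmp/local_repo
enabled=1
gpgcheck=0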
9.5.6.1. Adding packages to an already created yum repository
- Copy the new packages to your repository directory, such as
/tmp/local_repo/:
cp /your/packages/*.rpm /tmp/local_repo/
- To reflect the newly added packages in the metadata, run:
createrepo --update /tmp/local_repo/
- Optional: If you have already used any yum command with the newly updated repository, run:
yum clean expire-cache
9.5.7. Adding the Optional and Supplementary Repositories
9.6. Yum Plug-ins
Yum informs you which plug-ins, if any, are loaded and active whenever you run a yum command. For example:
~]# yum info yum
Loaded plugins: langpacks, product-id, subscription-manager
[output truncated]

The names listed after Loaded plugins are the names you can provide to the --disableplugin=plugin_name option.
9.6.1. Enabling, Configuring, and Disabling Yum Plug-ins
To enable yum plug-ins, ensure that a line beginning with plugins= is present in the [main] section of /etc/yum.conf, and that its value is 1:
plugins=1
You can disable all plug-ins by changing this line to plugins=0.
Important
Disabling all plug-ins is not advised because certain plug-ins provide important yum services. In particular, the product-id and subscription-manager plug-ins provide support for the certificate-based Content Delivery Network (CDN). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with yum.
Every installed plug-in has its own configuration file in the /etc/yum/pluginconf.d/ directory. You can set plug-in specific options in these files. For example, here is the aliases plug-in's aliases.conf configuration file:
[main]
enabled=1
Similar to the /etc/yum.conf file, the plug-in configuration files always contain a [main] section where the enabled= option controls whether the plug-in is enabled when you run yum commands. If this option is missing, you can add it manually to the file.
Note that if you disable all plug-ins by setting plugins=0 in /etc/yum.conf, then all plug-ins are disabled regardless of whether they are enabled in their individual configuration files.
To disable all yum plug-ins for a single yum command, use the --noplugins option.
To disable one or more yum plug-ins for a single yum command, add the --disableplugin=plugin_name option to the command. For example, to disable the aliases plug-in while updating a system, type:
~]# yum update --disableplugin=aliases

The plug-in names you provide to the --disableplugin= option are the same names listed after the Loaded plugins line in the output of any yum command. You can disable multiple plug-ins by separating their names with commas. In addition, you can match multiple plug-in names or shorten long ones by using glob expressions:
~]# yum update --disableplugin=aliases,lang*

9.6.2. Installing Additional Yum Plug-ins
Yum plug-ins usually adhere to the yum-plugin-plugin_name package-naming convention, but not always: the package which provides the kabi plug-in is named kabi-yum-plugins, for example. You can install a yum plug-in in the same way you install other packages. For instance, to install the yum-aliases plug-in, type the following at a shell prompt:
~]# yum install yum-plugin-aliases

9.6.3. Working with Yum Plug-ins
The following list provides descriptions and usage instructions for several useful yum plug-ins. Plug-ins are listed by name; brackets contain their package name.
- search-disabled-repos (subscription-manager)
- The search-disabled-repos plug-in allows you to temporarily or permanently enable disabled repositories to help resolve dependencies. With this plug-in enabled, when Yum fails to install a package due to failed dependency resolution, it offers to temporarily enable disabled repositories and try again. If the installation succeeds, Yum also offers to enable the used repositories permanently. Note that the plug-in works only with the repositories that are managed by subscription-manager and not with custom repositories.
Important
If yum is executed with the --assumeyes or -y option, or if the assumeyes directive is enabled in /etc/yum.conf, the plug-in enables disabled repositories, both temporarily and permanently, without prompting for confirmation. This may lead to problems, for example, enabling repositories that you do not want enabled.
To configure the search-disabled-repos plug-in, edit the configuration file located in /etc/yum/pluginconf.d/search-disabled-repos.conf. For the list of directives you can use in the [main] section, see the table below.

Table 9.3. Supported search-disabled-repos.conf directives

| Directive | Description |
|---|---|
| enabled=value | Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). The plug-in is enabled by default. |
| notify_only=value | Allows you to restrict the behavior of the plug-in to notifications only. The value must be either 1 (notify only without modifying the behavior of Yum), or 0 (modify the behavior of Yum). By default the plug-in only notifies the user. |
| ignored_repos=repositories | Allows you to specify the repositories that will not be enabled by the plug-in. |
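For example, a search-disabled-repos.conf that keeps the plug-in enabled in its default notification-only mode might look as follows (a sketch; the ignored repository ID is illustrative):

[main]
enabled=1
# only notify, do not change yum's behavior
notify_only=1
# never offer to enable this repository
ignored_repos=my_personal_repo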
- kabi (kabi-yum-plugins)
- The kabi plug-in checks whether a driver update package conforms with the official Red Hat kernel Application Binary Interface (kABI). With this plug-in enabled, when a user attempts to install a package that uses kernel symbols which are not on a whitelist, a warning message is written to the system log. Additionally, configuring the plug-in to run in enforcing mode prevents such packages from being installed at all.
To configure the kabi plug-in, edit the configuration file located in
/etc/yum/pluginconf.d/kabi.conf. A list of directives that can be used in the [main] section is shown in the table below.

Table 9.4. Supported kabi.conf directives

| Directive | Description |
|---|---|
| enabled=value | Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). When installed, the plug-in is enabled by default. |
| whitelists=directory | Allows you to specify the directory in which the files with supported kernel symbols are located. By default, the kabi plug-in uses files provided by the kernel-abi-whitelists package (that is, the /usr/lib/modules/kabi-rhel70/ directory). |
| enforce=value | Allows you to enable or disable enforcing mode. The value must be either 1 (enabled), or 0 (disabled). By default, this option is commented out and the kabi plug-in only displays a warning message. |
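For example, a kabi.conf that turns on enforcing mode while keeping the default whitelist directory might look as follows (a sketch):

[main]
enabled=1
# default directory provided by the kernel-abi-whitelists package
whitelists=/usr/lib/modules/kabi-rhel70
# block installation of non-conforming packages instead of only warning
enforce=1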
- product-id (subscription-manager)
- The product-id plug-in manages product identity certificates for products installed from the Content Delivery Network. The product-id plug-in is installed by default.
- langpacks (yum-langpacks)
- The langpacks plug-in is used to search for locale packages of a selected language for every package that is installed. The langpacks plug-in is installed by default.
- aliases (yum-plugin-aliases)
- The aliases plug-in adds the
alias command-line option which enables configuring and using aliases for yum commands.
- The yum-changelog plug-in adds the
--changelog command-line option that enables viewing package change logs before and after updating.
- The yum-tmprepo plug-in adds the
--tmprepo command-line option that takes the URL of a repository file, then downloads and enables it for only one transaction. This plug-in tries to ensure the safe temporary usage of repositories. By default, it does not allow disabling the GPG check.
- The yum-verify plug-in adds the
verify, verify-rpm, and verify-all command-line options for viewing verification data on the system.
- The yum-versionlock plug-in excludes other versions of selected packages, which enables protecting packages from being updated by newer versions. With the
versionlock command-line option, you can view and edit the list of locked packages.
9.7. Automatically Refreshing Package Database and Downloading Updates with Yum-cron
The yum-cron service checks and downloads package updates automatically. The cron jobs provided by the yum-cron service are active immediately after installation of the yum-cron package. The yum-cron service can also automatically install downloaded updates.
By default, the yum-cron service:
- Updates the metadata in the yum cache once per hour.
- Downloads pending package updates to the yum cache once per day. If new packages are available in the repository, an email is sent. See Section 9.7.2, “Setting up Optional Email Notifications” for more information.
The yum-cron service has two configuration files:
/etc/yum/yum-cron.conf- For daily tasks.
/etc/yum/yum-cron-hourly.conf- For hourly tasks.
9.7.1. Enabling Automatic Installation of Updates
To enable automatic installation of downloaded updates, set the apply_updates option in the selected configuration file as follows:
apply_updates = yes
9.7.2. Setting up Optional Email Notifications
By default, the yum-cron service uses cron to send emails containing the output of the executed command. This email is sent according to the cron configuration, typically to the local superuser, and stored in the /var/spool/mail/root file.
Alternatively, the yum-cron service can send emails using its own built-in email handling instead of cron jobs. However, this email configuration does not support TLS and the overall built-in email logic is very basic.
To enable the yum-cron built-in email notifications:
- Open the selected yum-cron configuration file:
/etc/yum/yum-cron.conf- For daily tasks.
/etc/yum/yum-cron-hourly.conf- For hourly tasks.
- In the
[emitters] section, set the following option:

emit_via = email
- Set the
email_from, email_to, and email_host options as required.
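For example, the relevant parts of a configuration file that sends notifications through a local mail relay might look as follows (a sketch; the section layout follows the default yum-cron.conf, and the addresses and host are illustrative):

[emitters]
emit_via = email

[email]
email_from = yum-cron@localhost
email_to = admin@example.com
email_host = localhost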
9.7.3. Enabling or Disabling Specific Repositories
The yum-cron service does not support specific configuration of repositories. As a workaround for enabling or disabling specific repositories for yum-cron, but not for yum in general, follow the steps below:
- Create an empty repository configuration directory anywhere on the system.
- Copy all configuration files from the
/etc/yum.repos.d/ directory to this newly created directory.
.repo configuration file within the newly created directory, set the enabled option as follows:
enabled = 1- To enable the repository.
enabled = 0- To disable the repository.
- Add the following option, which points to the newly created repository directory, at the end of the selected
yum-cron configuration file:

reposdir=/path/to/new/reposdir
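For example, using a hypothetical /etc/yum-cron.repos.d/ directory, the whole workaround could be performed as follows (a sketch):

~]# mkdir /etc/yum-cron.repos.d
~]# cp /etc/yum.repos.d/*.repo /etc/yum-cron.repos.d/
~]# echo "reposdir=/etc/yum-cron.repos.d" >> /etc/yum/yum-cron.conf

After editing the enabled option in the copied .repo files, yum-cron reads repositories from the new directory, while yum itself continues to use /etc/yum.repos.d/.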
9.7.4. Testing Yum-cron Settings
To test your yum-cron settings without waiting for the next scheduled yum-cron task:
- Open the selected yum-cron configuration file:
/etc/yum/yum-cron.conf- For daily tasks.
/etc/yum/yum-cron-hourly.conf- For hourly tasks.
- Set the
random_sleep option in the selected configuration file as follows:

random_sleep = 0
- Run yum-cron with the selected configuration file:
# yum-cron /etc/yum/yum-cron.conf
# yum-cron /etc/yum/yum-cron-hourly.conf
9.7.5. Disabling Yum-cron Messages
The yum-cron messages cannot be entirely disabled, but they can be limited to messages with critical priority only. To limit the messages:
- Open the selected yum-cron configuration file:
/etc/yum/yum-cron.conf- For daily tasks.
/etc/yum/yum-cron-hourly.conf- For hourly tasks.
- Set the following option in the
[base] section of the configuration file:

debuglevel = -4
9.7.6. Automatically Cleaning Packages
The yum-cron service does not support any configuration option for removing cached packages similar to the yum clean all command. To clean packages automatically, you can create a cron job as an executable shell script:
- Create a shell script in the
/etc/cron.daily/ directory containing:

#!/bin/sh
yum clean all
- Make the script executable:
# chmod +x /etc/cron.daily/script-name.sh
9.8. Additional Resources
Installed Documentation
- yum(8) — The manual page for the yum command-line utility provides a complete list of supported options and commands.
- yumdb(8) — The manual page for the yumdb command-line utility documents how to use this tool to query and, if necessary, alter the yum database.
- yum.conf(5) — The manual page named yum.conf documents available yum configuration options.
- yum-utils(1) — The manual page named yum-utils lists and briefly describes additional utilities for managing yum configuration, manipulating repositories, and working with the yum database.
Online Resources
- Yum Guides — The Yum Guides page on the project home page provides links to further documentation.
- Red Hat Customer Portal Labs — The Red Hat Customer Portal Labs includes a “Yum Repository Configuration Helper”.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the
su and sudo commands.
Part IV. Infrastructure Services
Chapter 10. Managing Services with systemd
10.1. Introduction to systemd
Table 10.1. Available systemd Unit Types
| Unit Type | File Extension | Description |
|---|---|---|
| Service unit | .service | A system service. |
| Target unit | .target | A group of systemd units. |
| Automount unit | .automount | A file system automount point. |
| Device unit | .device | A device file recognized by the kernel. |
| Mount unit | .mount | A file system mount point. |
| Path unit | .path | A file or directory in a file system. |
| Scope unit | .scope | An externally created process. |
| Slice unit | .slice | A group of hierarchically organized units that manage system processes. |
| Snapshot unit | .snapshot | A saved state of the systemd manager. |
| Socket unit | .socket | An inter-process communication socket. |
| Swap unit | .swap | A swap device or a swap file. |
| Timer unit | .timer | A systemd timer. |
Table 10.2. Systemd Unit Files Locations
| Directory | Description |
|---|---|
/usr/lib/systemd/system/ | Systemd unit files distributed with installed RPM packages. |
/run/systemd/system/ | Systemd unit files created at run time. This directory takes precedence over the directory with installed service unit files. |
/etc/systemd/system/ | Systemd unit files created by systemctl enable as well as unit files added for extending a service. This directory takes precedence over the directory with runtime unit files. |
Overriding the Default systemd Configuration Using system.conf
The default configuration of systemd is defined during compilation and can be found in the systemd configuration file at /etc/systemd/system.conf. Use this file if you want to deviate from those defaults and override selected default values for systemd units globally.
For example, to override the default timeout limit for starting units, use the DefaultTimeoutStartSec parameter to input the required value in seconds:
DefaultTimeoutStartSec=required value
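For example, to raise the default start timeout to 180 seconds, the [Manager] section of /etc/systemd/system.conf would contain the following (a minimal sketch; the value is illustrative):

[Manager]
# allow units up to 180 seconds to start before systemd gives up
DefaultTimeoutStartSec=180s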
10.1.1. Main Features
- Socket-based activation — At boot time, systemd creates listening sockets for all system services that support this type of activation, and passes the sockets to these services as soon as they are started. This not only allows systemd to start services in parallel, but also makes it possible to restart a service without losing any message sent to it while it is unavailable: the corresponding socket remains accessible and all messages are queued. Systemd uses socket units for socket-based activation.
- Bus-based activation — System services that use D-Bus for inter-process communication can be started on-demand the first time a client application attempts to communicate with them. Systemd uses D-Bus service files for bus-based activation.
- Device-based activation — System services that support device-based activation can be started on-demand when a particular type of hardware is plugged in or becomes available. Systemd uses device units for device-based activation.
- Path-based activation — System services that support path-based activation can be started on-demand when a particular file or directory changes its state. Systemd uses path units for path-based activation.
- Mount and automount point management — Systemd monitors and manages mount and automount points. Systemd uses mount units for mount points and automount units for automount points.
- Aggressive parallelization — Because of the use of socket-based activation, systemd can start system services in parallel as soon as all listening sockets are in place. In combination with system services that support on-demand activation, parallel activation significantly reduces the time required to boot the system.
- Transactional unit activation logic — Before activating or deactivating a unit, systemd calculates its dependencies, creates a temporary transaction, and verifies that this transaction is consistent. If a transaction is inconsistent, systemd automatically attempts to correct it and remove non-essential jobs from it before reporting an error.
- Backwards compatibility with SysV init — Systemd supports SysV init scripts as described in the Linux Standard Base Core Specification, which eases the upgrade path to systemd service units.
10.1.2. Compatibility Changes
- Systemd has only limited support for runlevels. It provides a number of target units that can be directly mapped to these runlevels and for compatibility reasons, it is also distributed with the earlier
runlevel command. Not all systemd targets can be directly mapped to runlevels, however, and as a consequence, this command might return N to indicate an unknown runlevel. It is recommended that you avoid using the runlevel command if possible.
For more information about systemd targets and their comparison with runlevels, see Section 10.3, “Working with systemd Targets”.
- The systemctl utility does not support custom commands. In addition to standard commands such as start, stop, and status, authors of SysV init scripts could implement support for any number of arbitrary commands in order to provide additional functionality. For example, the init script for iptables in Red Hat Enterprise Linux 6 could be executed with the panic command, which immediately enabled panic mode and reconfigured the system to start dropping all incoming and outgoing packets. This is not supported in systemd, and systemctl only accepts documented commands.
For more information about the systemctl utility and its comparison with the earlier service utility, see Section 10.2, “Managing System Services”.
- The systemctl utility does not communicate with services that have not been started by systemd. When systemd starts a system service, it stores the ID of its main process in order to keep track of it. The systemctl utility then uses this PID to query and manage the service. Consequently, if a user starts a particular daemon directly on the command line, systemctl is unable to determine its current status or stop it.
- Systemd stops only running services. Previously, when the shutdown sequence was initiated, Red Hat Enterprise Linux 6 and earlier releases of the system used symbolic links located in the /etc/rc0.d/ directory to stop all available system services regardless of their status. With systemd, only running services are stopped on shutdown.
- System services are unable to read from the standard input stream. When systemd starts a service, it connects its standard input to /dev/null to prevent any interaction with the user.
- System services do not inherit any context (such as the HOME and PATH environment variables) from the invoking user and their session. Each service runs in a clean execution context.
- When loading a SysV init script, systemd reads dependency information encoded in the Linux Standard Base (LSB) header and interprets it at run time.
- All operations on service units are subject to a default timeout of 5 minutes to prevent a malfunctioning service from freezing the system. This value is hardcoded for services that are generated from initscripts and cannot be changed. However, individual configuration files can be used to specify a longer timeout value per service; see Example 10.21, “Changing the timeout limit”.
10.2. Managing System Services
Note
In previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart, init scripts were located in the /etc/rc.d/init.d/ directory. These init scripts were typically written in Bash, and allowed the system administrator to control the state of services and daemons in their system. In Red Hat Enterprise Linux 7, these init scripts have been replaced with service units.
Service units end with the .service file extension and serve a similar purpose as init scripts. To view, start, stop, restart, enable, or disable system services, use the systemctl command as described in Table 10.3, “Comparison of the service Utility with systemctl”, Table 10.4, “Comparison of the chkconfig Utility with systemctl”, and further in this section. The service and chkconfig commands are still available in the system and work as expected, but are only included for compatibility reasons and should be avoided.
Table 10.3. Comparison of the service Utility with systemctl
| service | systemctl | Description |
|---|---|---|
| service name start | systemctl start name.service | Starts a service. |
| service name stop | systemctl stop name.service | Stops a service. |
| service name restart | systemctl restart name.service | Restarts a service. |
| service name condrestart | systemctl try-restart name.service | Restarts a service only if it is running. |
| service name reload | systemctl reload name.service | Reloads configuration. |
| service name status | systemctl status name.service; systemctl is-active name.service | Checks if a service is running. |
| service --status-all | systemctl list-units --type service --all | Displays the status of all services. |
Table 10.4. Comparison of the chkconfig Utility with systemctl
| chkconfig | systemctl | Description |
|---|---|---|
| chkconfig name on | systemctl enable name.service | Enables a service. |
| chkconfig name off | systemctl disable name.service | Disables a service. |
| chkconfig --list name | systemctl status name.service; systemctl is-enabled name.service | Checks if a service is enabled. |
| chkconfig --list | systemctl list-unit-files --type service | Lists all services and checks if they are enabled. |
| chkconfig --list | systemctl list-dependencies --after | Lists services that are ordered to start before the specified unit. |
| chkconfig --list | systemctl list-dependencies --before | Lists services that are ordered to start after the specified unit. |
Specifying Service Units
Service units can be referenced by their full names including the .service file extension, for example:
~]# systemctl stop nfs-server.service

However, it is possible to omit the file extension; in that case, the systemctl utility assumes the argument is a service unit. The following command is equivalent to the one above:
~]# systemctl stop nfs-server

To display all names of a particular unit, use:

~]# systemctl show nfs-server.service -p Names

Behavior of systemctl in a chroot Environment
If you change the root directory using the chroot command, most systemctl commands refuse to perform any action. The reason for this is that the systemd process and the user that used the chroot command do not have the same view of the filesystem. This happens, for example, when systemctl is invoked from a kickstart file.
The exceptions are unit file commands such as the systemctl enable and systemctl disable commands. These commands do not need a running system and do not affect running processes, but they do affect unit files. Therefore, you can run these commands even in a chroot environment. For example, to enable the httpd service on a system under the /srv/website1/ directory:
~]# chroot /srv/website1
~]# systemctl enable httpd.service
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service, pointing to /usr/lib/systemd/system/httpd.service.
10.2.1. Listing Services
To list all currently loaded service units, type the following at a shell prompt:

systemctl list-units --type service

For each service unit file, this command displays its full name (UNIT) followed by a note whether the unit file has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit file activation state, and a short description (DESCRIPTION).
By default, the systemctl list-units command displays only active units. If you want to list all loaded units regardless of their state, run this command with the --all or -a command line option:
systemctl list-units --type service --all

You can also list all available service units to see whether they are enabled. To do so, type:

systemctl list-unit-files --type service

For each service unit, this command displays its full name (UNIT FILE) followed by information whether the service unit is enabled or not (STATE). For information on how to determine the status of individual service units, see Section 10.2.2, “Displaying Service Status”.
Example 10.1. Listing Services
~]$ systemctl list-units --type service
UNIT LOAD ACTIVE SUB DESCRIPTION
abrt-ccpp.service loaded active exited Install ABRT coredump hook
abrt-oops.service loaded active running ABRT kernel log watcher
abrt-vmcore.service loaded active exited Harvest vmcores for ABRT
abrt-xorg.service loaded active running ABRT Xorg log watcher
abrtd.service loaded active running ABRT Automated Bug Reporting Tool
...
systemd-vconsole-setup.service loaded active exited Setup Virtual Console
tog-pegasus.service loaded active running OpenPegasus CIM Server
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
46 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'

~]$ systemctl list-unit-files --type service
UNIT FILE STATE
abrt-ccpp.service enabled
abrt-oops.service enabled
abrt-vmcore.service enabled
abrt-xorg.service enabled
abrtd.service enabled
...
wpa_supplicant.service disabled
ypbind.service disabled
208 unit files listed.

10.2.2. Displaying Service Status
To display detailed information about a service unit that corresponds to a system service, type the following at a shell prompt:

systemctl status name.service

Replace name with the name of the service unit you want to inspect (for example, gdm). This command displays the name of the selected service unit followed by its short description, one or more fields described in Table 10.5, “Available Service Unit Information”, and, if it is executed by the root user, also the most recent log entries.
Table 10.5. Available Service Unit Information
| Field | Description |
|---|---|
Loaded | Information whether the service unit has been loaded, the absolute path to the unit file, and a note whether the unit is enabled. |
Active | Information whether the service unit is running followed by a time stamp. |
Main PID | The PID of the corresponding system service followed by its name. |
Status | Additional information about the corresponding system service. |
Process | Additional information about related processes. |
CGroup | Additional information about related Control Groups (cgroups). |
To verify that a particular service unit is running, run the following command:

systemctl is-active name.service

Similarly, to determine whether a particular service unit is enabled, type:

systemctl is-enabled name.service

Note that both systemctl is-active and systemctl is-enabled return an exit status of 0 if the specified service unit is running or enabled. For information on how to list all currently loaded service units, see Section 10.2.1, “Listing Services”.
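Because both commands return an exit status of 0 on success, they are convenient in shell scripts. The following is a minimal sketch (the service name is illustrative) that starts a service only if it is not already running:

# start httpd only when is-active reports it as inactive
if ! systemctl is-active httpd.service > /dev/null; then
    systemctl start httpd.service
fi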
Example 10.2. Displaying Service Status
The service unit for the GNOME Display Manager is named gdm.service. To determine the current status of this service unit, type the following at a shell prompt:
~]# systemctl status gdm.service
gdm.service - GNOME Display Manager
Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled)
Active: active (running) since Thu 2013-10-17 17:31:23 CEST; 5min ago
Main PID: 1029 (gdm)
CGroup: /system.slice/gdm.service
├─1029 /usr/sbin/gdm
├─1037 /usr/libexec/gdm-simple-slave --display-id /org/gno...
└─1047 /usr/bin/Xorg :0 -background none -verbose -auth /r...
Oct 17 17:31:23 localhost systemd[1]: Started GNOME Display Manager.

Example 10.3. Displaying Services Ordered to Start Before a Service

To determine which services are ordered to start before the specified service, type the following at a shell prompt:
~]# systemctl list-dependencies --after gdm.service
gdm.service
├─dbus.socket
├─getty@tty1.service
├─livesys.service
├─plymouth-quit.service
├─system.slice
├─systemd-journald.socket
├─systemd-user-sessions.service
└─basic.target
[output truncated]

Example 10.4. Displaying Services Ordered to Start After a Service

To determine which services are ordered to start after the specified service, type the following at a shell prompt:
~]# systemctl list-dependencies --before gdm.service
gdm.service
├─dracut-shutdown.service
├─graphical.target
│ ├─systemd-readahead-done.service
│ ├─systemd-readahead-done.timer
│ └─systemd-update-utmp-runlevel.service
└─shutdown.target
├─systemd-reboot.service
└─final.target
└─systemd-reboot.service

10.2.3. Starting a Service
To start a service unit that corresponds to a system service, type the following at a shell prompt as root:
systemctl start name.service

Replace name with the name of the service unit you want to start (for example, gdm). This command starts the selected service unit in the current session. For information on how to enable a service unit to be started at boot time, see Section 10.2.6, “Enabling a Service”. For information on how to determine the status of a certain service unit, see Section 10.2.2, “Displaying Service Status”.
Example 10.5. Starting a Service
The service unit for the Apache HTTP Server is named httpd.service. To activate this service unit and start the httpd daemon in the current session, run the following command as root:
~]# systemctl start httpd.service

10.2.4. Stopping a Service
To stop a service unit that corresponds to a system service, type the following at a shell prompt as root:
systemctl stop name.service

Replace name with the name of the service unit you want to stop (for example, bluetooth). This command stops the selected service unit in the current session. For information on how to disable a service unit and prevent it from being started at boot time, see Section 10.2.7, “Disabling a Service”. For information on how to determine the status of a certain service unit, see Section 10.2.2, “Displaying Service Status”.
Example 10.6. Stopping a Service
The service unit for the bluetoothd daemon is named bluetooth.service. To deactivate this service unit and stop the bluetoothd daemon in the current session, run the following command as root:
~]# systemctl stop bluetooth.service

10.2.5. Restarting a Service
To restart a service unit that corresponds to a system service, type the following at a shell prompt as root:
systemctl restart name.service

Replace name with the name of the service unit you want to restart (for example, httpd). This command stops the selected service unit in the current session and immediately starts it again. Importantly, if the selected service unit is not running, this command starts it too. To tell systemd to restart a service unit only if the corresponding service is already running, run the following command as root:
systemctl try-restart name.service

To reload the configuration of a system service without interrupting its execution, run the following command as root:
systemctl reload name.service

Note that system services that do not support this feature ignore this command altogether. For convenience, the systemctl command also supports the reload-or-restart and reload-or-try-restart commands that restart such services instead. For information on how to determine the status of a certain service unit, see Section 10.2.2, “Displaying Service Status”.
Example 10.7. Restarting a Service
To reload the configuration of the httpd service without interrupting active connections, run the following command as root:
~]# systemctl reload httpd.service

10.2.6. Enabling a Service
To configure a service unit that corresponds to a system service to be automatically started at boot time, type the following at a shell prompt as root:
systemctl enable name.service

Replace name with the name of the service unit you want to enable (for example, httpd). This command reads the [Install] section of the selected service unit and creates appropriate symbolic links to the /usr/lib/systemd/system/name.service file in the /etc/systemd/system/ directory and its subdirectories. This command does not, however, rewrite links that already exist. If you want to ensure that the symbolic links are re-created, use the following command as root:
systemctl reenable name.service

This command disables the selected service unit and immediately enables it again.

Example 10.8. Enabling a Service
To configure the Apache HTTP Server to start automatically at boot time, run the following command as root:
~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.

10.2.7. Disabling a Service
To prevent a service unit that corresponds to a system service from being automatically started at boot time, type the following at a shell prompt as root:
systemctl disable name.service

Replace name with the name of the service unit you want to disable (for example, bluetooth). This command reads the [Install] section of the selected service unit and removes appropriate symbolic links to the /usr/lib/systemd/system/name.service file from the /etc/systemd/system/ directory and its subdirectories. In addition, you can mask any service unit to prevent it from being started manually or by another service. To do so, run the following command as root:
systemctl mask name.service

This command replaces the /etc/systemd/system/name.service file with a symbolic link to /dev/null, rendering the actual unit file inaccessible to systemd. To revert this action and unmask a service unit, type as root:
systemctl unmask name.service
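For example, to mask the bluetooth service so that it cannot be started even manually, and to later revert the change, you could run the following commands as root (an illustrative sketch):

~]# systemctl mask bluetooth.service
~]# systemctl unmask bluetooth.service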
Example 10.9. Disabling a Service

Example 10.6 illustrates how to stop the bluetooth.service unit in the current session. To prevent this service unit from starting at boot time, type the following at a shell prompt as root:
~]# systemctl disable bluetooth.service
Removed symlink /etc/systemd/system/bluetooth.target.wants/bluetooth.service.
Removed symlink /etc/systemd/system/dbus-org.bluez.service.

10.2.8. Starting a Conflicting Service
In systemd, positive and negative dependencies between services exist. If you attempt to start a service that has a negative dependency on a service that is already running, the running service is stopped automatically. For example, if you are running the postfix service, and you try to start the sendmail service, systemd first automatically stops postfix, because these two services are conflicting and cannot run on the same port.
10.3. Working with systemd Targets
Systemd targets are represented by target units. Target units end with the .target file extension and their only purpose is to group together other systemd units through a chain of dependencies. For example, the graphical.target unit, which is used to start a graphical session, starts system services such as the GNOME Display Manager (gdm.service) or Accounts Service (accounts-daemon.service) and also activates the multi-user.target unit. Similarly, the multi-user.target unit starts other essential system services such as NetworkManager (NetworkManager.service) or D-Bus (dbus.service) and activates another target unit named basic.target.
Table 10.6. Comparison of SysV Runlevels with systemd Targets
| Runlevel | Target Units | Description |
|---|---|---|
0 | runlevel0.target, poweroff.target | Shut down and power off the system. |
1 | runlevel1.target, rescue.target | Set up a rescue shell. |
2 | runlevel2.target, multi-user.target | Set up a non-graphical multi-user system. |
3 | runlevel3.target, multi-user.target | Set up a non-graphical multi-user system. |
4 | runlevel4.target, multi-user.target | Set up a non-graphical multi-user system. |
5 | runlevel5.target, graphical.target | Set up a graphical multi-user system. |
6 | runlevel6.target, reboot.target | Shut down and reboot the system. |
To view, change, or configure systemd targets, use the systemctl utility as described in Table 10.7, “Comparison of SysV init Commands with systemctl” and in the sections below. The runlevel and telinit commands are still available in the system and work as expected, but are only included for compatibility reasons and should be avoided.
Table 10.7. Comparison of SysV init Commands with systemctl
| Old Command | New Command | Description |
|---|---|---|
runlevel | systemctl list-units --type target | Lists currently loaded target units. |
telinit runlevel | systemctl isolate name.target | Changes the current target. |
10.3.1. Viewing the Default Target
To determine which target unit is used by default, run the following command:

systemctl get-default

This command resolves the symbolic link located at /etc/systemd/system/default.target and displays the result. For information on how to change the default target, see Section 10.3.3, “Changing the Default Target”. For information on how to list all currently loaded target units, see Section 10.3.2, “Viewing the Current Target”.
Example 10.10. Viewing the Default Target
~]$ systemctl get-default
graphical.target

10.3.2. Viewing the Current Target
To list all currently loaded target units, type the following command at a shell prompt:

systemctl list-units --type target

For each target unit, this command displays its full name (UNIT) followed by a note whether the unit has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state, and a short description (DESCRIPTION).
By default, the systemctl list-units command displays only active units. If you want to list all loaded units regardless of their state, run this command with the --all or -a command line option:
systemctl list-units --type target --all

Example 10.11. Viewing the Current Target
~]$ systemctl list-units --type target
UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cryptsetup.target loaded active active Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network.target loaded active active Network
paths.target loaded active active Paths
remote-fs.target loaded active active Remote File Systems
sockets.target loaded active active Sockets
sound.target loaded active active Sound Card
spice-vdagentd.target loaded active active Agent daemon for Spice guests
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
time-sync.target loaded active active System Time Synchronized
timers.target loaded active active Timers
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
17 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

10.3.3. Changing the Default Target
To configure the system to use a different target unit by default, type the following at a shell prompt as root:
systemctl set-default name.target

Replace name with the name of the target unit you want to use by default (for example, multi-user). This command replaces the /etc/systemd/system/default.target file with a symbolic link to /usr/lib/systemd/system/name.target, where name is the name of the target unit you want to use. For information on how to change the current target, see Section 10.3.4, “Changing the Current Target”. For information on how to list all currently loaded target units, see Section 10.3.2, “Viewing the Current Target”.
Example 10.12. Changing the Default Target
To configure the system to use the multi-user.target unit by default, run the following command as root:
~]# systemctl set-default multi-user.target
rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'

10.3.4. Changing the Current Target
To change to a different target unit in the current session, type the following at a shell prompt as root:
systemctl isolate name.target

Replace name with the name of the target unit you want to use (for example, multi-user). This command starts the target unit named name and all dependent units, and immediately stops all others. For information on how to change the default target, see Section 10.3.3, “Changing the Default Target”. For information on how to list all currently loaded target units, see Section 10.3.2, “Viewing the Current Target”.
Example 10.13. Changing the Current Target
To turn off the graphical user interface and change to the multi-user.target unit in the current session, run the following command as root:
~]# systemctl isolate multi-user.target

10.3.5. Changing to Rescue Mode
To change the current target and enter rescue mode in the current session, type the following at a shell prompt as root:
systemctl rescue

This command is similar to systemctl isolate rescue.target, but it also sends an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option:
systemctl --no-wall rescue

Example 10.14. Changing to Rescue Mode
To enter rescue mode in the current session, type the following at a shell prompt as root:
~]# systemctl rescue
Broadcast message from root@localhost on pts/0 (Fri 2013-10-25 18:23:15 CEST):
The system is going down to rescue mode NOW!

10.3.6. Changing to Emergency Mode
To change the current target and enter emergency mode, type the following at a shell prompt as root:
systemctl emergency

This command is similar to systemctl isolate emergency.target, but it also sends an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option:
systemctl --no-wall emergency

Example 10.15. Changing to Emergency Mode
To enter emergency mode without sending a message to all users that are currently logged into the system, run the following command as root:
~]# systemctl --no-wall emergency

10.4. Shutting Down, Suspending, and Hibernating the System
In Red Hat Enterprise Linux 7, the systemctl utility replaces a number of power management commands used in previous versions of the Red Hat Enterprise Linux system. The commands listed in Table 10.8, “Comparison of Power Management Commands with systemctl” are still available in the system for compatibility reasons, but it is advised that you use systemctl when possible.
Table 10.8. Comparison of Power Management Commands with systemctl
| Old Command | New Command | Description |
|---|---|---|
halt | systemctl halt | Halts the system. |
poweroff | systemctl poweroff | Powers off the system. |
reboot | systemctl reboot | Restarts the system. |
pm-suspend | systemctl suspend | Suspends the system. |
pm-hibernate | systemctl hibernate | Hibernates the system. |
pm-suspend-hybrid | systemctl hybrid-sleep | Hibernates and suspends the system. |
10.4.1. Shutting Down the System
The systemctl utility provides commands for shutting down the system; however, the traditional shutdown command is also supported. Although the shutdown command calls the systemctl utility to perform the shutdown, it has an advantage in that it also supports a time argument. This is particularly useful for scheduled maintenance and to allow more time for users to react to the warning that a system shutdown has been scheduled. The option to cancel the shutdown can also be an advantage.
Using systemctl Commands
To shut down the system and power off the machine, type the following at a shell prompt as root:
systemctl poweroff

To shut down and halt the system without powering off the machine, run the following command as root:
systemctl halt

By default, running either of these commands causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run the selected command with the --no-wall command line option, for example:
systemctl --no-wall poweroff

Using the shutdown Command
To shut down the system and power off the machine at a certain time, use a command in the following format as root:

shutdown --poweroff hh:mm

Where hh:mm is the time in the 24-hour clock format. The /run/nologin file is created 5 minutes before system shutdown to prevent new logins. When a time argument is used, an optional message, the wall message, can be appended to the command.
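For example, to power the system off at 13:30 and warn logged-in users with a wall message, run the following command as root (the time and message are illustrative):

~]# shutdown --poweroff 13:30 "The system is going down for scheduled maintenance."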
To shut down and halt the system after a delay, without powering off the machine, use a command in the following format as root:

shutdown --halt +m

Where +m is the delay time in minutes. The now keyword is an alias for +0.
A pending shutdown can be canceled by the root user as follows:

shutdown -c
See the shutdown(8) manual page for further command options.
10.4.2. Restarting the System
To restart the system, run the following command as root:
systemctl reboot

By default, this command causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option:
systemctl --no-wall reboot

10.4.3. Suspending the System
To suspend the system, type the following at a shell prompt as root:
systemctl suspend

10.4.4. Hibernating the System
To hibernate the system, type the following at a shell prompt as root:
systemctl hibernate

To hibernate and suspend the system, run the following command as root:
systemctl hybrid-sleep

10.5. Controlling systemd on a Remote Machine
In addition to controlling the systemd system and service manager locally, the systemctl utility also allows you to interact with systemd running on a remote machine over the SSH protocol. Provided that the sshd service on the remote machine is running, you can connect to this machine by running the systemctl command with the --host or -H command line option:
systemctl --host user_name@host_name command

Replace user_name with the name of the remote user, host_name with the machine's host name, and command with any of the systemctl commands described above. Note that the remote machine must be configured to allow the selected user remote access over the SSH protocol. For more information on how to configure an SSH server, see Chapter 12, OpenSSH.
Example 10.16. Remote Management
To log in to a remote machine named server-01.example.com as the root user and determine the current status of the httpd.service unit, type the following at a shell prompt:
~]$ systemctl -H root@server-01.example.com status httpd.service
root@server-01.example.com's password:
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: active (running) since Fri 2013-11-01 13:58:56 CET; 2h 48min ago
Main PID: 649
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec"
CGroup: /system.slice/httpd.service

10.6. Creating and Modifying systemd Unit Files
A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. To make finer adjustments, the system administrator must edit or create unit files manually. Table 10.2, “Systemd Unit Files Locations” lists three main directories where unit files are stored on the system; the /etc/systemd/system/ directory is reserved for unit files created or customized by the system administrator.
Unit file names take the following form:

unit_name.type_extension
Here, unit_name stands for the name of the unit and type_extension identifies the unit type; for a complete list of unit types, see Table 10.1, “Available systemd Unit Types”. For example, there usually is an sshd.service as well as an sshd.socket unit present on your system.
Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service, create the sshd.service.d/custom.conf file and insert additional directives there. For more information on configuration directories, see Section 10.6.4, “Modifying Existing Unit Files”.
Also, the sshd.service.wants/ and sshd.service.requires/ directories can be created. These directories contain symbolic links to unit files that are dependencies of the sshd service. The symbolic links are automatically created either during installation according to [Install] unit file options (see Table 10.11, “Important [Install] Section Options”) or at runtime based on [Unit] options (see Table 10.9, “Important [Unit] Section Options”). It is also possible to create these directories and symbolic links manually.
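For example, a minimal drop-in file for sshd.service might contain the following (a sketch; the Restart directive is only illustrative). Remember to run systemctl daemon-reload after creating it:

# /etc/systemd/system/sshd.service.d/custom.conf
[Service]
# restart the daemon automatically if it exits with an error
Restart=on-failure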
10.6.1. Understanding the Unit File Structure
- [Unit] — contains generic options that are not dependent on the type of the unit. These options provide unit description, specify the unit's behavior, and set dependencies to other units. For a list of most frequently used [Unit] options, see Table 10.9, “Important [Unit] Section Options”.
- [unit type] — if a unit has type-specific directives, these are grouped under a section named after the unit type. For example, service unit files contain the [Service] section, see Table 10.10, “Important [Service] Section Options” for most frequently used [Service] options.
- [Install] — contains information about unit installation used by
systemctl enable and disable commands; see Table 10.11, “Important [Install] Section Options” for a list of [Install] options.
Table 10.9. Important [Unit] Section Options
| Option[a] | Description |
|---|---|
Description | A meaningful description of the unit. This text is displayed for example in the output of the systemctl status command. |
Documentation | Provides a list of URIs referencing documentation for the unit. |
After[b] | Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires, After does not explicitly activate the specified units. The Before option has the opposite functionality to After. |
Requires | Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated. |
Wants | Configures weaker dependencies than Requires. If any of the listed units does not start successfully, it has no impact on the unit activation. This is the recommended way to establish custom unit dependencies. |
Conflicts | Configures negative dependencies, an opposite to Requires. |
[a]
For a complete list of options configurable in the [Unit] section, see the systemd.unit(5) manual page.
[b]
In most cases, it is sufficient to set only the ordering dependencies with After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires, the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently from each other.
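For example, a unit that should start only after the network target is active, and that also pulls the target in as a requirement dependency, combines the two options as follows (a minimal sketch):

[Unit]
Description=Example service that wants and orders after the network
# requirement dependency (weak)
Wants=network.target
# ordering dependency, set independently of Wants
After=network.target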
Table 10.10. Important [Service] Section Options
| Option[a] | Description |
|---|---|
Type | Configures the unit process startup type that affects the functionality of ExecStart and related options. One of: simple (the default) — the process started with ExecStart is the main process of the service; forking — the process started with ExecStart spawns a child process that becomes the main process of the service, and the parent process exits when the startup is complete; oneshot — similar to simple, but the process exits before starting consequent units; dbus — similar to simple, but consequent units are started only after the main process gains a D-Bus name; notify — similar to simple, but consequent units are started only after a notification message is sent via the sd_notify() function; idle — similar to simple, but the actual execution of the service binary is delayed until all jobs are finished. |
ExecStart | Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart. Type=oneshot enables specifying multiple custom commands that are then executed sequentially. |
ExecStop | Specifies commands or scripts to be executed when the unit is stopped. |
ExecReload | Specifies commands or scripts to be executed when the unit is reloaded. |
Restart | With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command. |
RemainAfterExit | If set to True, the service is considered active even when all its processes exited. Default value is False. This option is especially useful if Type=oneshot is configured. |
[a]
For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page.
Table 10.11. Important [Install] Section Options
| Option[a] | Description |
|---|---|
Alias | Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable, can use aliases instead of the actual unit name. |
RequiredBy | A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Require dependency on the unit. |
WantedBy | A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Want dependency on the unit. |
Also | Specifies a list of units to be installed or uninstalled along with the unit. |
DefaultInstance | Limited to instantiated units, this option specifies the default instance for which the unit is enabled. See Section 10.6.5, “Working with Instantiated Units” |
[a]
For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page.
Example 10.17. postfix.service Unit File
What follows is an excerpt from the /usr/lib/systemd/system/postfix.service unit file as currently provided by the postfix package:
[Unit]
Description=Postfix Mail Transport Agent
After=syslog.target network.target
Conflicts=sendmail.service exim.service

[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
EnvironmentFile=-/etc/sysconfig/network
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop

[Install]
WantedBy=multi-user.target
In this example, the [Unit] section describes the service and sets ordering and conflict dependencies, and the [Service] section specifies the startup type and the commands to be executed on start, reload, and stop. EnvironmentFile points to the location where environment variables for the service are defined, and PIDFile specifies a stable PID for the main process of the service. Finally, the [Install] section lists units that depend on the service.
10.6.2. Creating Custom Unit Files
- Prepare the executable file with the custom service. This can be a custom-created script, or an executable delivered by a software provider. If required, prepare a PID file to hold a constant PID for the main process of the custom service. It is also possible to include environment files to store shell variables for the service. Make sure the source script is executable (by executing the
chmod a+x command) and is not interactive.
- Create a unit file in the
/etc/systemd/system/ directory and make sure it has correct file permissions. Execute as root:

touch /etc/systemd/system/name.service
chmod 664 /etc/systemd/system/name.service

Replace name with a name of the service to be created. Note that the file does not need to be executable.
- Open the
name.service file created in the previous step, and add the service configuration options. There is a variety of options that can be used depending on the type of service you wish to create; see Section 10.6.1, “Understanding the Unit File Structure”. The following is an example unit configuration for a network-related service:

[Unit]
Description=service_description
After=network.target

[Service]
ExecStart=path_to_executable
Type=forking
PIDFile=path_to_pidfile

[Install]
WantedBy=default.target
Where:
- service_description is an informative description that is displayed in journal log files and in the output of the
systemctl statuscommand. - the
After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets.
- path_to_executable stands for the path to the actual service executable.
- Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile. Find other startup types in Table 10.10, “Important [Service] Section Options”.
- WantedBy states the target or targets that the service should be started under. Think of these targets as a replacement of the older concept of runlevels; see Section 10.3, “Working with systemd Targets” for details.
- Notify systemd that a new name.service file exists by executing the following commands as root:
systemctl daemon-reload
systemctl start name.service
Warning
Always run the systemctl daemon-reload command after creating new unit files or modifying existing unit files. Otherwise, the systemctl start or systemctl enable commands could fail due to a mismatch between the state of systemd and the actual service unit files on disk.
The name.service unit can now be managed as any other system service with the commands described in Section 10.2, “Managing System Services”.
Example 10.18. Creating the emacs.service File
- Create a unit file in the
/etc/systemd/system/ directory and make sure it has the correct file permissions. Execute as root:
~]# touch /etc/systemd/system/emacs.service
~]# chmod 664 /etc/systemd/system/emacs.service
[Unit] Description=Emacs: the extensible, self-documenting text editor [Service] Type=forking ExecStart=/usr/bin/emacs --daemon ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)" Environment=SSH_AUTH_SOCK=%t/keyring/ssh Restart=always [Install] WantedBy=default.targetWith the above configuration, the/usr/bin/emacsexecutable is started in daemon mode on service start. The SSH_AUTH_SOCK environment variable is set using the "%t" unit specifier that stands for the runtime directory. The service also restarts the emacs process if it exits unexpectedly. - Execute the following commands to reload the configuration and start the custom service:
~]#
systemctl daemon-reload
~]# systemctl start emacs.service
As the service is now configured, it can be managed with the standard systemctl commands. For example, run systemctl status emacs to display the editor's status or systemctl enable emacs to make the editor start automatically on system boot.
Example 10.19. Creating a second instance of the sshd service
To create and run a second instance of the sshd service:
- Create a copy of the
sshd_config file that will be used by the second daemon:
~]# cp /etc/ssh/sshd{,-second}_config
sshd-second_config file created in the previous step to assign a different port number and PID file to the second daemon:
Port 22220
PidFile /var/run/sshd-second.pid
See the sshd_config(5) manual page for more information on the Port and PidFile options. Make sure the port you choose is not in use by any other service. The PID file does not have to exist before running the service; it is generated automatically on service start.
sshd service:
~]# cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service
sshd-second.service created in the previous step as follows:
- Modify the Description option:
Description=OpenSSH server second instance daemon
- Add sshd.service to services specified in the
After option, so that the second instance starts only after the first one has already started:
After=syslog.target network.target auditd.service sshd.service
- The first instance of sshd includes key generation; therefore, remove the ExecStartPre=/usr/sbin/sshd-keygen line.
- Add the
-f /etc/ssh/sshd-second_config parameter to the sshd command, so that the alternative configuration file is used:
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS
- After the above modifications, the sshd-second.service should look as follows:
[Unit]
Description=OpenSSH server second instance daemon
After=syslog.target network.target auditd.service sshd.service

[Service]
EnvironmentFile=/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
- If using SELinux, add the port for the second instance of sshd to SSH ports, otherwise the second instance of sshd will be denied permission to bind to the port:
~]#
semanage port -a -t ssh_port_t -p tcp 22220 - Enable sshd-second.service, so that it starts automatically upon boot:
~]#
systemctl enable sshd-second.service
Verify whether sshd-second.service is running by using the systemctl status command. Also, verify that the port is enabled correctly by connecting to the service:
~]$ ssh -p 22220 user@server
If the firewall is in use, make sure that it is configured appropriately in order to allow connections to the second instance of sshd.
To set limits for services controlled by systemd, see the Red Hat Knowledgebase article How to set limits for services in RHEL 7 and systemd. These limits need to be set in the service's unit file. Note that systemd ignores limits set in the /etc/security/limits.conf and /etc/security/limits.d/*.conf configuration files. The limits defined in these files are set by PAM when starting a login session, but daemons started by systemd do not use PAM login sessions.
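As a minimal sketch of this approach, the following hypothetical drop-in file raises the open file descriptor limit for a service named example.service; the service name and the value are illustrative only:
~]# mkdir -p /etc/systemd/system/example.service.d/
~]# cat > /etc/systemd/system/example.service.d/limits.conf <<'EOF'
[Service]
# Raise the maximum number of open file descriptors for this service only
LimitNOFILE=16384
EOF
~]# systemctl daemon-reload
~]# systemctl restart example.service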
10.6.3. Converting SysV Init Scripts to Unit Files
The following example shows the header of the init script used for the postfix service on Red Hat Enterprise Linux 6:
#!/bin/bash
#
# postfix      Postfix Mail Transfer Agent
#
# chkconfig: 2345 80 30
# description: Postfix is a Mail Transport Agent, which is the program \
#              that moves mail from one machine to another.
# processname: master
# pidfile: /var/spool/postfix/pid/master.pid
# config: /etc/postfix/main.cf
# config: /etc/postfix/master.cf
### BEGIN INIT INFO
# Provides: postfix MTA
# Required-Start: $local_fs $network $remote_fs
# Required-Stop: $local_fs $network $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop postfix
# Description: Postfix is a Mail Transport Agent, which is the program that
#              moves mail from one machine to another.
### END INIT INFO
Finding the Service Description
Find the service description on the line starting with #description in the init script, and use it in the Description option in the [Unit] section of the unit file. The LSB header might contain similar data on the #Short-Description and #Description lines.
Finding Service Dependencies
Table 10.12. Dependency Options from the LSB Header
| LSB Option | Description | Unit File Equivalent |
|---|---|---|
Provides | Specifies the boot facility name of the service, that can be referenced in other init scripts (with the "$" prefix). This is no longer needed as unit files refer to other units by their file names. | – |
Required-Start | Contains boot facility names of required services. This is translated as an ordering dependency; boot facility names are replaced with unit file names of corresponding services or targets they belong to. For example, in case of postfix, the Required-Start dependency on $network was translated to the After dependency on network.target. | After, Before |
Should-Start | Constitutes weaker dependencies than Required-Start. Failed Should-Start dependencies do not affect the service startup. | After, Before |
Required-Stop, Should-Stop | Constitute negative dependencies. | Conflicts |
Finding Default Targets of the Service
Find the default runlevels of the service on the #Default-Start line of the LSB header and translate them to the WantedBy option in the [Install] section of the unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to multi-user.target and graphical.target on Red Hat Enterprise Linux 7. Note that graphical.target depends on multi-user.target, therefore it is not necessary to specify both, as in Example 10.17, “postfix.service Unit File”. You might also find information on default and forbidden runlevels on the #Default-Start and #Default-Stop lines in the LSB header.
Finding Files Used by the Service
Environment files specified on the #config init script lines can be imported to the unit file with the EnvironmentFile option. The PID file specified on the #pidfile init script line is imported to the unit file with the PIDFile option.
The following excerpt from the postfix init script shows the block of code to be executed at service start.
conf_check() {
[ -x /usr/sbin/postfix ] || exit 5
[ -d /etc/postfix ] || exit 6
[ -d /var/spool/postfix ] || exit 5
}
make_aliasesdb() {
if [ "$(/usr/sbin/postconf -h alias_database)" == "hash:/etc/aliases" ]
then
# /etc/aliases.db might be used by other MTA, make sure nothing
# has touched it since our last newaliases call
[ /etc/aliases -nt /etc/aliases.db ] ||
[ "$ALIASESDB_STAMP" -nt /etc/aliases.db ] ||
[ "$ALIASESDB_STAMP" -ot /etc/aliases.db ] || return
/usr/bin/newaliases
touch -r /etc/aliases.db "$ALIASESDB_STAMP"
else
/usr/bin/newaliases
fi
}
start() {
[ "$EUID" != "0" ] && exit 4
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 1
conf_check
# Start daemons.
echo -n $"Starting postfix: "
make_aliasesdb >/dev/null 2>&1
[ -x $CHROOT_UPDATE ] && $CHROOT_UPDATE
/usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure $"$prog start"
RETVAL=$?
[ $RETVAL -eq 0 ] && touch $lockfile
echo
return $RETVAL
}
Notice the two helper functions, conf_check() and make_aliasesdb(), that are called from the start() function block. On closer look, several external files and directories are mentioned in the above code: the main service executable /usr/sbin/postfix, the /etc/postfix/ and /var/spool/postfix/ configuration directories, and the /usr/sbin/postconf utility.
Commands and supporting scripts identified in this way are translated to the unit file with the ExecStart, ExecStartPre, ExecStartPost, ExecStop, and ExecReload options. In case of postfix on Red Hat Enterprise Linux 7, /usr/sbin/postfix together with supporting scripts is executed on service start. Consult the postfix unit file at Example 10.17, “postfix.service Unit File”.
10.6.4. Modifying Existing Unit Files
Services installed on the system come with default unit files that are stored in the /usr/lib/systemd/system/ directory. System administrators should not modify these files directly; therefore, any customization must be confined to configuration files in the /etc/systemd/system/ directory. Depending on the extent of the required changes, pick one of the following approaches:
- Create a directory for supplementary configuration files at
/etc/systemd/system/unit.d/. This method is recommended for most use cases. It enables extending the default configuration with additional functionality, while still referring to the original unit file. Changes to the default unit introduced with a package upgrade are therefore applied automatically. See the section called “Extending the Default Unit Configuration” for more information. - Create a copy of the original unit file
/usr/lib/systemd/system/in/etc/systemd/system/and make changes there. The copy overrides the original file, therefore changes introduced with the package update are not applied. This method is useful for making significant unit changes that should persist regardless of package updates. See the section called “Overriding the Default Unit Configuration” for details.
systemd recognizes the unit customizations placed in /etc/systemd/system/. To apply changes to unit files without rebooting the system, execute:
systemctl daemon-reload
The daemon-reload option reloads all unit files and recreates the entire dependency tree, which is needed to immediately apply any change to a unit file. As an alternative, you can achieve the same result with the following command:
init q
Also, if the modified unit file belongs to a running service, the service must be restarted to accept the new settings:
systemctl restart name.service
Important
To customize a service that is started by a SysV initscript, create a systemd drop-in configuration file for the service as described in the section called “Extending the Default Unit Configuration” and the section called “Overriding the Default Unit Configuration”. Then manage this service in the same way as a normal systemd service.
For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create a new directory /etc/systemd/system/network.service.d/ and a systemd drop-in file /etc/systemd/system/network.service.d/my_config.conf. Then, put the modified values into the drop-in file. Note: systemd knows the network service as network.service, which is why the created directory must be called network.service.d.
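A minimal sketch of what such a drop-in file could contain; the chosen option is illustrative only:
[Service]
# Hypothetical customization: restart the service automatically on failure
Restart=on-failure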
Extending the Default Unit Configuration
To extend the default unit file with additional configuration options, first create a configuration directory in /etc/systemd/system/. If extending a service unit, execute the following commands as root:
mkdir /etc/systemd/system/name.service.d/
touch /etc/systemd/system/name.service.d/config_name.conf
Replace name with the name of the service you want to extend, and config_name with the name of the configuration file. Then add the required options to the configuration file, for example a custom dependency:
[Unit]
Requires=new_dependency
After=new_dependency
Another example is a configuration that restarts the service after its main process exited, with a delay of 30 seconds:
[Service]
Restart=always
RestartSec=30
To apply the changes, execute the following commands as root:
systemctl daemon-reload
systemctl restart name.service
Example 10.20. Extending the httpd.service Configuration
To extend the httpd service so that a custom shell script is executed on service start, create the drop-in directory and configuration file as root:
~]# mkdir /etc/systemd/system/httpd.service.d/
~]# touch /etc/systemd/system/httpd.service.d/custom_script.conf
Assuming that the script you want to run automatically is located at /usr/local/bin/custom.sh, insert the following text into the custom_script.conf file:
[Service]
ExecStartPost=/usr/local/bin/custom.sh
To apply the unit changes, execute as root:
~]# systemctl daemon-reload
~]# systemctl restart httpd.service
Note
The configuration files in /etc/systemd/system/ take precedence over unit files in /usr/lib/systemd/system/. Therefore, if the configuration files contain an option that can be specified only once, such as Description or ExecStart, the default value of this option is overridden. Note that in the output of the systemd-delta command, described in the section called “Monitoring Overridden Units”, such units are always marked as [EXTENDED], even though in sum, certain options are actually overridden.
Overriding the Default Unit Configuration
To make changes that will persist after updating the package that provides the unit file, first copy the file to the /etc/systemd/system/ directory. To do so, execute the following command as root:
cp /usr/lib/systemd/system/name.service /etc/systemd/system/name.service
Then open the copied file with a text editor and make the desired changes. To apply the changes, execute as root:
systemctl daemon-reload
systemctl restart name.service
Example 10.21. Changing the timeout limit
To change the timeout limit for the httpd service:
- Copy the
httpd unit file to the /etc/systemd/system/ directory:
cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
- Open the file /etc/systemd/system/httpd.service and specify the TimeoutStartSec value in the [Service] section:
...
[Service]
...
PrivateTmp=true
TimeoutStartSec=10

[Install]
WantedBy=multi-user.target
...
- Reload the
systemd daemon:
systemctl daemon-reload
- Optional. Verify the new timeout value:
systemctl show httpd -p TimeoutStartUSec
Note
To change the timeout limit globally, set DefaultTimeoutStartSec in the /etc/systemd/system.conf file. See Section 10.1, “Introduction to systemd”.
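For instance, a sketch of such a global setting in /etc/systemd/system.conf; the value is illustrative only:
[Manager]
DefaultTimeoutStartSec=120s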
Monitoring Overridden Units
The overridden or modified unit files can be displayed with the following command:
systemd-delta
For example, the output of the command can look as follows:
[EQUIVALENT] /etc/systemd/system/default.target → /usr/lib/systemd/system/default.target
[OVERRIDDEN] /etc/systemd/system/autofs.service → /usr/lib/systemd/system/autofs.service

--- /usr/lib/systemd/system/autofs.service      2014-10-16 21:30:39.000000000 -0400
+++ /etc/systemd/system/autofs.service  2014-11-21 10:00:58.513568275 -0500
@@ -8,7 +8,8 @@
 EnvironmentFile=-/etc/sysconfig/autofs
 ExecStart=/usr/sbin/automount $OPTIONS --pid-file /run/autofs.pid
 ExecReload=/usr/bin/kill -HUP $MAINPID
-TimeoutSec=180
+TimeoutSec=240
+Restart=Always

 [Install]
 WantedBy=multi-user.target

[MASKED]     /etc/systemd/system/cups.service → /usr/lib/systemd/system/cups.service
[EXTENDED]   /usr/lib/systemd/system/sssd.service → /etc/systemd/system/sssd.service.d/journal.conf

4 overridden configuration files found.
Table 10.13, “systemd-delta Difference Types” lists the difference types that can appear in the output of systemd-delta. Note that if a file is overridden, systemd-delta by default displays a summary of changes similar to the output of the diff command.
Table 10.13. systemd-delta Difference Types
| Type | Description |
|---|---|
[MASKED] | Masked unit files. See Section 10.2.7, “Disabling a Service” for a description of unit masking. |
[EQUIVALENT] | Unmodified copies that override the original files but do not differ in content, typically symbolic links. |
[REDIRECTED] | Files that are redirected to another file. |
[OVERRIDDEN] | Overridden and changed files. |
[EXTENDED] | Files that are extended with .conf files in the /etc/systemd/system/unit.d/ directory. |
[UNCHANGED] | Unmodified files are displayed only when the --type=unchanged option is used. |
It is good practice to run systemd-delta after a system update to check whether there are any updates to the default units that are currently overridden by custom configuration. It is also possible to limit the output only to a certain difference type. For example, to view just the overridden units, execute:
systemd-delta --type=overridden
10.6.5. Working with Instantiated Units
It is possible to instantiate multiple units from a single template configuration file at runtime. The "@" character is used to mark the template and to associate units with it. Instantiated units can be started from another unit file (using Requires or Wants options), or with the systemctl start command. Instantiated service units are named the following way:
template_name@instance_name.service
Where template_name stands for the name of the template configuration file, and instance_name for the name of the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. The template unit name has the form of:
unit_name@.service
For example, the following Wants setting in a unit file:
Wants=getty@ttyA.service getty@ttyB.service
first makes systemd search for the given service units. If no such units are found, the part between "@" and the type suffix is ignored and systemd searches for the getty@.service file, reads the configuration from it, and starts the services.
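For illustration, a single instance of the template can also be started manually; the tty name is only an example:
~]# systemctl start getty@tty2.service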
Table 10.14. Important Unit Specifiers
| Unit Specifier | Meaning | Description |
|---|---|---|
%n | Full unit name | Stands for the full unit name including the type suffix. %N has the same meaning but also replaces the forbidden characters with ASCII codes. |
%p | Prefix name | Stands for a unit name with type suffix removed. For instantiated units %p stands for the part of the unit name before the "@" character. |
%i | Instance name | Is the part of the instantiated unit name between the "@" character and the type suffix. %I has the same meaning but also replaces the forbidden characters with ASCII codes. |
%H | Host name | Stands for the hostname of the running system at the point in time the unit configuration is loaded. |
%t | Runtime directory | Represents the runtime directory, which is either /run for the root user, or the value of the XDG_RUNTIME_DIR variable for unprivileged users. |
For a complete list of unit specifiers, see the systemd.unit(5) manual page.
For example, the getty@.service template contains the following directives:
[Unit]
Description=Getty on %I
...
[Service]
ExecStart=-/sbin/agetty --noclear %I $TERM
...
When the getty@ttyA.service and getty@ttyB.service units are loaded, Description= is resolved as Getty on ttyA and Getty on ttyB.
10.7. Additional Resources
Installed Documentation
- systemctl(1) — The manual page for the systemctl command line utility provides a complete list of supported options and commands.
- systemd(1) — The manual page for the systemd system and service manager provides more information about its concepts and documents available command line options and environment variables, supported configuration files and directories, recognized signals, and available kernel options.
- systemd-delta(1) — The manual page for the systemd-delta utility that allows you to find extended and overridden configuration files.
- systemd.unit(5) — The manual page named systemd.unit provides detailed information about systemd unit files and documents all available configuration options.
- systemd.service(5) — The manual page named systemd.service documents the format of service unit files.
- systemd.target(5) — The manual page named systemd.target documents the format of target unit files.
- systemd.kill(5) — The manual page named systemd.kill documents the configuration of the process killing procedure.
Online Documentation
- Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces, networks, and network services in this system. It provides an introduction to the
hostnamectlutility, explains how to use it to view and set host names on the command line, both locally and remotely, and provides important information about the selection of host names and domain names. - Red Hat Enterprise Linux 7 Desktop Migration and Administration Guide — The Desktop Migration and Administration Guide for Red Hat Enterprise Linux 7 documents the migration planning, deployment, configuration, and administration of the GNOME 3 desktop on this system. It introduces the
logindservice, enumerates its most significant features, and explains how to use theloginctlutility to list active sessions and enable multi-seat support. - Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide — The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services such as the Apache HTTP Server, Postfix, PostgreSQL, or OpenShift. It explains how to configure SELinux access permissions for system services managed by systemd.
- Red Hat Enterprise Linux 7 Installation Guide — The Installation Guide for Red Hat Enterprise Linux 7 documents how to install the system on AMD64 and Intel 64 systems, 64-bit IBM Power Systems servers, and IBM System z. It also covers advanced installation methods such as Kickstart installations, PXE installations, and installations over the VNC protocol. In addition, it describes common post-installation tasks and explains how to troubleshoot installation problems, including detailed instructions on how to boot into rescue mode or recover the root password.
- Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7 assists users and administrators in learning the processes and practices of securing their workstations and servers against local and remote intrusion, exploitation, and malicious activity. It also explains how to secure critical system services.
- systemd Home Page — The project home page provides more information about systemd.
See Also
- Chapter 2, System Locale and Keyboard Configuration documents how to manage the system locale and keyboard layouts. It explains how to use the
localectlutility to view the current locale, list available locales, and set the system locale on the command line, as well as to view the current keyboard layout, list available keymaps, and enable a particular keyboard layout on the command line. - Chapter 3, Configuring the Date and Time documents how to manage the system date and time. It explains the difference between a real-time clock and system clock and describes how to use the
timedatectlutility to display the current settings of the system clock, configure the date and time, change the time zone, and synchronize the system clock with a remote server. - Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the
suandsudocommands. - Chapter 12, OpenSSH describes how to configure an SSH server and how to use the
ssh,scp, andsftpclient utilities to access it. - Chapter 22, Viewing and Managing Log Files provides an introduction to
journald. It describes the journal, introduces thejournaldservice, and documents how to use thejournalctlutility to view log entries, enter live view mode, and filter log entries. In addition, this chapter describes how to give non-root users access to system logs and enable persistent storage for log files.
Chapter 11. Configuring a System for Accessibility
- a speech synthesizer, which provides a speech output
- a braille display, which provides a tactile output
- configure the
brltty service, as described in Section 11.1, “Configuring the brltty Service”
- switch on the Always Show Universal Access Menu, as described in Section 11.2, “Switch On Always Show Universal Access Menu”
- enable the Festival speech synthesizer, as described in Section 11.3, “Enabling the Festival Speech Synthesis System”
11.1. Configuring the brltty Service
The braille display uses the brltty service to provide tactile output for visually impaired users.
Enable the brltty Service
For the braille display to work, make sure that brltty is running. By default, brltty is disabled. Enable brltty to be started on boot:
~]# systemctl enable brltty.service
Authorize Users to Use the Braille Display
To set which users are authorized to use the braille display, choose one of the following procedures. The procedure using the /etc/brltty.conf file is suitable even for the file systems where users or groups cannot be assigned to a file. The procedure using the /etc/brlapi.key file is suitable only for the file systems where users or groups can be assigned to a file.
Procedure 11.1. Setting Access to Braille Display by Using /etc/brltty.conf
- Open the
/etc/brltty.conffile, and find the section called Application Programming Interface Parameters. - Specify the users.
- To specify one or more individual users, list the users on the following line:
api-parameters Auth=user:user_1, user_2, ... # Allow some local users
- To specify a user group, enter its name on the following line:
api-parameters Auth=group:group # Allow some local group
Procedure 11.2. Setting Access to Braille Display by Using /etc/brlapi.key
- Create the
/etc/brlapi.key file:
~]# mcookie > /etc/brlapi.key
/etc/brlapi.key to a particular user or group.
- To specify an individual user:
~]# chown user_1 /etc/brlapi.key
- To specify a group:
~]# chown :group_1 /etc/brlapi.key
- Adjust the content of
/etc/brltty.conf to include this:
api-parameters Auth=keyfile:/etc/brlapi.key
Set the Braille Driver
The braille-driver directive in /etc/brltty.conf specifies a two-letter driver identification code of the driver for the braille display.
Procedure 11.3. Setting the Braille Driver
- Decide whether you want to use the autodetection for finding the appropriate braille driver.
- If you want to use autodetection, leave
braille-driver set to auto, which is the default option:
braille-driver auto # autodetect
Warning
Autodetection tries all drivers. Therefore, it might take a long time or even fail. For this reason, setting up a particular braille driver is recommended. - If you do not want to use the autodetection, specify the identification code of the required braille driver in the
braille-driver directive. Choose the identification code of the required braille driver from the list provided in /etc/brltty.conf, for example:
braille-driver xw # XWindow
You can also set multiple drivers, separated by commas, and autodetection is then performed among them.
Set the Braille Device
The braille-device directive in /etc/brltty.conf specifies the device to which the braille display is connected. The following device types are supported (see Table 11.1, “Braille Device Types and the Corresponding Syntax”):
Table 11.1. Braille Device Types and the Corresponding Syntax
braille-device serial:ttyS0                  # First serial device
braille-device usb:                          # First USB device matching braille driver
braille-device usb:nnnnn                     # Specific USB device by serial number
braille-device bluetooth:xx:xx:xx:xx:xx:xx   # Specific Bluetooth device by address
Warning
When you use a USB-to-serial adapter, setting braille-device to usb: does not work. In this case, identify the virtual serial device that the kernel has created for the adapter. The virtual serial device can look like this: serial:ttyUSB0
You can find the actual device name in the kernel messages upon plugging in the device, with the following command:
~]# dmesg | fgrep ttyUSB0
Set Specific Parameters for Particular Braille Displays
Specific parameters for particular braille displays can be set by the braille-parameters directive in /etc/brltty.conf. The braille-parameters directive passes non-generic parameters through to the braille driver. Choose the required parameters from the list in /etc/brltty.conf.
Set the Text Table
The text-table directive in /etc/brltty.conf specifies which text table is used to encode the symbols. Relative paths to text tables are in the /etc/brltty/Text/ directory.
Procedure 11.4. Setting the Text Table
- Decide whether you want to use the autoselection for finding the appropriate text table.
- If you want to use the autoselection, leave
text-table set to auto, which is the default option:
text-table auto # locale-based autoselection
This ensures that locale-based autoselection with fallback to en-nabcc is performed.
text-table from the list in /etc/brltty.conf. For example, to use the text table for American English:
text-table en_US # English (United States)
Set the Contraction Table
The contraction-table directive in /etc/brltty.conf specifies which table is used to encode the abbreviations. Relative paths to particular contraction tables are in the /etc/brltty/Contraction/ directory.
Choose the required contraction-table from the list in /etc/brltty.conf. For example, to use the contraction table for American English, grade 2:
contraction-table en-us-g2 # English (US, grade 2)
Warning
11.2. Switch On Always Show Universal Access Menu
Warning
Procedure 11.5. Switching On Always Show Universal Access Menu
- Open the Gnome settings menu, and click Universal Access.
- Switch on Always Show Universal Access Menu.
- Optional: Verify that the icon is displayed on the top bar even if all options from this menu are switched off.
11.3. Enabling the Festival Speech Synthesis System
Procedure 11.6. Installing Festival and Making It Run on Boot
- Install Festival:
~]# yum install festival festival-freebsoft-utils
- Make Festival run on boot:
- Create a new
systemd unit file:
Create a file in the /etc/systemd/system/ directory and make sure it has the correct file permissions. Note that the file does not need to be executable.
~]# touch /etc/systemd/system/festival.service
~]# chmod 664 /etc/systemd/system/festival.service
/usr/bin/festival_server file is used to run Festival. Add the following content to the /etc/systemd/system/festival.service file:
[Unit]
Description=Festival speech synthesis server

[Service]
ExecStart=/usr/bin/festival_server
Type=simple
- Notify
systemd that a new festival.service file exists:
~]# systemctl daemon-reload
~]# systemctl start festival.service
festival.service:
~]# systemctl enable festival.service
Choose a Voice for Festival
- festvox-awb-arctic-hts
- festvox-bdl-arctic-hts
- festvox-clb-arctic-hts
- festvox-kal-diphone
- festvox-ked-diphone
- festvox-rms-arctic-hts
- festvox-slt-arctic-hts
- hispavoces-pal-diphone
- hispavoces-sfl-diphone
To see detailed information about a particular voice:
~]# yum info package_name
To make the required voice available, install the package with the voice and then reboot the system:
~]# yum install package_name
~]# reboot
Chapter 12. OpenSSH
SSH (Secure Shell) is a protocol which facilitates secure communications between two systems using a client-server architecture and allows users to log in to server host systems remotely. Unlike other remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, making it difficult for intruders to collect unencrypted passwords.
The ssh program is designed to replace older, less secure terminal applications used to log in to remote hosts, such as telnet or rsh. A related program called scp replaces older programs designed to copy files between hosts, such as rcp. Because these older applications do not encrypt passwords transmitted between the client and the server, avoid them whenever possible. Using secure methods to log in to remote systems decreases the risks for both the client system and the remote host.
12.1. The SSH Protocol
12.1.1. Why Use SSH?
- Interception of communication between two systems
- The attacker can be somewhere on the network between the communicating parties, copying any information passed between them. The attacker may intercept and keep the information, or alter it and send it on to the intended recipient. This attack is usually performed using a packet sniffer, a common network utility that captures each packet flowing through the network and analyzes its content.
- Impersonation of a particular host
- The attacker's system is configured to pose as the intended recipient of a transmission. If this strategy works, the user's system remains unaware that it is communicating with the wrong host. This attack can be performed using a technique known as DNS poisoning, or via so-called IP spoofing. In the first case, the intruder uses a cracked DNS server to point client systems to a maliciously duplicated host. In the second case, the intruder sends falsified network packets that appear to be from a trusted host.
12.1.2. Main Features
- No one can pose as the intended server
- After an initial connection, the client can verify that it is connecting to the same server it had connected to previously.
- No one can capture the authentication information
- The client transmits its authentication information to the server using strong, 128-bit encryption.
- No one can intercept the communication
- All data sent and received during a session is transferred using 128-bit encryption, making intercepted transmissions extremely difficult to decrypt and read.
- It provides secure means to use graphical applications over a network
- Using a technique called X11 forwarding, the client can forward X11 (X Window System) applications from the server.
- It provides a way to secure otherwise insecure protocols
- The SSH protocol encrypts everything it sends and receives. Using a technique called port forwarding, an SSH server can become a conduit to securing otherwise insecure protocols, like POP, and increasing overall system and data security.
- It can be used to create a secure channel
- The OpenSSH server and client can be configured to create a tunnel similar to a virtual private network for traffic between server and client machines.
- It supports the Kerberos authentication
- OpenSSH servers and clients can be configured to authenticate using the GSSAPI (Generic Security Services Application Program Interface) implementation of the Kerberos network authentication protocol.
12.1.3. Protocol Versions
12.1.4. Event Sequence of an SSH Connection
- A cryptographic handshake is made so that the client can verify that it is communicating with the correct server.
- The transport layer of the connection between the client and remote host is encrypted using a symmetric cipher.
- The client authenticates itself to the server.
- The client interacts with the remote host over the encrypted connection.
12.1.4.1. Transport Layer
- Keys are exchanged
- The public key encryption algorithm is determined
- The symmetric encryption algorithm is determined
- The message authentication algorithm is determined
- The hash algorithm is determined
Warning
12.1.4.2. Authentication
12.1.4.3. Channels
12.2. Configuring OpenSSH
12.2.1. Configuration Files
There are two different sets of configuration files: those for client programs (that is, ssh, scp, and sftp), and those for the server (the sshd daemon).
/etc/ssh/ directory as described in Table 12.1, “System-wide configuration files”. User-specific SSH configuration information is stored in ~/.ssh/ within the user's home directory as described in Table 12.2, “User-specific configuration files”.
Table 12.1. System-wide configuration files
| File | Description |
|---|---|
/etc/ssh/moduli | Contains Diffie-Hellman groups used for the Diffie-Hellman key exchange which is critical for constructing a secure transport layer. When keys are exchanged at the beginning of an SSH session, a shared, secret value is created which cannot be determined by either party alone. This value is then used to provide host authentication. |
/etc/ssh/ssh_config | The default SSH client configuration file. Note that it is overridden by ~/.ssh/config if it exists. |
/etc/ssh/sshd_config | The configuration file for the sshd daemon. |
/etc/ssh/ssh_host_ecdsa_key | The ECDSA private key used by the sshd daemon. |
/etc/ssh/ssh_host_ecdsa_key.pub | The ECDSA public key used by the sshd daemon. |
/etc/ssh/ssh_host_rsa_key | The RSA private key used by the sshd daemon for version 2 of the SSH protocol. |
/etc/ssh/ssh_host_rsa_key.pub | The RSA public key used by the sshd daemon for version 2 of the SSH protocol. |
/etc/pam.d/sshd | The PAM configuration file for the sshd daemon. |
/etc/sysconfig/sshd | Configuration file for the sshd service. |
Table 12.2. User-specific configuration files
| File | Description |
|---|---|
~/.ssh/authorized_keys | Holds a list of authorized public keys for servers. When the client connects to a server, the server authenticates the client by checking its signed public key stored within this file. |
~/.ssh/id_ecdsa | Contains the ECDSA private key of the user. |
~/.ssh/id_ecdsa.pub | The ECDSA public key of the user. |
~/.ssh/id_rsa | The RSA private key used by ssh for version 2 of the SSH protocol. |
~/.ssh/id_rsa.pub | The RSA public key used by ssh for version 2 of the SSH protocol. |
~/.ssh/known_hosts | Contains host keys of SSH servers accessed by the user. This file is very important for ensuring that the SSH client is connecting to the correct SSH server. |
Warning
Do not disable the Privilege Separation feature by using the UsePrivilegeSeparation no directive in the /etc/ssh/sshd_config file. Turning off Privilege Separation disables many security features and exposes the server to potential security vulnerabilities and targeted attacks. For more information about UsePrivilegeSeparation, see the sshd_config(5) manual page or the What is the significance of UsePrivilegeSeparation directive in /etc/ssh/sshd_config file and how to test it? Red Hat Knowledgebase article.
For information concerning various directives that can be used in the SSH configuration files, see the ssh_config(5) and sshd_config(5) manual pages.
12.2.2. Starting an OpenSSH Server
To start the sshd daemon in the current session, type the following at a shell prompt as root:
~]# systemctl start sshd.service
To stop the running sshd daemon in the current session, use the following command as root:
~]# systemctl stop sshd.service
If you want the daemon to start automatically at boot time, type as root:
~]# systemctl enable sshd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/sshd.service to /usr/lib/systemd/system/sshd.service.
The sshd daemon depends on the network.target target unit, which is sufficient for statically configured network interfaces and for the default ListenAddress 0.0.0.0 option. To specify different addresses in the ListenAddress directive and to use a slower dynamic network configuration, add a dependency on the network-online.target target unit to the sshd.service unit file. To achieve this, create the /etc/systemd/system/sshd.service.d/local.conf file with the following options:
[Unit]
Wants=network-online.target
After=network-online.target
After this, reload the systemd manager configuration using the following command:
~]# systemctl daemon-reload
Note that if you reinstall the system, a new set of identification keys will be created. As a result, clients who had connected to the system with any of the OpenSSH tools before the reinstall will see the following message:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@   WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!      @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
To prevent this, you can back up the relevant files from the /etc/ssh/ directory. See Table 12.1, “System-wide configuration files” for a complete list, and restore the files whenever you reinstall the system.
12.2.3. Requiring SSH for Remote Connections
For SSH to be truly effective, using insecure connection protocols should be prohibited. Otherwise, a user's password may be protected using SSH for one session, only to be captured later while logging in using Telnet. Some services to disable include telnet, rsh, rlogin, and vsftpd.
For more information on the vsftpd service, see Section 16.2, “FTP”. To learn how to manage system services in Red Hat Enterprise Linux 7, read Chapter 10, Managing Services with systemd.
12.2.4. Using Key-based Authentication
To improve the system security even further, disable password authentication in favor of key-based authentication. To do so, open the /etc/ssh/sshd_config configuration file in a text editor such as vi or nano, and change the PasswordAuthentication option as follows:
PasswordAuthentication no
If you are working on a system other than a new default installation, check that PubkeyAuthentication no has not been set. If connected remotely, not using console or out-of-band access, testing the key-based log in process before disabling password authentication is advised.
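As an illustrative check before restarting the daemon, you could review the relevant directives; the grep pattern below is just one possible way to do this:
~]# grep -iE '^(PasswordAuthentication|PubkeyAuthentication)' /etc/ssh/sshd_config
PasswordAuthentication no
~]# systemctl restart sshd.service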
To be able to use ssh, scp, or sftp to connect to the server from a client machine, generate an authorization key pair by following the steps below. Note that keys must be generated for each user separately.
If your home directory resides on an NFS-mounted file system, enable the use_nfs_home_dirs SELinux boolean first:
~]# setsebool -P use_nfs_home_dirs 1Important
If you complete the steps as root, only root will be able to use the keys.
Note
If you reinstall your system and want to keep previously generated key pairs, back up the ~/.ssh/ directory. After reinstalling, copy it back to your home directory. This process can be done for all users on your system, including root.
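A minimal sketch of such a backup for one user; the destination path is illustrative only:
~]$ cp -a ~/.ssh /path/to/backup/ssh-backup
After reinstalling, restore it:
~]$ cp -a /path/to/backup/ssh-backup ~/.ssh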
12.2.4.1. Generating Key Pairs
- Generate an RSA key pair by typing the following at a shell prompt:
~]$
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/USER/.ssh/id_rsa):
~/.ssh/id_rsa, for the newly created key. - Enter a passphrase, and confirm it by entering it again when prompted to do so. For security reasons, avoid using the same password as you use to log in to your account.After this, you will be presented with a message similar to this:
Your identification has been saved in /home/USER/.ssh/id_rsa.
Your public key has been saved in /home/USER/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:UNIgIT4wfhdQH/K7yqmjsbZnnyGDKiDviv492U5z78Y USER@penguin.example.com
The key's randomart image is:
+---[RSA 2048]----+
|o ..==o+.        |
|.+ . .=oo        |
| .o. ..o         |
|  ...  ..        |
|   .S            |
|o . .            |
|o+ o .o+ ..      |
|+.++=o*.o .E     |
|BBBo+Bo. oo      |
+----[SHA256]-----+
Note
To get an MD5 key fingerprint, which was the default fingerprint in previous versions, use the ssh-keygen command with the -E md5 option.
~/.ssh/ directory are set to rwx------, or 700 expressed in octal notation. This is to ensure that only the USER can view the contents. If required, this can be confirmed with the following command:
ls -ld ~/.ssh
drwx------. 2 USER USER 54 Nov 25 16:56 /home/USER/.ssh/
- To copy the public key to a remote machine, issue a command in the following format:
ssh-copy-id user@hostname
This will copy the most recently modified ~/.ssh/id*.pub public key if it is not yet installed. Alternatively, specify the public key's file name as follows:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@hostname
This will copy the content of ~/.ssh/id_rsa.pub into the ~/.ssh/authorized_keys file on the machine to which you want to connect. If the file already exists, the keys are appended to its end.
- Generate an ECDSA key pair by typing the following at a shell prompt:
~]$
ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/home/USER/.ssh/id_ecdsa):
~/.ssh/id_ecdsa, for the newly created key. - Enter a passphrase, and confirm it by entering it again when prompted to do so. For security reasons, avoid using the same password as you use to log in to your account.After this, you will be presented with a message similar to this:
Your identification has been saved in /home/USER/.ssh/id_ecdsa.
Your public key has been saved in /home/USER/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:8BhZageKrLXM99z5f/AM9aPo/KAUd8ZZFPcPFWqK6+M USER@penguin.example.com
The key's randomart image is:
+---[ECDSA 256]---+
|      . .      +=|
|     . . . =  o.o|
|      + . * . o..|
| = . . * . + +..|
|. + . . So o * ..|
|   . o .  .+ = ..|
|      o oo ..=. .|
|       ooo...+   |
|       .E++oo    |
+----[SHA256]-----+
- By default, the permissions of the
~/.ssh/ directory are set to rwx------, or 700 expressed in octal notation. This is to ensure that only the USER can view the contents. If required, this can be confirmed with the following command:
ls -ld ~/.ssh
drwx------. 2 USER USER 54 Nov 25 16:56 /home/USER/.ssh/
- To copy the public key to a remote machine, issue a command in the following format:
ssh-copy-id USER@hostname
This will copy the most recently modified ~/.ssh/id*.pub public key if it is not yet installed. Alternatively, specify the public key's file name as follows:
ssh-copy-id -i ~/.ssh/id_ecdsa.pub USER@hostname
This will copy the content of ~/.ssh/id_ecdsa.pub into the ~/.ssh/authorized_keys file on the machine to which you want to connect. If the file already exists, the keys are appended to its end.
Important
Never share your private key with anybody; it is for your personal use only.
12.2.4.2. Configuring ssh-agent
To store your passphrase so that you do not have to enter it each time you initiate a connection with a remote machine, you can use the ssh-agent authentication agent. If you are running GNOME, you can configure it to prompt you for your passphrase whenever you log in and remember it during the whole session. Otherwise you can store the passphrase for a certain shell prompt.
- Make sure you have the openssh-askpass package installed. If not, see Section 9.2.4, “Installing Packages” for more information on how to install new packages in Red Hat Enterprise Linux.
- Press the Super key to enter the Activities Overview, type
Startup Applications and then press Enter. The Startup Applications Preferences tool appears. The tab containing a list of available startup programs will be shown by default. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Space bar.
Figure 12.1. Startup Applications Preferences
- Click the Add button on the right, and enter /usr/bin/ssh-add in the Command field.
Figure 12.2. Adding new application
- Click Add and make sure the checkbox next to the newly added item is selected.
Figure 12.3. Enabling the application
- Log out and then log back in. A dialog box will appear prompting you for your passphrase. From this point on, you should not be prompted for a password by
ssh,scp, orsftp.
Figure 12.4. Entering a passphrase
To save your passphrase for a certain shell prompt, use the following command:
~]$ ssh-add
Enter passphrase for /home/USER/.ssh/id_rsa:
Note that when you log out, your passphrase will be forgotten. You must execute the command each time you log in to a virtual console or a terminal window.
12.3. OpenSSH Clients
12.3.1. Using the ssh Utility
The ssh utility allows you to log in to a remote machine and execute commands there. It is a secure replacement for the rlogin, rsh, and telnet programs.
Similarly to the telnet command, log in to a remote machine by using the following command:
ssh hostname
For example, to log in to a remote machine named penguin.example.com, type the following at a shell prompt:
~]$ ssh penguin.example.com
If you want to log in with a different user name, use a command in the following form:
ssh username@hostname
For example, to log in to penguin.example.com as USER, type:
~]$ ssh USER@penguin.example.com
The authenticity of host 'penguin.example.com' can't be established.
ECDSA key fingerprint is SHA256:vuGKK9dsW34zrZzwjl5g+vOE6EZQvHRQ8zObKYO2mW4.
ECDSA key fingerprint is MD5:7e:15:c3:03:4d:e1:dd:ee:99:dc:3e:f4:b9:67:6b:62.
Are you sure you want to continue connecting (yes/no)?
To verify the server's host key fingerprint, run the ssh-keygen command on the server as follows:
~]# ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub
SHA256:vuGKK9dsW34zrZzwjl5g+vOE6EZQvHRQ8zObKYO2mW4
Note
To get an MD5 key fingerprint, use the ssh-keygen command with the -E md5 option, for example:
~]# ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub -E md5
MD5:7e:15:c3:03:4d:e1:dd:ee:99:dc:3e:f4:b9:67:6b:62
Type yes to accept the key and confirm the connection. You will see a notice that the server has been added to the list of known hosts, and a prompt asking for your password:
Warning: Permanently added 'penguin.example.com' (ECDSA) to the list of known hosts. USER@penguin.example.com's password:
Important
If the server's host key changed, remove the old key from the ~/.ssh/known_hosts file. Before doing this, however, contact the system administrator of the SSH server to verify the server is not compromised.
To remove a key of a server from the ~/.ssh/known_hosts file, issue a command as follows:
~]# ssh-keygen -R penguin.example.com
# Host penguin.example.com found: line 15 type ECDSA
/home/USER/.ssh/known_hosts updated.
Original contents retained as /home/USER/.ssh/known_hosts.old
The ssh program can also be used to execute a command on the remote machine without logging in to a shell prompt:
ssh [username@]hostname command
For example, the /etc/redhat-release file provides information about the Red Hat Enterprise Linux version. To view the contents of this file on penguin.example.com, type:
~]$ ssh USER@penguin.example.com cat /etc/redhat-release
USER@penguin.example.com's password:
Red Hat Enterprise Linux Server release 7.0 (Maipo)
12.3.2. Using the scp Utility
The scp utility can be used to transfer files between machines over a secure, encrypted connection. In its design, it is very similar to rcp.
To transfer a local file to a remote system, use a command in the following form:
scp localfile username@hostname:remotefile
For example, to transfer taglist.vim to a remote machine named penguin.example.com, type the following at a shell prompt:
~]$ scp taglist.vim USER@penguin.example.com:.vim/plugin/taglist.vim
USER@penguin.example.com's password:
taglist.vim                 100% 144KB 144.5KB/s   00:00
Multiple files can be specified at once. For example, to transfer the contents of the directory .vim/plugin/ to the same directory on the remote machine penguin.example.com, type the following command:
~]$ scp .vim/plugin/* USER@penguin.example.com:.vim/plugin/
USER@penguin.example.com's password:
closetag.vim 100% 13KB 12.6KB/s 00:00
snippetsEmu.vim 100% 33KB 33.1KB/s 00:00
taglist.vim                 100% 144KB 144.5KB/s   00:00
To transfer a remote file to the local system, use the following syntax:
scp username@hostname:remotefile localfile
For example, to download the .vimrc configuration file from the remote machine, type:
~]$ scp USER@penguin.example.com:.vimrc .vimrc
USER@penguin.example.com's password:
.vimrc                      100% 2233   2.2KB/s    00:00
12.3.3. Using the sftp Utility
The sftp utility can be used to open a secure, interactive FTP session. In its design, it is similar to ftp except that it uses a secure, encrypted connection.
To open a session, use a command in the following form:
sftp username@hostname
For example, to log in to a remote machine named penguin.example.com with USER as a user name, type:
~]$ sftp USER@penguin.example.com
USER@penguin.example.com's password:
Connected to penguin.example.com.
sftp>
After you enter the correct password, the sftp utility accepts a set of commands similar to those used by ftp (see Table 12.3, “A selection of available sftp commands”).
Table 12.3. A selection of available sftp commands
| Command | Description |
|---|---|
ls [directory] | List the content of a remote directory. If none is supplied, a current working directory is used by default. |
cd directory | Change the remote working directory to directory. |
mkdir directory | Create a remote directory. |
rmdir path | Remove a remote directory. |
put localfile [remotefile] | Transfer localfile to a remote machine. |
get remotefile [localfile] | Transfer remotefile from a remote machine. |
For a complete list of available commands, see the sftp(1) manual page.
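For illustration, a short interactive session could look as follows; the file and directory names are examples only:
sftp> mkdir backups
sftp> cd backups
sftp> put notes.txt
sftp> get report.txt
sftp> exit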
12.4. More Than a Secure Shell
12.4.1. X11 Forwarding
To open an X11 session over an SSH connection, use a command in the following form:
ssh -Y username@hostname
For example, to log in to a remote machine named penguin.example.com with USER as a user name, type:
~]$ ssh -Y USER@penguin.example.com
USER@penguin.example.com's password:
When an X program is run from the secure shell prompt, the SSH client and server create a new secure channel, and the X program data is sent over that channel to the client machine transparently. Note that the X Window System must be installed on the remote system before X11 forwarding can take place. Enter the following command as root to install the X11 package group:
~]# yum group install "X Window System"
For more information on package groups, see Section 9.3, “Working with Package Groups”.
X11 forwarding can be very useful. For example, it can be used to create a secure, interactive session of the Printer Configuration utility. To do this, connect to the server using ssh and type:
~]$ system-config-printer &
The Printer Configuration tool will appear, allowing the remote user to safely configure printing on the remote system.
12.4.2. Port Forwarding
SSH can secure otherwise insecure TCP/IP protocols via port forwarding. When using this technique, the SSH server becomes an encrypted conduit to the SSH client.
Note
Setting up port forwarding to listen on ports below 1024 requires root level access.
To create a TCP/IP port forwarding channel which listens for connections on the localhost, use a command in the following form:
ssh -L local-port:remote-hostname:remote-port username@hostname
For example, to check email on a server called mail.example.com using POP3 through an encrypted connection, use the following command:
~]$ ssh -L 1100:mail.example.com:110 mail.example.com
Once the port forwarding channel is in place between the client machine and the mail server, direct the POP3 mail client to use port 1100 on the localhost to check for new email. Any requests sent to port 1100 on the client system will be directed securely to the mail.example.com server.
If mail.example.com is not running an SSH server, but another machine on the same network is, SSH can still be used to secure part of the connection. However, a slightly different command is necessary:
~]$ ssh -L 1100:mail.example.com:110 other.example.com
In this example, POP3 requests from port 1100 on the client machine are forwarded through the SSH connection on port 22 to the SSH server, other.example.com. Then, other.example.com connects to port 110 on mail.example.com to check for new email. Note that when using this technique, only the connection between the client system and other.example.com SSH server is secure.
Unix domain sockets can be forwarded in a similar way, using the ssh -L local-socket:remote-socket username@hostname command, for example:
~]$ ssh -L /var/mysql/mysql.sock:/var/mysql/mysql.sock username@hostnameImportant
System administrators concerned about port forwarding can disable this functionality on the server by specifying a No parameter for the AllowTcpForwarding line in /etc/ssh/sshd_config and restarting the sshd service.
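A minimal sketch of that server-side change in /etc/ssh/sshd_config, followed by a restart of the daemon:
AllowTcpForwarding no
~]# systemctl restart sshd.service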
12.5. Additional Resources
Installed Documentation
sshd(8) — The manual page for thesshddaemon documents available command line options and provides a complete list of supported configuration files and directories.ssh(1) — The manual page for thesshclient application provides a complete list of available command line options and supported configuration files and directories.scp(1) — The manual page for thescputility provides a more detailed description of this utility and its usage.sftp(1) — The manual page for thesftputility.ssh-keygen(1) — The manual page for thessh-keygenutility documents in detail how to use it to generate, manage, and convert authentication keys used byssh.ssh_config(5) — The manual page namedssh_configdocuments available SSH client configuration options.sshd_config(5) — The manual page namedsshd_configprovides a full description of available SSH daemon configuration options.
Online Documentation
- OpenSSH Home Page — The OpenSSH home page containing further documentation, frequently asked questions, links to the mailing lists, bug reports, and other useful resources.
- OpenSSL Home Page — The OpenSSL home page containing further documentation, frequently asked questions, links to the mailing lists, and other useful resources.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the
suandsudocommands. - Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the
systemctlcommand to manage system services.
Chapter 13. TigerVNC
TigerVNC (Tiger Virtual Network Computing) is a system for graphical desktop sharing which allows you to remotely control other computers.
TigerVNC works on the client-server principle: a server shares its output (vncserver) and a client (vncviewer) connects to the server.
Note
TigerVNC in Red Hat Enterprise Linux 7 uses the systemd system management daemon for its configuration. The /etc/sysconfig/vncserver configuration file has been replaced by /etc/systemd/system/vncserver@.service.
13.1. VNC Server
vncserver is a utility which starts a VNC (Virtual Network Computing) desktop. It runs Xvnc with appropriate options and starts a window manager on the VNC desktop. vncserver allows users to run separate sessions in parallel on a machine which can then be accessed by any number of clients from anywhere.
13.1.1. Installing VNC Server
To install the TigerVNC server, issue the following command as root:
~]# yum install tigervnc-server
13.1.2. Configuring VNC Server
Procedure 13.1. Configuring a VNC Display for a Single User
- A configuration file named
/etc/systemd/system/vncserver@.service is required. To create this file, copy the /usr/lib/systemd/system/vncserver@.service file as root:
~]# cp /usr/lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@.service
There is no need to include the display number in the file name because systemd automatically creates the appropriately named instance in memory on demand, replacing '%i' in the service file by the display number. For a single user it is not necessary to rename the file. For multiple users, a uniquely named service file for each user is required, for example, by adding the user name to the file name in some way. See Section 13.1.2.1, “Configuring VNC Server for Two Users” for details.
- Edit /etc/systemd/system/vncserver@.service, replacing USER with the actual user name. Leave the remaining lines of the file unmodified. The -geometry argument specifies the size of the VNC desktop to be created; by default, it is set to 1024x768.
ExecStart=/usr/sbin/runuser -l USER -c "/usr/bin/vncserver %i -geometry 1280x1024"
PIDFile=/home/USER/.vnc/%H%i.pid
- Save the changes.
- To make the changes take effect immediately, issue the following command:
~]# systemctl daemon-reload
- Set the password for the user or users defined in the configuration file. Note that you need to switch from root to USER first.
~]# su - USER
~]$ vncpasswd
Password:
Verify:
Important
The stored password is not encrypted; anyone who has access to the password file can find the plain-text password.
13.1.2.1. Configuring VNC Server for Two Users
- Create two service files, for example
vncserver-USER_1@.service and vncserver-USER_2@.service. In both these files substitute USER with the correct user name.
- Set passwords for both users:
~]$ su - USER_1
~]$ vncpasswd
Password:
Verify:
~]$ su - USER_2
~]$ vncpasswd
Password:
Verify:
13.1.3. Starting VNC Server
The service file configured above works as a template in which %i is substituted with the display number by systemd. With a valid display number, execute the following command:
~]# systemctl start vncserver@:display_number.service
You can also enable the service to start automatically at system start. Then, when you log in, vncserver is automatically started. As root, issue a command as follows:
~]# systemctl enable vncserver@:display_number.service
13.1.3.1. Configuring VNC Server for Two Users and Two Different Displays
For the two configured VNC servers, vncserver-USER_1@.service and vncserver-USER_2@.service, you can enable different display numbers. For example, the following commands start a VNC server for USER_1 on display 3, and a VNC server for USER_2 on display 5:
~]# systemctl start vncserver-USER_1@:3.service
~]# systemctl start vncserver-USER_2@:5.service
13.1.4. VNC setup based on xinetd with XDMCP for GDM
~]# yum install gdm tigervnc tigervnc-server xinetd
~]# systemctl enable xinetd.service
Make sure that the default target unit is graphical.target. To get the currently set default target unit, use:
~]# systemctl get-default
To set a different default target unit, if required, use:
~]# systemctl set-default target_name
Procedure 13.2. Accessing the GDM login window and logging in
- Set up GDM to enable XDMCP by editing the
/etc/gdm/custom.confconfiguration file:[xdmcp] Enable=true
- Create a file called
/etc/xinetd.d/xvncserver with the following content:
service service_name
{
disable = no
protocol = tcp
socket_type = stream
wait = no
user = nobody
server = /usr/bin/Xvnc
server_args = -inetd -query localhost -once -geometry selected_geometry -depth selected_depth securitytypes=none
}
In the server_args section, the -query localhost option will make each Xvnc instance query localhost for an xdmcp session. The -depth option specifies the pixel depth (in bits) of the VNC desktop to be created. Acceptable values are 8, 15, 16 and 24; any other values are likely to cause unpredictable behavior of applications.
/etc/services to have the service defined. To do this, append the following snippet to the /etc/services file:
# VNC xinetd GDM base
service_name 5950/tcp
- To ensure that the configuration changes take effect, reboot the machine. Alternatively, execute the following steps. Change runlevels to 3 and back to 5 to force gdm to reload:
# init 3
# init 5
Verify that gdm is listening on UDP port 177:
# netstat -anu | grep 177
udp        0      0 0.0.0.0:177             0.0.0.0:*
Restart the xinetd service:
~]# systemctl restart xinetd.service
Verify that the xinetd service has loaded the new services:
# netstat -anpt | grep 595
tcp        0      0 :::5950       :::*     LISTEN      3119/xinetd
- Test the setup using a vncviewer command:
# vncviewer localhost:5950
The command will launch a VNC session to the localhost, and no password will be asked for. You will see a GDM login screen, and you will be able to log in to any user account on the system with a valid user name and password. Then you can run the same test on remote connections.
If remote connections are blocked, allow the service port through the firewall:
~]# firewall-cmd --permanent --zone=public --add-port=5950/tcp
~]# firewall-cmd --reload
13.1.5. Terminating a VNC Session
If you no longer need the vncserver service, you can disable the automatic start of the service at system start:
~]# systemctl disable vncserver@:display_number.service
Or, when your system is running, you can stop the service by issuing the following command as root:
~]# systemctl stop vncserver@:display_number.service
13.2. Sharing an Existing Desktop
By default, a logged-in user has a desktop provided by the X server on display 0. A user can share their desktop using the TigerVNC server x0vncserver.
Procedure 13.3. Sharing an X Desktop
To share the desktop of a logged-in user, using x0vncserver, proceed as follows:
- Enter the following command as
root:
~]#
yum install tigervnc-server - Set the VNC password for the user:
~]$
vncpasswd
Password:
Verify:
- Enter the following command as that user:
~]$
x0vncserver -PasswordFile=.vnc/passwd -AlwaysShared=1
Provided the firewall is configured to allow connections to port 5900, the remote viewer can now connect to display 0, and view the logged-in user's desktop. See Section 13.3.2.1, “Configuring the Firewall for VNC” for information on how to configure the firewall.
13.3. VNC Viewer
vncviewer is a program which shows the graphical user interfaces and controls the vncserver remotely.
For operating vncviewer, there is a pop-up menu containing entries which perform various actions such as switching in and out of full-screen mode or quitting the viewer. Alternatively, you can operate vncviewer through the terminal. Enter vncviewer -h on the command line to list vncviewer's parameters.
13.3.1. Installing VNC Viewer
To install vncviewer, issue the following command as root:
~]# yum install tigervnc
13.3.2. Connecting to VNC Server
Procedure 13.4. Connecting to a VNC Server Using a GUI
- Enter the
vncviewer command with no arguments; the VNC Viewer: Connection Details utility appears. It prompts for a VNC server to connect to. - If required, to prevent disconnecting any existing VNC connections to the same display, select the option to allow sharing of the desktop as follows:
- Select the Options button.
- Select the Misc. tab.
- Select the Shared button.
- Press OK to return to the main menu.
- Enter an address and display number to connect to:
address:display_number
- Press Connect to connect to the VNC server display.
- You will be prompted to enter the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set.A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees, it is an Xvnc desktop.
Procedure 13.5. Connecting to a VNC Server Using the CLI
- Enter the
vncviewer command with the address and display number as arguments:
vncviewer address:display_number
Where address is an IP address or host name. - Authenticate yourself by entering the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set.
- A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees, it is the Xvnc desktop.
13.3.2.1. Configuring the Firewall for VNC
When using a non-encrypted connection, firewalld might block the connection. To allow firewalld to pass the VNC packets, you can open specific ports to TCP traffic. When using the -via option, traffic is redirected over SSH, which is enabled by default in firewalld.
Note
When using a display number from 0 to 3, make use of firewalld's support for the VNC service by means of the service option as described below. Note that for display numbers greater than 3, the corresponding ports will have to be opened specifically as explained in Procedure 13.7, “Opening Ports in firewalld”.
Procedure 13.6. Enabling VNC Service in firewalld
- Run the following command to see the information concerning
firewalld settings:
~]$ firewall-cmd --list-all
- To allow all VNC connections from a specific address, use a command as follows:
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.122.116" service name=vnc-server accept'
success
Note that these changes will not persist after the next system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option. See the Red Hat Enterprise Linux 7 Security Guide for more information on the use of firewall rich language commands. - To verify the above settings, use a command as follows:
~]# firewall-cmd --list-all
public (default, active)
  interfaces: bond0 bond0.192
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
	rule family="ipv4" source address="192.168.122.116" service name="vnc-server" accept
To open a specific port, use the --add-port option of the firewall-cmd command-line tool. For example, VNC display 4 requires port 5904 to be opened for TCP traffic.
Procedure 13.7. Opening Ports in firewalld
- To open a port for
TCP traffic in the public zone, issue a command as root as follows:
~]# firewall-cmd --zone=public --add-port=5904/tcp
success
- To view the ports that are currently open for the public zone, issue a command as follows:
~]# firewall-cmd --zone=public --list-ports
5904/tcp
To remove a port from the zone, use the firewall-cmd --zone=zone --remove-port=number/protocol command.
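For example, a sketch of closing the port opened in Procedure 13.7:
~]# firewall-cmd --zone=public --remove-port=5904/tcp
success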
Note that these changes will not persist after the next system start; to make them permanent, repeat the commands adding the --permanent option. For more information on opening and closing ports in firewalld, see the Red Hat Enterprise Linux 7 Security Guide.
13.3.3. Connecting to VNC Server Using SSH
To connect to a VNC server using SSH, use the -via option. This will create an SSH tunnel between the VNC server and the client.
vncviewer -via user@host:display_number
Example 13.1. Using the -via Option
- To connect to a VNC server using
SSH, enter a command as follows:
~]$ vncviewer -via USER_2@192.168.2.101:3
- When you are prompted to, type the password, and confirm by pressing Enter.
- A window with a remote desktop appears on your screen.
Restricting VNC Access
If you prefer only encrypted connections, you can prevent the server from accepting unencrypted connections by adding the -localhost option in the systemd.service file, on the ExecStart line:
ExecStart=/usr/sbin/runuser -l user -c "/usr/bin/vncserver -localhost %i"
This will stop vncserver from accepting connections from anything but the local host and port-forwarded connections sent using SSH as a result of the -via option.
For more information on using SSH, see Chapter 12, OpenSSH.
13.4. Additional Resources
Installed Documentation
- vncserver(1) — The manual page for the VNC server utility.
- vncviewer(1) — The manual page for the VNC viewer.
- vncpasswd(1) — The manual page for the VNC password command.
- Xvnc(1) — The manual page for the Xvnc server configuration options.
- x0vncserver(1) — The manual page for the TigerVNC server for sharing existing X servers.
Part V. Servers
Chapter 14. Web Servers
14.1. The Apache HTTP Server
This section focuses on httpd, an open source web server developed by the Apache Software Foundation.
If you are upgrading from a previous release of Red Hat Enterprise Linux, you will need to update the httpd service configuration accordingly. This section reviews some of the newly added features, outlines important changes between Apache HTTP Server 2.4 and version 2.2, and guides you through the update of older configuration files.
14.1.1. Notable Changes
- httpd Service Control
- With the migration away from SysV init scripts, server administrators should switch to using the
apachectl and systemctl commands to control the service, in place of the service command. The following examples are specific to the httpd service.
The command:
service httpd graceful
is replaced by
apachectl graceful
The systemd unit file for httpd has different behavior from the init script as follows:
- A graceful restart is used by default when the service is reloaded.
- A graceful stop is used by default when the service is stopped.
The command:
service httpd configtest
is replaced by
apachectl configtest
- Private /tmp
- To enhance system security, the
systemd unit file runs the httpd daemon using a private /tmp directory, separate from the system /tmp directory. - Configuration Layout
- Configuration files which load modules are now placed in the
/etc/httpd/conf.modules.d/ directory. Packages that provide additional loadable modules for httpd, such as php, will place a file in this directory. An Include directive before the main section of the /etc/httpd/conf/httpd.conf file is used to include files within the /etc/httpd/conf.modules.d/ directory. This means any configuration files within conf.modules.d/ are processed before the main body of httpd.conf. An IncludeOptional directive for files within the /etc/httpd/conf.d/ directory is placed at the end of the httpd.conf file. This means the files within /etc/httpd/conf.d/ are now processed after the main body of httpd.conf.
Some additional configuration files are provided by the httpd package itself:
- /etc/httpd/conf.d/autoindex.conf — This configures mod_autoindex directory indexing.
- /etc/httpd/conf.d/userdir.conf — This configures access to user directories, for example, http://example.com/~username/; such access is disabled by default for security reasons.
- /etc/httpd/conf.d/welcome.conf — As in previous releases, this configures the welcome page displayed for http://localhost/ when no content is present.
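For orientation, the corresponding directives in the default /etc/httpd/conf/httpd.conf look similar to the following sketch (paths are relative to the ServerRoot, /etc/httpd; the exact comments in the shipped file may differ):
# Load configuration files from conf.modules.d, which load modules:
Include conf.modules.d/*.conf
# Supplemental configuration, processed after the main body:
IncludeOptional conf.d/*.conf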
- Default Configuration
- A minimal
httpd.conf file is now provided by default. Many common configuration settings, such as Timeout or KeepAlive, are no longer explicitly configured in the default configuration; hard-coded settings will be used instead, by default. The hard-coded default settings for all configuration directives are specified in the manual. See the section called “Installable Documentation” for more information. - Incompatible Syntax Changes
- A number of backwards-incompatible changes were made to the httpd configuration syntax; if you are migrating an existing configuration from httpd 2.2 to httpd 2.4, your configuration files will require changes. See the following Apache document for more information on upgrading: http://httpd.apache.org/docs/2.4/upgrading.html - Processing Model
- In previous releases of Red Hat Enterprise Linux, different multi-processing models (MPM) were made available as different
httpd binaries: the forked model, “prefork”, as /usr/sbin/httpd, and the thread-based model “worker” as /usr/sbin/httpd.worker.
In Red Hat Enterprise Linux 7, only a single httpd binary is used, and three MPMs are available as loadable modules: worker, prefork (default), and event. Edit the configuration file /etc/httpd/conf.modules.d/00-mpm.conf as required, by adding and removing the comment character # so that only one of the three MPM modules is loaded; a sketch of the file follows.
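For illustration, the relevant lines of /etc/httpd/conf.modules.d/00-mpm.conf might look like the following with the default prefork MPM selected (a sketch; the comment text in the shipped file may differ):
# Uncomment exactly one of the three LoadModule lines:
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
#LoadModule mpm_event_module modules/mod_mpm_event.so
- Packaging Changes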
- The LDAP authentication and authorization modules are now provided in a separate sub-package, mod_ldap. The new module mod_session and associated helper modules are provided in a new sub-package, mod_session. The new modules mod_proxy_html and mod_xml2enc are provided in a new sub-package, mod_proxy_html. These packages are all in the Optional channel.
Note
Before subscribing to the Optional and Supplementary channels, see the Scope of Coverage Details. If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. - Packaging Filesystem Layout
- The
/var/cache/mod_proxy/ directory is no longer provided; instead, the /var/cache/httpd/ directory is packaged with a proxy and ssl subdirectory.
Packaged content provided with httpd has been moved from /var/www/ to /usr/share/httpd/:
- /usr/share/httpd/icons/ — The directory containing a set of icons used with directory indices, previously contained in /var/www/icons/, has moved to /usr/share/httpd/icons/. Available at http://localhost/icons/ in the default configuration; the location and the availability of the icons is configurable in the /etc/httpd/conf.d/autoindex.conf file.
- /usr/share/httpd/manual/ — The /var/www/manual/ has moved to /usr/share/httpd/manual/. This directory, contained in the httpd-manual package, contains the HTML version of the manual for httpd. Available at http://localhost/manual/ if the package is installed; the location and the availability of the manual is configurable in the /etc/httpd/conf.d/manual.conf file.
- /usr/share/httpd/error/ — The /var/www/error/ has moved to /usr/share/httpd/error/. This directory contains custom multi-language HTTP error pages. Not configured by default; an example configuration file is provided at /usr/share/doc/httpd-VERSION/httpd-multilang-errordoc.conf.
- Authentication, Authorization and Access Control
- The configuration directives used to control authentication, authorization and access control have changed significantly. Existing configuration files using the
Order, Deny and Allow directives should be adapted to use the new Require syntax; a brief illustration follows. See the following Apache document for more information: http://httpd.apache.org/docs/2.4/howto/auth.html
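As a sketch (not a complete migration recipe), an httpd 2.2 access control stanza such as:
Order allow,deny
Allow from all
is expressed in httpd 2.4 as:
Require all granted
- suexec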
- To improve system security, the suexec binary is no longer installed with setuid root permissions; instead, it has file system capability bits set which allow a more restrictive set of permissions. In conjunction with this change, the suexec binary no longer uses the /var/log/httpd/suexec.log logfile. Instead, log messages are sent to syslog; by default these will appear in the /var/log/secure log file. - Module Interface
- Third-party binary modules built against httpd 2.2 are not compatible with httpd 2.4 due to changes to the
httpd module interface. Such modules will need to be adjusted as necessary for the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version 2.4 is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html. The apxs binary used to build modules from source has moved from /usr/sbin/apxs to /usr/bin/apxs. - Removed modules
- List of
httpd modules removed in Red Hat Enterprise Linux 7:
- mod_auth_mysql, mod_auth_pgsql
- httpd 2.4 provides SQL database authentication support internally in the mod_authn_dbd module.
- mod_perl
- mod_perl is not officially supported with httpd 2.4 by upstream.
- mod_authz_ldap
- httpd 2.4 provides LDAP support in sub-package mod_ldap using mod_authnz_ldap.
14.1.2. Updating the Configuration
To update the configuration files from Apache HTTP Server version 2.2, take the following steps:
- Make sure all module names are correct, since they may have changed. Adjust the LoadModule directive for each module that has been renamed.
- Recompile all third party modules before attempting to load them. This typically means authentication and authorization modules.
- If you use the Apache HTTP Secure Server, see Section 14.1.8, “Enabling the mod_ssl Module” for important information on enabling the Secure Sockets Layer (SSL) protocol.
Then check the configuration for possible errors:
~]# apachectl configtest
Syntax OK
14.1.3. Running the httpd Service
To run the httpd service, make sure you have the httpd package installed. You can do so by using the following command:
~]# yum install httpd
14.1.3.1. Starting the Service
To start the httpd service, type the following at a shell prompt as root:
~]# systemctl start httpd.service
If you want the service to start automatically at boot time, use the following command:
~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
Note
If running the Apache HTTP Server as a secure server, a password may be required after the machine boots if using an encrypted private SSL key.
14.1.3.2. Stopping the Service
To stop the httpd service, type the following at a shell prompt as root:
~]# systemctl stop httpd.service
To prevent the service from starting automatically at boot time, type:
~]# systemctl disable httpd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service.
14.1.3.3. Restarting the Service
There are three different ways to restart a running httpd service:
- To restart the service completely, enter the following command as
root:
~]# systemctl restart httpd.service
This stops the running httpd service and immediately starts it again. Use this command after installing or removing a dynamically loaded module such as PHP. - To only reload the configuration, as
root, type:
~]# systemctl reload httpd.service
This causes the running httpd service to reload its configuration file. Any requests currently being processed will be interrupted, which may cause a client browser to display an error message or render a partial page. - To reload the configuration without affecting active requests, enter the following command as
root:
~]# apachectl graceful
This causes the running httpd service to reload its configuration file. Any requests currently being processed will continue to use the old configuration.
14.1.4. Editing the Configuration Files
When the httpd service is started, by default, it reads the configuration from locations that are listed in Table 14.1, “The httpd service configuration files”.
Table 14.1. The httpd service configuration files
| Path | Description |
|---|---|
| /etc/httpd/conf/httpd.conf | The main configuration file. |
| /etc/httpd/conf.d/ | An auxiliary directory for configuration files that are included in the main configuration file. |
Note that for any configuration changes to take effect, you need to restart the httpd service. To check the configuration for possible errors, type the following at a shell prompt:
~]# apachectl configtest
Syntax OK
14.1.5. Working with Modules
Being a modular application, the httpd service is distributed along with a number of Dynamic Shared Objects (DSOs), which can be dynamically loaded or unloaded at runtime as necessary. On Red Hat Enterprise Linux 7, these modules are located in /usr/lib64/httpd/modules/.
14.1.5.1. Loading a Module
To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.d/ directory.
Example 14.1. Loading the mod_ssl DSO
LoadModule ssl_module modules/mod_ssl.so
Once you are finished, restart the httpd service to reload the configuration.
14.1.5.2. Writing a Module
If you intend to create a new DSO module, install the httpd-devel package as root:
~]# yum install httpd-devel
This package contains the include files, the header files, and the APache eXtenSion (apxs) utility required to compile a module.
Once written, you can build the module with the following command:
~]# apxs -i -a -c module_name.c
14.1.6. Setting Up Virtual Hosts
To create a name-based virtual host, copy the example configuration file /usr/share/doc/httpd-VERSION/httpd-vhosts.conf into the /etc/httpd/conf.d/ directory, and replace the @@Port@@ and @@ServerRoot@@ placeholder values. Customize the options according to your requirements as shown in Example 14.2, “Example virtual host configuration”.
Example 14.2. Example virtual host configuration
<VirtualHost *:80>
ServerAdmin webmaster@penguin.example.com
DocumentRoot "/www/docs/penguin.example.com"
ServerName penguin.example.com
ServerAlias www.penguin.example.com
ErrorLog "/var/log/httpd/dummy-host.example.com-error_log"
CustomLog "/var/log/httpd/dummy-host.example.com-access_log" common
</VirtualHost>
ServerName must be a valid DNS name assigned to the machine. The <VirtualHost> container is highly customizable, and accepts most of the directives available within the main server configuration. Directives that are not supported within this container include User and Group, which were replaced by SuexecUserGroup.
Note
If you configure a virtual host to listen on a non-default port, make sure you update the Listen directive in the global settings section of the /etc/httpd/conf/httpd.conf file accordingly.
To activate a newly created virtual host, restart the httpd service; a sketch of a non-default-port setup follows.
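For example, a minimal sketch of a virtual host on a non-default port (8080 is an arbitrary choice):
Listen 8080
<VirtualHost *:8080>
    ServerName penguin.example.com
    DocumentRoot "/www/docs/penguin.example.com"
</VirtualHost>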
14.1.7. Setting Up an SSL Server
The Apache HTTP Server combined with mod_ssl, a module that uses the OpenSSL toolkit to provide the SSL/TLS support, is commonly referred to as the SSL server. Red Hat Enterprise Linux also supports the use of Mozilla NSS as the TLS implementation. Support for Mozilla NSS is provided by the mod_nss module.
14.1.7.1. An Overview of Certificates and Security
Table 14.2. Information about CA lists used by common web browsers
| Web Browser | Link |
|---|---|
| Mozilla Firefox | Mozilla root CA list. |
| Opera | Information on root certificates used by Opera. |
| Internet Explorer | Information on root certificates used by Microsoft Windows. |
| Chromium | Information on root certificates used by the Chromium project. |
14.1.8. Enabling the mod_ssl Module
In order to be able to use mod_ssl, you cannot have another application or module, such as mod_nss, configured to use the same port. Port 443 is the default port for HTTPS.
To set up an SSL server using the mod_ssl module and the OpenSSL toolkit, install the mod_ssl and openssl packages. Enter the following command as root:
~]# yum install mod_ssl openssl
This will create the mod_ssl configuration file at /etc/httpd/conf.d/ssl.conf, which is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, “Restarting the Service”.
Important
Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566), Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2. Backwards compatibility can be achieved using TLSv1.0. Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly discouraged.
14.1.8.1. Enabling and Disabling SSL and TLS in mod_ssl
To enable and disable specific protocol versions, either edit the SSLProtocol directive in the “## SSL Global Context” section of the configuration file and remove it everywhere else, or edit the default entry under “# SSL Protocol support” in all “VirtualHost” sections. If you do not specify it in the per-domain VirtualHost section, it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either specify SSLProtocol only in the “SSL Global Context” section, or specify it in all per-domain VirtualHost sections.
Procedure 14.1. Disable SSLv2 and SSLv3
- As
root, open the /etc/httpd/conf.d/ssl.conf file and search for all instances of the SSLProtocol directive. By default, the configuration file contains one section that looks as follows:
~]# vi /etc/httpd/conf.d/ssl.conf
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol all -SSLv2
This section is within the VirtualHost section.
SSLProtocol line as follows:
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol all -SSLv2 -SSLv3
Repeat this action for all VirtualHost sections. Save and close the file. - Verify that all occurrences of the
SSLProtocol directive have been changed as follows:
~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf
SSLProtocol all -SSLv2 -SSLv3
This step is particularly important if you have more than the one default VirtualHost section.
~]# systemctl restart httpd
Note that any sessions will be interrupted.
Procedure 14.2. Disable All SSL and TLS Protocols Except TLS 1 and Up
- As
root, open the /etc/httpd/conf.d/ssl.conf file and search for all instances of the SSLProtocol directive. By default the file contains one section that looks as follows:
~]# vi /etc/httpd/conf.d/ssl.conf
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol all -SSLv2
SSLProtocol line as follows:
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2
Save and close the file. - Verify the change as follows:
~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf
SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2
~]# systemctl restart httpd
Note that any sessions will be interrupted.
Procedure 14.3. Testing the Status of SSL and TLS Protocols
To test which versions of SSL and TLS are enabled or disabled, use the openssl s_client -connect command. The command has the following form:
openssl s_client -connect hostname:port -protocol
Where port is the port to test and protocol is the protocol version to test for. To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443, to see if SSLv3 is enabled, issue a command as follows:
~]# openssl s_client -connect localhost:443 -ssl3
CONNECTED(00000003)
139809943877536:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1257:SSL alert number 40
139809943877536:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
output omitted
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
output truncated
The above output indicates that the handshake failed and therefore no cipher was negotiated.
~]$ openssl s_client -connect localhost:443 -tls1_2
CONNECTED(00000003)
depth=0 C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = localhost.localdomain, emailAddress = root@localhost.localdomain
output omitted
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
output truncated
The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated.
The openssl s_client command options are documented in the s_client(1) manual page.
14.1.9. Enabling the mod_nss Module
In order to be able to use mod_nss, you cannot have the mod_ssl package installed with its default settings, as mod_ssl will by default use port 443, the default HTTPS port. If at all possible, remove the package.
To remove mod_ssl, enter the following command as root:
~]# yum remove mod_ssl
Note
If mod_ssl is required for other purposes, modify the /etc/httpd/conf.d/ssl.conf file to use a port other than 443, to prevent mod_ssl conflicting with mod_nss when the mod_nss port is changed to 443.
mod_nss and mod_ssl can only co-exist if they use unique ports. For this reason mod_nss uses 8443 by default, while the default port for HTTPS is 443. The port is specified by the Listen directive as well as in the VirtualHost name or address.
Procedure 14.4. Configuring mod_nss
- Install mod_nss as
root:
~]# yum install mod_nss
This will create the mod_nss configuration file at /etc/httpd/conf.d/nss.conf. The /etc/httpd/conf.d/ directory is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, “Restarting the Service”. - As
root, open the /etc/httpd/conf.d/nss.conf file and search for all instances of the Listen directive.
Edit the Listen 8443 line as follows:
Listen 443
Port 443 is the default port for HTTPS.
VirtualHost _default_:8443 line as follows:
VirtualHost _default_:443
Edit any other non-default virtual host sections if they exist. Save and close the file. - Mozilla NSS stores certificates in a server certificate database indicated by the
NSSCertificateDatabase directive in the /etc/httpd/conf.d/nss.conf file. By default the path is set to /etc/httpd/alias, the NSS database created during installation. To view the default NSS database, issue a command as follows:
~]# certutil -L -d /etc/httpd/alias
Certificate Nickname                     Trust Attributes
                                         SSL,S/MIME,JAR/XPI
cacert                                   CTu,Cu,Cu
Server-Cert                              u,u,u
alpha                                    u,pu,u
In the above command output, Server-Cert is the default NSS Nickname. The -L option lists all the certificates, or displays information about a named certificate, in a certificate database. The -d option specifies the database directory containing the certificate and key database files. See the certutil(1) man page for more command line options.
NSSCertificateDatabase line in the /etc/httpd/conf.d/nss.conf file. The default file has the following lines within the VirtualHost section:
# Server Certificate Database:
# The NSS security database directory that holds the certificates and
# keys. The database consists of 3 files: cert8.db, key3.db and secmod.db.
# Provide the directory that these files exist.
NSSCertificateDatabase /etc/httpd/alias
In the above command output, alias is the default NSS database directory, /etc/httpd/alias/.
- To apply a password to the default NSS certificate database, use the following command as
root:
~]# certutil -W -d /etc/httpd/alias
Enter Password or Pin for "NSS Certificate DB":
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.
Enter new password:
Re-enter password:
Password changed successfully.
Example 14.3. Adding a Certificate to the Mozilla NSS database
The certutil command is used to add a CA certificate to the NSS database files:
certutil -d /etc/httpd/nss-db-directory/ -A -n "CA_certificate" -t CT,, -a -i certificate.pem
The above command adds a CA certificate stored in a PEM-formatted file named certificate.pem. The -d option specifies the NSS database directory containing the certificate and key database files, the -n option sets a name for the certificate, -t CT,, means that the certificate is trusted to be used in TLS clients and servers. The -A option adds an existing certificate to a certificate database. If the database does not exist it will be created. The -a option allows the use of ASCII format for input or output, and the -i option passes the certificate.pem input file to the command.
See the certutil(1) man page for more command line options.
Example 14.4. Setting a Password for a Mozilla NSS database
The certutil tool can be used to set a password for an NSS database as follows:
certutil -W -d /etc/httpd/nss-db-directory/
For example, for the default database, issue a command as root as follows:
~]# certutil -W -d /etc/httpd/alias
Enter Password or Pin for "NSS Certificate DB":
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.
Enter new password:
Re-enter password:
Password changed successfully.
mod_nss to use the NSS internal software token by changing the line with the NSSPassPhraseDialog directive as follows:
~]# vi /etc/httpd/conf.d/nss.conf
NSSPassPhraseDialog file:/etc/httpd/password.conf
This is to avoid manual password entry on system start. The software token exists in the NSS database but you can also have a physical token containing your certificates.
NSSNickname parameter is uncommented and matches the nickname displayed in step 4 above:
~]# vi /etc/httpd/conf.d/nss.conf
NSSNickname Server-Cert
If the SSL Server Certificate contained in the NSS database is an ECC certificate, make certain that the NSSECCNickname parameter is uncommented and matches the nickname displayed in step 4 above:
~]# vi /etc/httpd/conf.d/nss.conf
NSSECCNickname Server-Cert
Make certain that the NSSCertificateDatabase parameter is uncommented and points to the NSS database directory displayed in step 4 or configured in step 5 above:
~]# vi /etc/httpd/conf.d/nss.conf
NSSCertificateDatabase /etc/httpd/alias
Replace /etc/httpd/alias with the path to the certificate database to be used.
- Create the
/etc/httpd/password.conf file as root:
~]# vi /etc/httpd/password.conf
Add a line with the following form:
internal:password
Replacing password with the password that was applied to the NSS security databases in step 6 above.
- Apply the appropriate ownership and permissions to the
/etc/httpd/password.conf file:
~]# chgrp apache /etc/httpd/password.conf
~]# chmod 640 /etc/httpd/password.conf
~]# ls -l /etc/httpd/password.conf
-rw-r-----. 1 root apache 10 Dec  4 17:13 /etc/httpd/password.conf
- To configure
mod_nss to use the NSS software token in /etc/httpd/password.conf, edit /etc/httpd/conf.d/nss.conf as follows:
~]# vi /etc/httpd/conf.d/nss.conf
Important
Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566), Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2. Backwards compatibility can be achieved using TLSv1.0. Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly discouraged.
14.1.9.1. Enabling and Disabling SSL and TLS in mod_nss
To enable and disable specific protocol versions, either edit the NSSProtocol directive in the “## SSL Global Context” section of the configuration file and remove it everywhere else, or edit the default entry under “# SSL Protocol” in all “VirtualHost” sections. If you do not specify it in the per-domain VirtualHost section, it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either specify NSSProtocol only in the “SSL Global Context” section, or specify it in all per-domain VirtualHost sections.
Procedure 14.5. Disable All SSL and TLS Protocols Except TLS 1 and Up in mod_nss
- As
root, open the /etc/httpd/conf.d/nss.conf file and search for all instances of the NSSProtocol directive. By default, the configuration file contains one section that looks as follows:
~]# vi /etc/httpd/conf.d/nss.conf
# SSL Protocol:
output omitted
# Since all protocol ranges are completely inclusive, and no protocol in the
# middle of a range may be excluded, the entry "NSSProtocol SSLv3,TLSv1.1"
# is identical to the entry "NSSProtocol SSLv3,TLSv1.0,TLSv1.1".
NSSProtocol SSLv3,TLSv1.0,TLSv1.1
This section is within the VirtualHost section.
NSSProtocol line as follows:
# SSL Protocol:
NSSProtocol TLSv1.0,TLSv1.1
Repeat this action for all VirtualHost sections. - Edit the
Listen 8443 line as follows:
Listen 443
- Edit the default
VirtualHost _default_:8443 line as follows:
VirtualHost _default_:443
Edit any other non-default virtual host sections if they exist. Save and close the file. - Verify that all occurrences of the
NSSProtocol directive have been changed as follows:
~]# grep NSSProtocol /etc/httpd/conf.d/nss.conf
# middle of a range may be excluded, the entry "NSSProtocol SSLv3,TLSv1.1"
# is identical to the entry "NSSProtocol SSLv3,TLSv1.0,TLSv1.1".
NSSProtocol TLSv1.0,TLSv1.1
This step is particularly important if you have more than one VirtualHost section.
~]# systemctl restart httpd
Note that any sessions will be interrupted.
Procedure 14.6. Testing the Status of SSL and TLS Protocols in mod_nss
To test which versions of SSL and TLS are enabled or disabled, use the openssl s_client -connect command. Install the openssl package as root:
~]# yum install openssl
The openssl s_client -connect command has the following form:
openssl s_client -connect hostname:port -protocol
Where port is the port to test and protocol is the protocol version to test for. To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443, to see if SSLv3 is enabled, issue a command as follows:
~]# openssl s_client -connect localhost:443 -ssl3
CONNECTED(00000003)
3077773036:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:337:
output omitted
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
output truncated
The above output indicates that the handshake failed and therefore no cipher was negotiated.
~]$ openssl s_client -connect localhost:443 -tls1
CONNECTED(00000003)
depth=1 C = US, O = example.com, CN = Certificate Shack
output omitted
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
output truncated
The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated.
The openssl s_client command options are documented in the s_client(1) manual page.
14.1.10. Using an Existing Key and Certificate
If you have a previously created key and certificate, you can configure the SSL server to use these files instead of generating new ones. There are only two situations where this is not possible:
- You are changing the IP address or domain name. Certificates are issued for a particular IP address and domain name pair. If one of these values changes, the certificate becomes invalid.
- You have a certificate from VeriSign, and you are changing the server software. VeriSign, a widely used certificate authority, issues certificates for a particular software product, IP address, and domain name. Changing the software product renders the certificate invalid.
To use an existing key and certificate, move the relevant files to the /etc/pki/tls/private/ and /etc/pki/tls/certs/ directories respectively. You can do so by issuing the following commands as root:
~]# mv key_file.key /etc/pki/tls/private/hostname.key
~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt
Then add the following lines to the /etc/httpd/conf.d/ssl.conf configuration file:
SSLCertificateFile /etc/pki/tls/certs/hostname.crt SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
To load the updated configuration, restart the httpd service as described in Section 14.1.3.3, “Restarting the Service”.
Example 14.5. Using a key and certificate from the Red Hat Secure Web Server
~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key
~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt
14.1.11. Generating a New Key and Certificate
To generate a new key and certificate, install the crypto-utils package, which provides the genkey utility. To do so, enter the following command as root:
~]# yum install crypto-utils
Important
If your server already has a valid certificate and you are replacing it with a new one, specify a distinct serial number so that client browsers are notified of the change. To do so, as root, use the following command instead of genkey:
~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt
Note
If there already is a key file for a particular host name on your system, genkey will refuse to start. In this case, remove the existing file using the following command as root:
~]# rm /etc/pki/tls/private/hostname.key
To run the utility, enter the genkey command as root, followed by the appropriate host name (for example, penguin.example.com):
~]# genkey hostname
- Review the target locations in which the key and certificate will be stored.

Figure 14.1. Running the genkey utility
Use the Tab key to select the Next button, and press Enter to proceed to the next screen. - Using the up and down arrow keys, select a suitable key size. Note that while a larger key increases the security, it also increases the response time of your server. The NIST recommends using
2048 bits. See NIST Special Publication 800-131A.
Figure 14.2. Selecting the key size
Once finished, use the Tab key to select the Next button, and press Enter to initiate the random bits generation process. Depending on the selected key size, this may take some time. - Decide whether you want to send a certificate request to a certificate authority.

Figure 14.3. Generating a certificate request
Use the Tab key to select Yes to compose a certificate request, or No to generate a self-signed certificate. Then press Enter to confirm your choice. - Using the Spacebar key, enable (
[*]) or disable ([ ]) the encryption of the private key.
Figure 14.4. Encrypting the private key
Use the Tab key to select the Next button, and press Enter to proceed to the next screen. - If you have enabled the private key encryption, enter an adequate passphrase. Note that for security reasons, it is not displayed as you type, and it must be at least five characters long.

Figure 14.5. Entering a passphrase
Use the Tab key to select the Next button, and press Enter to proceed to the next screen.
Important
Entering the correct passphrase is required in order for the server to start. If you lose it, you will need to generate a new key and certificate. - Customize the certificate details.

Figure 14.6. Specifying certificate information
Use the Tab key to select the Next button, and press Enter to finish the key generation. - If you have previously enabled the certificate request generation, you will be prompted to send it to a certificate authority.

Figure 14.7. Instructions on how to send a certificate request
Press Enter to return to a shell prompt.
Finally, make sure the following lines are present in the /etc/httpd/conf.d/ssl.conf configuration file:
SSLCertificateFile /etc/pki/tls/certs/hostname.crt SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
Restart the httpd service as described in Section 14.1.3.3, “Restarting the Service”, so that the updated configuration is loaded.
14.1.12. Configure the Firewall for HTTP and HTTPS Using the Command Line
Red Hat Enterprise Linux does not allow HTTP and HTTPS traffic by default. To enable the system to act as a web server, make use of firewalld's supported services to enable HTTP and HTTPS traffic to pass through the firewall as required.
To enable HTTP using the command line, issue the following command as root:
~]# firewall-cmd --add-service http
success
To enable HTTPS using the command line, issue the following command as root:
~]# firewall-cmd --add-service https
success
Note that these changes will not persist after the next system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option; a sketch follows.
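For example, to make both services permanent:
~]# firewall-cmd --permanent --add-service http
success
~]# firewall-cmd --permanent --add-service https
success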
14.1.12.1. Checking Network Access for Incoming HTTP and HTTPS Using the Command Line
To check the current firewall configuration, enter the following command as root:
~]# firewall-cmd --list-all
public (default, active)
interfaces: em1
sources:
services: dhcpv6-client ssh
output truncated
In this example taken from a default installation, the firewall is enabled but HTTP and HTTPS have not been allowed to pass through.
Once the HTTP and HTTPS firewall services are enabled, the services line will appear similar to the following:
services: dhcpv6-client http https ssh
For more information on enabling and disabling firewall services in firewalld, see the Red Hat Enterprise Linux 7 Security Guide.
14.1.13. Additional Resources
Installed Documentation
- httpd(8) — The manual page for the httpd service containing the complete list of its command-line options.
- genkey(1) — The manual page for the genkey utility, provided by the crypto-utils package.
- apachectl(8) — The manual page for the Apache HTTP Server Control Interface.
Installable Documentation
- http://localhost/manual/ — The official documentation for the Apache HTTP Server with the full description of its directives and available modules. Note that in order to access this documentation, you must have the httpd-manual package installed, and the web server must be running.Before accessing the documentation, issue the following commands as
root:
~]# yum install httpd-manual
~]# apachectl graceful
Online Documentation
- http://httpd.apache.org/ — The official website for the Apache HTTP Server with documentation on all the directives and default modules.
- http://www.openssl.org/ — The OpenSSL home page containing further documentation, frequently asked questions, links to the mailing lists, and other useful resources.
Chapter 15. Mail Servers
15.1. Email Protocols
15.1.1. Mail Transport Protocols
15.1.1.1. SMTP
15.1.2. Mail Access Protocols
15.1.2.1. POP
Note
In order to use Dovecot, first ensure the dovecot package is installed on your system by running, as root:
~]# yum install dovecot
When using a POP server, email messages are downloaded by email client applications. By default, most POP email clients are automatically configured to delete the message on the email server after it has been successfully transferred; however, this setting usually can be changed.
POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail Extensions (MIME), which allow for email attachments.
POP works best for users who have one system on which to read email. It also works well for users who do not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for those with slow network connections, POP requires client programs upon authentication to download the entire content of each message. This can take a long time if any messages have large attachments.
The most current version of the standard POP protocol is POP3.
There are, however, a variety of lesser-used POP protocol variants:
- APOP —
POP3 with MD5 authentication. An encoded hash of the user's password is sent from the email client to the server rather than sending an unencrypted password. - KPOP —
POP3 with Kerberos authentication. - RPOP —
POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than standard POP.
For added security, it is possible to use SSL encryption for client authentication and data transfer sessions. This can be enabled by using:
- The pop3s service
- The stunnel application
- The starttls command
15.1.2.2. IMAP
The default IMAP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. See Section 15.1.2.1, “POP” for information on how to install Dovecot.
When using an IMAP mail server, email messages remain on the server where users can read or delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server to organize and store email.
IMAP is particularly useful for users who access their email using multiple machines. The protocol is also convenient for users connecting to the mail server via a slow connection, because only the email header information is downloaded for messages until opened, saving bandwidth. The user also has the ability to delete messages without viewing or downloading them.
IMAP client applications are capable of caching copies of messages locally, so the user can browse previously read messages when not directly connected to the IMAP server.
IMAP, like POP, is fully compatible with important Internet messaging standards, such as MIME, which allow for email attachments.
For added security, it is possible to use SSL encryption for client authentication and data transfer sessions. This can be enabled by using the imaps service, or by using the stunnel program. For more information on securing email communication, see Section 15.5.1, “Securing Communication”.
15.1.2.3. Dovecot
The imap-login and pop3-login processes which implement the IMAP and POP3 protocols are spawned by the master dovecot daemon included in the dovecot package. The use of IMAP and POP is configured through the /etc/dovecot/dovecot.conf configuration file; by default dovecot runs IMAP and POP3 together with their secure versions using SSL. To configure dovecot to use POP, complete the following steps:
- Edit the
/etc/dovecot/dovecot.conf configuration file to make sure the protocols variable is uncommented (remove the hash sign (#) at the beginning of the line) and contains the pop3 argument. For example:
protocols = imap pop3 lmtp
When the protocols variable is left commented out, dovecot will use the default values as described above. - Make the change operational for the current session by running the following command as
root:
~]# systemctl restart dovecot
- Make the change operational after the next reboot by running the command:
~]# systemctl enable dovecot
Created symlink from /etc/systemd/system/multi-user.target.wants/dovecot.service to /usr/lib/systemd/system/dovecot.service.
Note
Please note that dovecot only reports that it started the IMAP server, but it also starts the POP3 server.
Unlike SMTP, both IMAP and POP3 require connecting clients to authenticate using a user name and password. By default, passwords for both protocols are passed over the network unencrypted.
To configure SSL on dovecot:
- Edit the
/etc/dovecot/conf.d/10-ssl.conf configuration file to make sure the ssl_protocols variable is uncommented and contains the !SSLv2 !SSLv3 arguments:
ssl_protocols = !SSLv2 !SSLv3
These values ensure that dovecot avoids SSL versions 2 and also 3, which are both known to be insecure. This is due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566). See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. - Edit the
/etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer. However, in a typical installation, this file does not require modification.
- Rename, move or delete the files /etc/pki/dovecot/certs/dovecot.pem and /etc/pki/dovecot/private/dovecot.pem.
- Execute the /usr/libexec/dovecot/mkcert.sh script which creates the dovecot self-signed certificates. These certificates are copied into the /etc/pki/dovecot/certs and /etc/pki/dovecot/private directories. To implement the changes, restart dovecot by issuing the following command as root:
~]# systemctl restart dovecot
More details on dovecot can be found online at http://www.dovecot.org.
15.2. Email Program Classifications
15.2.1. Mail Transport Agent
A Mail Transport Agent (MTA) transports email messages between hosts using SMTP. A message may involve several MTAs as it moves to its intended destination.
15.2.2. Mail Delivery Agent
A Mail Delivery Agent (MDA) is invoked by the MTA to file incoming email in the proper user's mailbox. In many cases, the MDA is actually a Local Delivery Agent (LDA), such as mail or Procmail.
15.2.3. Mail User Agent
A Mail User Agent (MUA) is a program that, at the very least, allows a user to read and compose email messages. Many MUAs are capable of:
- Retrieving messages via the POP or IMAP protocols
- Setting up mailboxes to store messages
- Sending outbound messages to an MTA
15.3. Mail Transport Agents
Red Hat Enterprise Linux 7 uses Postfix as the default MTA. If you want to use Sendmail instead, enter the following command as root to switch to Sendmail:
~]# alternatives --config mta
In addition, disable the service you no longer want to run and enable the desired one:
~]# systemctl enable service
~]# systemctl disable service
15.3.1. Postfix
15.3.1.1. The Default Postfix Installation
The Postfix executable is postfix. This daemon launches all related processes needed to handle mail delivery.
Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files:
- access — Used for access control, this file specifies which hosts are allowed to connect to Postfix.
- main.cf — The global Postfix configuration file. The majority of configuration options are specified in this file.
- master.cf — Specifies how Postfix interacts with various processes to accomplish mail delivery.
- transport — Maps email addresses to relay hosts.
The aliases file can be found in the /etc directory. This file is shared between Postfix and Sendmail. It is a configurable list required by the mail protocol that describes user ID aliases; an illustrative excerpt follows.
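For illustration, a minimal /etc/aliases excerpt might look as follows (the names are placeholders); after editing the file, run the newaliases command to rebuild the aliases database:
# redirect role accounts to real users
postmaster: root
webmaster: bob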
Important
The default /etc/postfix/main.cf file does not allow Postfix to accept network connections from a host other than the local computer. For instructions on configuring Postfix as a server for other clients, see Section 15.3.1.3, “Basic Postfix Configuration”.
Restart the postfix service after changing any options in the configuration files under the /etc/postfix/ directory in order for those changes to take effect. To do so, run the following command as root:
~]# systemctl restart postfix
15.3.1.2. Upgrading From a Previous Release
The following settings are of particular interest when upgrading from a previous release:
- disable_vrfy_command = no — This is disabled by default, which is different from the default for Sendmail. If changed to yes, it can prevent certain email address harvesting methods.
- allow_percent_hack = yes — This is enabled by default. It allows removing % characters in email addresses. The percent hack is an old workaround that allowed sender-controlled routing of email messages. DNS and mail routing are now much more reliable, but Postfix continues to support the hack. To turn off percent rewriting, set allow_percent_hack to no.
- smtpd_helo_required = no — This is disabled by default, as it is in Sendmail, because it can prevent some applications from sending mail. It can be changed to yes to require clients to send the HELO or EHLO commands before attempting to send the MAIL FROM or ETRN commands. A sketch of stricter settings appears after this list.
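As a sketch, stricter values for the settings discussed above could be placed in /etc/postfix/main.cf as follows (verify each choice against your own policy, since tighter settings can reject some legitimate clients):
# /etc/postfix/main.cf (excerpt) — hardened values, not the defaults
disable_vrfy_command = yes
smtpd_helo_required = yes
allow_percent_hack = no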
15.3.1.3. Basic Postfix Configuration
Complete the following basic steps as root to enable mail delivery for other hosts on the network (an illustrative excerpt of the result follows the steps):
- Edit the
/etc/postfix/main.cf file with a text editor, such as vi.
- Uncomment the mydomain line by removing the hash sign (#), and replace domain.tld with the domain the mail server is servicing, such as example.com.
- Uncomment the myorigin = $mydomain line.
- Uncomment the myhostname line, and replace host.domain.tld with the host name for the machine.
- Uncomment the mydestination = $myhostname, localhost.$mydomain line.
- Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server.
- Uncomment the inet_interfaces = all line.
- Comment the inet_interfaces = localhost line.
- Restart the postfix service.
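As an illustrative sketch, after completing the steps above the edited lines in /etc/postfix/main.cf might look similar to the following (domain, host name, and network values are placeholders):
mydomain = example.com
myorigin = $mydomain
myhostname = host.example.com
mydestination = $myhostname, localhost.$mydomain
mynetworks = 168.100.189.0/28, 127.0.0.0/8
inet_interfaces = all
#inet_interfaces = localhost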
Postfix has a large assortment of configuration options available in the /etc/postfix/main.cf configuration file. Additional resources including information about Postfix configuration, SpamAssassin integration, or detailed descriptions of the /etc/postfix/main.cf parameters are available online at http://www.postfix.org/.
Important
Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566), Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2. See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details.
15.3.1.4. Using Postfix with LDAP
Postfix can use an LDAP directory as a source for various lookup tables (for example, aliases, virtual, canonical, and so on). This allows LDAP to store hierarchical user information and Postfix to only be given the result of LDAP queries when needed. By not storing this information locally, administrators can easily maintain it.
15.3.1.4.1. The /etc/aliases lookup example
The following is a basic example of using LDAP to look up the /etc/aliases file. Make sure your /etc/postfix/main.cf file contains the following:
alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf
Create a /etc/postfix/ldap-aliases.cf file if you do not have one already and make sure it contains the following:
server_host = ldap.example.com
search_base = dc=example, dc=com
where ldap.example.com, example, and com are parameters that need to be replaced with the specification of an existing available LDAP server.
Note
The /etc/postfix/ldap-aliases.cf file can specify various parameters, including parameters that enable LDAP SSL and STARTTLS; an example sketch follows. For more information, see the ldap_table(5) man page.
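For example, a sketch of an /etc/postfix/ldap-aliases.cf that enables STARTTLS for the lookup connection (the start_tls parameter is documented in ldap_table(5); verify it against your Postfix version):
server_host = ldap.example.com
search_base = dc=example, dc=com
start_tls = yes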
For more information on LDAP, see OpenLDAP in the System-Level Authentication Guide.
15.3.2. Sendmail
Sendmail's core purpose, like other MTAs, is to safely transfer email among hosts, usually using the SMTP protocol. Note that Sendmail is considered deprecated and users are encouraged to use Postfix when possible. See Section 15.3.1, “Postfix” for more information.
15.3.2.1. Purpose and Limitations
Many users want to interact with their email using an MUA that relies on POP or IMAP to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually exist for different reasons and can operate separately from one another.
15.3.2.2. The Default Sendmail Installation
In order to use Sendmail, first ensure the sendmail package is installed on your system by running, as root:
~]# yum install sendmail
In order to configure Sendmail, ensure the sendmail-cf package is installed on your system by running, as root:
~]# yum install sendmail-cf
Before using Sendmail, the default MTA has to be switched from Postfix to sendmail.
Sendmail's lengthy and detailed configuration file is /etc/mail/sendmail.cf. Avoid editing the sendmail.cf file directly. To make configuration changes to Sendmail, edit the /etc/mail/sendmail.mc file, back up the original /etc/mail/sendmail.cf file, and use the following alternatives to generate a new configuration file:
- Use the included makefile in
/etc/mail/ to create a new /etc/mail/sendmail.cf configuration file:
~]# make all -C /etc/mail/
All other generated files in /etc/mail (db files) will be regenerated if needed. The old makemap commands are still usable. The make command is automatically used whenever you start or restart the sendmail service.
Various Sendmail configuration files are installed in the /etc/mail/ directory including:
- access — Specifies which systems can use Sendmail for outbound email.
- domaintable — Specifies domain name mapping.
- local-host-names — Specifies aliases for the host.
- mailertable — Specifies instructions that override routing for particular domains.
- virtusertable — Specifies a domain-specific form of aliasing, allowing multiple virtual domains to be hosted on one machine.
Several of the configuration files in the /etc/mail/ directory, such as access, domaintable, mailertable and virtusertable, must actually store their information in database files before Sendmail can use any configuration changes. To include any changes made to these configurations in their database files, run the following commands, as root:
~]# cd /etc/mail/
~]# make all
This will update virtusertable.db, access.db, domaintable.db, mailertable.db, sendmail.cf, and submit.cf.
To update a single database, use a command in the following form:
~]# make name.db all
where name represents the name of the database file to be updated.
Finally, restart the sendmail service for the changes to take effect by running:
~]# systemctl restart sendmail
For example, to have all emails addressed to the example.com domain delivered to bob@other-example.com, add the following line to the virtusertable file:
@example.com bob@other-example.com
To finalize the change, the virtusertable.db file must be updated:
~]# make virtusertable.db all
Using the all option will result in the virtusertable.db and access.db being updated at the same time.
15.3.2.3. Common Sendmail Configuration Changes
When altering the Sendmail configuration, it is best not to edit an existing file directly, but to generate an entirely new /etc/mail/sendmail.cf file.
Warning
Before replacing or making any changes to the sendmail.cf file, create a backup copy.
To add the desired functionality to Sendmail, edit the /etc/mail/sendmail.mc file as root. Once you are finished, restart the sendmail service and, if the m4 package is installed, the m4 macro processor will automatically generate a new sendmail.cf configuration file:
~]# systemctl restart sendmail
Important
The default sendmail.cf file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device or comment out the DAEMON_OPTIONS directive all together by placing dnl at the beginning of the line (see the sketch below). When finished, regenerate /etc/mail/sendmail.cf by restarting the service:
~]# systemctl restart sendmail
The default configuration in Red Hat Enterprise Linux works for most SMTP-only sites. However, it does not work for UUCP (UNIX-to-UNIX Copy Protocol) sites. If using UUCP mail transfers, the /etc/mail/sendmail.mc file must be reconfigured and a new /etc/mail/sendmail.cf file must be generated.
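As a sketch of the DAEMON_OPTIONS change described above (the address is a placeholder; dnl comments out a line in sendmail.mc):
dnl Default: accept connections on the loopback interface only
dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtp,Addr=192.0.2.1, Name=MTA')dnl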
Consult the /usr/share/sendmail-cf/README file before editing any files in the directories under the /usr/share/sendmail-cf/ directory, as they can affect the future configuration of the /etc/mail/sendmail.cf file.
15.3.2.4. Masquerading
One common Sendmail configuration is to have a single machine act as a mail gateway for all machines on the network. For example, a company may want a machine called mail.example.com that handles all of their email and assigns a consistent return address to all outgoing mail.
In this situation, the Sendmail server must masquerade the machine names on the company network so that their return address is user@example.com instead of user@host.example.com.
To do this, add the following lines to /etc/mail/sendmail.mc:
FEATURE(always_add_domain)dnl
FEATURE(`masquerade_entire_domain')dnl
FEATURE(`masquerade_envelope')dnl
FEATURE(`allmasquerade')dnl
MASQUERADE_AS(`example.com.')dnl
MASQUERADE_DOMAIN(`example.com.')dnl
MASQUERADE_AS(`example.com')dnl
After generating a new sendmail.cf file using the m4 macro processor, this configuration makes all mail from inside the network appear as if it were sent from example.com.
Note that administrators of mail servers, DNS and DHCP servers, as well as any provisioning applications, should agree on the host name format used in an organization. See the Red Hat Enterprise Linux 7 Networking Guide for more information on recommended naming practices.
15.3.2.5. Stopping Spam
Forwarding SMTP messages, also called relaying, has been disabled by default since Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host (x.edu) to accept messages from one party (y.com) and send them to a different party (z.net). Now, however, Sendmail must be configured to permit any domain to relay mail through the server. To configure relay domains, edit the /etc/mail/relay-domains file and restart Sendmail:
~]# systemctl restart sendmail
In addition, the /etc/mail/access file can be used to prevent connections from unwanted hosts. The following example illustrates how this file can be used to both block and specifically allow access to the Sendmail server:
badspammer.com ERROR:550 "Go away and do not spam us anymore"
tux.badspammer.com OK
10.0 RELAY
This example shows that any email sent from badspammer.com is blocked with a 550 RFC-821 compliant error code, with a message sent back. Emails sent from the tux.badspammer.com sub-domain are accepted. The last line shows that any email sent from the 10.0.*.* network can be relayed through the mail server.
Because the /etc/mail/access.db file is a database, use the makemap command to update any changes. Do this using the following command as root:
~]# makemap hash /etc/mail/access < /etc/mail/access
SMTP servers store information about an email's journey in the message header. As the message travels from one MTA to another, each puts in a Received header above all the other Received headers. It is important to note that this information may be altered by spammers.
Consult the /usr/share/sendmail-cf/README file for more information and examples.
15.3.2.6. Using Sendmail with LDAP
Using LDAP is a very quick and powerful way to find specific information about a particular user from a much larger group. For example, an LDAP server can be used to look up a particular email address from a common corporate directory by the user's last name. In this kind of implementation, LDAP is largely separate from Sendmail, with LDAP storing the hierarchical user information and Sendmail only being given the result of LDAP queries in pre-addressed email messages.
However, Sendmail supports a much greater integration with LDAP, where it uses LDAP to replace separately maintained files, such as /etc/aliases and /etc/mail/virtusertables, on different mail servers that work together to support a medium- to enterprise-level organization. In short, LDAP abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP cluster that can be leveraged by many different applications.
The version of Sendmail included with Red Hat Enterprise Linux supports LDAP. To extend the Sendmail server using LDAP, first get an LDAP server, such as OpenLDAP, running and properly configured. Then edit the /etc/mail/sendmail.mc to include the following:
LDAPROUTE_DOMAIN('yourdomain.com')dnl
FEATURE('ldap_routing')dnl
Note
This is only a very basic configuration of Sendmail with LDAP. The configuration can differ greatly from this depending on the implementation of LDAP, especially when configuring several Sendmail machines to use a common LDAP server.
Consult /usr/share/sendmail-cf/README for detailed LDAP routing configuration instructions and examples.
Next, recreate the /etc/mail/sendmail.cf file by running the m4 macro processor and again restarting Sendmail. See Section 15.3.2.3, “Common Sendmail Configuration Changes” for instructions.
For more information on LDAP, see OpenLDAP in the System-Level Authentication Guide.
15.3.3. Fetchmail
Fetchmail retrieves email from remote servers using protocols such as POP3 and IMAP. It can even forward email messages to an SMTP server, if necessary.
Note
In order to use Fetchmail, first ensure the fetchmail package is installed on your system by running, as root:
~]# yum install fetchmail
Fetchmail is configured for each user through the use of a .fetchmailrc file in the user's home directory. If it does not already exist, create the .fetchmailrc file in your home directory.
Using preferences in the .fetchmailrc file, Fetchmail checks for email on a remote server and downloads it. It then delivers it to port 25 on the local machine, using the local MTA to place the email in the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a mailbox so that it can be read by an MUA.
15.3.3.1. Fetchmail Configuration Options
Although it is possible to pass all necessary options on the command line to check for email on a remote server when executing Fetchmail, using a .fetchmailrc file is much easier. Place any desired configuration options in the .fetchmailrc file for those options to be used each time the fetchmail command is issued. It is possible to override these at the time Fetchmail is run by specifying that option on the command line.
A user's .fetchmailrc file contains three classes of configuration options:
- global options — Gives Fetchmail instructions that control the operation of the program or provide settings for every connection that checks for email.
- server options — Specifies necessary information about the server being polled, such as the host name, as well as preferences for specific email servers, such as the port to check or number of seconds to wait before timing out. These options affect every user using that server.
- user options — Contains information, such as user name and password, necessary to authenticate and check for email using a specified email server.
Global options appear at the top of the .fetchmailrc file, followed by one or more server options, each of which designate a different email server that Fetchmail should check. User options follow server options for each user account checking that email server. Like server options, multiple user options may be specified for use with a particular server as well as to check multiple email accounts on the same server.
Server options are called into service in the .fetchmailrc file by the use of a special option verb, poll or skip, that precedes any of the server information. The poll action tells Fetchmail to use this server option when it is run, which checks for email using the specified user options. Any server options after a skip action, however, are not checked unless this server's host name is specified when Fetchmail is invoked. The skip option is useful when testing configurations in the .fetchmailrc file because it only checks skipped servers when specifically invoked, and does not affect any currently working configurations.
The following is an example of a .fetchmailrc file:
set postmaster "user1"
set bouncemail
poll pop.domain.com proto pop3
user 'user1' there with password 'secret' is user1 here
poll mail.domain2.com
user 'user5' there with password 'secret2' is user1 here
user 'user7' there with password 'secret3' is user1 here
In this example, the global options specify that the user is sent email as a last resort (postmaster option) and all email errors are sent to the postmaster instead of the sender (bouncemail option). The set action tells Fetchmail that this line contains a global option. Then, two email servers are specified, one set to check using POP3, the other for trying various protocols to find one that works. Two users are checked using the second server option, but all email found for any user is sent to user1's mail spool. This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox. Each user's specific information begins with the user action.
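As a further illustration, a server still under test can be marked with the skip verb so that it is polled only when named explicitly on the command line; the host name and credentials below are hypothetical:
skip mail.test.example.com proto pop3
user 'tester' there with password 'secret4' is user1 here
Running fetchmail with no arguments would then ignore this server, while fetchmail mail.test.example.com would poll it.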
Note
Users are not required to place their password in the .fetchmailrc file. Omitting the with password 'password' section causes Fetchmail to ask for a password when it is launched.
The fetchmail man page explains each option in detail, but the most common ones are listed in the following three sections.
15.3.3.2. Global Options
Each global option should be placed on a single line after a set action.
- daemon seconds — Specifies daemon-mode, where Fetchmail stays in the background. Replace seconds with the number of seconds Fetchmail is to wait before polling the server.
- postmaster — Specifies a local user to send mail to in case of delivery problems.
- syslog — Specifies the log file for errors and status messages. By default, this is /var/log/maillog.
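Taken together, the top of a .fetchmailrc file using these global options might look like this; the values are illustrative:
set daemon 300
set postmaster "user1"
set syslog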
15.3.3.3. Server Options
Server options must be placed on their own line in .fetchmailrc after a poll or skip action.
- auth auth-type — Replace auth-type with the type of authentication to be used. By default, password authentication is used, but some protocols support other types of authentication, including kerberos_v5, kerberos_v4, and ssh. If the any authentication type is used, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server.
- interval number — Polls the specified server every number of times that it checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages.
- port port-number — Replace port-number with the port number. This value overrides the default port number for the specified protocol.
- proto protocol — Replace protocol with the protocol, such as pop3 or imap, to use when checking for messages on the server.
- timeout seconds — Replace seconds with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is used.
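A poll line combining several of these server options might look like this; the host name is hypothetical:
poll mail.example.com proto imap interval 3 timeout 60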
15.3.3.4. User Options
User options must be placed on their own lines beneath a server option and begin with the user option (defined below).
- fetchall — Orders Fetchmail to download all messages in the queue, including messages that have already been viewed. By default, Fetchmail only pulls down new messages.
- fetchlimit number — Replace number with the number of messages to be retrieved before stopping.
- flush — Deletes all previously viewed messages in the queue before retrieving new messages.
- limit max-number-bytes — Replace max-number-bytes with the maximum size in bytes that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network links, when a large message takes too long to download.
- password 'password' — Replace password with the user's password.
- preconnect "command" — Replace command with a command to be executed before retrieving messages for the user.
- postconnect "command" — Replace command with a command to be executed after retrieving messages for the user.
- ssl — Activates SSL encryption. At the time of writing, the default action is to use the best available from SSL2, SSL3, SSL23, TLS1, TLS1.1 and TLS1.2. Note that SSL2 is considered obsolete and due to the POODLE: SSLv3 vulnerability (CVE-2014-3566), SSLv3 should not be used. However, there is no way to force the use of TLS1 or newer, therefore ensure the mail server being connected to is configured not to use SSLv2 and SSLv3. Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3.
- sslproto — Defines allowed SSL or TLS protocols. Possible values are SSL2, SSL3, SSL23, and TLS1. The default value, if sslproto is omitted, unset, or set to an invalid value, is SSL23. The default action is to use the best from SSLv2, SSLv3, TLSv1, TLS1.1 and TLS1.2. Note that setting any other value for SSL or TLS will disable all the other protocols. Due to the POODLE: SSLv3 vulnerability (CVE-2014-3566), it is recommended to omit this option, or set it to SSLv23, and configure the corresponding mail server not to use SSLv2 and SSLv3. Use stunnel where the server cannot be configured not to use SSLv2 and SSLv3.
- user "username" — Replace username with the user name used by Fetchmail to retrieve messages. This option must precede all other user options.
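For example, a user entry combining several of these options on one line; the account names and password are hypothetical:
user 'jane' there with password 'secret' is user1 here fetchlimit 100 ssl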
15.3.3.5. Fetchmail Command Options
Most Fetchmail options used on the command line when executing the fetchmail command mirror the .fetchmailrc configuration options. In this way, Fetchmail may be used with or without a configuration file. These options are not used on the command line by most users because it is easier to leave them in the .fetchmailrc file.
There may be times when it is desirable to run the fetchmail command with other options for a particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc setting that is causing an error, as any options specified at the command line override configuration file options.
15.3.3.6. Informational or Debugging Options
Certain options used after the fetchmail command can supply important information.
- --configdump — Displays every possible option based on information from .fetchmailrc and Fetchmail defaults. No email is retrieved for any users when using this option.
- -s — Executes Fetchmail in silent mode, preventing any messages, other than errors, from appearing after the fetchmail command.
- -v — Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and remote email servers.
- -V — Displays detailed version information, lists its global options, and shows settings to be used with each user, including the email protocol and authentication method. No email is retrieved for any users when using this option.
15.3.3.7. Special Options
These options are occasionally useful for overriding defaults found in the .fetchmailrc file.
- -a — Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages.
- -k — Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them.
- -l max-number-bytes — Fetchmail does not download any messages over a particular size and leaves them on the remote email server.
- --quit — Quits the Fetchmail daemon process.
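For instance, to fetch every message from a particular server while leaving copies on it, and to watch the session verbosely, a user might run the following; the host name is hypothetical:
~]$ fetchmail -a -k -v mail.example.com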
More commands and .fetchmailrc options can be found in the fetchmail man page.
15.3.4. Mail Transport Agent (MTA) Configuration
Even if a system is not intended to serve as a mail server, a mail transport agent is still useful because many system utilities use the mail command to send email containing log messages to the root user of the local system.
15.4. Mail Delivery Agents
Red Hat Enterprise Linux includes two primary MDAs, Procmail and mail. Both of the applications are considered LDAs and both move email from the MTA's spool file into the user's mailbox. However, Procmail provides a robust filtering system.
For more information on the mail command, consult its man page (man mail).
The presence of an /etc/procmailrc file or of a ~/.procmailrc file (also called an rc file) in the user's home directory invokes Procmail whenever an MTA receives a new message.
By default, no system-wide rc files exist in the /etc directory and no .procmailrc files exist in any user's home directory. Therefore, to use Procmail, each user must construct a .procmailrc file with specific environment variables and rules.
Whether Procmail acts upon an email depends upon whether the message matches a specified set of conditions, or recipes, in an rc file. If a message matches a recipe, then the email is placed in a specified file, is deleted, or is otherwise processed.
When Procmail starts, it first looks at the /etc/procmailrc file and rc files in the /etc/procmailrcs/ directory for default, system-wide, Procmail environmental variables and recipes. Procmail then searches for a .procmailrc file in the user's home directory. Many users also create additional rc files for Procmail that are referred to within the .procmailrc file in their home directory.
15.4.1. Procmail Configuration
Procmail environment variables are set in the ~/.procmailrc file in the following format:
env-variable="value"
In this example, env-variable is the name of the variable and value defines the variable.
- DEFAULT — Sets the default mailbox where messages that do not match any recipes are placed. The default DEFAULT value is the same as $ORGMAIL.
- INCLUDERC — Specifies additional rc files containing more recipes for messages to be checked against. This breaks up the Procmail recipe lists into individual files that fulfill different roles, such as blocking spam and managing email lists, that can then be turned off or on by using comment characters in the user's ~/.procmailrc file. For example, lines in a user's ~/.procmailrc file may look like this:
MAILDIR=$HOME/Msgs
INCLUDERC=$MAILDIR/lists.rc
INCLUDERC=$MAILDIR/spam.rc
To turn off Procmail filtering of email lists while leaving spam control in place, comment out the first INCLUDERC line with a hash sign (#). Note that it uses paths relative to the current directory.
- LOCKSLEEP — Sets the amount of time, in seconds, between attempts by Procmail to use a particular lockfile. The default is 8 seconds.
- LOCKTIMEOUT — Sets the amount of time, in seconds, that must pass after a lockfile was last modified before Procmail assumes that the lockfile is old and can be deleted. The default is 1024 seconds.
- LOGFILE — The file to which any Procmail information or error messages are written.
- MAILDIR — Sets the current working directory for Procmail. If set, all other Procmail paths are relative to this directory.
- ORGMAIL — Specifies the original mailbox, or another place to put the messages if they cannot be placed in the default or recipe-required location. By default, a value of /var/spool/mail/$LOGNAME is used.
- SUSPEND — Sets the amount of time, in seconds, that Procmail pauses if a necessary resource, such as swap space, is not available.
- SWITCHRC — Allows a user to specify an external file containing additional Procmail recipes, much like the INCLUDERC option, except that recipe checking is actually stopped on the referring configuration file and only the recipes on the SWITCHRC-specified file are used.
- VERBOSE — Causes Procmail to log more information. This option is useful for debugging.
Other important environmental variables are pulled from the shell, such as LOGNAME, the login name; HOME, the location of the home directory; and SHELL, the default shell.
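Putting these together, the preamble of a minimal ~/.procmailrc might look like this; the paths and values are illustrative:
MAILDIR=$HOME/Mail
DEFAULT=$MAILDIR/inbox
LOGFILE=$MAILDIR/procmail.log
VERBOSE=no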
A comprehensive explanation of all environment variables, and their default values, is available in the procmailrc man page.
15.4.2. Procmail Recipes
:0 [flags] [: lockfile-name ]
* [ condition_1_special-condition-character condition_1_regular_expression ]
* [ condition_2_special-condition-character condition_2_regular_expression ]
* [ condition_N_special-condition-character condition_N_regular_expression ]
special-action-character action-to-perform
A colon after the flags section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing lockfile-name.
Optional special characters placed after the asterisk character (*) can further control the condition.
The action-to-perform argument specifies the action taken when the message matches one of the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. See Section 15.4.2.4, “Special Conditions and Actions” for more information.
15.4.2.1. Delivering vs. Non-Delivering Recipes
A nesting block is a set of actions, contained in braces { }, that are performed on messages which match the recipe's conditions. Nesting blocks can be placed inside one another, providing greater control for identifying and performing actions on messages.
15.4.2.2. Flags
- A — Specifies that this recipe is only used if the previous recipe without an A or a flag also matched this message.
- a — Specifies that this recipe is only used if the previous recipe with an A or a flag also matched this message and was successfully completed.
- B — Parses the body of the message and looks for matching conditions.
- b — Uses the body in any resulting action, such as writing the message to a file or forwarding it. This is the default behavior.
- c — Generates a carbon copy of the email. This is useful with delivering recipes, since the required action can be performed on the message and a copy of the message can continue being processed in the rc files.
- D — Makes the egrep comparison case-sensitive. By default, the comparison process is not case-sensitive.
- E — While similar to the A flag, the conditions in the recipe are only compared to the message if the immediately preceding recipe without an E flag did not match. This is comparable to an else action.
- e — The recipe is compared to the message only if the action specified in the immediately preceding recipe fails.
- f — Uses the pipe as a filter.
- H — Parses the header of the message and looks for matching conditions. This is the default behavior.
- h — Uses the header in a resulting action. This is the default behavior.
- w — Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it was successful before considering the message filtered.
- W — Is identical to w except that "Program failure" messages are suppressed.
For a detailed list of additional flags, see the procmailrc man page.
15.4.2.3. Specifying a Local Lockfile
A local lockfile is created by placing a colon (:) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT global environment variable.
15.4.2.4. Special Conditions and Actions
The following special characters may be used after the asterisk character (*) at the beginning of a recipe's condition line:
- ! — In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message.
- < — Checks if the message is under a specified number of bytes.
- > — Checks if the message is over a specified number of bytes.
The following special characters are used to perform special actions:
- ! — In the action line, this character tells Procmail to forward the message to the specified email addresses.
- $ — Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is referred to by various recipes.
- | — Starts a specified program to process the message.
- { and } — Constructs a nesting block, used to contain additional recipes to apply to matching messages.
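For instance, a minimal sketch of a recipe using the forwarding action; the address is hypothetical:
:0
* ^Subject:.*urgent
! oncall@example.com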
15.4.2.5. Recipe Examples
Procmail recipes frequently use regular expressions in their condition lines. For more information on regular expression syntax, see the grep(1) man page.
:0:
new-mail.spool
The first line specifies that a local lockfile is to be created but does not specify a name, so Procmail uses the destination file name and appends the value specified in the LOCKEXT environment variable. No condition is specified, so every message matches this recipe and is placed in the single spool file called new-mail.spool, located within the directory specified by the MAILDIR environment variable. An MUA can then view messages in this file.
A basic recipe like this one can be placed at the end of all rc files to direct messages to a default location.
:0
* ^From: spammer@domain.com
/dev/null
With this example, all messages sent by spammer@domain.com are sent to the /dev/null device, deleting them.
Warning
Be certain that rules are working as intended before sending messages to /dev/null for permanent deletion. If a recipe inadvertently catches unintended messages, and those messages disappear, it becomes difficult to troubleshoot the rule.
A better solution is to point the recipe's action to a special mailbox, which can be checked from time to time for false positives. Once satisfied that no messages are accidentally being matched, delete the mailbox and direct the action to send the messages to /dev/null.
:0:
* ^(From|Cc|To).*tux-lug
tuxlug
With this recipe, any messages sent from the tux-lug@domain.com mailing list are placed in the tuxlug mailbox automatically for the MUA. Note that the condition in this example matches the message if it has the mailing list's email address on the From, Cc, or To lines.
15.4.2.6. Spam Filters
Note
In order to use SpamAssassin, first ensure the spamassassin package is installed on your system by running, as root:
~]# yum install spamassassin
The easiest way for a local user to use SpamAssassin is to place the following line near the top of the ~/.procmailrc file:
INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc
The /etc/mail/spamassassin/spamassassin-default.rc file contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern:
*****SPAM*****
To file messages tagged as spam, a rule similar to the following can be used:
:0 Hw
* ^X-Spam-Status: Yes
spam
This rule files all email tagged in the header as spam into a mailbox called spam.
SpamAssassin performance can be improved by running it as a daemonized service (spamd) together with the client application (spamc). Configuring SpamAssassin this way, however, requires root access to the host.
To start the spamd daemon, type the following command:
~]# systemctl start spamassassin
To start the SpamAssassin daemon when the system is booted, run systemctl enable spamassassin.service.
To configure Procmail to use the SpamAssassin client application instead of the command-line application, place the following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in /etc/procmailrc:
INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc
15.5. Mail User Agents
Red Hat Enterprise Linux offers a variety of email programs, both graphical email client programs, such as Evolution, and text-based email programs such as mutt.
15.5.1. Securing Communication
Because the POP and IMAP protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting user names and passwords as they are passed over the network.
15.5.1.1. Secure Email Clients
The secure versions of IMAP and POP, known as IMAPS and POP3S, have known port numbers (993 and 995, respectively) that the MUA uses to authenticate and download messages.
15.5.1.2. Securing Email Client Communications
Offering SSL encryption to IMAP and POP users on the email server is a simple matter.
Warning
To create a self-signed SSL certificate for IMAP or POP, change to the /etc/pki/dovecot/ directory, edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer, and type the following commands, as root:
dovecot]# rm -f certs/dovecot.pem private/dovecot.pem
dovecot]# /usr/libexec/dovecot/mkcert.sh
Once finished, make sure you have the following configuration in the /etc/dovecot/conf.d/10-ssl.conf file:
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem
Issue the following command to restart the dovecot daemon:
~]# systemctl restart dovecot
Alternatively, the stunnel command can be used as an encryption wrapper around the standard, non-secure connections to IMAP or POP services.
The stunnel utility uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide strong cryptography and to protect the network connections. It is recommended to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate.
First, install stunnel and create its basic configuration. To configure stunnel as a wrapper for IMAPS and POP3S, add the following lines to the /etc/stunnel/stunnel.conf configuration file:
[pop3s]
accept = 995
connect = 110

[imaps]
accept = 993
connect = 143
Then, start stunnel. Once it is running, it is possible to use an IMAP or a POP email client and connect to the email server using SSL encryption.
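A minimal sketch of starting the wrapper with the configuration file created above:
~]# stunnel /etc/stunnel/stunnel.conf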
15.6. Configuring Mail Server with Antispam and Antivirus
15.6.1. Configuring Spam Filtering for Mail Transport Agent or Mail Delivery Agent
15.6.1.1. Configuring Spam Filtering in a Mail Transport Agent
15.6.1.2. Configuring Spam Filtering in a Mail Delivery Agent
Spam filtering can be configured in a mail delivery agent, such as Procmail, or with the mail utility. See Section 15.2.2, “Mail Delivery Agent” for more information.
Warning
15.6.2. Configuring Antivirus Protection
Warning
To install the ClamAV antivirus tool, run the following command as the root user:
~]# yum install clamav clamav-data clamav-server clamav-update
15.6.3. Using the EPEL Repository to install Antispam and Antivirus Software
To install the EPEL repository, run the following command as the root user:
~]# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
15.7. Additional Resources
15.7.1. Installed Documentation
- Information on configuring Sendmail is included with the sendmail and sendmail-cf packages.
- /usr/share/sendmail-cf/README — Contains information on the m4 macro processor, file locations for Sendmail, supported mailers, how to access enhanced features, and more. In addition, the sendmail and aliases man pages contain helpful information covering various Sendmail options and the proper configuration of the Sendmail /etc/mail/aliases file.
- /usr/share/doc/postfix-version-number/ — Contains a large amount of information on how to configure Postfix. Replace version-number with the version number of Postfix.
- /usr/share/doc/fetchmail-version-number/ — Contains a full list of Fetchmail features in the FEATURES file and an introductory FAQ document. Replace version-number with the version number of Fetchmail.
- /usr/share/doc/procmail-version-number/ — Contains a README file that provides an overview of Procmail, a FEATURES file that explores every program feature, and an FAQ file with answers to many common configuration questions. Replace version-number with the version number of Procmail. When learning how Procmail works and creating new recipes, the following Procmail man pages are invaluable:
- procmail — Provides an overview of how Procmail works and the steps involved with filtering email.
- procmailrc — Explains the rc file format used to construct recipes.
- procmailex — Gives a number of useful, real-world examples of Procmail recipes.
- procmailsc — Explains the weighted scoring technique used by Procmail to match a particular recipe to a message.
- /usr/share/doc/spamassassin-version-number/ — Contains a large amount of information pertaining to SpamAssassin. Replace version-number with the version number of the spamassassin package.
15.7.2. Online Documentation
- How to configure postfix with TLS? — A Red Hat Knowledgebase article that describes configuring postfix to use TLS.
- How to configure a Sendmail Smart Host — A Red Hat Knowledgebase solution that describes configuring a sendmail Smart Host.
- http://www.sendmail.org/ — Offers a thorough technical breakdown of Sendmail features, documentation and configuration examples.
- http://www.sendmail.com/ — Contains news, interviews and articles concerning Sendmail, including an expanded view of the many options available.
- http://www.postfix.org/ — The Postfix project home page contains a wealth of information about Postfix. The mailing list is a particularly good place to look for information.
- http://www.fetchmail.info/fetchmail-FAQ.html — A thorough FAQ about Fetchmail.
- http://www.procmail.org/ — The home page for Procmail with links to assorted mailing lists dedicated to Procmail as well as various FAQ documents.
- http://www.spamassassin.org/ — The official site of the SpamAssassin project.
Chapter 16. File and Print Servers
This chapter covers Samba, an open source implementation of the Server Message Block (SMB) and Common Internet File System (CIFS) protocol, and vsftpd, the primary FTP server shipped with Red Hat Enterprise Linux. Additionally, it explains how to use the Print Settings tool to configure printers.
16.1. Samba
Samba can be set up as:
- An Active Directory (AD) or NT4 domain member
- A standalone server
- An NT4 Primary Domain Controller (PDC) or Backup Domain Controller (BDC)
Note
Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains.
16.1.1. The Samba Services
smbd - This service provides file sharing and printing services using the SMB protocol. Additionally, the service is responsible for resource locking and for authenticating connecting users. The smb systemd service starts and stops the smbd daemon. To use the smbd service, install the samba package.
nmbd - This service provides host name and IP resolution using the NetBIOS over IP protocol. In addition to name resolution, the nmbd service enables browsing the SMB network to locate domains, work groups, hosts, file shares, and printers. For this, the service either reports this information directly to the broadcasting client or forwards it to a local or master browser. The nmb systemd service starts and stops the nmbd daemon. Note that modern SMB networks use DNS to resolve clients and IP addresses. To use the nmbd service, install the samba package.
winbindd - The winbindd service provides an interface for the Name Service Switch (NSS) to use AD or NT4 domain users and groups on the local system. This enables, for example, domain users to authenticate to services hosted on a Samba server or to other local services. The winbind systemd service starts and stops the winbindd daemon. To use the winbindd service, install the samba-winbind package.
Important
Red Hat only supports running Samba as a server with the winbindd service to provide domain users and groups to the local system. Due to certain limitations, such as missing Windows access control list (ACL) support and NT LAN Manager (NTLM) fallback, the System Security Services Daemon (SSSD) is not supported.
16.1.2. Verifying the smb.conf File by Using the testparm Utility
The testparm utility verifies that the Samba configuration in the /etc/samba/smb.conf file is correct. The utility detects invalid parameters and values, but also incorrect settings, such as for ID mapping. If testparm reports no problem, the Samba services will successfully load the /etc/samba/smb.conf file. Note that testparm cannot verify that the configured services will be available or work as expected.
Important
Always verify the /etc/samba/smb.conf file by using testparm after each modification of this file.
To verify the /etc/samba/smb.conf file, run the testparm utility as the root user. If testparm reports incorrect parameters, values, or other errors in the configuration, fix the problem and run the utility again.
Example 16.1. Using testparm
~]# testparm Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Unknown parameter encountered: "log levell" Processing section "[example_share]" Loaded services file OK. ERROR: The idmap range for the domain * (tdb) overlaps with the range of DOMAIN (ad)! Server role: ROLE_DOMAIN_MEMBER Press enter to see a dump of your service definitions # Global parameters [global] ... [example_share] ...
16.1.3. Understanding the Samba Security Modes
The security parameter in the [global] section in the /etc/samba/smb.conf file manages how Samba authenticates users that are connecting to the service. Depending on the mode you install Samba in, the parameter must be set to different values:
- On an AD domain member, set
security=ads.In this mode, Samba uses Kerberos to authenticate AD users.For details about setting up Samba as a domain member, see Section 16.1.5, “Setting up Samba as a Domain Member”. - On a standalone server, set
security=user.In this mode, Samba uses a local database to authenticate connecting users.For details about setting up Samba as a standalone server, see Section 16.1.4, “Setting up Samba as a Standalone Server”. - On an NT4 PDC or BDC, set
security=user.In this mode, Samba authenticates users to a local or LDAP database. - On an NT4 domain member, set
security=domain.In this mode, Samba authenticates connecting users to an NT4 PDC or BDC. You cannot use this mode on AD domain members.For details about setting up Samba as a domain member, see Section 16.1.5, “Setting up Samba as a Domain Member”.
For further details, see the description of the security parameter in the smb.conf(5) man page.
16.1.4. Setting up Samba as a Standalone Server
16.1.4.1. Setting up the Server Configuration for the Standalone Server
Procedure 16.1. Setting up Samba as a Standalone Server
- Install the samba package:
~]# yum install samba
- Edit the
/etc/samba/smb.conffile and set the following parameters:[global] workgroup = Example-WG netbios name = Server security = user log file = /var/log/samba/%m.log log level = 1
This configuration defines a standalone server namedServerwithin theExample-WGwork group. Additionally, this configuration enables logging on a minimal level (1) and log files will be stored in the/var/log/samba/directory. Samba will expand the%mmacro in thelog fileparameter to the NetBIOS name of connecting clients. This enables individual log files for each client.For further details, see the parameter descriptions in the smb.conf(5) man page. - Configure file or printer sharing. See:
- Verify the
/etc/samba/smb.conffile:~]# testparm
- If you set up shares that require authentication, create the user accounts. For details, see Section 16.1.4.2, “Creating and Enabling Local User Accounts”.
- Open the required ports and reload the firewall configuration by using the
firewall-cmdutility:~]# firewall-cmd --permanent --add-port={139/tcp,445/tcp} ~]# firewall-cmd --reload - Start the
smbservice:~]# systemctl start smb
- Optionally, enable the
smbservice to start automatically when the system boots:~]# systemctl enable smb
16.1.4.2. Creating and Enabling Local User Accounts
With the passdb backend = tdbsam default setting, Samba stores user accounts in the /var/lib/samba/private/passdb.tdb database.
The following procedure creates the example Samba user:
Procedure 16.2. Creating a Samba User
- Create the operating system account:
~]# useradd -M -s /sbin/nologin example
The previous command adds theexampleaccount without creating a home directory. If the account is only used to authenticate to Samba, assign the/sbin/nologincommand as shell to prevent the account from logging in locally. - Set a password to the operating system account to enable it:
~]# passwd example Enter new UNIX password: password Retype new UNIX password: password passwd: password updated successfully
Samba does not use the password set on the operating system account to authenticate. However, you need to set a password to enable the account. If an account is disabled, Samba denies access if this user connects. - Add the user to the Samba database and set a password to the account:
~]# smbpasswd -a example New SMB password: password Retype new SMB password: password Added user example.
Use this password to authenticate when using this account to connect to a Samba share. - Enable the Samba account:
~]# smbpasswd -e example Enabled user example.
16.1.5. Setting up Samba as a Domain Member
- Access domain resources on other domain members
- Authenticate domain users to local services, such as
sshd - Share directories and printers hosted on the server to act as a file and print server
16.1.5.1. Joining a Domain
Procedure 16.3. Joining a Red Hat Enterprise Linux System to a Domain
- Install the following packages:
~]# yum install realmd oddjob-mkhomedir oddjob samba-winbind-clients \ samba-winbind samba-common-tools - If you join an AD, additionally install the samba-winbind-krb5-locator package:
~]# yum install samba-winbind-krb5-locator
This plug-in enables Kerberos to locate the Key Distribution Center (KDC) based on AD sites using DNS service records. - Optionally, rename the existing
/etc/samba/smb.confSamba configuration file:~]# mv /etc/samba/smb.conf /etc/samba/smb.conf.old
- Join the domain. For example, to join a domain named
ad.example.com~]# realm join --client-software=winbind ad.example.com
Using the previous command, therealmutility automatically:- Creates a
/etc/samba/smb.conffile for a membership in thead.example.comdomain - Adds the
winbindmodule for user and group lookups to the/etc/nsswitch.conffile - Configures the Kerberos client in the
/etc/krb5.conffile for the AD membership - Updates the Pluggable Authentication Module (PAM) configuration files in the
/etc/pam.d/directory - Starts the
winbindservice and enables the service to start when the system boots
For further details about therealmutility, see the realm(8) man page and the corresponding section in the Red Hat Windows Integration Guide. - Optionally, set an alternative ID mapping back end or customized ID mapping settings in the
/etc/samba/smb.conffile. For details, see Section 16.1.5.3, “Understanding ID Mapping”. - Optionally, verify the configuration. See Section 16.1.5.2, “Verifying That Samba Was Correctly Joined As a Domain Member”.
16.1.5.2. Verifying That Samba Was Correctly Joined As a Domain Member
Verifying That the Operating System Can Retrieve Domain User Accounts and Groups
getent utility to verify that the operating system can retrieve domain users and groups. For example:
- To query the
administratoraccount in theADdomain:~]# getent passwd AD\\administrator AD\administrator:*:10000:10000::/home/administrator@AD:/bin/bash
- To query the members of the
Domain Usersgroup in theADdomain:~]# getent group "AD\\Domain Users" AD\domain users:x:10000:user
For example, to set the owner of the /srv/samba/example.txt file to administrator and the group to Domain Admins:
~]# chown administrator:"Domain Admins" /srv/samba/example.txt
Verifying If AD Domain Users Can Obtain a Kerberos Ticket
In an AD environment, verify whether the administrator user can obtain a Kerberos ticket:
Note
To use the kinit and klist utilities, install the krb5-workstation package on the Samba domain member.
Procedure 16.4. Obtaining a Kerberos Ticket
- Obtain a ticket for the
administrator@AD.EXAMPLE.COMprincipal:~]# kinit administrator@AD.EXAMPLE.COM
- Display the cached Kerberos ticket:
~]# klist Ticket cache: KEYRING:persistent:0:0 Default principal: administrator@AD.EXAMPLE.COM Valid starting Expires Service principal 11.09.2017 14:46:21 12.09.2017 00:46:21 krbtgt/AD.EXAMPLE.COM@AD.EXAMPLE.COM renew until 18.09.2017 14:46:19
Listing the Available Domains
To list all domains available through the winbindd service, enter:
~]# wbinfo --all-domains
Example 16.2. Displaying the Available Domains
~]# wbinfo --all-domains BUILTIN SAMBA-SERVER AD
16.1.5.3. Understanding ID Mapping
The winbindd service is responsible for providing information about domain users and groups to the operating system.
For the winbindd service to provide unique IDs for users and groups to Linux, you must configure ID mapping in the /etc/samba/smb.conf file for:
- The local database (default domain)
- The AD or NT4 domain the Samba server is a member of
- Each trusted domain from which users must be able to access resources on this Samba server
16.1.5.3.1. Planning ID Ranges
Warning
Example 16.3. Unique ID Ranges
The following configuration shows unique ID ranges for the default (*), AD-DOM, and TRUST-DOM domains.
[global] ... idmap config * : backend = tdb idmap config * : range = 10000-999999 idmap config AD-DOM:backend = rid idmap config AD-DOM:range = 2000000-2999999 idmap config TRUST-DOM:backend = rid idmap config TRUST-DOM:range = 4000000-4999999
Important
16.1.5.3.2. The * Default Domain
- The domain the Samba server is a member of
- Each trusted domain that should be able to access the Samba server
- Local Samba users and groups
- Samba built-in accounts and groups, such as
BUILTIN\Administrators
Important
tdb- When you configure the default domain to use the
tdbback end, set an ID range that is big enough to include objects that will be created in the future and that are not part of a defined domain ID mapping configuration.For example, set the following in the[global]section in the/etc/samba/smb.conffile:idmap config * : backend = tdb idmap config * : range = 10000-999999
For further details, see Section 16.1.5.4.1, “Using thetdbID Mapping Back End”. autorid- When you configure the default domain to use the
autoridback end, adding additional ID mapping configurations for domains is optional.For example, set the following in the[global]section in the/etc/samba/smb.conffile:idmap config * : backend = autorid idmap config * : range = 10000-999999
For further details, see Section 16.1.5.4.4.2, “Configuring theautoridBack End”.
16.1.5.4. The Different ID Mapping Back Ends
Table 16.1. Frequently Used ID Mapping Back Ends
| Back End | Use Case |
|---|---|
tdb | The * default domain only |
ad | AD domains only |
rid | AD and NT4 domains |
autorid | AD, NT4, and the * default domain |
16.1.5.4.1. Using the tdb ID Mapping Back End
The winbindd service uses the writable tdb ID mapping back end by default to store Security Identifier (SID), UID, and GID mapping tables. This includes local users, groups, and built-in principals.
Use this back end only for the * default domain. For example:
idmap config * : backend = tdb idmap config * : range = 10000-999999
For further details about the * default domain, see Section 16.1.5.3.2, “The * Default Domain”.
16.1.5.4.2. Using the ad ID Mapping Back End
The ad ID mapping back end implements a read-only API to read account and group information from AD. This provides the following benefits:
- All user and group settings are stored centrally in AD.
- User and group IDs are consistent on all Samba servers that use this back end.
- The IDs are not stored in a local database, which could become corrupted, and therefore file ownerships cannot be lost.
The ad back end reads the following attributes from AD:
Table 16.2. Attributes the ad Back End Reads from User and Group Objects
| AD Attribute Name | Object Type | Mapped to |
|---|---|---|
sAMAccountName | User and group | User or group name, depending on the object |
uidNumber | User | User ID (UID) |
gidNumber | Group | Group ID (GID) |
loginShell [a] | User | Path to the shell of the user |
unixHomeDirectory [a] | User | Path to the home directory of the user |
primaryGroupID [b] | User | Primary group ID |
[a]
Samba only reads this attribute if you set idmap config DOMAIN:unix_nss_info = yes.
[b]
Samba only reads this attribute if you set idmap config DOMAIN:unix_primary_group = yes.
16.1.5.4.2.1. Prerequisites of the ad Back End
To use the ad ID mapping back end:
- Both users and groups must have unique IDs set in AD, and the IDs must be within the range configured in the
/etc/samba/smb.conffile. Objects whose IDs are outside of the range will not be available on the Samba server. - Users and groups must have all required attributes set in AD. If required attributes are missing, the user or group will not be available on the Samba server. The required attributes depend on your configuration. See Table 16.2, “Attributes the
adBack End Reads from User and Group Objects”.
16.1.5.4.2.2. Configuring the ad Back End
To configure a Samba domain member to use the ad ID mapping back end:
Procedure 16.5. Configuring the ad Back End on a Domain Member
- Edit the
[global]section in the/etc/samba/smb.conffile:- Add an ID mapping configuration for the default domain (
*) if it does not exist. For example:idmap config * : backend = tdb idmap config * : range = 10000-999999
For further details about the default domain configuration, see Section 16.1.5.3.2, “The*Default Domain”. - Enable the
adID mapping back end for the AD domain:idmap config DOMAIN : backend = ad
- Set the range of IDs that is assigned to users and groups in the AD domain. For example:
idmap config DOMAIN : range = 2000000-2999999
Important
The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Section 16.1.5.3.1, “Planning ID Ranges”. - Set that Samba uses the RFC 2307 schema when reading attributes from AD:
idmap config DOMAIN : schema_mode = rfc2307
- To enable Samba to read the login shell and the path to the user's home directory from the corresponding AD attributes, set:
idmap config DOMAIN : unix_nss_info = yes
Alternatively, you can set a uniform domain-wide home directory path and login shell that is applied to all users. For example:template shell = /bin/bash template homedir = /home/%U
For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page. - By default, Samba uses the
primaryGroupIDattribute of a user object as the user's primary group on Linux. Alternatively, you can configure Samba to use the value set in thegidNumberattribute instead:idmap config DOMAIN : unix_primary_group = yes
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Reload the Samba configuration:
~]# smbcontrol all reload-config
- Verify that the settings work as expected. See the section called “Verifying That the Operating System Can Retrieve Domain User Accounts and Groups”.
16.1.5.4.3. Using the rid ID Mapping Back End
Note
The RID is the last part of an SID. For example, if the SID of a user is S-1-5-21-5421822485-1151247151-421485315-30014, then 30014 is the corresponding RID. For details about how Samba calculates the local ID, see the idmap_rid(8) man page.
The rid ID mapping back end implements a read-only API to calculate account and group information based on an algorithmic mapping scheme for AD and NT4 domains. When you configure the back end, you must set the lowest and highest RID in the idmap config DOMAIN : range parameter. Samba will not map users or groups with a lower or higher RID than set in this parameter.
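As a rough worked example, based on the mapping formula documented in the idmap_rid(8) man page (ID = RID - BASE_RID + low range ID, with a default base RID of 0), a domain configured as follows would map a user whose RID is 30014 to the local ID 2000000 + 30014 = 2030014:
idmap config DOMAIN : backend = rid
idmap config DOMAIN : range = 2000000-2999999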
Important
The rid back end cannot assign new IDs, such as for BUILTIN groups. Therefore, do not use this back end for the * default domain.
16.1.5.4.3.1. Benefits and Drawbacks of Using the rid Back End
Benefits
- All domain users and groups that have an RID within the configured range are automatically available on the domain member.
- You do not need to manually assign IDs, home directories, and login shells.
Drawbacks
- All domain users get the same login shell and home directory assigned. However, you can use variables.
- User and group IDs are only the same across Samba domain members if all use the
ridback end with the same ID range settings. - You cannot exclude individual users or groups from being available on the domain member. Only users and groups outside of the configured range are excluded.
- Based on the formulas the
winbinddservice uses to calculate the IDs, duplicate IDs can occur in multi-domain environments if objects in different domains have the same RID.
16.1.5.4.3.2. Configuring the rid Back End
To configure a Samba domain member to use the rid ID mapping back end:
Procedure 16.6. Configuring the rid Back End on a Domain Member
- Edit the
[global]section in the/etc/samba/smb.conffile:- Add an ID mapping configuration for the default domain (
*) if it does not exist. For example:idmap config * : backend = tdb idmap config * : range = 10000-999999
For further details about the default domain configuration, see Section 16.1.5.3.2, “The*Default Domain”. - Enable the
ridID mapping back end for the domain:idmap config DOMAIN : backend = rid
- Set a range that is big enough to include all RIDs that will be assigned in the future. For example:
idmap config DOMAIN : range = 2000000-2999999
Samba ignores users and groups whose RIDs in this domain are not within the range.Important
The range must not overlap with any other domain configuration on this server. For further details, see Section 16.1.5.3.1, “Planning ID Ranges”. - Set a shell and home directory path that will be assigned to all mapped users. For example:
template shell = /bin/bash template homedir = /home/%U
For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page.
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Reload the Samba configuration:
~]# smbcontrol all reload-config
- Verify that the settings work as expected. See the section called “Verifying That the Operating System Can Retrieve Domain User Accounts and Groups”.
16.1.5.4.4. Using the autorid ID Mapping Back End [2]
The autorid back end works similarly to the rid ID mapping back end, but it can automatically assign IDs for different domains. This enables you to use the autorid back end in the following situations:
- Only for the
*default domain. - For the
*default domain and additional domains, without the need to create ID mapping configurations for each of the additional domains. - Only for specific domains.
16.1.5.4.4.1. Benefits and Drawbacks of Using the autorid Back End
Benefits
- All domain users and groups whose calculated UID and GID is within the configured range are automatically available on the domain member.
- You do not need to manually assign IDs, home directories, and login shells.
- No duplicate IDs, even if multiple objects in a multi-domain environment have the same RID.
Drawbacks
- User and group IDs are not the same across Samba domain members.
- All domain users get the same login shell and home directory assigned. However, you can use variables.
- You cannot exclude individual users or groups from being available on the domain member. Only users and groups whose calculated UID or GID is outside of the configured range are excluded.
16.1.5.4.4.2. Configuring the autorid Back End
To configure a Samba domain member to use the autorid ID mapping back end for the * default domain:
Note
If you use autorid for the default domain, adding additional ID mapping configuration for domains is optional.
Procedure 16.7. Configuring the autorid Back End on a Domain Member
- Edit the
[global]section in the/etc/samba/smb.conffile:- Enable the
autoridID mapping back end for the*default domain:idmap config * : backend = autorid
- Set a range that is big enough to assign IDs for all existing and future objects. For example:
idmap config * : range = 10000-999999
Samba ignores users and groups whose calculated IDs in this domain are not within the range. For details about how the back end calculates IDs, see the THE MAPPING FORMULAS section in the idmap_autorid(8) man page.
Warning
After you set the range and Samba starts using it, you can only increase the upper limit of the range. Any other change to the range can result in new ID assignments, and thus in losing file ownerships. - Optionally, set a range size. For example:
idmap config * : rangesize = 200000
Samba assigns this number of continuous IDs for each domain's object until all IDs from the range set in theidmap config * : rangeparameter are taken. For further details, see therangesizeparameter description in the idmap_autorid(8) man page. - Set a shell and home directory path that will be assigned to all mapped users. For example:
template shell = /bin/bash template homedir = /home/%U
For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page. - Optionally, add additional ID mapping configuration for domains. If no configuration for an individual domain is available, Samba calculates the ID using the
autoridback end settings in the previously configured*default domain.Important
If you configure additional back ends for individual domains, the ranges for all ID mapping configuration must not overlap. For further details, see Section 16.1.5.3.1, “Planning ID Ranges”.
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Reload the Samba configuration:
~]# smbcontrol all reload-config
- Verify that the settings work as expected. See the section called “Verifying That the Operating System Can Retrieve Domain User Accounts and Groups”.
16.1.6. Integrating a Samba File Server Into an IdM Domain
16.1.8. Setting up a Samba Print Server [5]
16.1.8.1. The Samba spoolssd Service
spoolssd is a service that is integrated into the smbd service. Enable spoolssd in the Samba configuration to significantly increase the performance on print servers with a high number of jobs or printers.
Without spoolssd, Samba forks the smbd process and initializes the printcap cache for each print job. In case of a large number of printers, the smbd service can become unresponsive for multiple seconds while the cache is initialized. The spoolssd service enables you to start pre-forked smbd processes that process print jobs without any delays. The main spoolssd smbd process uses a low amount of memory, and forks and terminates child processes.
To enable and configure the spoolssd service:
Procedure 16.15. Enabling the spoolssd Service
- Edit the
[global]section in the/etc/samba/smb.conffile:- Add the following parameters:
rpc_server:spoolss = external rpc_daemon:spoolssd = fork
- Optionally, you can set the following parameters:
Parameter Default Description spoolssd:prefork_min_children 5 Minimum number of child processes spoolssd:prefork_max_children 25 Maximum number of child processes spoolssd:prefork_spawn_rate 5 Samba forks the number of new child processes set in this parameter, up to the value set in spoolssd:prefork_max_children, if a new connection is establishedspoolssd:prefork_max_allowed_clients 100 Number of clients, a child process serves spoolssd:prefork_child_min_life 60 Minimum lifetime of a child process in seconds. 60 seconds is the minimum.
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Restart the
smbservice:~]# systemctl restart smb
After restarting the service, you can list the pre-forked smbd child processes:
~]# ps axf ... 30903 smbd 30912 \_ smbd 30913 \_ smbd 30914 \_ smbd 30915 \_ smbd ...
16.1.8.2. Enabling Print Server Support in Samba
Procedure 16.16. Enabling Print Server Support in Samba
- On the Samba server, set up CUPS and add the printer to the CUPS back end. For details, see Section 16.3, “Print Settings”.
Note
Samba can only forward the print jobs to CUPS if CUPS is installed locally on the Samba print server. - Edit the
/etc/samba/smb.conffile:- If you want to enable the
spoolssdservice, add the following parameters to the[global]section:rpc_server:spoolss = external rpc_daemon:spoolssd = fork
For further details, see Section 16.1.8.1, “The SambaspoolssdService”. - To configure the printing back end, add the
[printers]section:[printers] comment = All Printers path = /var/tmp/ printable = yes create mask = 0600
Important
Theprintersshare name is hard-coded and cannot be changed.
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Open the required ports and reload the firewall configuration using the
firewall-cmdutility:~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload
- Restart the
smbservice:~]# systemctl restart smb
16.1.8.3. Manually Sharing Specific Printers
Procedure 16.17. Manually Sharing a Specific Printer
- Edit the
/etc/samba/smb.conffile:- In the
[global]section, disable automatic printer sharing by setting:load printers = no
- Add a section for each printer you want to share. For example, to share the printer named
examplein the CUPS back end asExample-Printerin Samba, add the following section:[Example-Printer] path = /var/tmp/ printable = yes printer name = example
You do not need individual spool directories for each printer. You can set the same spool directory in thepathparameter for the printer as you set in the[printers]section.
- Verify the
/etc/samba/smb.conffile:~]# testparm
- Reload the Samba configuration:
~]# smbcontrol all reload-config
16.1.8.4. Setting up Automatic Printer Driver Downloads for Windows Clients [6]
Note
16.1.8.4.1. Basic Information about Printer Drivers
Supported Driver Model Version
Package-aware Drivers
Preparing a Printer Driver for Being Uploaded
- Unpack the driver if it is provided in a compressed format.
- Some drivers require starting a setup application that installs the driver locally on a Windows host. In certain situations, the installer extracts the individual files into the operating system's temporary folder while the setup runs. To use the driver files for uploading:
- Start the installer.
- Copy the files from the temporary folder to a new location.
- Cancel the installation.
Providing 32-bit and 64-bit Drivers for a Printer to a Client
If you upload a 32-bit driver named Example PostScript and a 64-bit driver named Example PostScript (v1.0), the names do not match. Consequently, you can only assign one of the drivers to a printer and the driver will not be available for both architectures.
16.1.8.4.2. Enabling Users to Upload and Preconfigure Drivers
To upload and preconfigure printer drivers, a user must have the SePrintOperatorPrivilege privilege granted. For example, to grant the privilege to the printadmin group:
~]# net rpc rights grant "printadmin" SePrintOperatorPrivilege \
-U "DOMAIN\administrator"
Enter DOMAIN\administrator's password:
Successfully granted rights.
Note
Red Hat recommends granting the SePrintOperatorPrivilege privilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership.
To list all users and groups having the SePrintOperatorPrivilege privilege granted:
~]# net rpc rights list privileges SePrintOperatorPrivilege \
-U "DOMAIN\administrator"
Enter administrator's password:
SePrintOperatorPrivilege:
BUILTIN\Administrators
DOMAIN\printadmin
16.1.8.4.4. Creating a GPO to Enable Clients to Trust the Samba Print Server
Procedure 16.19. Creating a GPO to Enable Clients to Trust the Samba Print Server
- Log into a Windows computer using an account that is allowed to edit group policies, such as the AD domain
Administratoruser. - Open the Group Policy Management Console.
- Right-click your AD domain and select Create a GPO in this domain, and Link it here.
- Enter a name for the GPO, such as Legacy Printer Driver Policy, and click OK. The new GPO will be displayed under the domain entry.
- Right-click the newly-created GPO and select Edit to open the Group Policy Management Editor.
- Navigate to Computer Configuration → Policies → Administrative Templates → Printers.
- On the right side of the window, double-click Point and Print Restriction to edit the policy:
- Enable the policy and set the following options:
- Select Users can only point and print to these servers and enter the fully-qualified domain name (FQDN) of the Samba print server to the field next to this option.
- In both check boxes under Security Prompts, select Do not show warning or elevation prompt.
- Click OK.
- Double-click Package Point and Print - Approved servers to edit the policy:
- Enable the policy and click the Show button.
- Enter the FQDN of the Samba print server.
- Close both the Show Contents and the policy's properties window by clicking OK.
- Close the Group Policy Management Editor.
- Close the Group Policy Management Console.
16.1.8.4.5. Uploading Drivers and Preconfiguring Printers
16.1.9. Tuning the Performance of a Samba Server [7]
16.1.9.1. Setting the SMB Protocol Version
By default, the server max protocol parameter is set to the latest supported stable SMB protocol version.
Red Hat recommends not setting the server max protocol parameter manually. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol to have the latest protocol version enabled.
To always have the latest stable SMB protocol version enabled, unset the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file.
16.1.9.3. Settings That Can Have a Negative Performance Impact
Setting the socket options parameter in the /etc/samba/smb.conf file overrides the kernel's network settings. As a result, setting this parameter decreases the Samba network performance in most cases.
To use the optimized kernel settings, remove the socket options parameter from the [global] section in the /etc/samba/smb.conf file.
16.1.10. Frequently Used Samba Command-line Utilities
16.1.10.1. Using the net Utility
The net utility enables you to perform several administration tasks on a Samba server. This section describes the most frequently used subcommands of the net utility.
16.1.10.1.1. Using the net ads join and net rpc join Commands
Using the join subcommand of the net utility, you can join Samba to an AD or NT4 domain. To join the domain, you must create the /etc/samba/smb.conf file manually, and optionally update additional configurations, such as PAM.
Important
Red Hat recommends using the realm utility to join a domain. The realm utility automatically updates all involved configuration files. For details, see Section 16.1.5.1, “Joining a Domain”.
To join a domain using the net command:
Procedure 16.21. Joining a Domain Using the net Command
- Manually create the
/etc/samba/smb.conffile with the following settings:- For an AD domain member:
[global] workgroup = domain_name security = ads passdb backend = tdbsam realm = AD_REALM
- For an NT4 domain member:
[global] workgroup = domain_name security = user passdb backend = tdbsam
- Add an ID mapping configuration for the
*default domain and for the domain you want to join to the[global]section in the/etc/samba/smb.conf. For details, see Section 16.1.5.3, “Understanding ID Mapping”. - Verify the
/etc/samba/smb.conffile:~]# testparm
- Join the domain as the domain administrator:
- To join an AD domain:
~]# net ads join -U "DOMAIN\administrator"
- To join an NT4 domain:
~]# net rpc join -U "DOMAIN\administrator"
- Append the
winbindsource to thepasswdandgroupdatabase entry in the/etc/nsswitch.conffile:passwd: files winbind group: files winbind
- Enable and start the
winbindservice:~]# systemctl enable winbind ~]# systemctl start winbind
- Optionally, configure PAM using the
authconfutility.For details, see the Using Pluggable Authentication Modules (PAM) section in the Red Hat System-Level Authentication Guide. - Optionally for AD environments, configure the Kerberos client.For details, see the Configuring a Kerberos Client section in the Red Hat System-Level Authentication Guide.
16.1.10.1.2. Using the net rpc rights Command
The net rpc rights command enables you to manage privileges.
Listing Privileges
To list privileges, use the net rpc rights list command. For example:
~]# net rpc rights list -U "DOMAIN\administrator"
Enter DOMAIN\administrator's password:
SeMachineAccountPrivilege Add machines to domain
SeTakeOwnershipPrivilege Take ownership of files or other objects
SeBackupPrivilege Back up files and directories
SeRestorePrivilege Restore files and directories
SeRemoteShutdownPrivilege Force shutdown from a remote system
SePrintOperatorPrivilege Manage printers
SeAddUsersPrivilege Add users and groups to the domain
SeDiskOperatorPrivilege Manage disk shares
SeSecurityPrivilege System security
Granting Privileges
To grant privileges, use the net rpc rights grant command.
For example, grant the SePrintOperatorPrivilege privilege to the DOMAIN\printadmin group:
~]# net rpc rights grant "DOMAIN\printadmin" SePrintOperatorPrivilege \
-U "DOMAIN\administrator"
Enter DOMAIN\administrator's password:
Successfully granted rights.
Revoking Privileges
To revoke privileges, use the net rpc rights revoke command.
For example, revoke the SePrintOperatorPrivilege privilege from the DOMAIN\printadmin group:
~]# net rpc rights revoke "DOMAIN\printadmin" SePrintOperatorPrivilege \
-U "DOMAIN\administrator"
Enter DOMAIN\administrator's password:
Successfully revoked rights.
16.1.10.1.4. Using the net user Command
The net user command enables you to perform the following actions on an AD DC or NT4 PDC:
- List all user accounts
- Add users
- Remove Users
Note
Specifying the connection method, such as ads for AD domains or rpc for NT4 domains, is only required when you list domain user accounts. Other user-related subcommands can auto-detect the connection method.
Pass the -U user_name parameter to the command to specify a user that is allowed to perform the requested action.
Listing Domain User Accounts
~]# net ads user -U "DOMAIN\administrator"
~]# net rpc user -U "DOMAIN\administrator"
Adding a User Account to the Domain
Use the net user add command to add a user account to the domain.
For example, add the user account to the domain:
Procedure 16.22. Adding a User Account to the Domain
- Add the account:
~]# net user add user password -U "DOMAIN\administrator"
User user added
- Optionally, use the remote procedure call (RPC) shell to enable the account on the AD DC or NT4 PDC. For example:
~]# net rpc shell -U DOMAIN\administrator -S DC_or_PDC_name
Talking to domain DOMAIN (S-1-5-21-1424831554-512457234-5642315751)
net rpc> user edit disabled user no
Set user's disabled flag from [yes] to [no]
net rpc> exit
Deleting a User Account from the Domain
Use the net user delete command to remove a user account from the domain.
For example, remove the user account from the domain:
~]# net user delete user -U "DOMAIN\administrator"
User user deleted
16.1.10.2. Using the rpcclient Utility
The rpcclient utility enables you to manually execute client-side Microsoft Remote Procedure Call (MS-RPC) functions on a local or remote SMB server. However, most of the features are integrated into separate utilities provided by Samba. Use rpcclient only for testing MS-RPC functions.
- Manage the printer Spool Subsystem (SPOOLSS).
Example 16.9. Assigning a Driver to a Printer
~]# rpcclient server_name -U "DOMAIN\administrator" \
    -c 'setdriver "printer_name" "driver_name"'
Enter DOMAIN\administrator's password:
Successfully set printer_name to driver driver_name.
- Retrieve information about an SMB server.
Example 16.10. Listing all File Shares and Shared Printers
~]# rpcclient server_name -U "DOMAIN\administrator" -c 'netshareenum'
Enter DOMAIN\administrator's password:
netname: Example_Share
remark:
path:   C:\srv\samba\example_share\
password:
netname: Example_Printer
remark:
path:   C:\var\spool\samba\
password:
- Perform actions using the Security Account Manager Remote (SAMR) protocol.
Example 16.11. Listing Users on an SMB Server
~]# rpcclient server_name -U "DOMAIN\administrator" -c 'enumdomusers'
Enter DOMAIN\administrator's password:
user:[user1] rid:[0x3e8]
user:[user2] rid:[0x3e9]
If you run the command against a standalone server or a domain member, it lists the users in the local database. Running the command against an AD DC or NT4 PDC lists the domain users.
16.1.10.3. Using the samba-regedit Application
Use the samba-regedit application to edit the registry of a Samba server.

~]# samba-regedit
- Cursor up and cursor down: Navigate through the registry tree and the values.
- Enter: Opens a key or edits a value.
- Tab: Switches between the Key and Value pane.
- Ctrl+C: Closes the application.
16.1.10.4. Using the smbcacls Utility
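The smbcacls utility displays or modifies access control lists (ACLs) of files and directories on an SMB share. A minimal invocation sketch, with hypothetical share and file names:
~]# smbcacls //server/example file.txt -U "DOMAIN\administrator"
See the smbcacls(1) manual page for the available options.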
16.1.10.5. Using the smbclient Utility
The smbclient utility enables you to access file shares on an SMB server, similarly to a command-line FTP client. You can use it, for example, to upload and download files to and from a share.
For example, connect to the example share hosted on server using the DOMAIN\user account:
~]# smbclient -U "DOMAIN\user" //server/example
Enter domain\user's password:
Domain=[SERVER] OS=[Windows 6.1] Server=[Samba 4.6.2]
smb: \>
After smbclient connects successfully to the share, the utility enters the interactive mode and shows the following prompt:
smb: \>
To display all available commands, enter:
smb: \> help
To display the help for a specific command, enter:
smb: \> help command_name
16.1.10.5.1. Using smbclient in Interactive Mode
When you use smbclient without the -c parameter, the utility enters the interactive mode.
Procedure 16.23. Downloading a File from an SMB Share Using smbclient
- Connect to the share:
~]# smbclient -U "DOMAIN\user_name" //server_name/share_name
- Change into the
/example/ directory:
smb: \> cd /example/
- List the files in the directory:
smb: \example\> ls
  .                D        0  Mon Sep  1 10:00:00 2017
  ..               D        0  Mon Sep  1 10:00:00 2017
  example.txt      N  1048576  Mon Sep  1 10:00:00 2017

        9950208 blocks of size 1024. 8247144 blocks available
- Download the
example.txt file:
smb: \example\> get example.txt
getting file \example\example.txt of size 1048576 as example.txt (511975,0 KiloBytes/sec) (average 170666,7 KiloBytes/sec)
- Disconnect from the share:
smb: \example\> exit
16.1.10.5.2. Using smbclient in Scripting Mode
If you pass the -c commands parameter to smbclient, you can automatically execute the commands on the remote SMB share. This enables you to use smbclient in scripts.
~]# smbclient -U DOMAIN\user_name //server_name/share_name \
-c "cd /example/ ; get example.txt ; exit"16.1.10.6. Using the smbcontrol Utility
The smbcontrol utility enables you to send command messages to the smbd, nmbd, or winbindd services, or to all of them. These control messages instruct the service, for example, to reload its configuration.
Example 16.12. Reloading the Configuration of the smbd, nmbd, and winbindd Service
To reload the configuration of the smbd, nmbd, and winbindd services, send the reload-config message-type to the all destination:
~]# smbcontrol all reload-config
16.1.10.7. Using the smbpasswd Utility
The smbpasswd utility manages user accounts and passwords in the local Samba database.
If you run the command as a regular user, smbpasswd changes the Samba password of that user. For example:
[user@server ~]$ smbpasswd
New SMB password:
Retype new SMB password:
If you run smbpasswd as the root user, you can use the utility, for example, to:
- Create a new user:
[root@server ~]# smbpasswd -a user_name
New SMB password:
Retype new SMB password:
Added user user_name.
Note
Before you can add a user to the Samba database, you must create the account in the local operating system. See Section 4.3.1, “Adding a New User”.
- Enable a Samba user:
[root@server ~]# smbpasswd -e user_name
Enabled user user_name.
- Disable a Samba user:
[root@server ~]# smbpasswd -d user_name
Disabled user user_name.
- Delete a user:
[root@server ~]# smbpasswd -x user_name
Deleted user user_name.
16.1.10.8. Using the smbstatus Utility
The smbstatus utility reports on:
- Connections per PID of each
smbddaemon to the Samba server. This report includes the user name, primary group, SMB protocol version, encryption, and signing information. - Connections per Samba share. This report includes the PID of the
smbddaemon, the IP of the connecting machine, the time stamp when the connection was established, encryption, and signing information. - A list of locked files. The report entries include further details, such as opportunistic lock (oplock) types
Example 16.13. Output of the smbstatus Utility
~]# smbstatus

Samba version 4.6.2
PID  Username              Group                Machine                           Protocol Version  Encryption  Signing
-----------------------------------------------------------------------------------------------------------------------------
963  DOMAIN\administrator  DOMAIN\domain users  client-pc (ipv4:192.0.2.1:57786)  SMB3_02           -           AES-128-CMAC

Service  pid  Machine    Connected at                 Encryption  Signing
-------------------------------------------------------------------------------
example  969  192.0.2.1  Mo Sep 1 10:00:00 2017 CEST  -           AES-128-CMAC

Locked files:
Pid  Uid    DenyMode    Access    R/W     Oplock      SharePath           Name      Time
------------------------------------------------------------------------------------------------------------
969  10000  DENY_WRITE  0x120089  RDONLY  LEASE(RWH)  /srv/samba/example  file.txt  Mon Sep 1 10:00:00 2017
16.1.10.9. Using the smbtar Utility
The smbtar utility backs up the content of an SMB share or a subdirectory of it and stores the content in a tar archive. Alternatively, you can write the content to a tape device.
For example, back up the content of the demo directory on the //server/example/ share and store the content in the /root/example.tar archive:
~]# smbtar -s server -x example -u user_name -p password -t /root/example.tar
16.1.10.10. Using the testparm Utility
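The testparm utility verifies that the /etc/samba/smb.conf file is syntactically correct and reports the resulting service definitions, as used earlier in Procedure 16.21:
~]# testparm
It is good practice to run testparm after every change to the /etc/samba/smb.conf file and before reloading the Samba configuration.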
16.1.10.11. Using the wbinfo Utility
The wbinfo utility queries and returns information created and used by the winbindd service.
Note
The winbindd service must be configured and running to use wbinfo.
You can use wbinfo, for example, to:
- List domain users:
~]# wbinfo -u
AD\administrator
AD\guest
...
- List domain groups:
~]# wbinfo -g
AD\domain computers
AD\domain admins
AD\domain users
...
- Display the SID of a user:
~]# wbinfo --name-to-sid="AD\administrator"
S-1-5-21-1762709870-351891212-3141221786-500 SID_USER (1)
- Display information about domains and trusts:
~]# wbinfo --trusted-domains --verbose
Domain Name  DNS Domain           Trust Type  Transitive  In   Out
BUILTIN                           None        Yes         Yes  Yes
server                            None        Yes         Yes  Yes
DOMAIN1      domain1.example.com  None        Yes         Yes  Yes
DOMAIN2      domain2.example.com  External    No          Yes  Yes
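A lookup in the opposite direction is also possible: a SID can be resolved back to a name. A short example, reusing the SID from the previous output (the trailing 1 indicates a user account):
~]# wbinfo --sid-to-name=S-1-5-21-1762709870-351891212-3141221786-500
AD\administrator 1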
16.1.11. Additional Resources
- The Red Hat Samba packages include manual pages for all Samba commands and configuration files the package installs. For example, to display the man page of the
/etc/samba/smb.conf file that explains all configuration parameters you can set in this file:
~]# man 5 smb.conf
/usr/share/docs/samba-version/: Contains general documentation, example scripts, and LDAP schema files, provided by the Samba project.
- Red Hat Gluster Storage Administration Guide: Provides information about setting up Samba and the Clustered Trivial Database (CTDB) to share directories stored on a GlusterFS volume.
- The An active/active Samba Server in a Red Hat High Availability Cluster chapter in the Red Hat Enterprise Linux High Availability Add-on Administration guide describes how to set up a Samba high-availability installation.
- For details about mounting an SMB share on Red Hat Enterprise Linux, see the corresponding section in the Red Hat Storage Administration Guide.
16.2. FTP
The File Transfer Protocol (FTP) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly in to the remote host or to have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands.
This section outlines the basics of the FTP protocol and introduces vsftpd, which is the preferred FTP server in Red Hat Enterprise Linux.
16.2.1. The File Transfer Protocol
FTP uses a client-server architecture to transfer files over the TCP network protocol. Because FTP is a rather old protocol, it uses unencrypted user name and password authentication. For this reason, it is considered an insecure protocol and should not be used unless absolutely necessary. However, because FTP is so prevalent on the Internet, it is often required for sharing files to the public. System administrators, therefore, should be aware of FTP's unique characteristics.
Later sections describe how to encrypt FTP connections using TLS and how to secure an FTP server with the help of SELinux. A good substitute for FTP is sftp from the OpenSSH suite of tools. For information about configuring OpenSSH and about the SSH protocol in general, refer to Chapter 12, OpenSSH.
FTP requires multiple network ports to work properly. When an FTP client application initiates a connection to an FTP server, it opens port 21 on the server — known as the command port. This port is used to issue all commands to the server. Any data requested from the server is returned to the client via a data port. The port number for data connections, and the way in which data connections are initialized, vary depending upon whether the client requests the data in active or passive mode.
- active mode
- Active mode is the original method used by the
FTPprotocol for transferring data to the client application. When an active-mode data transfer is initiated by theFTPclient, the server opens a connection from port20on the server to theIPaddress and a random, unprivileged port (greater than1024) specified by the client. This arrangement means that the client machine must be allowed to accept connections over any port above1024. With the growth of insecure networks, such as the Internet, the use of firewalls for protecting client machines is now prevalent. Because these client-side firewalls often deny incoming connections from active-modeFTPservers, passive mode was devised. - passive mode
- Passive mode, like active mode, is initiated by the
FTPclient application. When requesting data from the server, theFTPclient indicates it wants to access the data in passive mode and the server provides theIPaddress and a random, unprivileged port (greater than1024) on the server. The client then connects to that port on the server to download the requested information.While passive mode does resolve issues for client-side firewall interference with data connections, it can complicate administration of the server-side firewall. You can reduce the number of open ports on a server by limiting the range of unprivileged ports on theFTPserver. This also simplifies the process of configuring firewall rules for the server.
16.2.2. The vsftpd Server
The Very Secure FTP Daemon (vsftpd) is designed from the ground up to be fast, stable, and, most importantly, secure. vsftpd is the only stand-alone FTP server distributed with Red Hat Enterprise Linux, due to its ability to handle large numbers of connections efficiently and securely.
The security model of vsftpd has three primary aspects:
- Strong separation of privileged and non-privileged processes — Separate processes handle different tasks, and each of these processes runs with the minimal privileges required for the task.
- Tasks requiring elevated privileges are handled by processes with the minimal privilege necessary — By taking advantage of capabilities found in the
libcaplibrary, tasks that usually require full root privileges can be executed more safely from a less privileged process. - Most processes run in a
chrootjail — Whenever possible, processes are change-rooted to the directory being shared; this directory is then considered achrootjail. For example, if the/var/ftp/directory is the primary shared directory,vsftpdreassigns/var/ftp/to the new root directory, known as/. This prevents potentially malicious activity in any directories not contained in the new root directory.
These security practices have the following effect on how vsftpd deals with requests:
- The parent process runs with the least privileges required — The parent process dynamically calculates the level of privileges it requires to minimize the level of risk. Child processes handle direct interaction with the
FTPclients and run with as close to no privileges as possible. - All operations requiring elevated privileges are handled by a small parent process — Much like the Apache
HTTPServer,vsftpdlaunches unprivileged child processes to handle incoming connections. This allows the privileged, parent process to be as small as possible and handle relatively few tasks. - All requests from unprivileged child processes are distrusted by the parent process — Communication with child processes is received over a socket, and the validity of any information from child processes is checked before being acted on.
- Most interactions with
FTPclients are handled by unprivileged child processes in achrootjail — Because these child processes are unprivileged and only have access to the directory being shared, any crashed processes only allow the attacker access to the shared files.
16.2.2.1. Starting and Stopping vsftpd
To start the vsftpd service in the current session, type the following at a shell prompt as root:
~]# systemctl start vsftpd.service
To stop the service in the current session, type as root:
~]# systemctl stop vsftpd.service
To restart the vsftpd service, run the following command as root:
~]# systemctl restart vsftpd.service
This command stops and immediately starts the vsftpd service, which is the most efficient way to make configuration changes take effect after editing the configuration file for this FTP server. Alternatively, you can use the following command to restart the vsftpd service only if it is already running:
~]# systemctl try-restart vsftpd.service
By default, the vsftpd service does not start automatically at boot time. To configure the vsftpd service to start at boot time, type the following at a shell prompt as root:
~]# systemctl enable vsftpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
16.2.2.2. Starting Multiple Copies of vsftpd
Sometimes, one computer is used to serve multiple FTP domains. This is a technique called multihoming. One way to multihome using vsftpd is by running multiple copies of the daemon, each with its own configuration file.
To do this, first assign multiple IP addresses to network devices or alias network devices on the system. For more information about configuring network devices, device aliases, and additional information about network configuration scripts, see the Red Hat Enterprise Linux 7 Networking Guide.
Additionally, the DNS entries for the FTP domains must be configured to reference the correct machine. For information about BIND, the DNS protocol implementation used in Red Hat Enterprise Linux, and its configuration files, see the Red Hat Enterprise Linux 7 Networking Guide.
For vsftpd to answer requests on different IP addresses, multiple copies of the daemon must be running. To facilitate launching multiple instances of the vsftpd daemon, a special systemd service unit (vsftpd@.service) for launching vsftpd as an instantiated service is supplied in the vsftpd package.
A separate vsftpd configuration file for each required instance of the FTP server must be created and placed in the /etc/vsftpd/ directory. Note that each of these configuration files must have a unique name (such as /etc/vsftpd/vsftpd-site-2.conf) and must be readable and writable only by the root user.
In the configuration file for each FTP server listening on an IPv4 network, the following directive must be unique:
listen_address=N.N.N.N
Replace N.N.N.N with a unique IP address for the FTP site being served. If the site is using IPv6, use the listen_address6 directive instead.
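For illustration, a complete configuration for a second instance might look like the following minimal sketch; the file name, IP address, and directory are hypothetical:
# /etc/vsftpd/vsftpd-site-2.conf
listen=YES
listen_address=192.0.2.2
anon_root=/srv/ftp/site-2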
Once the configuration files are present in the /etc/vsftpd/ directory, individual instances of the vsftpd daemon can be started by executing the following command as root:
~]# systemctl start vsftpd@configuration-file-name.service
Replace configuration-file-name with the required name of the configuration file, such as vsftpd-site-2. Note that the configuration file's .conf extension should not be included in the command.
If you want to start several instances of the vsftpd daemon at once, you can make use of a systemd target unit file (vsftpd.target), which is supplied in the vsftpd package. This systemd target causes an independent vsftpd daemon to be launched for each available vsftpd configuration file in the /etc/vsftpd/ directory. Execute the following command as root to enable the target:
~]# systemctl enable vsftpd.target
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.target to /usr/lib/systemd/system/vsftpd.target.
This command configures the systemd service manager to launch the vsftpd service (along with the configured vsftpd server instances) at boot time. To start the service immediately, without rebooting the system, execute the following command as root:
~]# systemctl start vsftpd.target
Configuration directives that you may want to set differently for each instance include, for example, anon_root, local_root, vsftpd_log_file, and xferlog_file.
16.2.2.3. Encrypting vsftpd Connections Using TLS
To counter the inherently insecure nature of FTP, which transmits user names, passwords, and data without encryption by default, the vsftpd daemon can be configured to utilize the TLS protocol to authenticate connections and encrypt all transfers. Note that an FTP client that supports TLS is needed to communicate with vsftpd with TLS enabled.
Note
SSL (Secure Sockets Layer) is the name of an older implementation of the security protocol. The new versions are called TLS (Transport Layer Security). Only the newer versions (TLS) should be used as SSL suffers from serious security vulnerabilities. The documentation included with the vsftpd server, as well as the configuration directives used in the vsftpd.conf file, use the SSL name when referring to security-related matters, but TLS is supported and used by default when the ssl_enable directive is set to YES.
Set the ssl_enable configuration directive in the vsftpd.conf file to YES to turn on TLS support. The default settings of other TLS-related directives that become automatically active when the ssl_enable option is enabled provide for a reasonably well-configured TLS setup. This includes, among other things, the requirement to only use the TLS v1 protocol for all connections (the use of the insecure SSL protocol versions is disabled by default) or forcing all non-anonymous logins to use TLS for sending passwords and data transfers.
Example 16.14. Configuring vsftpd to Use TLS
Enable TLS and disable the insecure SSL versions of the security protocol in the vsftpd.conf file:
ssl_enable=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
Restart the vsftpd service after you modify its configuration:
~]# systemctl restart vsftpd.service
See the vsftpd.conf(5) manual page for other TLS-related configuration directives for fine-tuning the use of TLS by vsftpd.
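In addition to the directives shown above, you can make the TLS requirement for local users explicit. A brief sketch using directives documented in vsftpd.conf(5); both already default to YES once ssl_enable is set:
force_local_logins_ssl=YES
force_local_data_ssl=YES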
16.2.2.4. SELinux Policy for vsftpd
The SELinux policy governing the vsftpd daemon (as well as other ftpd processes) defines a mandatory access control, which, by default, is based on least access required. In order to allow the FTP daemon to access specific files or directories, appropriate labels need to be assigned to them.
For example, in order to share files anonymously, the public_content_t label must be assigned to the files and directories to be shared. You can do this using the chcon command as root:
~]# chcon -R -t public_content_t /path/to/directory
To set up vsftpd to allow writing to files and directories, use the public_content_rw_t label. In addition to that, the allow_ftpd_anon_write SELinux Boolean option must be set to 1. Use the setsebool command as root to do that:
~]# setsebool -P allow_ftpd_anon_write=1
If you want local users to be able to access their home directories through FTP, which is the default setting on Red Hat Enterprise Linux 7, the ftp_home_dir Boolean option needs to be set to 1. If vsftpd is to be allowed to run in standalone mode, which is also enabled by default on Red Hat Enterprise Linux 7, the ftpd_is_daemon option needs to be set to 1 as well.
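Following the same pattern as above, these Booleans can be set persistently with setsebool. For example, to keep home directory access over FTP enabled:
~]# setsebool -P ftp_home_dir=1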
See the ftpd_selinux(8) manual page for more information, including examples of other useful labels and Boolean options related to FTP. Also, see the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for more detailed information about SELinux in general.
16.2.3. Additional Resources
For more information about vsftpd, see the following resources.
16.2.3.1. Installed Documentation
- The
/usr/share/doc/vsftpd-version-number/directory — Replace version-number with the installed version of the vsftpd package. This directory contains aREADMEfile with basic information about the software. TheTUNINGfile contains basic performance-tuning tips and theSECURITY/directory contains information about the security model employed byvsftpd. vsftpd-related manual pages — There are a number of manual pages for the daemon and the configuration files. The following lists some of the more important manual pages.- Server Applications
- vsftpd(8) — Describes available command-line options for
vsftpd.
- Configuration Files
- vsftpd.conf(5) — Contains a detailed list of options available within the configuration file for
vsftpd. - hosts_access(5) — Describes the format and options available within the
TCPwrappers configuration files:hosts.allowandhosts.deny.
- Interaction with SELinux
- ftpd_selinux(8) — Contains a description of the SELinux policy governing
ftpdprocesses as well as an explanation of the way SELinux labels need to be assigned and Booleans set.
16.2.3.2. Online Documentation
- About vsftpd and FTP in General
- http://vsftpd.beasts.org/ — The
vsftpdproject page is a great place to locate the latest documentation and to contact the author of the software. - http://slacksite.com/other/ftp.html — This website provides a concise explanation of the differences between active and passive-mode
FTP.
- Red Hat Enterprise Linux Documentation
- Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces, networks, and network services in this system. It provides an introduction to the
hostnamectlutility and explains how to use it to view and set host names on the command line, both locally and remotely. - Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide — The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services such as the Apache HTTP Server, Postfix, PostgreSQL, or OpenShift. It explains how to configure SELinux access permissions for system services managed by
systemd. - Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7 assists users and administrators in learning the processes and practices of securing their workstations and servers against local and remote intrusion, exploitation, and malicious activity. It also explains how to secure critical system services.
- Relevant RFC Documents
16.3. Print Settings
Important
The cupsd.conf man page documents configuration of a CUPS server. It includes directives for enabling SSL support. However, CUPS does not allow control of the protocol versions used. Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings, Red Hat recommends that you do not rely on this for security. It is recommended that you use stunnel to provide a secure tunnel and disable SSLv3. For more information on using stunnel, see the Red Hat Enterprise Linux 7 Security Guide.
SSH as described in Section 12.4.1, “X11 Forwarding”.
Note
16.3.1. Starting the Print Settings Configuration Tool
You can start the Print Settings tool by typing the command system-config-printer at a shell prompt. The Print Settings tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type Print Settings and then press Enter. The Print Settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.

Figure 16.1. Print Settings window
16.3.2. Starting Printer Setup
root user password. Local printers connected with other port types and network printers need to be set up manually.
- Start the Print Settings tool (refer to Section 16.3.1, “Starting the Print Settings Configuration Tool”).
- Go to → → .
- In the Authenticate dialog box, enter an administrator or
root user password. If this is the first time you have configured a remote printer, you will be prompted to authorize an adjustment to the firewall.
- Select the printer connection type and provide its details in the area on the right.
16.3.3. Adding a Local Printer
- Open the Add printer dialog (refer to Section 16.3.2, “Starting Printer Setup”).
- If the device does not appear automatically, select the port to which the printer is connected in the list on the left (such as Serial Port #1 or LPT #1).
- On the right, enter the connection properties:
- for Other
- URI (for example file:/dev/lp0)
- for Serial Port
- Baud RateParityData BitsFlow Control

Figure 16.2. Adding a local printer
- Click .
- Select the printer model. See Section 16.3.8, “Selecting the Printer Model and Finishing” for details.
16.3.4. Adding an AppSocket/HP JetDirect printer
- Open the
New Printerdialog (refer to Section 16.3.1, “Starting the Print Settings Configuration Tool”). - In the list on the left, select → .
- On the right, enter the connection settings:
- Hostname
- Printer host name or
IPaddress. - Port Number
- Printer port listening for print jobs (
9100by default).

Figure 16.3. Adding a JetDirect printer
- Click .
- Select the printer model. See Section 16.3.8, “Selecting the Printer Model and Finishing” for details.
16.3.5. Adding an IPP Printer
An IPP printer is a printer attached to a different system on the same TCP/IP network. The system this printer is attached to may either be running CUPS or simply configured to use IPP.
If a firewall is enabled on the printer server, then the firewall must be configured to allow incoming TCP connections on port 631. Note that the CUPS browsing protocol allows client machines to discover shared CUPS queues automatically. To enable this, the firewall on the client machine must be configured to allow incoming UDP packets on port 631.
To add an IPP printer:
- Open the
New Printerdialog (refer to Section 16.3.2, “Starting Printer Setup”). - In the list of devices on the left, select and or .
- On the right, enter the connection settings:
- Host
- The host name of the
IPPprinter. - Queue
- The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used).

Figure 16.4. Adding an IPP printer
- Click to continue.
- Select the printer model. See Section 16.3.8, “Selecting the Printer Model and Finishing” for details.
16.3.6. Adding an LPD/LPR Host or Printer
- Open the
New Printerdialog (refer to Section 16.3.2, “Starting Printer Setup”). - In the list of devices on the left, select → .
- On the right, enter the connection settings:
- Host
- The host name of the LPD/LPR printer or host.Optionally, click to find queues on the LPD host.
- Queue
- The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used).

Figure 16.5. Adding an LPD/LPR printer
- Click to continue.
- Select the printer model. See Section 16.3.8, “Selecting the Printer Model and Finishing” for details.
16.3.7. Adding a Samba (SMB) printer
Note
In order to add a Samba printer, you need to have the samba-client package installed. You can do so by running, as root:
~]# yum install samba-client
- Open the
New Printerdialog (refer to Section 16.3.2, “Starting Printer Setup”). - In the list on the left, select → .
- Enter the SMB address in the smb:// field. Use the format computer name/printer share. In Figure 16.6, “Adding a SMB printer”, the computer name is
dellboxand the printer share isr2.
Figure 16.6. Adding a SMB printer
- Click to see the available workgroups/domains. To display only queues of a particular host, type in the host name (NetBIOS name) and click .
- Select either of the options:
- Prompt user if authentication is required: user name and password are collected from the user when printing a document.
- Set authentication details now: provide authentication information now so it is not required later. In the Username field, enter the user name to access the printer. This user must exist on the SMB system, and the user must have permission to access the printer. The default user name is typically
guestfor Windows servers, ornobodyfor Samba servers.
- Enter the Password (if required) for the user specified in the Username field.
Warning
Samba printer user names and passwords are stored in the printer server as unencrypted files readable by root and the Linux Printing Daemon, lpd. Thus, other users that have root access to the printer server can view the user name and password you use to access the Samba printer.
Therefore, when you choose a user name and password to access a Samba printer, it is advisable that you choose a password that is different from what you use to access your local Red Hat Enterprise Linux system.
If there are files shared on the Samba print server, it is recommended that they also use a password different from what is used by the print queue.
- Click to test the connection. Upon successful verification, a dialog box appears confirming printer share accessibility.
- Click .
- Select the printer model. See Section 16.3.8, “Selecting the Printer Model and Finishing” for details.
16.3.8. Selecting the Printer Model and Finishing
- In the window displayed after the automatic driver detection has failed, select one of the following options:
- Select a Printer from database — the system chooses a driver based on the selected make of your printer from the list of Makes. If your printer model is not listed, choose Generic.
- Provide PPD file — the system uses the provided PostScript Printer Description (PPD) file for installation. A PPD file is normally provided by the printer manufacturer and may be delivered with your printer. If the PPD file is available, you can choose this option and use the browser bar below the option description to select the PPD file.
- Search for a printer driver to download — enter the make and model of your printer into the Make and model field to search on OpenPrinting.org for the appropriate packages.

Figure 16.7. Selecting a printer brand
- Depending on your previous choice provide details in the area displayed below:
- Printer brand for the Select printer from database option.
- PPD file location for the Provide PPD file option.
- Printer make and model for the Search for a printer driver to download option.
- Click to continue.
- If applicable for your option, the window shown in Figure 16.8, “Selecting a printer model” appears. Choose the corresponding model in the Models column on the left.
Note
On the right, the recommended printer driver is automatically selected; however, you can select another available driver. The print driver processes the data that you want to print into a format the printer can understand. Since a local printer is attached directly to your computer, you need a printer driver to process the data that is sent to the printer.
Figure 16.8. Selecting a printer model
- Click .
- Under the
Describe Printer step, enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. You can also use the Description and Location fields to add further printer information. Both fields are optional, and may contain spaces.
Figure 16.9. Printer setup
- Click to confirm your printer configuration and add the print queue if the settings are correct. Click to modify the printer configuration.
- After the changes are applied, a dialog box appears allowing you to print a test page. Click to print a test page now. Alternatively, you can print a test page later as described in Section 16.3.9, “Printing a Test Page”.
16.3.9. Printing a Test Page
- Right-click the printer in the Printing window and click .
- In the Properties window, click Settings on the left.
- On the displayed Settings tab, click the button.
16.3.10. Modifying Existing Printers
16.3.10.1. The Settings Page

Figure 16.10. Settings page
16.3.10.2. The Policies Page
16.3.10.2.1. Sharing Printers

Figure 16.11. Policies page
To share a printer, the firewall on the print server must allow incoming TCP connections to port 631, the port for the Network Printing Server (IPP) protocol. To allow IPP traffic through the firewall on Red Hat Enterprise Linux 7, make use of firewalld's IPP service. To do so, proceed as follows:
Procedure 16.24. Enabling IPP Service in firewalld
- To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall and then press Enter. The Firewall Configuration window opens. You will be prompted for an administrator or root password. Alternatively, to start the graphical firewall configuration tool using the command line, enter the following command as root user:
~]# firewall-config
The Firewall Configuration window opens. Look for the word “Connected” in the lower left corner. This indicates that the firewall-config tool is connected to the user space daemon, firewalld. To immediately change the current firewall settings, ensure the drop-down selection menu labeled Configuration is set to . Alternatively, to edit the settings to be applied at the next system start, or firewall reload, select from the drop-down list.
- Select the Services tab and then select the service to enable sharing. The service is required for accessing network printers.
- Close the firewall-config tool.
firewalld, see the Red Hat Enterprise Linux 7 Security Guide.
16.3.10.2.2. The Access Control Page

Figure 16.12. Access Control page
16.3.10.2.3. The Printer Options Page

Figure 16.13. Printer Options page
16.3.10.2.4. Job Options Page

Figure 16.14. Job Options page
16.3.10.2.5. Ink/Toner Levels Page

Figure 16.15. Ink/Toner Levels page
16.3.10.3. Managing Print Jobs

Figure 16.16. GNOME Print Status
To view the list of print jobs in the print queue, type the command lpstat -o at a shell prompt. The last few lines look similar to the following:
Example 16.15. Example of lpstat -o output
$ lpstat -o
Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT
Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT
Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT
If you want to cancel a print job, find the job number of the request with the command lpstat -o and then use the command cancel job number. For example, cancel 60 would cancel the print job in Example 16.15, “Example of lpstat -o output”. You cannot cancel print jobs that were started by other users with the cancel command. However, you can enforce deletion of such a job by issuing the cancel -U root job_number command. To prevent such canceling, change the printer operation policy to Authenticated to force root authentication.
You can also print a file directly from a shell prompt. For example, the command lp sample.txt prints the text file sample.txt. The print filter determines what type of file it is and converts it into a format the printer can understand.
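To print to a specific queue instead of the default printer, pass the queue name to lp with the -d option; the queue name below is hypothetical:
~]$ lp -d office-laser sample.txt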
16.3.11. Additional Resources
Installed Documentation
lp(1)— The manual page for thelpcommand that allows you to print files from the command line.lpr(1)— The manual page for thelprcommand that allows you to print files from the command line.cancel(1)— The manual page for the command-line utility to remove print jobs from the print queue.mpage(1)— The manual page for the command-line utility to print multiple pages on one sheet of paper.cupsd(8)— The manual page for the CUPS printer daemon.cupsd.conf(5)— The manual page for the CUPS printer daemon configuration file.classes.conf(5)— The manual page for the class configuration file for CUPS.lpstat(1)— The manual page for thelpstatcommand, which displays status information about classes, jobs, and printers.
Online Documentation
- http://www.linuxprinting.org/ — The OpenPrinting group on the Linux Foundation website contains a large amount of information about printing in Linux.
- http://www.cups.org/ — The CUPS website provides documentation, FAQs, and newsgroups about CUPS.
Chapter 17. Configuring NTP Using the chrony Suite
The NTP protocol is implemented by a daemon running in user space.
There are two choices of NTP daemon, ntpd and chronyd, available from the repositories in the ntp and chrony packages respectively.
17.1. Introduction to the chrony Suite
- to synchronize the system clock with
NTPservers, - to synchronize the system clock with a reference clock, for example a GPS receiver,
- to synchronize the system clock with a manual time input,
- as an
NTPv4(RFC 5905)server or peer to provide a time service to other computers in the network.
The chrony suite consists of chronyd, a daemon that runs in user space, and chronyc, a command line program which can be used to monitor the performance of chronyd and to change various operating parameters when it is running.
17.1.1. Differences Between ntpd and chronyd
Things chronyd can do better than ntpd:
chronydcan work well in an environment where access to the time reference is intermittent, whereasntpdneeds regular polling of time reference to work well.chronydcan perform well even when the network is congested for longer periods of time.chronydcan usually synchronize the clock faster and with better accuracy.chronydquickly adapts to sudden changes in the rate of the clock, for example, due to changes in the temperature of the crystal oscillator, whereasntpdmay need a long time to settle down again.- In the default configuration,
chronydnever steps the time after the clock has been synchronized at system start, in order not to upset other running programs.ntpdcan be configured to never step the time too, but it has to use a different means of adjusting the clock, which has some disadvantages including negative effect on accuracy of the clock. chronydcan adjust the rate of the clock on a Linux system in a larger range, which allows it to operate even on machines with a broken or unstable clock. For example, on some virtual machines.chronydis smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving.
Things chronyd can do that ntpd cannot do:
chronydprovides support for isolated networks where the only method of time correction is manual entry. For example, by the administrator looking at a clock.chronydcan examine the errors corrected at different updates to estimate the rate at which the computer gains or loses time, and use this estimate to adjust the computer clock subsequently.chronydprovides support to work out the rate of gain or loss of the real-time clock, for example the clock that maintains the time when the computer is turned off. It can use this data when the system boots to set the system time using an adapted value of time taken from the real-time clock. These real-time clock facilities are currently only available on Linux systems.chronydsupports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks.
Things ntpd can do that chronyd cannot do:
ntpdsupports all operating modes fromNTPversion 4 (RFC 5905), including broadcast, multicast and manycast clients and servers. Note that the broadcast and multicast modes are, even with authentication, inherently less accurate and less secure than the ordinary server and client mode, and should generally be avoided.ntpdsupports the Autokey protocol (RFC 5906) to authenticate servers with public-key cryptography. Note that the protocol has proven to be insecure and will be probably replaced with an implementation of the Network Time Security (NTS) specification.ntpdincludes drivers for many reference clocks, whereaschronydrelies on other programs, for example gpsd, to access the data from the reference clocks using shared memory (SHM) or Unix domain socket (SOCK).
17.1.2. Choosing Between NTP Daemons
Note
The Autokey protocol can only be used with ntpd, because chronyd does not support this protocol. The Autokey protocol has serious security issues, and thus using this protocol should be avoided. Instead of Autokey, use authentication with symmetric keys, which is supported by both chronyd and ntpd. Chrony supports stronger hash functions like SHA256 and SHA512, while ntpd can use only MD5 and SHA1.
17.2. Understanding chrony and Its Configuration
17.2.1. Understanding chronyd
The chrony daemon, chronyd, can be monitored and controlled by the command line utility chronyc. This utility provides a command prompt which allows entering a number of commands to query the current state of chronyd and make changes to its configuration. By default, chronyd accepts only commands from a local instance of chronyc, but it can be configured to accept monitoring commands also from remote hosts. The remote access should be restricted.
17.2.2. Understanding chronyc
As noted above, chronyc is the utility used to control and monitor chronyd. Commands can be entered interactively at its command prompt or passed directly on the command line, and chronyc can connect to a local or, if configured, a remote chronyd instance. The remote access should be restricted.
17.2.3. Understanding the chrony Configuration Commands
The default configuration file for chronyd is /etc/chrony.conf. The -f option can be used to specify an alternate configuration file path. See the chronyd man page for further options. For a complete list of the directives that can be used see http://chrony.tuxfamily.org/manual.html#Configuration-file.
Below is a selection of chronyd configuration options:
- Comments
- Comments should be preceded by #, %, ; or !
- allow
- Optionally specify a host, subnet, or network from which to allow
NTPconnections to a machine acting asNTPserver. The default is not to allow connections.Examples:
allow 192.0.2.0/24
Use this command to grant access to a specific network.
allow 2001:0db8:85a3::8a2e:0370:7334
Use this command to grant access to an IPv6 address.
- The UDP port number 123 needs to be open in the firewall in order to allow the client access:
~]# firewall-cmd --zone=public --add-port=123/udp
If you want to open port 123 permanently, use the --permanent option:
~]# firewall-cmd --permanent --zone=public --add-port=123/udp
- This is similar to the
allowdirective (see sectionallow), except that it allows control access (rather thanNTPclient access) to a particular subnet or host. (By “control access” is meant that chronyc can be run on those hosts and successfully connect tochronydon this computer.) The syntax is identical. There is also acmddeny alldirective with similar behavior to thecmdallow alldirective. - dumpdir
- Path to the directory to save the measurement history across restarts of
chronyd(assuming no changes are made to the system clock behavior whilst it is not running). If this capability is to be used (via thedumponexitcommand in the configuration file, or thedumpcommand in chronyc), thedumpdircommand should be used to define the directory where the measurement histories are saved. - dumponexit
- If this command is present, it indicates that
chronydshould save the measurement history for each of its time sources recorded whenever the program exits. (See thedumpdircommand above). - hwtimestamp
- The
hwtimestampdirective enables hardware timestamping for extremely accurate synchronization. For more details, seechrony.conf(5)manual page. - local
- The
localkeyword is used to allowchronydto appear synchronized to real time from the viewpoint of clients polling it, even if it has no current synchronization source. This option is normally used on the “master” computer in an isolated network, where several computers are required to synchronize to one another, and the “master” is kept in line with real time by manual input.An example of the command is:local stratum 10
A large value of 10 indicates that the clock is so many hops away from a reference clock that its time is unreliable. If the computer ever has access to another computer which is ultimately synchronized to a reference clock, it will almost certainly be at a stratum less than 10. Therefore, the choice of a high value like 10 for thelocalcommand prevents the machine’s own time from ever being confused with real time, were it ever to leak out to clients that have visibility of real servers. - log
- The
logcommand indicates that certain information is to be logged. It accepts the following options:The log files are written to the directory specified by the- measurements
- This option logs the raw
NTPmeasurements and related information to a file calledmeasurements.log. - statistics
- This option logs information about the regression processing to a file called
statistics.log. - tracking
- This option logs changes to the estimate of the system’s gain or loss rate, and any slews made, to a file called
tracking.log. - rtc
- This option logs information about the system’s real-time clock.
- refclocks
- This option logs the raw and filtered reference clock measurements to a file called
refclocks.log. - tempcomp
- This option logs the temperature measurements and system rate compensations to a file called
tempcomp.log.
logdircommand. An example of the command is:log measurements statistics tracking
- logdir
- This directive allows the directory where log files are written to be specified. An example of the use of this directive is:
logdir /var/log/chrony
- makestep
- Normally
chronyd will cause the system to gradually correct any time offset, by slowing down or speeding up the clock as required. In certain situations, the system clock may be so far adrift that this slewing process would take a very long time to correct the system clock. This directive forces chronyd to step the system clock if the adjustment is larger than a threshold value, but only if there were no more clock updates since chronyd was started than a specified limit (a negative value can be used to disable the limit). This is particularly useful when using a reference clock, because the initstepslew directive only works with NTP sources. An example of the use of this directive is:
makestep 1000 10
This would step the system clock if the adjustment is larger than 1000 seconds, but only in the first ten clock updates. - maxchange
- This directive sets the maximum allowed offset corrected on a clock update. The check is performed only after the specified number of updates to allow a large initial adjustment of the system clock. When an offset larger than the specified maximum occurs, it will be ignored for the specified number of times and then
chronydwill give up and exit (a negative value can be used to never exit). In both cases a message is sent to syslog.An example of the use of this directive is:maxchange 1000 1 2
After the first clock update,chronydwill check the offset on every clock update, it will ignore two adjustments larger than 1000 seconds and exit on another one. - maxupdateskew
- One of
chronyd's tasks is to work out how fast or slow the computer’s clock runs relative to its reference sources. In addition, it computes an estimate of the error bounds around the estimated value. If the range of error is too large, it indicates that the measurements have not settled down yet, and that the estimated gain or loss rate is not very reliable. Themaxupdateskewparameter is the threshold for determining whether an estimate is too unreliable to be used. By default, the threshold is 1000 ppm. The format of the syntax is:maxupdateskew skew-in-ppm
Typical values for skew-in-ppm might be 100 for a dial-up connection to servers over a telephone line, and 5 or 10 for a computer on a LAN. It should be noted that this is not the only means of protection against using unreliable estimates. At all times,chronydkeeps track of both the estimated gain or loss rate, and the error bound on the estimate. When a new estimate is generated following another measurement from one of the sources, a weighted combination algorithm is used to update the master estimate. So ifchronydhas an existing highly-reliable master estimate and a new estimate is generated which has large error bounds, the existing master estimate will dominate in the new master estimate. - minsources
- The
minsourcesdirective sets the minimum number of sources that need to be considered as selectable in the source selection algorithm before the local clock is updated.The format of the syntax is:minsources number-of-sources
By default, number-of-sources is 1. Setting minsources to a larger number can be used to improve the reliability, because multiple sources will need to correspond with each other. - noclientlog
- This directive, which takes no arguments, specifies that client accesses are not to be logged. Normally they are logged, allowing statistics to be reported using the clients command in chronyc.
- reselectdist
- When
chronydselects synchronization source from available sources, it will prefer the one with minimum synchronization distance. However, to avoid frequent reselecting when there are sources with similar distance, a fixed distance is added to the distance for sources that are currently not selected. This can be set with thereselectdistoption. By default, the distance is 100 microseconds.The format of the syntax is:reselectdist dist-in-seconds
- stratumweight
- The
stratumweightdirective sets how much distance should be added per stratum to the synchronization distance whenchronydselects the synchronization source from available sources.The format of the syntax is:stratumweight dist-in-seconds
By default, dist-in-seconds is 1 millisecond. This means that sources with lower stratum are usually preferred to sources with higher stratum even when their distance is significantly worse. Settingstratumweightto 0 makeschronydignore stratum when selecting the source. - rtcfile
- The
rtcfiledirective defines the name of the file in whichchronydcan save parameters associated with tracking the accuracy of the system’s real-time clock (RTC). The format of the syntax is:rtcfile /var/lib/chrony/rtc
chronydsaves information in this file when it exits and when thewritertccommand is issued in chronyc. The information saved is the RTC’s error at some epoch, that epoch (in seconds since January 1 1970), and the rate at which the RTC gains or loses time. Not all real-time clocks are supported as their code is system-specific. Note that if this directive is used then the real-time clock should not be manually adjusted as this would interfere with chrony's need to measure the rate at which the real-time clock drifts if it was adjusted at random intervals. - rtcsync
- The
rtcsync directive is present in the /etc/chrony.conf file by default. This directive informs the kernel that the system clock is kept synchronized, and the kernel will update the real-time clock every 11 minutes.
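Putting several of the directives described above together, a minimal /etc/chrony.conf might look like the following sketch; the server name and network are placeholders, and iburst speeds up initial synchronization:
server ntp1.example.com iburst
allow 192.0.2.0/24
makestep 1000 10
logdir /var/log/chrony
log measurements statistics tracking
rtcsync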
17.2.4. Security with chronyc
chronyc can access chronyd in two ways:
- Internet Protocol (IPv4 or IPv6)
- Unix domain socket, which is accessible locally by the
rootor chrony user.
chronyc first tries to connect to the Unix domain socket, which is /var/run/chrony/chronyd.sock by default. If this connection fails, which can happen for example when chronyc is running under a non-root user, chronyc tries to connect to 127.0.0.1 and then ::1.
Only the following monitoring commands, which do not affect the behavior of chronyd, are allowed from the network:
- activity
- manual list
- rtcdata
- smoothing
- sources
- sourcestats
- tracking
- waitsync
The list of hosts from which chronyd accepts these commands can be configured with the cmdallow directive in the configuration file of chronyd, or the cmdallow command in chronyc. By default, the commands are accepted only from localhost (127.0.0.1 or ::1).
All other commands are allowed only through a chronyc connection to the Unix domain socket. When such a command is sent over the network, chronyd responds with a Not authorised error, even if it is from localhost.
Procedure 17.1. Accessing chronyd remotely with chronyc
- Allow access from both IPv4 and IPv6 addresses by adding the following to the
/etc/chrony.conffile:bindcmdaddress 0.0.0.0
or
bindcmdaddress ::
- Allow commands from the remote IP address, network, or subnet by using the
cmdallowdirective.Example 17.1.
Add the following content to the/etc/chrony.conffile:cmdallow 192.168.1.0/24
- Open port 323 in the firewall to connect from a remote system.
~]# firewall-cmd --zone=public --add-port=323/udp
If you want to open port 323 permanently, use the --permanent option:
~]# firewall-cmd --permanent --zone=public --add-port=323/udp
Note that the allow directive is for NTP access whereas the cmdallow directive is to enable receiving of remote commands. It is possible to make these changes temporarily using chronyc running locally. Edit the configuration file to make permanent changes.
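Once remote access is configured, you can verify it from another machine by pointing chronyc at the server with the -h option. A quick check, assuming the server's address is 192.0.2.10:
~]$ chronyc -h 192.0.2.10 tracking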
17.3. Using chrony
17.3.1. Installing chrony
To install chrony, issue the following command as root:
~]# yum install chrony
The default location for the chrony daemon is /usr/sbin/chronyd. The command line utility will be installed to /usr/bin/chronyc.
17.3.2. Checking the Status of chronyd
To check the status of chronyd, issue the following command:
~]$ systemctl status chronyd
chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago
17.3.3. Starting chronyd
To start chronyd, issue the following command as root:
~]# systemctl start chronyd
To ensure chronyd starts automatically at system start, issue the following command as root:
~]# systemctl enable chronyd
17.3.4. Stopping chronyd
To stop chronyd, issue the following command as root:
~]# systemctl stop chronyd
To prevent chronyd from starting automatically at system start, issue the following command as root:
~]# systemctl disable chronyd
17.3.5. Checking if chrony is Synchronized
To check if chrony is synchronized, make use of the tracking, sources, and sourcestats commands.
17.3.5.1. Checking chrony Tracking
To check chrony tracking, issue the following command:
~]$ chronyc tracking
Reference ID : CB00710F (foo.example.net)
Stratum : 3
Ref time (UTC) : Fri Jan 27 09:49:17 2017
System time : 0.000006523 seconds slow of NTP time
Last offset : -0.000006747 seconds
RMS offset : 0.000035822 seconds
Frequency : 3.225 ppm slow
Residual freq : 0.000 ppm
Skew : 0.129 ppm
Root delay : 0.013639022 seconds
Root dispersion : 0.001100737 seconds
Update interval : 64.2 seconds
Leap status : Normal
The fields are as follows:
- Reference ID
- This is the reference ID and name (or
IPaddress) if available, of the server to which the computer is currently synchronized. Reference ID is a hexadecimal number to avoid confusion with IPv4 addresses. - Stratum
- The stratum indicates how many hops away from a computer with an attached reference clock we are. Such a computer is a stratum-1 computer, so the computer in the example is two hops away (that is to say, foo.example.net is a stratum-2 server and is synchronized from a stratum-1 server).
- Ref time
- This is the time (UTC) at which the last measurement from the reference source was processed.
- System time
- In normal operation, chronyd never steps the system clock, because any jump in the timescale can have adverse consequences for certain application programs. Instead, any error in the system clock is corrected by slightly speeding up or slowing down the system clock until the error has been removed, and then returning to the system clock's normal speed. A consequence of this is that there will be a period when the system clock (as read by other programs using the gettimeofday() system call, or by the date command in the shell) will be different from chronyd's estimate of the current true time (which it reports to NTP clients when it is operating in server mode). The value reported on this line is the difference due to this effect.
- Last offset
- This is the estimated local offset on the last clock update.
- RMS offset
- This is a long-term average of the offset value.
- Frequency
- The “frequency” is the rate by which the system's clock would be wrong if chronyd was not correcting it. It is expressed in ppm (parts per million). For example, a value of 1 ppm would mean that when the system's clock thinks it has advanced 1 second, it has actually advanced by 1.000001 seconds relative to true time.
- Residual freq
- This shows the “residual frequency” for the currently selected reference source. This reflects any difference between what the measurements from the reference source indicate the frequency should be and the frequency currently being used. The reason this is not always zero is that a smoothing procedure is applied to the frequency. Each time a measurement from the reference source is obtained and a new residual frequency computed, the estimated accuracy of this residual is compared with the estimated accuracy (see skew next) of the existing frequency value. A weighted average is computed for the new frequency, with weights depending on these accuracies. If the measurements from the reference source follow a consistent trend, the residual will be driven to zero over time.
- Skew
- This is the estimated error bound on the frequency.
- Root delay
- This is the total of the network path delays to the stratum-1 computer from which the computer is ultimately synchronized. Root delay values are printed in nanosecond resolution. In certain extreme situations, this value can be negative. (This can arise in a symmetric peer arrangement where the computers’ frequencies are not tracking each other and the network delay is very short relative to the turn-around time at each computer.)
- Root dispersion
- This is the total dispersion accumulated through all the computers back to the stratum-1 computer from which the computer is ultimately synchronized. Dispersion is due to system clock resolution, statistical measurement variations etc. Root dispersion values are printed in nanosecond resolution.
- Leap status
- This is the leap status, which can be Normal, Insert second, Delete second or Not synchronized.
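To keep an eye on these values over time, the report can simply be polled periodically; for example (assuming the watch utility is installed), the following refreshes it every 10 seconds:
~]$ watch -n 10 chronyc tracking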
17.3.5.2. Checking chrony Sources
The sources command displays information about the current time sources that chronyd is accessing. The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns.
~]$ chronyc sources
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#* GPS0 0 4 377 11 -479ns[ -621ns] +/- 134ns
^? a.b.c 2 6 377 23 -923us[ -924us] +/- 43ms
^+ d.e.f 1 6 377 21 -2629us[-2619us] +/- 86ms
The columns are as follows:
- M
- This indicates the mode of the source. ^ means a server, = means a peer and # indicates a locally connected reference clock.
- S
- This column indicates the state of the sources. “*” indicates the source to which chronyd is currently synchronized. “+” indicates acceptable sources which are combined with the selected source. “-” indicates acceptable sources which are excluded by the combining algorithm. “?” indicates sources to which connectivity has been lost or whose packets do not pass all tests. “x” indicates a clock which chronyd thinks is a falseticker (its time is inconsistent with a majority of other sources). “~” indicates a source whose time appears to have too much variability. The “?” condition is also shown at start-up, until at least 3 samples have been gathered from it.
- Name/IP address
- This shows the name or the IP address of the source, or reference ID for reference clocks.
- Stratum
- This shows the stratum of the source, as reported in its most recently received sample. Stratum 1 indicates a computer with a locally attached reference clock. A computer that is synchronized to a stratum 1 computer is at stratum 2. A computer that is synchronized to a stratum 2 computer is at stratum 3, and so on.
- Poll
- This shows the rate at which the source is being polled, as a base-2 logarithm of the interval in seconds. Thus, a value of 6 would indicate that a measurement is being made every 64 seconds. chronyd automatically varies the polling rate in response to prevailing conditions.
- Reach
- This shows the source’s reach register printed as an octal number. The register has 8 bits and is updated on every received or missed packet from the source. A value of 377 indicates that a valid reply was received for all of the last eight transmissions.
- LastRx
- This column shows how long ago the last sample was received from the source. This is normally in seconds. The letters m, h, d or y indicate minutes, hours, days or years. A value of 10 years indicates there were no samples received from this source yet.
- Last sample
- This column shows the offset between the local clock and the source at the last measurement. The number in the square brackets shows the actual measured offset. This may be suffixed by ns (indicating nanoseconds), us (indicating microseconds), ms (indicating milliseconds), or s (indicating seconds). The number to the left of the square brackets shows the original measurement, adjusted to allow for any slews applied to the local clock since. The number following the +/- indicator shows the margin of error in the measurement. Positive offsets indicate that the local clock is ahead of the source.
17.3.5.3. Checking chrony Source Statistics
The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd. The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns.
~]$ chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
===============================================================================
abc.def.ghi 11 5 46m -0.001 0.045 1us 25us
The columns are as follows:
- Name/IP address
- This is the name or IP address of the NTP server (or peer) or reference ID of the reference clock to which the rest of the line relates.
- NP
- This is the number of sample points currently being retained for the server. The drift rate and current offset are estimated by performing a linear regression through these points.
- NR
- This is the number of runs of residuals having the same sign following the last regression. If this number starts to become too small relative to the number of samples, it indicates that a straight line is no longer a good fit to the data. If the number of runs is too low, chronyd discards older samples and re-runs the regression until the number of runs becomes acceptable.
- Span
- This is the interval between the oldest and newest samples. If no unit is shown the value is in seconds. In the example, the interval is 46 minutes.
- Frequency
- This is the estimated residual frequency for the server, in parts per million. In this case, the computer's clock is estimated to be running 1 part in 10⁹ slow relative to the server.
- Freq Skew
- This is the estimated error bounds on Freq (again in parts per million).
- Offset
- This is the estimated offset of the source.
- Std Dev
- This is the estimated sample standard deviation.
17.3.6. Manually Adjusting the System Clock
To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root:
~]# chronyc makestep
If the rtcfile directive is used, the real-time clock should not be manually adjusted. Random adjustments would interfere with chrony's need to measure the rate at which the real-time clock drifts.
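If occasional stepping is acceptable, the clock can instead be allowed to step automatically via the makestep directive in /etc/chrony.conf. As a sketch, the following steps the clock if its offset exceeds 1 second, but only during the first 3 clock updates:
makestep 1.0 3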
17.4. Setting Up chrony for Different Environments
17.4.1. Setting Up chrony for a System in an Isolated Network
In an isolated network that is never connected to the Internet, one computer is selected to be the master timeserver, and the other computers are either direct clients of the master, or clients of clients. The manual directive in the master's configuration allows its clock to be corrected by hand, which is done when the settime command is used.
For the master, using a text editor running as root, edit the /etc/chrony.conf file as follows:
driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys
initstepslew 10 client1 client3 client6
local stratum 8
manual
allow 192.0.2.0
Where 192.0.2.0 is the network or subnet address from which the clients are allowed to connect.
For the systems that are direct clients of the master and that are in turn to serve time to other clients, using a text editor running as root, edit the /etc/chrony.conf file as follows:
server master
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
keyfile /etc/chrony.keys
commandkey 24
local stratum 10
initstepslew 20 master
allow 192.0.2.123
Where 192.0.2.123 is the address of the master, and master is the host name of the master. Clients with this configuration will resynchronize with the master if it restarts.
On the remaining client systems, the /etc/chrony.conf file should be the same except that the local and allow directives should be omitted.
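As an illustrative sketch, such a client-only /etc/chrony.conf (with master again standing in for the master's real host name) could read:
server master
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
keyfile /etc/chrony.keys
commandkey 24
initstepslew 20 master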
An important point is the local directive, which enables a local reference mode allowing chronyd operating as an NTP server to appear synchronized to real time, even when it was never synchronized or the last update of the clock happened a long time ago.
To allow multiple servers in the network to use the same local configuration and be synchronized to one another, use the orphan option of the local directive, which enables the orphan mode. Each server needs to be configured to poll all other servers with local. This ensures that only the server with the smallest reference ID has the local reference active and other servers are synchronized to it. When that server fails, another one will take over.
17.5. Using chronyc
17.5.1. Using chronyc to Control chronyd
To make changes to the local instance of chronyd using the command line utility chronyc in interactive mode, enter the following command as root:
~]# chronyc
chronyc must run as root if some of the restricted commands are to be used.
chronyc>
You can type help to list all of the commands.
The utility can also be invoked in non-interactive command mode if called together with a command as follows:
chronyc command
Note
Changes made using chronyc are not permanent; they will be lost after a chronyd restart. For permanent changes, modify /etc/chrony.conf.
17.6. Chrony with HW timestamping
17.6.1. Understanding Hardware Timestamping
NTP timestamps are usually created by the kernel and chronyd with the use of the system clock. However, when HW timestamping is enabled, the NIC uses its own clock to generate the timestamps when packets are entering or leaving the link layer or the physical layer. When used with NTP, hardware timestamping can significantly improve the accuracy of synchronization. For best accuracy, both NTP servers and NTP clients need to use hardware timestamping. Under ideal conditions, a sub-microsecond accuracy may be possible.
Another protocol for time synchronization that uses hardware timestamping is PTP. For further information about PTP, see Chapter 19, Configuring PTP Using ptp4l. Unlike NTP, PTP relies on assistance in network switches and routers. If you want to reach the best accuracy of synchronization, use PTP on networks that have switches and routers with PTP support, and prefer NTP on networks that do not have such switches and routers.
17.6.2. Verifying Support for Hardware Timestamping
To verify that hardware timestamping with NTP is supported by an interface, use the ethtool -T command. An interface can be used for hardware timestamping with NTP if ethtool lists the SOF_TIMESTAMPING_TX_HARDWARE and SOF_TIMESTAMPING_TX_SOFTWARE capabilities and also the HWTSTAMP_FILTER_ALL filter mode.
Example 17.2. Verifying Support for Hardware Timestamping on a Specific Interface
~]# ethtool -T eth0
Timestamping parameters for eth0:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)
ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
ptpv2-l4-sync (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
ptpv2-l4-delay-req (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
ptpv2-l2-sync (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
ptpv2-l2-delay-req (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)
ptpv2-sync (HWTSTAMP_FILTER_PTP_V2_SYNC)
ptpv2-delay-req (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)
17.6.3. Enabling Hardware Timestamping
Hardware timestamping is enabled with the hwtimestamp directive in the /etc/chrony.conf file. The directive can either specify a single interface, or a wildcard character (*) can be used to enable hardware timestamping on all interfaces that support it. Use the wildcard specification only if no other application, such as ptp4l from the linuxptp package, is using hardware timestamping on an interface. Multiple hwtimestamp directives are allowed in the chrony configuration file.
Example 17.3. Enabling Hardware Timestamping by Using the hwtimestamp Directive
hwtimestamp eth0
hwtimestamp eth1
hwtimestamp *
17.6.4. Configuring Client Polling Interval
To minimize the offset of the system clock, a short polling interval is recommended. The following directive in /etc/chrony.conf specifies a local NTP server using a one-second polling interval:
server ntp.local minpoll 0 maxpoll 0
17.6.5. Enabling Interleaved Mode
NTP servers that are not hardware NTP appliances, but rather general purpose computers running a software NTP implementation, like chrony, will get a hardware transmit timestamp only after sending a packet. This behavior prevents the server from saving the timestamp in the packet to which it corresponds. To enable NTP clients to receive transmit timestamps that were generated after the transmission, configure the clients to use the NTP interleaved mode by adding the xleave option to the server directive in /etc/chrony.conf:
server ntp.local minpoll 0 maxpoll 0 xleave
17.6.6. Configuring Server for Large Number of Clients
The default server configuration allows at most a few thousand clients to use the interleaved mode concurrently. To configure the server for a larger number of clients, increase the clientloglimit directive in /etc/chrony.conf. This directive specifies the maximum size of memory allocated for logging of clients' access on the server:
clientloglimit 100000000
17.6.7. Verifying Hardware Timestamping
chronyd logs a message for each interface on which hardware timestamping was successfully enabled:
Example 17.4. Log Messages for Interfaces with Enabled Hardware Timestamping
chronyd[4081]: Enabled HW timestamping on eth0
chronyd[4081]: Enabled HW timestamping on eth1
When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command:
Example 17.5. Reporting the Transmit, Receive Timestamping and Interleaved Mode for Each NTP Source
~]# chronyc ntpdata
Remote address  : 203.0.113.15 (CB00710F)
Remote port     : 123
Local address   : 203.0.113.74 (CB00714A)
Leap status     : Normal
Version         : 4
Mode            : Server
Stratum         : 1
Poll interval   : 0 (1 seconds)
Precision       : -24 (0.000000060 seconds)
Root delay      : 0.000015 seconds
Root dispersion : 0.000015 seconds
Reference ID    : 47505300 (GPS)
Reference time  : Wed May 03 13:47:45 2017
Offset          : -0.000000134 seconds
Peer delay      : 0.000005396 seconds
Peer dispersion : 0.000002329 seconds
Response time   : 0.000152073 seconds
Jitter asymmetry: +0.00
NTP tests       : 111 111 1111
Interleaved     : Yes
Authenticated   : No
TX timestamping : Hardware
RX timestamping : Hardware
Total TX        : 27
Total RX        : 27
Total valid RX  : 27
Example 17.6. Reporting the Stability of NTP Measurements
With hardware timestamping, the stability of NTP measurements should be in tens or hundreds of nanoseconds, under normal load. This stability is reported in the Std Dev column of the output of the chronyc sourcestats command:
~]# chronyc sourcestats
210 Number of sources = 1
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
ntp.local                  12   7    11     +0.000      0.019     +0ns     49ns
17.6.8. Configuring PTP-NTP bridge
When a Precision Time Protocol (PTP) grandmaster is available in a network that does not have switches or routers with PTP support, a computer may be dedicated to operate as a PTP slave and a stratum-1 NTP server. Such a computer needs to have two or more network interfaces, and be close to the grandmaster or have a direct connection to it. This will ensure highly accurate synchronization in the network.
One interface is used to synchronize the system clock using PTP; the configuration is described in Chapter 19, Configuring PTP Using ptp4l. Configure chronyd to provide the system time using the other interface:
Example 17.7. Configuring chronyd to Provide the System Time Using the Other Interface
bindaddress 203.0.113.74
hwtimestamp eth1
local stratum 1
17.7. Additional Resources
17.7.1. Installed Documentation
chronyc(1) man page — Describes the chronyc command-line interface tool including commands and command options.
chronyd(8) man page — Describes the chronyd daemon including commands and command options.
chrony.conf(5) man page — Describes the chrony configuration file.
Chapter 18. Configuring NTP Using ntpd
18.1. Introduction to NTP
NTP servers provide “Coordinated Universal Time” (UTC). Information about these time servers can be found at www.pool.ntp.org.
NTP is implemented by a daemon running in user space. The default NTP user space daemon in Red Hat Enterprise Linux 7 is chronyd. It must be disabled if you want to use the ntpd daemon. See Chapter 17, Configuring NTP Using the chrony Suite for information on chrony.
See the rtc(4) and hwclock(8) man pages for information on hardware clocks. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start, the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from actual time by up to 5 minutes per month due to temperature variations. Hence the need for the system clock to be constantly synchronized with external time references. When the system clock is being synchronized by ntpd, the kernel will in turn update the RTC every 11 minutes automatically.
18.2. NTP Strata
NTP servers are classified according to their synchronization distance from the atomic clocks which are the source of the time signals. The servers are thought of as being arranged in layers, or strata, from 1 at the top down to 15. Hence the word stratum is used when referring to a specific layer. Atomic clocks are referred to as Stratum 0 as this is the source, but no Stratum 0 packet is sent on the Internet, all stratum 0 atomic clocks are attached to a server which is referred to as stratum 1. These servers send out packets marked as Stratum 1. A server which is synchronized by means of packets marked stratum n belongs to the next, lower, stratum and will mark its packets as stratum n+1. Servers of the same stratum can exchange packets with each other but are still designated as belonging to just the one stratum, the stratum one below the best reference they are synchronized to. The designation Stratum 16 is used to indicate that the server is not currently synchronized to a reliable time source.
NTP clients can also act as servers for those systems in the stratum below them.
The following is a summary of the NTP Strata:
- Stratum 0:
- Atomic Clocks and their signals broadcast over Radio and GPS
- GPS (Global Positioning System)
- Mobile Phone Systems
- Low Frequency Radio Broadcasts WWVB (Colorado, USA.), JJY-40 and JJY-60 (Japan), DCF77 (Germany), and MSF (United Kingdom)
These signals can be received by dedicated devices and are usually connected by RS-232 to a system used as an organizational or site-wide time server. - Stratum 1:
- Computer with radio clock, GPS clock, or atomic clock attached
- Stratum 2:
- Reads from stratum 1; Serves to lower strata
- Stratum 3:
- Reads from stratum 2; Serves to lower strata
- Stratum n+1:
- Reads from stratum n; Serves to lower strata
- Stratum 15:
- Reads from stratum 14; This is the lowest stratum.
18.3. Understanding NTP
The version of NTP used by Red Hat Enterprise Linux is as described in RFC 1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis and RFC 5905 Network Time Protocol Version 4: Protocol and Algorithms Specification.
NTP enables sub-second accuracy to be achieved. Over the Internet, accuracy to tens of milliseconds is normal. On a Local Area Network (LAN), 1 ms accuracy is possible under ideal conditions. This is because clock drift is now accounted for and corrected, which was not done in earlier, simpler, time protocol systems. A resolution of 233 picoseconds is provided by using 64-bit time stamps. The first 32 bits of the time stamp are used for seconds, the last 32 bits are used for fractions of a second.
NTP represents the time as a count of the number of seconds since 00:00 (midnight) 1 January, 1900 GMT. As 32 bits are used to count the seconds, this means the time will “roll over” in 2036. However, NTP works on the difference between time stamps, so this does not present the same level of problem as other implementations of time protocols have done. If a hardware clock that is within 68 years of the correct time is available at boot time, then NTP will correctly interpret the current date. The NTP4 specification provides for an “Era Number” and an “Era Offset” which can be used to make software more robust when dealing with time lengths of more than 68 years. Do not confuse this with the Unix Year 2038 problem.
NTP protocol provides additional information to improve accuracy. Four time stamps are used to allow the calculation of round-trip time and server response time. In order for a system in its role as NTP client to synchronize with a reference time server, a packet is sent with an “originate time stamp”. When the packet arrives, the time server adds a “receive time stamp”. After processing the request for time and date information and just before returning the packet, it adds a “transmit time stamp”. When the returning packet arrives at the NTP client, a “receive time stamp” is generated. The client can now calculate the total round trip time and by subtracting the processing time derive the actual traveling time. By assuming the outgoing and return trips take equal time, the single-trip delay in receiving the NTP data is calculated. The full NTP algorithm is much more complex than presented here.
There will usually be an offset between the system clock and the time ntpd has determined the time should be. The system clock is adjusted slowly, at most at a rate of 0.5 ms per second, to reduce this offset by changing the frequency of the counter being used. It will take at least 2000 seconds to adjust the clock by 1 second using this method. This slow change is referred to as slewing and cannot go backwards. If the time offset of the clock is more than 128 ms (the default setting), ntpd can “step” the clock forwards or backwards. If the time offset at system start is greater than 1000 seconds then the user, or an installation script, should make a manual adjustment. See Chapter 3, Configuring the Date and Time. With the -g option to the ntpd command (used by default), any offset at system start will be corrected, but during normal operation only offsets of up to 1000 seconds will be corrected.
The stepping limit can be changed with the -x option (unrelated to the -g option). Using the -x option to increase the stepping limit from 0.128 s to 600 s has a drawback because a different method of controlling the clock has to be used. It disables the kernel clock discipline and may have a negative impact on the clock accuracy. The -x option can be added to the /etc/sysconfig/ntpd configuration file.
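As a sketch, the resulting /etc/sysconfig/ntpd could then read (-g is the default option; -x is the addition discussed above):
# Command line options for ntpd
OPTIONS="-g -x"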
18.4. Understanding the Drift File
The drift file is used to store the frequency offset of the system clock as calculated by ntpd. The drift file is replaced, rather than just updated, and for this reason the drift file must be in a directory for which ntpd has write permissions.
18.5. UTC, Timezones, and DST
NTP is entirely in UTC (Universal Time, Coordinated); time zones and DST (Daylight Saving Time) are applied locally by the system. The file /etc/localtime is a copy of, or symlink to, a zone information file from /usr/share/zoneinfo. The RTC may be in localtime or in UTC, as specified by the 3rd line of /etc/adjtime, which will be one of LOCAL or UTC to indicate how the RTC clock has been set. Users can easily change this setting using the checkbox System Clock Uses UTC in the Date and Time graphical configuration tool. See Chapter 3, Configuring the Date and Time for information on how to use that tool. Running the RTC in UTC is recommended to avoid various problems when daylight saving time is changed.
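For example, the current RTC setting can be checked by inspecting the third line of /etc/adjtime directly (the output shown here is illustrative):
~]$ cat /etc/adjtime
0.000000 0 0.000000
0
UTC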
The operation of ntpd is explained in more detail in the man page ntpd(8). The resources section lists useful sources of information. See Section 18.20, “Additional Resources”.
18.6. Authentication Options for NTP
NTPv4 added support for the Autokey Security Architecture, which is based on public asymmetric cryptography while retaining support for symmetric key cryptography. The Autokey protocol is described in RFC 5906 Network Time Protocol Version 4: Autokey Specification. Unfortunately, it was found later that the protocol has serious security issues, and thus Red Hat strongly recommends using symmetric keys instead. The man page ntp_auth(5) describes the authentication options and commands for ntpd.
An attacker on the network can attempt to disrupt a service by sending NTP packets with incorrect time information. On systems using the public pool of NTP servers, this risk is mitigated by having more than three NTP servers in the list of public NTP servers in /etc/ntp.conf. If only one time source is compromised or spoofed, ntpd will ignore that source. You should conduct a risk assessment and consider the impact of incorrect time on your applications and organization. If you have internal time sources you should consider steps to protect the network over which the NTP packets are distributed. If you conduct a risk assessment and conclude that the risk is acceptable, and the impact to your applications minimal, then you can choose not to use authentication.
If authentication is not required, it can be disabled with the disable auth directive in the ntp.conf file. Alternatively, authentication needs to be configured by using SHA1 or MD5 symmetric keys, or by public (asymmetric) key cryptography using the Autokey scheme. The Autokey scheme for asymmetric cryptography is explained in the ntp_auth(8) man page and the generation of keys is explained in ntp-keygen(8). To implement symmetric key cryptography, see Section 18.17.12, “Configuring Symmetric Authentication Using a Key” for an explanation of the key option.
18.7. Managing the Time on Virtual Machines
Virtual machines cannot access a real hardware clock, so guests are typically kept in time by a para-virtualized clock such as kvm-clock. See the KVM guest timing management chapter of the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
18.8. Understanding Leap Seconds
UTC is occasionally adjusted by the insertion of leap seconds to keep it in step with the Earth's rotation. NTP transmits information about pending leap seconds and applies them automatically.
18.9. Understanding the ntpd Configuration File
The daemon, ntpd, reads the configuration file at system start or when the service is restarted. The default location for the file is /etc/ntp.conf and you can view the file by entering the following command:
~]$ less /etc/ntp.conf
The configuration commands are explained briefly later in this chapter, see Section 18.17, “Configure NTP”, and more verbosely in the ntp.conf(5) man page.
- The driftfile entry
- A path to the drift file is specified; the default entry on Red Hat Enterprise Linux is:
driftfile /var/lib/ntp/drift
If you change this, be certain that the directory is writable by ntpd. The file contains one value used to adjust the system clock frequency after every system or service start. See Understanding the Drift File for more information.
- The access control entries
- The following line sets the default access control restriction:
restrict default nomodify notrap nopeer noquery
- The nomodify option prevents any changes to the configuration.
- The notrap option prevents ntpdc control message protocol traps.
- The nopeer option prevents a peer association being formed.
- The noquery option prevents ntpq and ntpdc queries, but not time queries, from being answered.
Important
The ntpq and ntpdc queries can be used in amplification attacks, therefore do not remove the noquery option from the restrict default command on publicly accessible systems. See CVE-2013-5211 for more details.
Addresses within the range 127.0.0.0/8 are sometimes required by various processes or applications. As the "restrict default" line above prevents access to everything not explicitly allowed, access to the standard loopback address for IPv4 and IPv6 is permitted by means of the following lines:
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1
Addresses can be added underneath if specifically required by another application. Hosts on the local network are not permitted because of the "restrict default" line above. To change this, for example to allow hosts from the 192.0.2.0/24 network to query the time and statistics but nothing more, a line in the following format is required:
restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap nopeer
To allow unrestricted access from a specific host, for example 192.0.2.250/32, a line in the following format is required:
restrict 192.0.2.250
A mask of 255.255.255.255 is applied if none is specified. The restrict commands are explained in the ntp_acc(5) man page.
- The public servers entry
- By default, the
ntp.conf file contains four public server entries:
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst
- The broadcast multicast servers entry
- By default, the
ntp.conf file contains some commented out examples. These are largely self-explanatory. See Section 18.17, “Configure NTP” for the explanation of the specific commands. If required, add your commands just below the examples.
Note
If the DHCP client program, dhclient, receives a list of NTP servers from the DHCP server, it adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to /etc/sysconfig/network.
18.10. Understanding the ntpd Sysconfig File
The file /etc/sysconfig/ntpd is read by the ntpd init script on service start. The default contents are as follows:
# Command line options for ntpd
OPTIONS="-g"
The -g option enables ntpd to ignore the offset limit of 1000 s and attempt to synchronize the time even if the offset is larger than 1000 s, but only on system start. Without that option ntpd will exit if the time offset is greater than 1000 s. It will also exit after system start if the service is restarted and the offset is greater than 1000 s even with the -g option.
18.11. Disabling chrony
To use ntpd, the default user space daemon, chronyd, must be stopped and disabled. Issue the following command as root:
~]# systemctl stop chronyd
To prevent it restarting at system start, issue the following command as root:
~]# systemctl disable chronyd
To check the status of chronyd, issue the following command:
~]$ systemctl status chronyd
18.12. Checking if the NTP Daemon is Installed
To check if ntpd is installed, enter the following command as root:
~]# yum install ntp
NTP is implemented by means of the daemon or service ntpd, which is contained within the ntp package.
18.13. Installing the NTP Daemon (ntpd)
To install ntpd, enter the following command as root:
~]# yum install ntp
To enable ntpd to start at system start, enter the following command as root:
~]# systemctl enable ntpd
18.14. Checking the Status of NTP
To check if ntpd is running and configured to run at system start, issue the following command:
~]$ systemctl status ntpd
To obtain a brief status report from ntpd, issue the following command:
~]$ ntpstat
unsynchronised
time server re-starting
polling server every 64 s
~]$ ntpstat
synchronised to NTP server (10.5.26.10) at stratum 2
time correct to within 52 ms
polling server every 1024 s
18.15. Configure the Firewall to Allow Incoming NTP Packets
NTP traffic consists of UDP packets on port 123 and needs to be permitted through network and host-based firewalls in order for NTP to function.
Allow incoming NTP traffic for clients by using the graphical Firewall Configuration tool, as described below.
To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall and then press Enter. The Firewall Configuration window opens. You will be prompted for your user password.
To start the graphical firewall configuration tool using the command line, enter the following command as root user:
~]# firewall-config
The Firewall Configuration window opens. Note, this command can be run as normal user but you will then be prompted for the root password from time to time.
Look for the word “Connected” in the lower left corner. This indicates that the firewall-config tool is connected to the user space daemon, firewalld.
18.15.1. Change the Firewall Settings
Note
18.15.2. Open Ports in the Firewall for NTP Packets
To permit traffic through the firewall to a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Ports tab and then click Add. Enter the port number 123 and select udp from the drop-down list.
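Alternatively, the port can be opened from the command line; a sketch using firewall-cmd, assuming the default public zone (the second command makes the rule persistent):
~]# firewall-cmd --zone=public --add-port=123/udp
~]# firewall-cmd --permanent --zone=public --add-port=123/udp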
18.16. Configure ntpdate Servers
The purpose of the ntpdate service is to set the clock during system boot. This was used previously to ensure that the services started after ntpdate would have the correct time and not observe a jump in the clock. The use of ntpdate and the list of step-tickers is considered deprecated and so Red Hat Enterprise Linux 7 uses the -g option to the ntpd command and not ntpdate by default.
The ntpdate service in Red Hat Enterprise Linux 7 is beneficial if it is used without the ntpd service or when the -x option is specified for the ntpd command. If ntpd is used with -x but without the ntpdate service enabled, the clock is corrected by step only if the time difference is larger than 600 seconds. With a smaller offset than 600 seconds, the clock is adjusted slowly, approximately 2000 seconds for every corrected second.
To check if the ntpdate service is enabled to run at system start, issue the following command:
~]$ systemctl status ntpdate
To enable the service to run at system start, issue the following command as root:
~]# systemctl enable ntpdate
By default, the /etc/ntp/step-tickers file contains 0.rhel.pool.ntp.org. To configure additional ntpdate servers, using a text editor running as root, edit /etc/ntp/step-tickers. The number of servers listed is not very important as ntpdate will only use this to obtain the date information once when the system is starting. If you have an internal time server then use that host name for the first line. An additional host on the second line as a backup is sensible. The selection of backup servers and whether the second host is internal or external depends on your risk assessment. For example, what is the chance of any problem affecting the first server also affecting the second server? Would connectivity to an external server be more likely to be available than connectivity to internal servers in the event of a network failure disrupting access to the first server?
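As an illustrative sketch, a step-tickers file following that advice could read (clock1.example.com is a placeholder for an internal time server):
clock1.example.com
0.rhel.pool.ntp.org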
18.17. Configure NTP
To change the default configuration of the NTP service, use a text editor running as root user to edit the /etc/ntp.conf file. This file is installed together with ntpd and is configured to use time servers from the Red Hat pool by default. The man page ntp.conf(5) describes the command options that can be used in the configuration file apart from the access and rate limiting commands which are explained in the ntp_acc(5) man page.
18.17.1. Configure Access Control to an NTP Service
To restrict access to the NTP service running on a system, make use of the restrict command in the ntp.conf file. See the commented out example:
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
The restrict command takes the following form:
restrict option
where option is one or more of:
- ignore — All packets will be ignored, including ntpq and ntpdc queries.
- kod — a “Kiss-o'-death” packet is to be sent to reduce unwanted queries.
- limited — do not respond to time service requests if the packet violates the rate limit default values or those specified by the discard command. ntpq and ntpdc queries are not affected. For more information on the discard command and the default values, see Section 18.17.2, “Configure Rate Limiting Access to an NTP Service”.
- lowpriotrap — traps set by matching hosts to be low priority.
- nomodify — prevents any changes to the configuration.
- noquery — prevents ntpq and ntpdc queries, but not time queries, from being answered.
- nopeer — prevents a peer association being formed.
- noserve — deny all packets except ntpq and ntpdc queries.
- notrap — prevents ntpdc control message protocol traps.
- notrust — deny packets that are not cryptographically authenticated.
- ntpport — modify the match algorithm to only apply the restriction if the source port is the standard NTP UDP port 123.
- version — deny packets that do not match the current NTP version.
To use rate limiting, the restrict command has to have the limited option. If ntpd should reply with a KoD packet, the restrict command needs to have both limited and kod options.
Because the ntpq and ntpdc queries can be used in amplification attacks (see CVE-2013-5211 for more details), do not remove the noquery option from the restrict default command on publicly accessible systems.
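Combining these options, a rate-limited, hardened default line could, as a sketch, read:
restrict default kod limited nomodify notrap nopeer noquery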
18.17.2. Configure Rate Limiting Access to an NTP Service
To rate limit access to the NTP service running on a system, add the limited option to the restrict command as explained in Section 18.17.1, “Configure Access Control to an NTP Service”. If you do not want to use the default discard parameters, then also use the discard command as explained here.
The discard command takes the following form:
discard [average value] [minimum value] [monitor value]
- average — specifies the minimum average packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 3 (2³ equates to 8 seconds).
- minimum — specifies the minimum packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 1 (2¹ equates to 2 seconds).
- monitor — specifies the discard probability for packets once the permitted rate limits have been exceeded. The default value is 3000 seconds. This option is intended for servers that receive 1000 or more requests per second.
Examples of the discard command are as follows:
discard average 4
discard average 4 minimum 2
18.17.3. Adding a Peer Address
To add the address of a peer, that is to say, the address of a server running an NTP service of the same stratum, make use of the peer command in the ntp.conf file.
The peer command takes the following form:
peer address
where address is an IP unicast address or a DNS resolvable name. The address must only be that of a system known to be a member of the same stratum. Peers should have at least one time source that is different to each other. Peers are normally systems under the same administrative control.
18.17.4. Adding a Server Address
To add the address of a server, that is to say, the address of a server running an NTP service of a higher stratum, make use of the server command in the ntp.conf file.
The server command takes the following form:
server address
where address is an IP unicast address or a DNS resolvable name of a remote reference server or local reference clock from which packets are to be received.
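For example, a typical client entry using a placeholder host name and the iburst option described later in this chapter could read:
server ntp1.example.com iburst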
18.17.5. Adding a Broadcast or Multicast Server Address
To add a broadcast or multicast address for sending NTP packets to, make use of the broadcast command in the ntp.conf file.
The broadcast command takes the following form:
broadcast address
where address is an IP broadcast or multicast address to which packets are sent.
This command configures a system to act as an NTP broadcast server. The address used must be a broadcast or a multicast address. Broadcast address implies the IPv4 address 255.255.255.255. By default, routers do not pass broadcast messages. The multicast address can be an IPv4 Class D address, or an IPv6 address. The IANA has assigned IPv4 multicast address 224.0.1.1 and IPv6 address FF05::101 (site local) to NTP. Administratively scoped IPv4 multicast addresses can also be used, as described in RFC 2365 Administratively Scoped IP Multicast.
18.17.6. Adding a Manycast Client Address
To add a manycastclient address, that is to say, to configure a multicast address to be used for NTP server discovery, make use of the manycastclient command in the ntp.conf file.
The manycastclient command takes the following form:
manycastclient address
where address is an IP multicast address from which packets are to be received. The client will send a request to the address and select the best servers from the responses and ignore other servers. NTP communication then uses unicast associations, as if the discovered NTP servers were listed in ntp.conf.
This command configures a system to act as an NTP client. Systems can be both client and server at the same time.
18.17.7. Adding a Broadcast Client Address
To add a broadcast client address, that is to say, to configure a broadcast address to be monitored for broadcast NTP packets, make use of the broadcastclient command in the ntp.conf file.
The broadcastclient command takes the following form:
broadcastclient
This enables the receiving of broadcast messages and configures a system to act as an NTP client. Systems can be both client and server at the same time.
18.17.8. Adding a Manycast Server Address
To add a manycast server address, that is to say, to configure an address to allow the clients to discover the server by multicasting NTP packets, make use of the manycastserver command in the ntp.conf file.
The manycastserver command takes the following form:
manycastserver address
where address is the address to multicast to. This configures a system to act as an NTP server. Systems can be both client and server at the same time.
18.17.9. Adding a Multicast Client Address
To add a multicast client address, that is to say, to configure a multicast address to be monitored for multicast NTP packets, make use of the multicastclient command in the ntp.conf file.
The multicastclient command takes the following form:
multicastclient address
where address is the multicast address from which packets are to be received. This configures a system to act as an NTP client. Systems can be both client and server at the same time.
18.17.10. Configuring the Burst Option
Using the burst option against a public server is considered abuse. Do not use this option with public NTP servers. Use it only for applications within your own organization.
To increase the average quality of the time-offset calculations, add the following option to the end of a server command:
burst
At every poll interval when the server is responding, send a burst of eight packets instead of one. This option is for use with the server command.
18.17.11. Configuring the iburst Option
To reduce the time taken for initial synchronization, add the following option to the end of a server command:
iburst
When the server is unreachable, send a burst of eight packets instead of one. The spacing between the first and second packets can be changed with the calldelay command to allow additional time for a modem or ISDN call to complete. This option is for use with the server command, and is now a default option in the configuration file.
18.17.12. Configuring Symmetric Authentication Using a Key
To configure symmetric authentication using a key, add the following option to the end of a server or peer command:
key number
where number is in the range 1 to 65534 inclusive. This option enables the use of a message authentication code (MAC) in packets. This option is for use with the peer, server, broadcast, and manycastclient commands.
The option can be used in the /etc/ntp.conf file as follows:
server 192.168.1.1 key 10
broadcast 192.168.1.255 key 20
manycastclient 239.255.254.254 key 30
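For the keys to be usable, ntp.conf must also name the key file and mark the key numbers as trusted; a minimal sketch, assuming the default key file path:
keys /etc/ntp/keys
trustedkey 10 20 30
Each entry in /etc/ntp/keys has the form key-number type key-value; an illustrative SHA1 entry (the hex value here is a placeholder, not a real key):
10 SHA1 0123456789abcdef0123456789abcdef01234567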
18.17.13. Configuring the Poll Interval
To change the default poll interval of an NTP server or peer, add the following options to the end of a server or peer command:
minpoll value and maxpoll value
The default minpoll value is 6, and 2⁶ equates to 64 s. The default value for maxpoll is 10, which equates to 1024 s. Allowed values are in the range 3 to 17 inclusive, which equates to 8 s to 36.4 h respectively. These options are for use with the peer or server commands. Setting a shorter maxpoll may improve clock accuracy.
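As a sketch, a server entry with a tightened polling range (ntp1.example.com is again a placeholder) could read:
server ntp1.example.com minpoll 4 maxpoll 8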
18.17.14. Configuring Server Preference
To specify that a particular server should be preferred above others of similar statistical quality, add the following option to the end of a server or peer command:
prefer
Use this server for synchronization in preference to other servers of similar statistical quality. This option is for use with the peer or server commands.
18.17.15. Configuring the Time-to-Live for NTP Packets
To specify that a particular time-to-live value should be used in place of the default, add the following option to the end of a server or peer command:
ttl value
Specify the time-to-live value to be used in packets sent by broadcast servers and multicast NTP servers. Specify the maximum time-to-live value to use for the “expanding ring search” by a manycast client. The default value is 127.
18.17.16. Configuring the NTP Version to Use
To specify that a particular version of NTP should be used in place of the default, add the following option to the end of a server or peer command:
version value
Specifies the version of NTP set in created NTP packets. The value can be in the range 1 to 4. The default is 4.
18.18. Configuring the Hardware Clock Update
- Instant one-time update
- To perform an instant one-time update of the hardware clock, run this command as root:
~]# hwclock --systohc
- To make the hardware clock update on every boot after executing the ntpdate synchronization utility, do the following:
- Add the following line to the /etc/sysconfig/ntpdate file:
SYNC_HWCLOCK=yes
- Enable the ntpdate service as root:
~]# systemctl enable ntpdate.service
Note that the ntpdate service uses the NTP servers defined in the /etc/ntp/step-tickers file.
Note
On virtual machines, the hardware clock will be updated on the next boot of the host machine, not of the virtual machine. - Update via NTP
- You can make the hardware clock update every time the system clock is updated by the
ntpd or chronyd service:
Start the ntpd service as root:
~]# systemctl start ntpd.service
To make the behavior persistent across boots, make the service start automatically at boot time:
~]# systemctl enable ntpd.service
or
Start the chronyd service as root:
~]# systemctl start chronyd.service
To make the behavior persistent across boots, make the service start automatically at boot time:
~]# systemctl enable chronyd.service
As a result, every time the system clock is synchronized by ntpd or chronyd, the kernel automatically updates the hardware clock in 11 minutes.
Warning
This approach might not always work because the above mentioned 11-minute mode is not always enabled. As a consequence, the hardware clock does not necessarily get updated on the system clock update.
To check the synchronization of the software clock with the hardware clock, use the ntpdc -c kerninfo or the ntptime command as root:
~]# ntpdc -c kerninfo
The result may look like this:
pll offset: 0 s
pll frequency: 0.000 ppm
maximum error: 8.0185 s
estimated error: 0 s
status: 2001 pll nano
pll time constant: 6
precision: 1e-09 s
frequency tolerance: 500 ppm
or
~]# ntptime
The result may look like this:
ntp_gettime() returns code 0 (OK)
  time dcba5798.c3dfe2e0  Mon, May 8 2017 11:34:00.765, (.765135199),
  maximum error 8010000 us, estimated error 0 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 0.000 ppm, interval 1 s,
  maximum error 8010000 us, estimated error 0 us,
  status 0x2001 (PLL,NANO),
  time constant 6, precision 0.001 us, tolerance 500 ppm,
To recognize whether the software clock is synchronized with the hardware clock, see the status line in the output. If the third digit from the end is 4, the software clock is not synchronized with the hardware clock:
status 0x2401
If the third digit from the end is not 4, the software clock is synchronized with the hardware clock:
status 0x2001
18.19. Configuring Clock Sources
~]$ cd /sys/devices/system/clocksource/clocksource0/
clocksource0]$ cat available_clocksource
kvm-clock tsc hpet acpi_pm
clocksource0]$ cat current_clocksource
kvm-clock
In the above example, the kernel is using kvm-clock. This was selected at boot time as this is a virtual machine. Note that the available clock source is architecture dependent.
To override the default clock source, add the clocksource directive to the end of the kernel's GRUB 2 menu entry. Use the grubby tool to make the change. For example, to force the default kernel on a system to use the tsc clock source, enter a command as follows:
~]# grubby --args=clocksource=tsc --update-kernel=DEFAULT
The --update-kernel parameter also accepts the keyword ALL, or a comma separated list of kernel index numbers.
18.20. Additional Resources
The following sources provide additional information regarding NTP and ntpd.
18.20.1. Installed Documentation
ntpd(8) man page — Describes ntpd in detail, including the command-line options.
ntp.conf(5) man page — Contains information on how to configure associations with servers and peers.
ntpq(8) man page — Describes the NTP query utility for monitoring and querying an NTP server.
ntpdc(8) man page — Describes the ntpdc utility for querying and changing the state of ntpd.
ntp_auth(5) man page — Describes authentication options, commands, and key management for ntpd.
ntp_keygen(8) man page — Describes generating public and private keys for ntpd.
ntp_acc(5) man page — Describes access control options using the restrict command.
ntp_mon(5) man page — Describes monitoring options for the gathering of statistics.
ntp_clock(5) man page — Describes commands for configuring reference clocks.
ntp_misc(5) man page — Describes miscellaneous options.
ntp_decode(5) man page — Lists the status words, event messages and error codes used for ntpd reporting and monitoring.
ntpstat(8) man page — Describes a utility for reporting the synchronization state of the NTP daemon running on the local machine.
ntptime(8) man page — Describes a utility for reading and setting kernel time variables.
tickadj(8) man page — Describes a utility for reading, and optionally setting, the length of the tick.
18.20.2. Useful Websites
- http://doc.ntp.org/
- The NTP Documentation Archive
- http://www.eecis.udel.edu/~mills/ntp.html
- Network Time Synchronization Research Project.
- http://www.eecis.udel.edu/~mills/ntp/html/manyopt.html
- Information on Automatic Server Discovery in
NTPv4.
Chapter 19. Configuring PTP Using ptp4l
19.1. Introduction to PTP
PTP is capable of sub-microsecond accuracy, which is far better than is normally obtainable with NTP. PTP support is divided between the kernel and user space. The kernel in Red Hat Enterprise Linux includes support for PTP clocks, which are provided by network drivers. The actual implementation of the protocol is known as linuxptp, a PTPv2 implementation according to the IEEE standard 1588 for Linux.
The ptp4l program implements the PTP boundary clock and ordinary clock. With hardware time stamping, it is used to synchronize the PTP hardware clock to the master clock, and with software time stamping it synchronizes the system clock to the master clock. The phc2sys program is needed only with hardware time stamping, for synchronizing the system clock to the PTP hardware clock on the network interface card (NIC).
19.1.1. Understanding PTP
The clocks synchronized by PTP are organized in a master-slave hierarchy. The slaves are synchronized to their masters, which may themselves be slaves to their own masters. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. When a clock has only one port, it can be master or slave; such a clock is called an ordinary clock (OC). A clock with multiple ports can be master on one port and slave on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock, which can be synchronized by using a Global Positioning System (GPS) time source. By using a GPS-based time source, disparate networks can be synchronized with a high degree of accuracy.
Figure 19.1. PTP grandmaster, boundary, and slave Clocks
19.1.2. Advantages of PTP
One of the main advantages that PTP has over the Network Time Protocol (NTP) is hardware support present in various network interface controllers (NIC) and network switches. This specialized hardware allows PTP to account for delays in message transfer, and greatly improves the accuracy of time synchronization. While it is possible to use non-PTP enabled hardware components within the network, this will often cause an increase in jitter or introduce an asymmetry in the delay resulting in synchronization inaccuracies, which add up with multiple non-PTP aware components used in the communication path. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled. Time synchronization in larger networks where not all of the networking hardware supports PTP might be better suited for NTP.
With hardware PTP support, the NIC has its own on-board clock, which is used to time stamp the received and transmitted PTP messages. It is this on-board clock that is synchronized to the PTP master, and the computer's system clock is synchronized to the PTP hardware clock on the NIC. With software PTP support, the system clock is used to time stamp the PTP messages and it is synchronized to the PTP master directly. Hardware PTP support provides better accuracy since the NIC can time stamp the PTP packets at the exact moment they are sent and received, while software PTP support requires additional processing of the PTP packets by the operating system.
19.2. Using PTP
In order to use PTP, the kernel network driver for the intended interface has to support either software or hardware time stamping capabilities.
19.2.1. Checking for Driver and Hardware Support
~]# ethtool -T eth3
Time stamping parameters for eth3:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)
Where eth3 is the interface you want to check.
For software time stamping support, the parameters list should include:
SOF_TIMESTAMPING_SOFTWARE
SOF_TIMESTAMPING_TX_SOFTWARE
SOF_TIMESTAMPING_RX_SOFTWARE
For hardware time stamping support, the parameters list should include:
SOF_TIMESTAMPING_RAW_HARDWARE
SOF_TIMESTAMPING_TX_HARDWARE
SOF_TIMESTAMPING_RX_HARDWARE
19.2.2. Installing PTP
The kernel in Red Hat Enterprise Linux includes support for PTP. User space support is provided by the tools in the linuxptp package. To install linuxptp, issue the following command as root:
~]# yum install linuxptp
This will install ptp4l and phc2sys.
For information on serving PTP time using NTP, see Section 19.8, “Serving PTP Time with NTP”.
19.2.3. Starting ptp4l
When running as a service, options for ptp4l are specified in the /etc/sysconfig/ptp4l file. Options required for use both by the service and on the command line should be specified in the /etc/ptp4l.conf file. The /etc/sysconfig/ptp4l file includes the -f /etc/ptp4l.conf command line option, which causes the ptp4l program to read the /etc/ptp4l.conf file and process the options it contains. The use of the /etc/ptp4l.conf is explained in Section 19.4, “Specifying a Configuration File”. More information on the different ptp4l options and the configuration file settings can be found in the ptp4l(8) man page.
Starting ptp4l as a Service
To start ptp4l as a service, issue the following command as root:
~]# systemctl start ptp4l
For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd.
Using ptp4l From The Command Line
When running ptp4l from the command line, specify the interface with the -i option. Enter the following command as root:
~]# ptp4l -i eth3 -m
Where eth3 is the interface you want to configure. Below is example output from ptp4l when the PTP clock on the NIC is synchronized to a master:
~]# ptp4l -i eth3 -m
selected eth3 as PTP clock
port 1: INITIALIZING to LISTENING on INITIALIZE
port 0: INITIALIZING to LISTENING on INITIALIZE
port 1: new foreign master 00a069.fffe.0b552d-1
selected best master clock 00a069.fffe.0b552d
port 1: LISTENING to UNCALIBRATED on RS_SLAVE
master offset -23947 s0 freq +0 path delay 11350
master offset -28867 s0 freq +0 path delay 11236
master offset -32801 s0 freq +0 path delay 10841
master offset -37203 s1 freq +0 path delay 10583
master offset -7275 s2 freq -30575 path delay 10583
port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
master offset -4552 s2 freq -30035 path delay 10385
The master offset value is the measured offset from the master in nanoseconds. The s0, s1, s2 strings indicate the different clock servo states: s0 is unlocked, s1 is clock step and s2 is locked. Once the servo is in the locked state (s2), the clock will not be stepped (only slowly adjusted) unless the pi_offset_const option is set to a positive value in the configuration file (described in the ptp4l(8) man page). The freq value is the frequency adjustment of the clock in parts per billion (ppb). The path delay value is the estimated delay of the synchronization messages sent from the master in nanoseconds. Port 0 is a Unix domain socket used for local PTP management. Port 1 is the eth3 interface (based on the example above). INITIALIZING, LISTENING, UNCALIBRATED and SLAVE are some of the possible port states which change on the INITIALIZE, RS_SLAVE, MASTER_CLOCK_SELECTED events. In the last state change message, the port state changed from UNCALIBRATED to SLAVE indicating successful synchronization with a PTP master clock.
Logging Messages From ptp4l
By default, messages are sent to /var/log/messages. However, specifying the -m option enables logging to standard output, which can be useful for debugging purposes.
To enable software time stamping, the -S option needs to be used as follows:
~]# ptp4l -i eth3 -m -S
19.2.3.1. Selecting a Delay Measurement Mechanism
The delay measurement mechanism is selected by adding one of the following options to the ptp4l command:
- -P
- The -P option selects the peer-to-peer (P2P) delay measurement mechanism. The P2P mechanism is preferred as it reacts to changes in the network topology faster, and may be more accurate in measuring the delay, than other mechanisms. The P2P mechanism can only be used in topologies where each port exchanges PTP messages with at most one other P2P port. It must be supported and used by all hardware, including transparent clocks, on the communication path.
- -E
- The -E option selects the end-to-end (E2E) delay measurement mechanism. This is the default. The E2E mechanism is also referred to as the delay “request-response” mechanism.
- -A
- The -A option enables automatic selection of the delay measurement mechanism. The automatic option starts ptp4l in E2E mode. It will change to P2P mode if a peer delay request is received.
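For example, to select the P2P mechanism when running ptp4l from the command line on the interface used in the earlier examples, you could enter:
~]# ptp4l -i eth3 -P -m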
Note
All clocks on a single PTP communication path must use the same mechanism to measure the delay. Warnings will be printed in the following circumstances:
- When a peer delay request is received on a port using the E2E mechanism.
- When an E2E delay request is received on a port using the P2P mechanism.
19.3. Using PTP with Multiple Interfaces
When PTP is being used with multiple interfaces in different networks, it is necessary to change the reverse path filtering mode to loose mode. The sysctl utility is used to read and write values to tunables in the kernel. Changes to a running system can be made using sysctl commands directly on the command line, and permanent changes can be made by adding lines to the /etc/sysctl.conf file.
- To change to loose mode filtering globally, enter the following commands as root:
~]# sysctl -w net.ipv4.conf.default.rp_filter=2
~]# sysctl -w net.ipv4.conf.all.rp_filter=2
- To change the reverse path filtering mode per network interface, use the net.ipv4.conf.interface.rp_filter parameter on all PTP interfaces. For example, for an interface with device name em1:
~]# sysctl -w net.ipv4.conf.em1.rp_filter=2
To make these settings persistent across reboots, modify the /etc/sysctl.conf file. You can change the mode for all interfaces, or for a particular interface.
To change the mode for all interfaces, open the /etc/sysctl.conf file with an editor running as the root user and add a line as follows:
net.ipv4.conf.all.rp_filter=2
To change the mode only for a particular interface, add a line as follows:
net.ipv4.conf.interface.rp_filter=2
Note
The maximum value from conf/{all,interface}/rp_filter is used when doing source validation on each interface.
For more details about the sysctl parameters, see the Red Hat Knowledgebase article What is the difference between "all", "default" and a specific device in a sysctl parameter?.
Problems can occur depending on the order in which the drivers are loaded and the sysctl service runs during the boot process:
- Drivers are loaded before the sysctl service runs.
In this case, affected network interfaces use the mode preset from the kernel, and sysctl defaults are ignored.
For a solution of this problem, see the Red Hat Knowledgebase article What is the difference between "all", "default" and a specific device in a sysctl parameter?.
- Drivers are loaded or reloaded after the sysctl service runs.
In this case, it is possible that some sysctl.conf parameters are not used after reboot. These settings may not be available or they may return to defaults.
For a solution of this problem, see the Red Hat Knowledgebase article Some sysctl.conf parameters are not used after reboot, manually adjusting the settings works as expected.
19.4. Specifying a Configuration File
Instead of specifying options on the command line, the ptp4l program can read them from a configuration file specified with the -f option. For example:
~]# ptp4l -f /etc/ptp4l.conf
A configuration file equivalent to the -i eth3 -m -S options shown above would look as follows:
~]# cat /etc/ptp4l.conf
[global]
verbose 1
time_stamping software
[eth3]
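When ptp4l is run as a systemd service instead of manually, its options, including the -f option, are read from the /etc/sysconfig/ptp4l file. A sketch of that file, assuming the configuration file above:
OPTIONS="-f /etc/ptp4l.conf"
After editing the file, restart the service as root:
~]# systemctl restart ptp4l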
19.5. Using the PTP Management Client
The PTP management client, pmc, can be used to obtain additional information from ptp4l as follows:
~]# pmc -u -b 0 'GET CURRENT_DATA_SET'
sending: GET CURRENT_DATA_SET
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT CURRENT_DATA_SET
stepsRemoved 1
offsetFromMaster -142.0
meanPathDelay 9310.0
~]# pmc -u -b 0 'GET TIME_STATUS_NP'
sending: GET TIME_STATUS_NP
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP
master_offset 310
ingress_time 1361545089345029441
cumulativeScaledRateOffset +1.000000000
scaledLastGmPhaseChange 0
gmTimeBaseIndicator 0
lastGmPhaseChange 0x0000'0000000000000000.0000
gmPresent true
gmIdentity 00a069.fffe.0b552d
Setting the -b option to zero limits the boundary to the locally running ptp4l instance. A larger boundary value will retrieve the information also from PTP nodes further from the local clock. The retrievable information includes:
- stepsRemoved is the number of communication paths to the grandmaster clock.
- offsetFromMaster and master_offset is the last measured offset of the clock from the master in nanoseconds.
- meanPathDelay is the estimated delay of the synchronization messages sent from the master in nanoseconds.
- If gmPresent is true, the PTP clock is synchronized to a master; the local clock is not the grandmaster clock.
- gmIdentity is the grandmaster's identity.
To display a list of all available pmc commands, type the following as root:
~]# pmc help
Additional information is available in the pmc(8) man page.
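Other management data sets can be queried in the same way. For example, the following command retrieves the default data set of the local clock (DEFAULT_DATA_SET is one of the management IDs listed in the pmc(8) man page; output omitted here):
~]# pmc -u -b 0 'GET DEFAULT_DATA_SET'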
19.6. Synchronizing the Clocks
The phc2sys program is used to synchronize the system clock to the PTP hardware clock (PHC) on the NIC. The phc2sys service is configured in the /etc/sysconfig/phc2sys configuration file. The default setting in the /etc/sysconfig/phc2sys file is as follows:
OPTIONS="-a -r"
The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. It will follow changes in the PTP port states, adjusting the synchronization between the NIC hardware clocks accordingly. The system clock is not synchronized, unless the -r option is also specified. If you want the system clock to be eligible to become a time source, specify the -r option twice.
After making changes to /etc/sysconfig/phc2sys, restart the phc2sys service from the command line by issuing a command as root:
~]# systemctl restart phc2sys
Under normal circumstances, use systemctl commands to start, stop, and restart the phc2sys service.
If you do not want to run phc2sys as a service, you can start it from the command line by entering the following command as root:
~]# phc2sys -a -r
The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. If you want the system clock to be eligible to become a time source, specify the -r option twice.
It is also possible to use the -s option to synchronize the system clock to a specific interface's PTP hardware clock. For example:
~]# phc2sys -s eth3 -w
The -w option waits for the running ptp4l application to synchronize the PTP clock and then retrieves the TAI to UTC offset from ptp4l.
PTP operates in the International Atomic Time (TAI) timescale, while the system clock is kept in Coordinated Universal Time (UTC). The current offset between the TAI and UTC timescales is 36 seconds. The offset changes when leap seconds are inserted or deleted, which typically happens every few years. The -O option needs to be used to set this offset manually when the -w option is not used, as follows:
~]# phc2sys -s eth3 -O -36
Once the phc2sys servo is in a locked state, the clock will not be stepped, unless the -S option is used. This means that the phc2sys program should be started after the ptp4l program has synchronized the PTP hardware clock. However, with -w, it is not necessary to start phc2sys after ptp4l, as it will wait for it to synchronize the clock. The phc2sys program can also be started as a service:
~]# systemctl start phc2sys
When running as a service, options are specified in the /etc/sysconfig/phc2sys file. More information on the different phc2sys options can be found in the phc2sys(8) man page.
19.7. Verifying Time Synchronization
When PTP time synchronization is working correctly, new messages with offsets and frequency adjustments are printed periodically to the ptp4l and phc2sys outputs if hardware time stamping is used. These values converge after a short period. You can see these messages in the /var/log/messages file. The following examples of the ptp4l and phc2sys output include:
- offset (in nanoseconds)
- frequency offset (in parts per billion (ppb))
- path delay (in nanoseconds)
An example of the ptp4l output follows:
ptp4l[352.359]: selected /dev/ptp0 as PTP clock
ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[353.210]: port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l[357.214]: selected best master clock 00a069.fffe.0b552d
ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[359.224]: master offset 3304 s0 freq +0 path delay 9202
ptp4l[360.224]: master offset 3708 s1 freq -29492 path delay 9202
ptp4l[361.224]: master offset -3145 s2 freq -32637 path delay 9202
ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[362.223]: master offset -145 s2 freq -30580 path delay 9202
ptp4l[363.223]: master offset 1043 s2 freq -29436 path delay 8972
ptp4l[364.223]: master offset 266 s2 freq -29900 path delay 9153
ptp4l[365.223]: master offset 430 s2 freq -29656 path delay 9153
ptp4l[366.223]: master offset 615 s2 freq -29342 path delay 9169
ptp4l[367.222]: master offset -191 s2 freq -29964 path delay 9169
ptp4l[368.223]: master offset 466 s2 freq -29364 path delay 9170
ptp4l[369.235]: master offset 24 s2 freq -29666 path delay 9196
ptp4l[370.235]: master offset -375 s2 freq -30058 path delay 9238
ptp4l[371.235]: master offset 285 s2 freq -29511 path delay 9199
ptp4l[372.235]: master offset -78 s2 freq -29788 path delay 9204
An example of the phc2sys output follows:
phc2sys[526.527]: Waiting for ptp4l...
phc2sys[527.528]: Waiting for ptp4l...
phc2sys[528.528]: phc offset 55341 s0 freq +0 delay 2729
phc2sys[529.528]: phc offset 54658 s1 freq -37690 delay 2725
phc2sys[530.528]: phc offset 888 s2 freq -36802 delay 2756
phc2sys[531.528]: phc offset 1156 s2 freq -36268 delay 2766
phc2sys[532.528]: phc offset 411 s2 freq -36666 delay 2738
phc2sys[533.528]: phc offset -73 s2 freq -37026 delay 2764
phc2sys[534.528]: phc offset 39 s2 freq -36936 delay 2746
phc2sys[535.529]: phc offset 95 s2 freq -36869 delay 2733
phc2sys[536.529]: phc offset -359 s2 freq -37294 delay 2738
phc2sys[537.529]: phc offset -257 s2 freq -37300 delay 2753
phc2sys[538.529]: phc offset 119 s2 freq -37001 delay 2745
phc2sys[539.529]: phc offset 288 s2 freq -36796 delay 2766
phc2sys[540.529]: phc offset -149 s2 freq -37147 delay 2760
phc2sys[541.529]: phc offset -352 s2 freq -37395 delay 2771
phc2sys[542.529]: phc offset 166 s2 freq -36982 delay 2748
phc2sys[543.529]: phc offset 50 s2 freq -37048 delay 2756
phc2sys[544.530]: phc offset -31 s2 freq -37114 delay 2748
phc2sys[545.530]: phc offset -333 s2 freq -37426 delay 2747
phc2sys[546.530]: phc offset 194 s2 freq -36999 delay 2749
To reduce the ptp4l output and print only summary statistics, use the summary_interval directive. The summary_interval directive is specified as 2 to the power of n in seconds. For example, to reduce the output to every 1024 (2^10) seconds, add the following line to the /etc/ptp4l.conf file:
summary_interval 10
An example of the ptp4l output, with summary_interval set to 6 (64 seconds), follows:
ptp4l: [615.253] selected /dev/ptp0 as PTP clock
ptp4l: [615.255] port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.255] port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.564] port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l: [619.574] selected best master clock 00a069.fffe.0b552d
ptp4l: [619.574] port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l: [623.573] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l: [684.649] rms 669 max 3691 freq -29383 ± 3735 delay 9232 ± 122
ptp4l: [748.724] rms 253 max 588 freq -29787 ± 221 delay 9219 ± 158
ptp4l: [812.793] rms 287 max 673 freq -29802 ± 248 delay 9211 ± 183
ptp4l: [876.853] rms 226 max 534 freq -29795 ± 197 delay 9221 ± 138
ptp4l: [940.925] rms 250 max 562 freq -29801 ± 218 delay 9199 ± 148
ptp4l: [1004.988] rms 226 max 525 freq -29802 ± 196 delay 9228 ± 143
ptp4l: [1069.065] rms 300 max 646 freq -29802 ± 259 delay 9214 ± 176
ptp4l: [1133.125] rms 226 max 505 freq -29792 ± 197 delay 9225 ± 159
ptp4l: [1197.185] rms 244 max 688 freq -29790 ± 211 delay 9201 ± 162
By default, summary_interval is set to 0, so messages are printed once per second, which is the maximum frequency. The messages are logged at the LOG_INFO level. To disable messages, use the -l option to set the maximum log level to 5 or lower:
~]# phc2sys -l 5
Use the -u option to reduce the phc2sys output:
~]# phc2sys -u summary-updates
where summary-updates is the number of clock updates to include in the summary statistics. An example follows:
~]# phc2sys -s eth3 -w -m -u 60
phc2sys[700.948]: rms 1837 max 10123 freq -36474 ± 4752 delay 2752 ± 16
phc2sys[760.954]: rms 194 max 457 freq -37084 ± 174 delay 2753 ± 12
phc2sys[820.963]: rms 211 max 487 freq -37085 ± 185 delay 2750 ± 19
phc2sys[880.968]: rms 183 max 440 freq -37102 ± 164 delay 2734 ± 91
phc2sys[940.973]: rms 244 max 584 freq -37095 ± 216 delay 2748 ± 16
phc2sys[1000.979]: rms 220 max 573 freq -36666 ± 182 delay 2747 ± 43
phc2sys[1060.984]: rms 266 max 675 freq -36759 ± 234 delay 2753 ± 17
In this example, the updates are reduced to one per minute (-u), phc2sys waits until ptp4l is in a synchronized state (-w), and messages are printed to the standard output (-m). For further details about the phc2sys options, see the phc2sys(8) man page. The summary statistics include:
- offset root mean square (rms)
- maximum absolute offset (max)
- frequency offset (freq): its mean, and standard deviation
- path delay (delay): its mean, and standard deviation
19.8. Serving PTP Time with NTP
The ntpd daemon can be configured to distribute the time from the system clock synchronized by ptp4l or phc2sys by using the LOCAL reference clock driver. To prevent ntpd from adjusting the system clock, the ntp.conf file must not specify any NTP servers. The following is a minimal example of ntp.conf:
~]# cat /etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 0
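After editing ntp.conf, restart the ntpd service as root for the changes to take effect:
~]# systemctl restart ntpd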
Note
When the DHCP client program, dhclient, receives a list of NTP servers from the DHCP server, it adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to /etc/sysconfig/network.
19.9. Serving NTP Time with PTP
NTP to PTP synchronization in the opposite direction is also possible. When ntpd is used to synchronize the system clock, ptp4l can be configured with the priority1 option (or other clock options included in the best master clock algorithm) to be the grandmaster clock and distribute the time from the system clock via PTP:
~]# cat /etc/ptp4l.conf
[global]
priority1 127
[eth3]
~]# ptp4l -f /etc/ptp4l.conf
Then, use phc2sys to keep the PTP hardware clock on the NIC synchronized to the system clock. If running phc2sys as a service, edit the /etc/sysconfig/phc2sys configuration file. The default setting in the /etc/sysconfig/phc2sys file is as follows:
OPTIONS="-a -r"
As root, edit that line as follows:
~]# vi /etc/sysconfig/phc2sys
OPTIONS="-a -r -r"
The -r option is used twice here to allow synchronization of the PTP hardware clock on the NIC from the system clock. Restart the phc2sys service for the changes to take effect:
~]# systemctl restart phc2sys
To prevent quick changes in the PTP clock's frequency, the synchronization to the system clock can be loosened by using smaller P (proportional) and I (integral) constants for the PI servo:
~]# phc2sys -a -r -r -P 0.01 -I 0.0001
19.10. Synchronize to PTP or NTP Time Using timemaster
When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources. The PTP time is provided by phc2sys and ptp4l via the shared memory driver (SHM reference clocks) to chronyd or ntpd, depending on the NTP daemon that has been configured on the system. The NTP daemon can then compare all time sources, both PTP and NTP, and use the best sources to synchronize the system clock.
On start, the timemaster program reads the configuration of the NTP and PTP time sources, checks which network interfaces have their own or share a PTP hardware clock (PHC), generates configuration files for ptp4l and chronyd or ntpd, and starts the ptp4l, phc2sys, and chronyd or ntpd processes as needed. It will remove the generated configuration files on exit. It writes configuration files for chronyd, ntpd, and ptp4l to /var/run/timemaster/.
19.10.1. Starting timemaster as a Service
To start timemaster as a service, issue the following command as root:
~]# systemctl start timemaster
This will read the options in /etc/timemaster.conf. For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd.
19.10.2. Understanding the timemaster Configuration File
Red Hat Enterprise Linux includes a default /etc/timemaster.conf file with a number of sections containing default options. The section headings are enclosed in brackets.
~]$ less /etc/timemaster.conf
# Configuration file for timemaster
#[ntp_server ntp-server.local]
#minpoll 4
#maxpoll 4
#[ptp_domain 0]
#interfaces eth0
[timemaster]
ntp_program chronyd
[chrony.conf]
include /etc/chrony.conf
[ntp.conf]
includefile /etc/ntp.conf
[ptp4l.conf]
[chronyd]
path /usr/sbin/chronyd
options -u chrony
[ntpd]
path /usr/sbin/ntpd
options -u ntp:ntp -g
[phc2sys]
path /usr/sbin/phc2sys
[ptp4l]
path /usr/sbin/ptp4l
[ntp_server address]
This is an example of an NTP server section; “ntp-server.local” is an example of a host name for an NTP server on the local LAN. Add more sections as required, using a host name or IP address as part of the section name. Note that the short polling values in that example section are not suitable for a public server; see Chapter 18, Configuring NTP Using ntpd for an explanation of suitable minpoll and maxpoll values.
[ptp_domain number]
A “PTP domain” is a group of one or more PTP clocks that synchronize to each other. They may or may not be synchronized to clocks in another domain. Clocks that are configured with the same domain number make up the domain. This includes a PTP grandmaster clock. The domain number in each “PTP domain” section needs to correspond to one of the PTP domains configured on the network.
An instance of ptp4l is started for every interface which has its own PTP clock, and hardware time stamping is enabled automatically. Interfaces that support hardware time stamping have a PTP clock (PHC) attached; however, it is possible for a group of interfaces on a NIC to share a PHC. A separate ptp4l instance will be started for each group of interfaces sharing the same PHC and for each interface that supports only software time stamping. All ptp4l instances are configured to run as a slave. If an interface with hardware time stamping is specified in more than one PTP domain, then only the first ptp4l instance created will have hardware time stamping enabled.
[timemaster]
The default timemaster configuration includes the system ntpd and chrony configuration (/etc/ntp.conf or /etc/chrony.conf) in order to include the configuration of access restrictions and authentication keys. That means any NTP servers specified there will be used with timemaster too.
The section headings are as follows:
- [ntp_server ntp-server.local] — Specify polling intervals for this server. Create additional sections as required. Include the host name or IP address in the section heading.
- [ptp_domain 0] — Specify interfaces that have PTP clocks configured for this domain. Create additional sections with the appropriate domain number, as required.
- [timemaster] — Specify the NTP daemon to be used. Possible values are chronyd and ntpd.
- [chrony.conf] — Specify any additional settings to be copied to the configuration file generated for chronyd.
- [ntp.conf] — Specify any additional settings to be copied to the configuration file generated for ntpd.
- [ptp4l.conf] — Specify options to be copied to the configuration file generated for ptp4l.
- [chronyd] — Specify any additional settings to be passed on the command line to chronyd.
- [ntpd] — Specify any additional settings to be passed on the command line to ntpd.
- [phc2sys] — Specify any additional settings to be passed on the command line to phc2sys.
- [ptp4l] — Specify any additional settings to be passed on the command line to all instances of ptp4l.
For more information on the options and sections above, see the timemaster(8) manual page.
19.10.3. Configuring timemaster Options
Procedure 19.1. Editing the timemaster Configuration File
- To change the default configuration, open the /etc/timemaster.conf file for editing as root:
~]# vi /etc/timemaster.conf
- For each NTP server you want to control using timemaster, create [ntp_server address] sections. Note that the short polling values in the example section are not suitable for a public server; see Chapter 18, Configuring NTP Using ntpd for an explanation of suitable minpoll and maxpoll values.
- To add interfaces that should be used in a domain, edit the #[ptp_domain 0] section and add the interfaces. Create additional domains as required. For example:
[ptp_domain 0]
interfaces eth0
[ptp_domain 1]
interfaces eth1
- If required to use ntpd as the NTP daemon on this system, change the default entry in the [timemaster] section from chronyd to ntpd. See Chapter 17, Configuring NTP Using the chrony Suite for information on the differences between ntpd and chronyd.
- If using chronyd as the NTP server on this system, add any additional options below the default include /etc/chrony.conf entry in the [chrony.conf] section. Edit the default include entry if the path to /etc/chrony.conf is known to have changed.
- If using ntpd as the NTP server on this system, add any additional options below the default include /etc/ntp.conf entry in the [ntp.conf] section. Edit the default include entry if the path to /etc/ntp.conf is known to have changed.
- In the [ptp4l.conf] section, add any options to be copied to the configuration file generated for ptp4l. This chapter documents common options and more information is available in the ptp4l(8) manual page.
- In the [chronyd] section, add any command line options to be passed to chronyd when called by timemaster. See Chapter 17, Configuring NTP Using the chrony Suite for information on using chronyd.
- In the [ntpd] section, add any command line options to be passed to ntpd when called by timemaster. See Chapter 18, Configuring NTP Using ntpd for information on using ntpd.
- In the [phc2sys] section, add any command line options to be passed to phc2sys when called by timemaster. This chapter documents common options and more information is available in the phc2sys(8) manual page.
- In the [ptp4l] section, add any command line options to be passed to all instances of ptp4l when called by timemaster. This chapter documents common options and more information is available in the ptp4l(8) manual page.
- Save the configuration file and restart timemaster by issuing the following command as root:
~]# systemctl restart timemaster
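To verify that timemaster started the expected ptp4l, phc2sys, and chronyd or ntpd processes, check the status of the service:
~]# systemctl status timemaster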
19.11. Improving Accuracy
Disabling the kernel tickless mode can improve the stability of the system clock and thus the PTP synchronization accuracy (at the cost of increased power consumption). The kernel tickless mode can be disabled by adding nohz=off to the kernel boot option parameters. However, recent improvements applied to kernel-3.10.0-197.el7 have greatly improved the stability of the system clock, and the difference in stability of the clock with and without nohz=off should be much smaller now for most users.
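To verify whether the currently running kernel was booted with this parameter, inspect the kernel command line:
~]$ cat /proc/cmdline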
The clock stability can be further improved by configuring ptp4l to use the linreg (linear regression) clock servo instead of the default PI servo. To do so, add the following line to the /etc/ptp4l.conf file:
clock_servo linreg
After making changes to /etc/ptp4l.conf, restart the ptp4l service from the command line by issuing the following command as root:
~]# systemctl restart ptp4l
To use the linreg servo with phc2sys, add the following option to the OPTIONS line in the /etc/sysconfig/phc2sys file:
-E linreg
After making changes to /etc/sysconfig/phc2sys, restart the phc2sys service from the command line by issuing the following command as root:
~]# systemctl restart phc2sys
19.12. Additional Resources
The following sources of information provide additional resources regarding PTP and the ptp4l tools.
19.12.1. Installed Documentation
- ptp4l(8) man page — Describes ptp4l options including the format of the configuration file.
- pmc(8) man page — Describes the PTP management client and its command options.
- phc2sys(8) man page — Describes a tool for synchronizing the system clock to a PTP hardware clock (PHC).
- timemaster(8) man page — Describes a program that uses ptp4l and phc2sys to synchronize the system clock using chronyd or ntpd.
19.12.2. Useful Websites
- http://www.nist.gov/el/isd/ieee/ieee1588.cfm — The IEEE 1588 Standard.
Part VI. Monitoring and Automation
Chapter 20. System Monitoring Tools
20.1. Viewing System Processes
20.1.1. Using the ps Command
The ps command allows you to display information about running processes. It produces a static list, that is, a snapshot of what is running when you execute the command. If you want a constantly updated list of running processes, use the top command or the System Monitor application instead.
To list all processes that are currently running on the system, including processes owned by other users, type the following at a shell prompt:
ps ax
For each listed process, the ps ax command displays the process ID (PID), the terminal that is associated with it (TTY), the current status (STAT), the cumulated CPU time (TIME), and the name of the executable file (COMMAND). For example:
~]$ ps ax
PID TTY STAT TIME COMMAND
1 ? Ss 0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 23
2 ? S 0:00 [kthreadd]
3 ? S 0:00 [ksoftirqd/0]
5 ? S> 0:00 [kworker/0:0H]
[output truncated]
To display the owner of each process alongside the other information, use the following command:
ps aux
In addition to the information provided by the ps ax command, ps aux displays the effective user name of the process owner (USER), the percentage of the CPU (%CPU) and memory (%MEM) usage, the virtual memory size in kilobytes (VSZ), the non-swapped physical memory size in kilobytes (RSS), and the time or date the process was started. For example:
~]$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.3 0.3 134776 6840 ? Ss 09:28 0:01 /usr/lib/systemd/systemd --switched-root --system --d
root 2 0.0 0.0 0 0 ? S 09:28 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 09:28 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S> 09:28 0:00 [kworker/0:0H]
[output truncated]
You can also use the ps command in combination with grep to see if a particular process is running. For example, to determine if Emacs is running, type:
~]$ ps ax | grep emacs
12056 pts/3 S+ 0:00 emacs
12060 pts/2 S+ 0:00 grep --color=auto emacs
20.1.2. Using the top Command
The top command displays a real-time list of processes that are running on the system. It also displays additional information about the system uptime, current CPU and memory usage, or total number of running processes, and allows you to perform actions such as sorting the list or killing a process.
To run the top command, type the following at a shell prompt:
top
For each listed process, the top command displays the process ID (PID), the effective user name of the process owner (USER), the priority (PR), the nice value (NI), the amount of virtual memory the process uses (VIRT), the amount of non-swapped physical memory the process uses (RES), the amount of shared memory the process uses (SHR), the process status field (S), the percentage of the CPU (%CPU) and memory (%MEM) usage, the cumulated CPU time (TIME+), and the name of the executable file (COMMAND). For example:
~]$ top
top - 16:42:12 up 13 min, 2 users, load average: 0.67, 0.31, 0.19
Tasks: 165 total, 2 running, 163 sleeping, 0 stopped, 0 zombie
%Cpu(s): 37.5 us, 3.0 sy, 0.0 ni, 59.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1016800 total, 77368 free, 728936 used, 210496 buff/cache
KiB Swap: 839676 total, 776796 free, 62880 used. 122628 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3168 sjw 20 0 1454628 143240 15016 S 20.3 14.1 0:22.53 gnome-shell
4006 sjw 20 0 1367832 298876 27856 S 13.0 29.4 0:15.58 firefox
1683 root 20 0 242204 50464 4268 S 6.0 5.0 0:07.76 Xorg
4125 sjw 20 0 555148 19820 12644 S 1.3 1.9 0:00.48 gnome-terminal-
10 root 20 0 0 0 0 S 0.3 0.0 0:00.39 rcu_sched
3091 sjw 20 0 37000 1468 904 S 0.3 0.1 0:00.31 dbus-daemon
3096 sjw 20 0 129688 2164 1492 S 0.3 0.2 0:00.14 at-spi2-registr
3925 root 20 0 0 0 0 S 0.3 0.0 0:00.05 kworker/0:0
1 root 20 0 126568 3884 1052 S 0.0 0.4 0:01.61 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
6 root 20 0 0 0 0 S 0.0 0.0 0:00.07 kworker/u2:0
[output truncated]
Table 20.1, "Interactive top commands" contains useful interactive commands that you can use with top. For more information, see the top(1) manual page.
Table 20.1. Interactive top commands
| Command | Description |
|---|---|
| Enter, Space | Immediately refreshes the display. |
| h | Displays a help screen for interactive commands. |
| h, ? | Displays a help screen for windows and field groups. |
| k | Kills a process. You are prompted for the process ID and the signal to send to it. |
| n | Changes the number of displayed processes. You are prompted to enter the number. |
| u | Sorts the list by user. |
| M | Sorts the list by memory usage. |
| P | Sorts the list by CPU usage. |
| q | Terminates the utility and returns to the shell prompt. |
20.1.3. Using the System Monitor Tool
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter. The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.

Figure 20.1. System Monitor — Processes
Using the Processes tab, you can:
- view only active processes,
- view all processes,
- view your processes,
- view process dependencies,
- refresh the list of processes,
- end a process by selecting it from the list and then clicking the End Process button.
20.2. Viewing Memory Usage
20.2.1. Using the free Command
The free command allows you to display the amount of free and used memory on the system. To do so, type the following at a shell prompt:
free
The free command provides information about both the physical memory (Mem) and swap space (Swap). It displays the total amount of memory (total), as well as the amount of memory that is in use (used), free (free), shared (shared), sum of buffers and cached (buff/cache), and available (available). For example:
~]$ free
total used free shared buff/cache available
Mem: 1016800 727300 84684 3500 204816 124068
Swap: 839676 66920 772756
By default, free displays the values in kilobytes. To display the values in megabytes, supply the -m command line option:
free -m
~]$ free -m
total used free shared buff/cache available
Mem: 992 711 81 3 200 120
Swap: 819 65 754
20.2.2. Using the System Monitor Tool
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter. The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.

Figure 20.2. System Monitor — Resources
20.3. Viewing CPU Usage
20.3.1. Using the System Monitor Tool
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter. The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.
20.4. Viewing Block Devices and File Systems
20.4.1. Using the lsblk Command
The lsblk command allows you to display a list of available block devices. It provides more information and better control over output formatting than the blkid command. It reads information from udev, therefore it is usable by non-root users. To display a list of block devices, type the following at a shell prompt:
lsblk
For each listed block device, the lsblk command displays the device name (NAME), major and minor device number (MAJ:MIN), if the device is removable (RM), its size (SIZE), if the device is read-only (RO), what type it is (TYPE), and where the device is mounted (MOUNTPOINT). For example:
~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
|-vda1 252:1 0 500M 0 part /boot
`-vda2 252:2 0 19.5G 0 part
|-vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm /
`-vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]
By default, lsblk lists block devices in a tree-like format. To display the information as an ordinary list, add the -l command line option:
lsblk -l
~]$ lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
vda1 252:1 0 500M 0 part /boot
vda2 252:2 0 19.5G 0 part
vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm /
vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]
20.4.2. Using the blkid Command
The blkid command allows you to display low-level information about available block devices. It requires root privileges, therefore non-root users should use the lsblk command instead. To do so, type the following at a shell prompt as root:
blkid
For each listed block device, the blkid command displays available attributes such as its universally unique identifier (UUID), file system type (TYPE), or volume label (LABEL). For example:
~]# blkid
/dev/vda1: UUID="7fa9c421-0054-4555-b0ca-b470a97a3d84" TYPE="ext4"
/dev/vda2: UUID="7IvYzk-TnnK-oPjf-ipdD-cofz-DXaJ-gPdgBW" TYPE="LVM2_member"
/dev/mapper/vg_kvm-lv_root: UUID="a07b967c-71a0-4925-ab02-aebcad2ae824" TYPE="ext4"
/dev/mapper/vg_kvm-lv_swap: UUID="d7ef54ca-9c41-4de4-ac1b-4193b0c1ddb6" TYPE="swap"
By default, the blkid command lists all available block devices. To display information about a particular device only, specify the device name on the command line:
blkid device_name
For example, to display information about /dev/vda1, type as root:
~]# blkid /dev/vda1
/dev/vda1: UUID="7fa9c421-0054-4555-b0ca-b470a97a3d84" TYPE="ext4"
You can also use the -p and -o udev command line options to obtain more detailed information. Note that root privileges are required to run this command:
blkid -po udev device_name
~]# blkid -po udev /dev/vda1
ID_FS_UUID=7fa9c421-0054-4555-b0ca-b470a97a3d84
ID_FS_UUID_ENC=7fa9c421-0054-4555-b0ca-b470a97a3d84
ID_FS_VERSION=1.0
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem
20.4.3. Using the findmnt Command
The findmnt command allows you to display a list of currently mounted file systems. To do so, type the following at a shell prompt:
findmnt
For each listed file system, the findmnt command displays the target mount point (TARGET), source device (SOURCE), file system type (FSTYPE), and relevant mount options (OPTIONS). For example:
~]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota
├─/proc proc proc rw,nosuid,nodev,noexec,relatime
│ ├─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
│ └─/proc/fs/nfsd sunrpc nfsd rw,relatime
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755
[output truncated]
By default, findmnt lists file systems in a tree-like format. To display the information as an ordinary list, add the -l command line option:
findmnt -l
~]$ findmnt -l
TARGET SOURCE FSTYPE OPTIONS
/proc proc proc rw,nosuid,nodev,noexec,relatime
/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel
/dev devtmpfs devtmpfs rw,nosuid,seclabel,size=933372k,nr_inodes=233343,mode=755
/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime
/dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel
/dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
/run tmpfs tmpfs rw,nosuid,nodev,seclabel,mode=755
/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755
[output truncated]
You can also choose to list only file systems of a particular type. To do so, add the -t command line option followed by a file system type:
findmnt -t type
For example, to list all xfs file systems, type:
~]$ findmnt -t xfs
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota
└─/boot /dev/vda1 xfs rw,relatime,seclabel,attr2,inode64,noquota
20.4.4. Using the df Command
The df command allows you to display a detailed report on the system's disk space usage. To do so, type the following at a shell prompt:
df
For each listed file system, the df command displays its name (Filesystem), size (1K-blocks or Size), how much space is used (Used), how much space is still available (Available), the percentage of space usage (Use%), and where the file system is mounted (Mounted on). For example:
~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_kvm-lv_root 18618236 4357360 13315112 25% /
tmpfs 380376 288 380088 1% /dev/shm
/dev/vda1 495844 77029 393215 17% /boot
By default, the df command shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes df to display the values in a human-readable format:
df -h
~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_kvm-lv_root 18G 4.2G 13G 25% /
tmpfs 372M 288K 372M 1% /dev/shm
/dev/vda1 485M 76M 384M 17% /boot
20.4.5. Using the du Command
The du command allows you to display the amount of space that is being used by files in a directory. To display the disk usage for each of the subdirectories in the current working directory, run the command with no additional command line options:
du
~]$ du
14972 ./Downloads
4 ./.mozilla/extensions
4 ./.mozilla/plugins
12 ./.mozilla
15004 .
By default, the du command displays the disk usage in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes the utility to display the values in a human-readable format:
du -h
~]$ du -h
15M ./Downloads
4.0K ./.mozilla/extensions
4.0K ./.mozilla/plugins
12K ./.mozilla
15M .
At the end of the list, the du command always shows the grand total for the current directory. To display only this information, supply the -s command line option:
du -sh
~]$ du -sh
15M .
20.4.6. Using the System Monitor Tool
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter. The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar.

Figure 20.3. System Monitor — File Systems
20.5. Viewing Hardware Information
20.5.1. Using the lspci Command
The lspci command allows you to display information about PCI buses and devices that are attached to them. To list all PCI devices that are in the system, type the following at a shell prompt:
lspci
~]$ lspci
00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller
00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express Bridge
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
[output truncated]
You can also use the -v command line option to display more verbose output, or -vv for very verbose output:
lspci -v|-vv
~]$ lspci -v
[output truncated]
01:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX 370] (rev a1) (prog-if 00 [VGA controller])
Subsystem: nVidia Corporation Device 0491
Physical Slot: 2
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at f2000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, non-prefetchable) [size=32M]
I/O ports at 1100 [size=128]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: nouveau
Kernel modules: nouveau, nvidiafb
[output truncated]
20.5.2. Using the lsusb Command
The lsusb command allows you to display information about USB buses and devices that are attached to them. To list all USB devices that are in the system, type the following at a shell prompt:
lsusb
~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
[output truncated]
Bus 001 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse
Bus 008 Device 003: ID 04b3:3025 IBM Corp.
You can also use the -v command line option to display more verbose output:
lsusb -v
~]$ lsusb -v
[output truncated]
Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x03f0 Hewlett-Packard
idProduct 0x2c24 Logitech M-UAL-96 Mouse
bcdDevice 31.00
iManufacturer 1
iProduct 2
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
[output truncated]
20.5.3. Using the lscpu Command
The lscpu command allows you to list information about CPUs that are present in the system, including the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the following at a shell prompt:
lscpu
~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Stepping: 7
CPU MHz: 1998.000
BogoMIPS: 4999.98
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
NUMA node0 CPU(s): 0-3
20.6. Checking for Hardware Errors
The rasdaemon service catches and handles all reliability, availability, and serviceability (RAS) error events that come from the kernel tracing mechanism, and logs them. The functions previously provided by edac-utils are now replaced by rasdaemon.
To install rasdaemon, enter the following command as root:
~]# yum install rasdaemon
Start the service as follows:
~]# systemctl start rasdaemon
To make the service run at system start, enter the following command:
~]# systemctl enable rasdaemon
The ras-mc-ctl utility provides a means to work with EDAC drivers. Enter the following command to see a list of command options:
~]$ ras-mc-ctl --help
Usage: ras-mc-ctl [OPTIONS...]
--quiet Quiet operation.
--mainboard Print mainboard vendor and model for this hardware.
--status Print status of EDAC drivers.
[output truncated]
To view a summary of the logged errors, enter the following command as root:
~]# ras-mc-ctl --summary
Memory controller events summary:
Corrected on DIMM Label(s): 'CPU_SrcID#0_Ha#0_Chan#0_DIMM#0' location: 0:0:0:-1 errors: 1
No PCIe AER errors.
No Extlog errors.
MCE records summary:
1 MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error errors
2 No Error errors
To view a full list of the logged errors, enter the following command as root:
~]# ras-mc-ctl --errors
Memory controller events:
1 3172-02-17 00:47:01 -0500 1 Corrected error(s): memory read error at CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 location: 0:0:0:-1, addr 65928, grain 7, syndrome 0 area:DRAM err_code:0001:0090 socket:0 ha:0 channel_mask:1 rank:0
No PCIe AER errors.
No Extlog errors.
MCE events:
1 3171-11-09 06:20:21 -0500 error: MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error, mcg mcgstatus=0, mci Corrected_error, n_errors=1, mcgcap=0x01000c16, status=0x8c00004000010090, addr=0x1018893000, misc=0x15020a086, walltime=0x57e96780, cpuid=0x00050663, bank=0x00000007
2 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x0000abcd, walltime=0x57e967ea, cpuid=0x00050663, bank=0x00000001
3 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x00001234, walltime=0x57e967ea, cpu=0x00000001, cpuid=0x00050663, apicid=0x00000002, bank=0x00000002
For more information, see the ras-mc-ctl(8) manual page.
20.7. Monitoring Performance with Net-SNMP
Red Hat Enterprise Linux 7 includes the Net-SNMP software suite, which provides a flexible and extensible SNMP agent together with client utilities. The agent can be used to provide performance data from a large number of systems to a variety of tools that support polling over the SNMP protocol.
20.7.1. Installing Net-SNMP
Table 20.2. Available Net-SNMP packages
| Package | Provides |
|---|---|
| net-snmp | The SNMP Agent Daemon and documentation. This package is required for exporting performance data. |
| net-snmp-libs | The netsnmp library and the bundled management information bases (MIBs). This package is required for exporting performance data. |
| net-snmp-utils | SNMP clients such as snmpget and snmpwalk. This package is required in order to query a system's performance data over SNMP. |
| net-snmp-perl | The mib2c utility and the NetSNMP Perl module. Note that this package is provided by the Optional channel. See Section 9.5.7, “Adding the Optional and Supplementary Repositories” for more information on Red Hat additional channels. |
| net-snmp-python | An SNMP client library for Python. Note that this package is provided by the Optional channel. See Section 9.5.7, “Adding the Optional and Supplementary Repositories” for more information on Red Hat additional channels. |
To install any of these packages, use the yum command in the following form:
yum install package…
For example, to install the SNMP Agent Daemon and SNMP clients used in the rest of this section, type the following at a shell prompt as root:
~]# yum install net-snmp net-snmp-libs net-snmp-utils
20.7.2. Running the Net-SNMP Daemon
The net-snmp package contains snmpd, the SNMP Agent Daemon. This section provides information on how to start, stop, and restart the snmpd service. For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd.
20.7.2.1. Starting the Service
To run the snmpd service in the current session, type the following at a shell prompt as root:
systemctl start snmpd.service
To configure the service to be automatically started at boot time, use the following command:
systemctl enable snmpd.service
20.7.2.2. Stopping the Service
To stop the running snmpd service, type the following at a shell prompt as root:
systemctl stop snmpd.service
To disable starting the service at boot time, use the following command:
systemctl disable snmpd.service
20.7.2.3. Restarting the Service
To restart the running snmpd service, type the following at a shell prompt as root:
systemctl restart snmpd.service
This command stops the service and starts it again in quick succession. To only reload the configuration without stopping the service, run the following command instead:
systemctl reload snmpd.service
This command causes the running snmpd service to reload its configuration.
20.7.3. Configuring Net-SNMP
The Net-SNMP Agent Daemon is configured using the /etc/snmp/snmpd.conf configuration file. The default snmpd.conf file included with Red Hat Enterprise Linux 7 is heavily commented and serves as a good starting point for agent configuration.
For full documentation of the available configuration directives, see the snmpd.conf(5) manual page. Additionally, there is a utility in the net-snmp package named snmpconf which can be used to interactively generate a valid agent configuration.
Note that the net-snmp-utils package must be installed in order to use the snmpwalk utility described in this section.
Note
For any changes to the configuration file to take effect, force the running snmpd service to re-read the configuration by running the following command as root:
systemctl reload snmpd.service20.7.3.1. Setting System Information
Net-SNMP provides some rudimentary system information via the system tree. For example, the following snmpwalk command shows the system tree with a default agent configuration.
~]# snmpwalk -v2c -c public localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (464) 0:00:04.64
SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
[output truncated]
By default, the sysName object is set to the host name. The sysLocation and sysContact objects can be configured in the /etc/snmp/snmpd.conf file by changing the value of the syslocation and syscontact directives, for example:
syslocation Datacenter, Row 4, Rack 3
syscontact UNIX Admin <admin@example.com>
After making changes to the configuration file, reload the configuration and test it by running the snmpwalk command again:
~]# systemctl reload snmpd.service
~]# snmpwalk -v2c -c public localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (35424) 0:05:54.24
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3
[output truncated]
20.7.3.2. Configuring Authentication
Configuring SNMP Version 2c Community
To configure an SNMP version 2c community, use the rocommunity or rwcommunity directive in the /etc/snmp/snmpd.conf configuration file. The format of the directives is as follows:
directive community [source [OID]]
where community is the community string to use, source is an IP address or subnet, and OID is the SNMP tree to provide access to. For example, the following directive provides read-only access to the system tree to a client using the community string “redhat” on the local machine:
rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1
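The same format can grant read-write access or restrict access to a subnet. For example, the following illustrative variant (the subnet is chosen arbitrarily) grants read-write access to the system tree from one subnet:
rwcommunity redhat 192.168.1.0/24 .1.3.6.1.2.1.1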
To test the configuration, use the snmpwalk command with the -v and -c options:
~]# snmpwalk -v2c -c redhat localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (101376) 0:16:53.76
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3
[output truncated]Configuring SNMP Version 3 User
To configure an SNMP version 3 user, use the net-snmp-create-v3-user command. This command adds entries to the /var/lib/net-snmp/snmpd.conf and /etc/snmp/snmpd.conf files which create the user and grant access to the user. Note that the net-snmp-create-v3-user command may only be run when the agent is not running. The following example creates the “admin” user with the password “redhatsnmp”:
~]# systemctl stop snmpd.service
~]# net-snmp-create-v3-user
Enter a SNMPv3 user name to create:
admin
Enter authentication pass-phrase:
redhatsnmp
Enter encryption pass-phrase:
[press return to reuse the authentication pass-phrase]

adding the following line to /var/lib/net-snmp/snmpd.conf:
   createUser admin MD5 "redhatsnmp" DES
adding the following line to /etc/snmp/snmpd.conf:
   rwuser admin
~]# systemctl start snmpd.service
The rwuser directive (or rouser when the -ro command line option is supplied) that net-snmp-create-v3-user adds to /etc/snmp/snmpd.conf has a similar format to the rwcommunity and rocommunity directives:
directive user [noauth|auth|priv] [OID]
By default, the SNMP Agent Daemon allows only authenticated requests (the auth option). The noauth option allows you to permit unauthenticated requests, and the priv option enforces the use of encryption. The authpriv option specifies that requests must be authenticated and replies should be encrypted. For example, the following line grants the user “admin” read-write access to the entire tree:
rwuser admin authpriv .1
To test the configuration, create a .snmp/ directory in your user's home directory and a configuration file named snmp.conf in that directory (~/.snmp/snmp.conf) with the following lines:
defVersion 3
defSecurityLevel authPriv
defSecurityName admin
defPassphrase redhatsnmp
The snmpwalk command will now use these authentication settings when querying the agent:
~]$ snmpwalk -v3 localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64
[output truncated]
20.7.4. Retrieving Performance Data over SNMP
20.7.4.1. Hardware Configuration
The Host Resources MIB included with Net-SNMP presents information about the current hardware and software configuration of a host to a client utility. Table 20.3, “Available OIDs” summarizes the different OIDs available under that MIB.
Table 20.3. Available OIDs
| OID | Description |
|---|---|
| HOST-RESOURCES-MIB::hrSystem | Contains general system information such as uptime, number of users, and number of running processes. |
| HOST-RESOURCES-MIB::hrStorage | Contains data on memory and file system usage. |
| HOST-RESOURCES-MIB::hrDevices | Contains a listing of all processors, network devices, and file systems. |
| HOST-RESOURCES-MIB::hrSWRun | Contains a listing of all running processes. |
| HOST-RESOURCES-MIB::hrSWRunPerf | Contains memory and CPU statistics on the process table from HOST-RESOURCES-MIB::hrSWRun. |
| HOST-RESOURCES-MIB::hrSWInstalled | Contains a listing of the RPM database. |
There are also a number of SNMP tables available in the Host Resources MIB which allow a detailed look at the information available. The following example displays HOST-RESOURCES-MIB::hrFSTable:
~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrFSTable
SNMP table: HOST-RESOURCES-MIB::hrFSTable
Index MountPoint RemoteMountPoint Type
Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate
1 "/" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0
5 "/dev/shm" "" HOST-RESOURCES-TYPES::hrFSOther
readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0
6 "/boot" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0
For more information about HOST-RESOURCES-MIB, see the /usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt file.
20.7.4.2. CPU and Memory Information
Processor and memory information is available in the UCD SNMP MIB. The systemStats OID provides a number of counters around processor usage:
~]$ snmpwalk localhost UCD-SNMP-MIB::systemStats
UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1
UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats
UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s
UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s
UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736
UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629
UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0
UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434
UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770
UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302
UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442
UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557
UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128
UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0
UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0
In particular, the ssCpuRawUser, ssCpuRawSystem, ssCpuRawWait, and ssCpuRawIdle OIDs provide counters which are helpful when determining whether a system is spending most of its processor time in kernel space, user space, or I/O. ssRawSwapIn and ssRawSwapOut can be helpful when determining whether a system is suffering from memory exhaustion.
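Because these OIDs are cumulative counters, utilization figures are obtained by sampling them twice and comparing the differences. For example, assuming the usual rate of 100 clock ticks per second, if ssCpuRawIdle increases by 5400 over a 60-second interval on a single-CPU system, the CPU was idle for 5400 / (60 × 100) = 90% of that interval.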
Memory information is available under the UCD-SNMP-MIB::memory OID, which provides similar data to the free command:
~]$ snmpwalk localhost UCD-SNMP-MIB::memory
UCD-SNMP-MIB::memIndex.0 = INTEGER: 0
UCD-SNMP-MIB::memErrorName.0 = STRING: swap
UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB
UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB
UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB
UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB
UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB
UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0)
UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:
Load averages are also available in the UCD SNMP MIB. The SNMP table UCD-SNMP-MIB::laTable has a listing of the 1, 5, and 15 minute load averages:
~]$ snmptable localhost UCD-SNMP-MIB::laTable
SNMP table: UCD-SNMP-MIB::laTable
laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage
1 Load-1 0.00 12.00 0 0.000000 noError
2 Load-5 0.00 12.00 0 0.000000 noError
3 Load-15 0.00 12.00 0 0.000000 noError
20.7.4.3. File System and Disk Information
The Host Resources MIB provides information on file system size and usage. Each file system (and also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable table:
~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable
SNMP table: HOST-RESOURCES-MIB::hrStorageTable
Index Type Descr
AllocationUnits Size Used AllocationFailures
1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory
1024 Bytes 1021588 388064 ?
3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory
1024 Bytes 2045580 388064 ?
6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers
1024 Bytes 1021588 31048 ?
7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory
1024 Bytes 216604 216604 ?
10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space
1024 Bytes 1023992 0 ?
31 HOST-RESOURCES-TYPES::hrStorageFixedDisk /
4096 Bytes 2277614 250391 ?
35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm
4096 Bytes 127698 0 ?
36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot
1024 Bytes 198337 26694 ?HOST-RESOURCES-MIB::hrStorageSize and HOST-RESOURCES-MIB::hrStorageUsed can be used to calculate the remaining capacity of each mounted file system.
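For example, for the root file system above (Index 31), the remaining capacity is (2277614 - 250391) × 4096 bytes, which is approximately 8.3 GB.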
Disk throughput information is available in UCD-SNMP-MIB::systemStats (ssIORawSent.0 and ssIORawReceived.0) and in UCD-DISKIO-MIB::diskIOTable. The latter provides much more granular data. Under this table are OIDs for diskIONReadX and diskIONWrittenX, which provide counters for the number of bytes read from and written to the block device in question since the system boot:
~]$ snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable
SNMP table: UCD-DISKIO-MIB::diskIOTable
Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX
...
25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376
26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120
27 sda2 1486848 0 332 0 ? ? ? 1486848 0
28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256
20.7.4.4. Network Information
The Interfaces MIB provides information on network devices. IF-MIB::ifTable provides an SNMP table with an entry for each interface on the system, the configuration of the interface, and various packet counters for the interface. The following example shows the first few columns of ifTable on a system with two physical network interfaces:
~]$ snmptable -Cb localhost IF-MIB::ifTable
SNMP table: IF-MIB::ifTable
Index Descr Type Mtu Speed PhysAddress AdminStatus
1 lo softwareLoopback 16436 10000000 up
2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up
3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down
Network traffic information is available under the OIDs IF-MIB::ifOutOctets and IF-MIB::ifInOctets. The following SNMP queries will retrieve network traffic for each of the interfaces on this system:
~]$ snmpwalk localhost IF-MIB::ifDescr
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifDescr.3 = STRING: eth1
~]$ snmpwalk localhost IF-MIB::ifOutOctets
IF-MIB::ifOutOctets.1 = Counter32: 10060699
IF-MIB::ifOutOctets.2 = Counter32: 650
IF-MIB::ifOutOctets.3 = Counter32: 0
~]$ snmpwalk localhost IF-MIB::ifInOctets
IF-MIB::ifInOctets.1 = Counter32: 10060699
IF-MIB::ifInOctets.2 = Counter32: 78650
IF-MIB::ifInOctets.3 = Counter32: 0
20.7.5. Extending Net-SNMP
20.7.5.1. Extending Net-SNMP with Shell Scripts
The Net-SNMP Agent provides an extension MIB (NET-SNMP-EXTEND-MIB) that can be used to query arbitrary shell scripts. To specify the shell script to run, use the extend directive in the /etc/snmp/snmpd.conf file. Once defined, the Agent will provide the exit code and any output of the command over SNMP. The example below demonstrates this mechanism with a script which determines the number of httpd processes in the process table.
Note
The Net-SNMP Agent also provides a built-in mechanism for checking the process table via the proc directive. See the snmpd.conf(5) manual page for more information.
The following shell script determines, via its exit code, the number of httpd processes running on the system at a given point in time:
#!/bin/sh
NUMPIDS=`pgrep httpd | wc -l`
exit $NUMPIDS
To make this script available over SNMP, add an extend directive to the /etc/snmp/snmpd.conf file. The format of the extend directive is the following:
extend name prog args
where name is an identifying string for the extension, prog is the program to run, and args are the arguments to give the program. For instance, if the above shell script is copied to /usr/local/bin/check_apache.sh, the following directive will add the script to the SNMP tree:
extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh
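As with any change to /etc/snmp/snmpd.conf, force the running snmpd service to re-read its configuration as root:
~]# systemctl reload snmpd.service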
The script can then be queried over SNMP via NET-SNMP-EXTEND-MIB::nsExtendObjects:
~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING: /usr/local/bin/check_apache.sh
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendCacheTime."httpd_pids" = INTEGER: 5
NET-SNMP-EXTEND-MIB::nsExtendExecType."httpd_pids" = INTEGER: exec(1)
NET-SNMP-EXTEND-MIB::nsExtendRunType."httpd_pids" = INTEGER: run-on-read(1)
NET-SNMP-EXTEND-MIB::nsExtendStorage."httpd_pids" = INTEGER: permanent(4)
NET-SNMP-EXTEND-MIB::nsExtendStatus."httpd_pids" = INTEGER: active(1)
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutputFull."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutNumLines."httpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING:extend directive. For example, the following shell script can be used to determine the number of processes matching an arbitrary string, and will also output a text string giving the number of processes:
#!/bin/sh
PATTERN=$1
NUMPIDS=`pgrep $PATTERN | wc -l`
echo "There are $NUMPIDS $PATTERN processes."
exit $NUMPIDS
The following /etc/snmp/snmpd.conf directives will give both the number of httpd PIDs as well as the number of snmpd PIDs when the above script is copied to /usr/local/bin/check_proc.sh:
extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd
extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd
The following example shows a snmpwalk of the nsExtendObjects OID:
~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendCommand."snmpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING: /usr/local/bin/check_proc.sh httpd
NET-SNMP-EXTEND-MIB::nsExtendArgs."snmpd_pids" = STRING: /usr/local/bin/check_proc.sh snmpd
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendInput."snmpd_pids" = STRING:
...
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendResult."snmpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING: There are 8 httpd processes.
NET-SNMP-EXTEND-MIB::nsExtendOutLine."snmpd_pids".1 = STRING: There are 1 snmpd processes.Warning
httpd processes. This query could be used during a performance test to determine the impact of the number of processes on memory pressure:
~]$ snmpget localhost \
    'NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids"' \
    UCD-SNMP-MIB::memAvailReal.0
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB
20.7.5.2. Extending Net-SNMP with Perl
Executing shell scripts using the extend directive is a fairly limited method for exposing custom application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for exposing custom objects. The net-snmp-perl package in the Optional channel provides the NetSNMP::agent Perl module that is used to write embedded Perl plug-ins on Red Hat Enterprise Linux.
Note
The net-snmp-perl package is available in the Optional channel; enable this channel before installing the package.
The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent:
use NetSNMP::agent (':all');
my $agent = new NetSNMP::agent();
The agent object has a register method which is used to register a callback function with a particular OID. The register function takes a name, OID, and pointer to the callback function. The following example will register a callback function named hello_handler with the SNMP Agent which will handle requests under the OID .1.3.6.1.4.1.8072.9999.9999:
$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999",
\&hello_handler);
Note
The OID .1.3.6.1.4.1.8072.9999.9999 (NET-SNMP-MIB::netSnmpPlaypen) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United States).
The handler function will be called with four parameters, HANDLER, REGISTRATION_INFO, REQUEST_INFO, and REQUESTS. The REQUESTS parameter contains a list of requests in the current call and should be iterated over and populated with data. The request objects in the list have get and set methods which allow for manipulating the OID and value of the request. For example, the following call will set the value of a request object to the string “hello world”:
$request->setValue(ASN_OCTET_STR, "hello world");
The type of request can be determined by calling the getMode method on the request_info object passed as the third parameter to the handler function. If the request is a GET request, the caller will expect the handler to set the value of the request object, depending on the OID of the request. If the request is a GETNEXT request, the caller will also expect the handler to set the OID of the request to the next available OID in the tree. This is illustrated in the following code example:
my $request;
my $string_value = "hello world";
my $integer_value = "8675309";
for($request = $requests; $request; $request = $request->next()) {
my $oid = $request->getOID();
if ($request_info->getMode() == MODE_GET) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setValue(ASN_OCTET_STR, $string_value);
}
elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
$request->setValue(ASN_INTEGER, $integer_value);
}
} elsif ($request_info->getMode() == MODE_GETNEXT) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
$request->setValue(ASN_INTEGER, $integer_value);
}
elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
$request->setValue(ASN_OCTET_STR, $string_value);
}
}
}
When getMode returns MODE_GET, the handler analyzes the value of the getOID call on the request object. The value of the request is set to either string_value if the OID ends in “.1.0”, or set to integer_value if the OID ends in “.1.1”. If getMode returns MODE_GETNEXT, the handler determines whether the OID of the request is “.1.0”, and then sets the OID and value for “.1.1”. If the request is higher on the tree than “.1.0”, the OID and value for “.1.0” are set. This in effect returns the “next” value in the tree so that a program like snmpwalk can traverse the tree without prior knowledge of the structure.
The ASN_OCTET_STR and ASN_INTEGER value types used in this example are defined by NetSNMP::ASN. See the perldoc for NetSNMP::ASN for a full list of available constants. The entire code listing for this example Perl plug-in follows:
#!/usr/bin/perl
use NetSNMP::agent (':all');
use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);
sub hello_handler {
my ($handler, $registration_info, $request_info, $requests) = @_;
my $request;
my $string_value = "hello world";
my $integer_value = "8675309";
for($request = $requests; $request; $request = $request->next()) {
my $oid = $request->getOID();
if ($request_info->getMode() == MODE_GET) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setValue(ASN_OCTET_STR, $string_value);
}
elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
$request->setValue(ASN_INTEGER, $integer_value);
}
} elsif ($request_info->getMode() == MODE_GETNEXT) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
$request->setValue(ASN_INTEGER, $integer_value);
}
elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
$request->setValue(ASN_OCTET_STR, $string_value);
}
}
}
}
my $agent = new NetSNMP::agent();
$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999",
\&hello_handler);
To test the plug-in, copy the above program to /usr/share/snmp/hello_world.pl and add the following line to the /etc/snmp/snmpd.conf configuration file:
perl do "/usr/share/snmp/hello_world.pl"
The snmpd service must be restarted to load the new Perl plug-in. Once it has been restarted, an snmpwalk should return the new data:
~]$ snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen
NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309
An snmpget should also be used to exercise the other mode of the handler:
~]$ snmpget localhost \
    NET-SNMP-MIB::netSnmpPlaypen.1.0 \
    NET-SNMP-MIB::netSnmpPlaypen.1.1
NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309
20.8. Additional Resources
20.8.1. Installed Documentation
- lscpu(1) — The manual page for the lscpu command.
- lsusb(8) — The manual page for the lsusb command.
- findmnt(8) — The manual page for the findmnt command.
- blkid(8) — The manual page for the blkid command.
- lsblk(8) — The manual page for the lsblk command.
- ps(1) — The manual page for the ps command.
- top(1) — The manual page for the top command.
- free(1) — The manual page for the free command.
- df(1) — The manual page for the df command.
- du(1) — The manual page for the du command.
- lspci(8) — The manual page for the lspci command.
- snmpd(8) — The manual page for the snmpd service.
- snmpd.conf(5) — The manual page for the /etc/snmp/snmpd.conf file containing full documentation of available configuration directives.
Chapter 21. OpenLMI
21.1. About OpenLMI
OpenLMI, the Open Linux Management Infrastructure, provides a common infrastructure for the management of Linux systems. It consists of the following three components:
- System management agents — these agents are installed on a managed system and implement an object model that is presented to a standard object broker. The initial agents implemented in OpenLMI include storage configuration and network configuration, but later work will address additional elements of system management. The system management agents are commonly referred to as Common Information Model providers or CIM providers.
- A standard object broker — the object broker manages system management agents and provides an interface to them. The standard object broker is also known as a CIM Object Manager or CIMOM.
- Client applications and scripts — the client applications and scripts call the system management agents through the standard object broker.
21.1.1. Main Features
- OpenLMI provides a standard interface for configuration, management, and monitoring of your local and remote systems.
- It allows you to configure, manage, and monitor production servers running on both physical and virtual machines.
- It is distributed with a collection of CIM providers that allow you to configure, manage, and monitor storage devices and complex networks.
- It allows you to call system management functions from C, C++, Python, and Java programs, and includes LMIShell, which provides a command line interface.
- It is free software based on open industry standards.
21.1.2. Management Capabilities
Table 21.1. Available CIM Providers
| Package Name | Description |
|---|---|
| openlmi-account | A CIM provider for managing user accounts. |
| openlmi-logicalfile | A CIM provider for reading files and directories. |
| openlmi-networking | A CIM provider for network management. |
| openlmi-powermanagement | A CIM provider for power management. |
| openlmi-service | A CIM provider for managing system services. |
| openlmi-storage | A CIM provider for storage management. |
| openlmi-fan | A CIM provider for controlling computer fans. |
| openlmi-hardware | A CIM provider for retrieving hardware information. |
| openlmi-realmd | A CIM provider for configuring realmd. |
| openlmi-software[a] | A CIM provider for software management. |
[a]
In Red Hat Enterprise Linux 7, the OpenLMI Software provider is included as a Technology Preview. This provider is fully functional, but has a known performance scaling issue where listing large numbers of software packages may consume an excessive amount of memory and time. To work around this issue, adjust package searches to return as few packages as possible.
21.2. Installing OpenLMI
21.2.1. Installing OpenLMI on a Managed System
- Install the tog-pegasus package by typing the following at a shell prompt as root:
  yum install tog-pegasus
  This command installs the OpenPegasus CIMOM and all its dependencies to the system and creates a user account for the pegasus user.
- Install required CIM providers by running the following command as root:
  yum install openlmi-{storage,networking,service,account,powermanagement}
  This command installs the CIM providers for storage, network, service, account, and power management. For a complete list of CIM providers distributed with Red Hat Enterprise Linux 7, see Table 21.1, “Available CIM Providers”.
- Edit the /etc/Pegasus/access.conf configuration file to customize the list of users that are allowed to connect to the OpenPegasus CIMOM. By default, only the pegasus user is allowed to access the CIMOM both remotely and locally. To activate this user account, run the following command as root to set the user's password:
  passwd pegasus
- Start the OpenPegasus CIMOM by activating the tog-pegasus.service unit. To activate the tog-pegasus.service unit in the current session, type the following at a shell prompt as root:
  systemctl start tog-pegasus.service
  To configure the tog-pegasus.service unit to start automatically at boot time, type as root:
  systemctl enable tog-pegasus.service
- If you intend to interact with the managed system from a remote machine, enable TCP communication on port 5989 (wbem-https). To open this port in the current session, run the following command as root:
  firewall-cmd --add-port 5989/tcp
  To open port 5989 for TCP communication permanently, type as root:
  firewall-cmd --permanent --add-port 5989/tcp
21.2.2. Installing OpenLMI on a Client System
- Install the openlmi-tools package by typing the following at a shell prompt as root:
  yum install openlmi-tools
  This command installs LMIShell, an interactive client and interpreter for accessing CIM objects provided by OpenPegasus, and all its dependencies to the system.
- Configure SSL certificates for OpenPegasus as described in Section 21.3, “Configuring SSL Certificates for OpenPegasus”.
21.3. Configuring SSL Certificates for OpenPegasus
- Self-signed certificates require less infrastructure to use, but are more difficult to deploy to clients and manage securely.
- Authority-signed certificates are easier to deploy to clients once they are set up, but may require a greater initial investment.
Table 21.2. Certificate and Trust Store Locations
| Configuration Option | Location | Description |
|---|---|---|
sslCertificateFilePath | /etc/Pegasus/server.pem | Public certificate of the CIMOM. |
sslKeyFilePath | /etc/Pegasus/file.pem | Private key known only to the CIMOM. |
sslTrustStore | /etc/Pegasus/client.pem | The file or directory providing the list of trusted certificate authorities. |
Important
After making any changes to the certificates, restart the tog-pegasus service to make sure it recognizes the new certificates. To restart the service, type the following at a shell prompt as root:
systemctl restart tog-pegasus.service
21.3.1. Managing Self-signed Certificates
The first time the tog-pegasus service is started, a set of self-signed certificates is automatically generated using the system's primary host name as the certificate subject.
Important
- Copy the /etc/Pegasus/server.pem certificate from the managed system to the /etc/pki/ca-trust/source/anchors/ directory on the client system. To do so, type the following at a shell prompt as root:
  scp root@hostname:/etc/Pegasus/server.pem /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem
  Replace hostname with the host name of the managed system. Note that this command only works if the sshd service is running on the managed system and is configured to allow the root user to log in to the system over the SSH protocol. For more information on how to install and configure the sshd service and use the scp command to transfer files over the SSH protocol, see Chapter 12, OpenSSH.
- Verify the integrity of the certificate on the client system by comparing its checksum with the checksum of the original file. To calculate the checksum of the /etc/Pegasus/server.pem file on the managed system, run the following command as root on that system:
  sha1sum /etc/Pegasus/server.pem
  To calculate the checksum of the /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem file on the client system, run the following command on this system:
  sha1sum /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem
  Replace hostname with the host name of the managed system.
- Update the trust store on the client system by running the following command as root:
  update-ca-trust extract
21.3.2. Managing Authority-signed Certificates with Identity Management (Recommended)
- Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide.
- Copy the Identity Management signing certificate to the trusted store by typing the following command as root:
  cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt
- Update the trust store by running the following command as root:
  update-ca-trust extract
- Register Pegasus as a service in the Identity Management domain by running the following command as a privileged domain user:
  ipa service-add CIMOM/hostname
  Replace hostname with the host name of the managed system.
  This command can be run from any system in the Identity Management domain that has the ipa-admintools package installed. It creates a service entry in Identity Management that can be used to generate signed SSL certificates.
- Back up the PEM files located in the /etc/Pegasus/ directory (recommended).
- Retrieve the signed certificate by running the following command as root:
  ipa-getcert request -f /etc/Pegasus/server.pem -k /etc/Pegasus/file.pem -N CN=hostname -K CIMOM/hostname
  Replace hostname with the host name of the managed system.
  The certificate and key files are now kept in proper locations. The certmonger daemon installed on the managed system by the ipa-client-install script ensures that the certificate is kept up-to-date and renewed as necessary.
  For more information, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide.
On a client system joined to the Identity Management domain, perform the following steps:
- Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide.
- Copy the Identity Management signing certificate to the trusted store by typing the following command as root:
  cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt
- Update the trust store by running the following command as root:
  update-ca-trust extract
If the client system is not joined to the Identity Management domain, perform the following steps:
- Copy the /etc/ipa/ca.crt file securely from any other system joined to the same Identity Management domain to the trusted store /etc/pki/ca-trust/source/anchors/ directory as root.
- Update the trust store by running the following command as root:
  update-ca-trust extract
21.3.3. Managing Authority-signed Certificates Manually
- If a certificate authority is trusted by default, it is not necessary to perform any particular steps to accomplish this.
- If the certificate authority is not trusted by default, the certificate has to be imported on the client and managed systems.
- Copy the certificate to the trusted store by typing the following command as root:
  cp /path/to/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
- Update the trust store by running the following command as root:
  update-ca-trust extract
- Create a new SSL configuration file /etc/Pegasus/ssl.cnf to store information about the certificate. The contents of this file must be similar to the following example:
  [ req ]
  distinguished_name = req_distinguished_name
  prompt = no
  [ req_distinguished_name ]
  C = US
  ST = Massachusetts
  L = Westford
  O = Fedora
  OU = Fedora OpenLMI
  CN = hostname
  Replace hostname with the fully qualified domain name of the managed system.
- Generate a private key on the managed system by using the following command as root:
  openssl genrsa -out /etc/Pegasus/file.pem 1024
- Generate a certificate signing request (CSR) by running this command as root:
  openssl req -config /etc/Pegasus/ssl.cnf -new -key /etc/Pegasus/file.pem -out /etc/Pegasus/server.csr
- Send the /etc/Pegasus/server.csr file to the certificate authority for signing. The detailed procedure of submitting the file depends on the particular certificate authority.
- When the signed certificate is received from the certificate authority, save it as /etc/Pegasus/server.pem.
- Copy the certificate of the trusted authority to the Pegasus trust store to make sure that Pegasus is capable of trusting its own certificate by running as root:
  cp /path/to/ca.crt /etc/Pegasus/client.pem
Important
21.4. Using LMIShell
21.4.1. Starting, Using, and Exiting LMIShell
Starting LMIShell in Interactive Mode
To start LMIShell in interactive mode, run the lmishell command with no additional arguments:
lmishell
By default, LMIShell attempts to verify server-side SSL certificates. To start LMIShell without this verification, run the lmishell command with the --noverify or -n command line option:
lmishell --noverify
Using Tab Completion
When running in interactive mode, LMIShell supports tab completion of basic programming structures and CIM objects, including namespaces, classes, methods, and properties.
Browsing History
By default, LMIShell stores the command history in the ~/.lmishell_history file. This allows you to browse the command history and re-use already entered lines in interactive mode without the need to type them at the prompt again. To move backward in the command history, press the Up Arrow key or the Ctrl+p key combination. To move forward in the command history, press the Down Arrow key or the Ctrl+n key combination.
To search the command history, press Ctrl+r and start typing; the matching command is displayed at the prompt:
> (reverse-i-search)`connect':c = connect("server.example.com", "pegasus")
To clear the command history, use the clear_history() function as follows:
clear_history()
You can configure the number of lines stored in the command history by changing the value of the history_length option in the ~/.lmishellrc configuration file. In addition, you can change the location of the history file by changing the value of the history_file option in this configuration file. For example, to set the location of the history file to ~/.lmishell_history and configure LMIShell to store the maximum of 1000 lines in it, add the following lines to the ~/.lmishellrc file:
history_file = "~/.lmishell_history"
history_length = 1000
Handling Exceptions
By default, the LMIShell interpreter handles all exceptions internally and uses return values. To disable this behavior so that you can handle exceptions in your own code, use the use_exceptions() function as follows:
use_exceptions()
To disable exceptions again, use:
use_exceptions(False)
To permanently enable exceptions, set the value of the use_exceptions option in the ~/.lmishellrc configuration file to True:
use_exceptions = True
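For instance, with exceptions enabled, a failed connection attempt can be handled in a script. The following is a minimal sketch under the assumption that a connection error raises an ordinary Python exception; the host name and user are placeholders:
# Enable exceptions, then guard the connection attempt.
use_exceptions()
try:
    c = connect("server.example.com", "pegasus")
except Exception as e:
    print "Connection failed:", e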
Configuring a Temporary Cache
LMIShell connection objects use a temporary cache for storing CIM class names and CIM classes in order to reduce network communication. To clear this temporary cache, use the clear_cache() method as follows:
object_name.clear_cache()
To disable the temporary cache, use the use_cache() method as follows:
object_name.use_cache(False)
To enable it again, use:
object_name.use_cache(True)
To permanently disable the temporary cache for connection objects, set the value of the use_cache option in the ~/.lmishellrc configuration file to False:
use_cache = False
Exiting LMIShell
To quit LMIShell, use the quit() function as follows:
> quit()
~]$
Running an LMIShell Script
To run an LMIShell script, run the lmishell command as follows:
lmishell file_name
Replace file_name with the name of the script. To inspect an LMIShell script after it finishes execution, also specify the --interact or -i command line option:
lmishell --interact file_name
The preferred file extension of LMIShell scripts is .lmi.
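As an illustration, the following is a minimal sketch of an LMIShell script that could be saved as list_namespaces.lmi and run with lmishell list_namespaces.lmi; the host name and credentials are placeholders:
# Connect to a CIMOM and print its available namespaces.
c = connect("server.example.com", "pegasus", "password")
if c is not None:
    c.root.print_namespaces()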
21.4.2. Connecting to a CIMOM
Connecting to a Remote CIMOM
To connect to a remote CIMOM, use the connect() function as follows:
connect(host_name, user_name[, password])
Replace host_name with the host name of the managed system, user_name with the name of a user that is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the user's password. If the password is omitted, LMIShell prompts for it. The function returns an LMIConnection object.
Example 21.1. Connecting to a Remote CIMOM
To connect to the OpenPegasus CIMOM running on server.example.com as user pegasus, type the following at the interactive prompt:
> c = connect("server.example.com", "pegasus")
password:
>
Connecting to a Local CIMOM
LMIShell allows you to connect to a local CIMOM by using a Unix socket. For this type of connection, you must run the LMIShell interpreter as the root user and the /var/run/tog-pegasus/cimxml.socket socket must exist.
To connect to a local CIMOM, use the connect() function as follows:
connect(host_name)
Replace host_name with localhost, 127.0.0.1, or ::1. The function returns an LMIConnection object or None.
Example 21.2. Connecting to a Local CIMOM
To connect to a local CIMOM via localhost as the root user, type the following at the interactive prompt:
> c = connect("localhost")
>
Verifying a Connection to a CIMOM
The connect() function returns either an LMIConnection object, or None if the connection could not be established. In addition, when the connect() function fails to establish a connection, it prints an error message to standard error output.
To verify that a connection to a CIMOM has been established successfully, use the isinstance() function as follows:
isinstance(object_name, LMIConnection)
This function returns True if object_name is an LMIConnection object, or False otherwise.
Example 21.3. Verifying a Connection to a CIMOM
To verify that the c variable created in Example 21.1, “Connecting to a Remote CIMOM” contains an LMIConnection object, type the following at the interactive prompt:
> isinstance(c, LMIConnection)
True
>
Alternatively, you can verify that c is not None:
> c is None
False
>
21.4.3. Working with Namespaces
LMIShell namespaces provide a natural means of organizing available classes and serve as a hierarchic access point to other objects. The root namespace is the first entry point of a connection object.
Listing Available Namespaces
To list all available namespaces, use the print_namespaces() method as follows:
object_name.print_namespaces()
To assign a list of these namespaces to a variable, use the namespaces attribute:
object_name.namespaces
Example 21.4. Listing Available Namespaces
To inspect the root namespace object of the c connection object created in Example 21.1, “Connecting to a Remote CIMOM” and list all available namespaces, type the following at the interactive prompt:
> c.root.print_namespaces()
cimv2
interop
PG_InterOp
PG_Internal
>
To assign a list of these namespaces to a variable named root_namespaces, type:
> root_namespaces = c.root.namespaces
>
Accessing Namespace Objects
To access a particular namespace object, use the following syntax:
object_name.namespace_name
This returns an LMINamespace object.
Example 21.5. Accessing Namespace Objects
To access the cimv2 namespace of the c connection object created in Example 21.1, “Connecting to a Remote CIMOM” and assign it to a variable named ns, type the following at the interactive prompt:
> ns = c.root.cimv2
>
21.4.4. Working with Classes
Listing Available Classes
To list all available classes in a particular namespace, use the print_classes() method as follows:
namespace_object.print_classes()
To assign a list of these classes to a variable, use the classes() method:
namespace_object.classes()
To inspect the ns namespace object created in Example 21.5, “Accessing Namespace Objects” and list all available classes, type the following at the interactive prompt:
> ns.print_classes()
CIM_CollectionInSystem
CIM_ConcreteIdentity
CIM_ControlledBy
CIM_DeviceSAPImplementation
CIM_MemberOfStatusCollection
...
>
To assign a list of these classes to a variable named cimv2_classes, type:
> cimv2_classes = ns.classes()
>
Accessing Class Objects
To access a particular class object that is provided by the CIMOM, use the following syntax:
namespace_object.class_name
Example 21.7. Accessing Class Objects
To access the LMI_IPNetworkConnection class of the ns namespace object created in Example 21.5, “Accessing Namespace Objects” and assign it to a variable named cls, type the following at the interactive prompt:
> cls = ns.LMI_IPNetworkConnection
>
Examining Class Objects
All class objects store information about their class name and the namespace they belong to. To get the class name of a class object, use:
class_object.classname
To get the namespace the class object belongs to, use:
class_object.namespace
To display detailed class documentation, use the doc() method as follows:
class_object.doc()
To inspect the cls class object created in Example 21.7, “Accessing Class Objects” and display its name and corresponding namespace, type the following at the interactive prompt:
> cls.classname
'LMI_IPNetworkConnection'
> cls.namespace
'root/cimv2'
>
To access detailed class documentation, type:
> cls.doc()
Class: LMI_IPNetworkConnection
SuperClass: CIM_IPNetworkConnection
[qualifier] string UMLPackagePath: 'CIM::Network::IP'
[qualifier] string Version: '0.1.0'
...
Listing Available Methods
To list all available methods of a class object, use the print_methods() method as follows:
class_object.print_methods()
To assign a list of these methods to a variable, use the methods() method:
class_object.methods()
To inspect the cls class object created in Example 21.7, “Accessing Class Objects” and list all available methods, type the following at the interactive prompt:
> cls.print_methods()
RequestStateChange
>
To assign a list of these methods to a variable named service_methods, type:
> service_methods = cls.methods()
>
Listing Available Properties
To list all available properties of a class object, use the print_properties() method as follows:
class_object.print_properties()
To assign a list of these properties to a variable, use the properties() method:
class_object.properties()
To inspect the cls class object created in Example 21.7, “Accessing Class Objects” and list all available properties, type the following at the interactive prompt:
> cls.print_properties()
RequestedState
HealthState
StatusDescriptions
TransitioningToState
Generation
...
>
To assign a list of these properties to a variable named service_properties, type:
> service_properties = cls.properties()
>
Listing and Viewing ValueMap Properties
CIM classes may contain ValueMap properties in their MOF definition. ValueMap properties contain constant values, which may be useful when calling methods or checking returned values. To list all available ValueMap properties of a class object, use the print_valuemap_properties() method as follows:
class_object.print_valuemap_properties()
To assign a list of these ValueMap properties to a variable, use the valuemap_properties() method:
class_object.valuemap_properties()
To inspect the cls class object created in Example 21.7, “Accessing Class Objects” and list all available ValueMap properties, type the following at the interactive prompt:
> cls.print_valuemap_properties()
RequestedState
HealthState
TransitioningToState
DetailedStatus
OperationalStatus
...
>
To assign a list of these ValueMap properties to a variable named service_valuemap_properties, type:
> service_valuemap_properties = cls.valuemap_properties()
To access a particular ValueMap property, use the following syntax:
class_object.valuemap_propertyValues
To list all available constant values of a ValueMap property, use the print_values() method as follows:
class_object.valuemap_propertyValues.print_values()
To assign a list of these values to a variable, use the values() method:
class_object.valuemap_propertyValues.values()
Example 21.12. Accessing ValueMap Properties
Example 21.11, “Listing ValueMap Properties” mentions a ValueMap property named RequestedState. To inspect this property and list available constant values, type the following at the interactive prompt:
> cls.RequestedStateValues.print_values()
Reset
NoChange
NotApplicable
Quiesce
Unknown
...
>
To assign a list of these constant values to a variable named requested_state_values, type:
> requested_state_values = cls.RequestedStateValues.values()
To access a particular constant value, use the following syntax:
class_object.valuemap_propertyValues.constant_value_name
To access a constant value by its name given as a string, use the value() method as follows:
class_object.valuemap_propertyValues.value("constant_value_name")
To determine the name that corresponds to a particular constant value, use the value_name() method:
class_object.valuemap_propertyValues.value_name("constant_value")
Example 21.13. Accessing Constant Values
Example 21.12, “Accessing ValueMap Properties” shows that the RequestedState property provides a constant value named Reset. To access this named constant value, type the following at the interactive prompt:
> cls.RequestedStateValues.Reset
11
> cls.RequestedStateValues.value("Reset")
11
>
To determine the name that corresponds to the constant value 11, type:
> cls.RequestedStateValues.value_name(11)
u'Reset'
>
Fetching a CIMClass Object
Many class methods do not require access to a CIMClass object, which is why LMIShell only fetches this object from the CIMOM when a called method actually needs it. To fetch the CIMClass object manually, use the fetch() method as follows:
class_object.fetch()
Note that methods that require access to a CIMClass object fetch it automatically.
21.4.5. Working with Instances
Accessing Instances
To get a list of all available instances of a particular class object, use the instances() method as follows:
class_object.instances()
This method returns a list of LMIInstance objects.
To access the first instance of a class object, use the first_instance() method:
class_object.first_instance()
This method returns an LMIInstance object.
Both instances() and first_instance() support an optional argument that allows you to filter the results:
class_object.instances(criteria)
class_object.first_instance(criteria)
Replace criteria with a dictionary of key-value pairs, where keys represent instance properties and values represent required values of these properties.
Example 21.14. Accessing Instances
To find the first instance of the cls class object created in Example 21.7, “Accessing Class Objects” that has the ElementName property equal to eth0 and assign it to a variable named device, type the following at the interactive prompt:
> device = cls.first_instance({"ElementName": "eth0"})
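Similarly, a minimal sketch of fetching a list of all matching instances with the same criteria dictionary:
> devices = cls.instances({"ElementName": "eth0"})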
>
Examining Instances
All instance objects store information about their class name, the namespace they belong to, and their instance name. To get the class name of an instance object, use:
instance_object.classname
To get the namespace the instance object belongs to, use:
instance_object.namespace
To retrieve the instance name of an instance object, use:
instance_object.path
This attribute returns an LMIInstanceName object.
To display detailed documentation of an instance object, use the doc() method as follows:
instance_object.doc()
To inspect the device instance object created in Example 21.14, “Accessing Instances” and display its class name and the corresponding namespace, type the following at the interactive prompt:
> device.classname
u'LMI_IPNetworkConnection'
> device.namespace
'root/cimv2'
>
To access detailed documentation of the instance object, type:
> device.doc()
Instance of LMI_IPNetworkConnection
[property] uint16 RequestedState = '12'
[property] uint16 HealthState
[property array] string [] StatusDescriptions
...
Creating New Instances
Certain CIM providers allow you to create new instances of specific classes. To create a new instance of a class, use the create_instance() method as follows:
class_object.create_instance(properties)
Replace properties with a dictionary of key-value pairs, where keys represent instance properties and values represent property values. This method returns an LMIInstance object.
Example 21.16. Creating New Instances
The LMI_Group class represents system groups and the LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 21.5, “Accessing Namespace Objects”, create instances of these two classes for the system group named pegasus and the user named lmishell-user, and assign them to variables named group and user, type the following at the interactive prompt:
> group = ns.LMI_Group.first_instance({"Name" : "pegasus"})
> user = ns.LMI_Account.first_instance({"Name" : "lmishell-user"})
>
To get an instance of the LMI_Identity class for the lmishell-user user, type:
> identity = user.first_associator(ResultClass="LMI_Identity")
>
The LMI_MemberOfGroup class represents system group membership. To use the LMI_MemberOfGroup class to add the lmishell-user to the pegasus group, create a new instance of this class as follows:
> ns.LMI_MemberOfGroup.create_instance({
...     "Member" : identity.path,
...     "Collection" : group.path})
LMIInstance(classname="LMI_MemberOfGroup", ...)
>
Deleting Individual Instances
To delete a particular instance from the CIMOM, use the delete() method as follows:
instance_object.delete()
Example 21.17. Deleting Individual Instances
The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 21.5, “Accessing Namespace Objects”, create an instance of the LMI_Account class for the user named lmishell-user, and assign it to a variable named user, type the following at the interactive prompt:
> user = ns.LMI_Account.first_instance({"Name" : "lmishell-user"})
>
To delete this instance and remove the lmishell-user from the system, type:
> user.delete()
True
>
Listing and Accessing Available Properties
To list all available properties of an instance object, use the print_properties() method as follows:
instance_object.print_properties()
To assign a list of these properties to a variable, use the properties() method:
instance_object.properties()
To inspect the device instance object created in Example 21.14, “Accessing Instances” and list all available properties, type the following at the interactive prompt:
> device.print_properties()
RequestedState
HealthState
StatusDescriptions
TransitioningToState
Generation
...
>
To assign a list of these properties to a variable named device_properties, type:
> device_properties = device.properties()
To access the current value of a particular property, use the following syntax:
instance_object.property_name
To modify the value of a particular property, assign a new value to it:
instance_object.property_name = value
Note that to propagate the change to the CIMOM, you must also execute the push() method:
instance_object.push()
Example 21.19. Accessing Individual Properties
To inspect the device instance object created in Example 21.14, “Accessing Instances” and display the value of the property named SystemName, type the following at the interactive prompt:
> device.SystemName
u'server.example.com'
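Writing a property follows the same pattern. The following is a minimal sketch assuming a writable property; the LoginShell property of the user instance from Example 21.16 is used here purely for illustration:
> user.LoginShell = "/bin/sh"
> user.push()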
>
Listing and Using Available Methods
To list all available methods of an instance object, use the print_methods() method as follows:
instance_object.print_methods()
To assign a list of these methods to a variable, use the methods() method:
instance_object.methods()
To inspect the device instance object created in Example 21.14, “Accessing Instances” and list all available methods, type the following at the interactive prompt:
> device.print_methods()
RequestStateChange
>
To assign a list of these methods to a variable named network_device_methods, type:
> network_device_methods = device.methods()
To call a particular method of an instance object, use the following syntax:
instance_object.method_name(
   parameter=value,
   ...)
Important
LMIInstance objects do not automatically refresh their contents (properties, methods, qualifiers, and so on). To do so, use the refresh() method as described below.
Example 21.21. Using Methods
The PG_ComputerSystem class represents the system. To create an instance of this class by using the ns namespace object created in Example 21.5, “Accessing Namespace Objects” and assign it to a variable named sys, type the following at the interactive prompt:
> sys = ns.PG_ComputerSystem.first_instance()
>
The LMI_AccountManagementService class implements methods that allow you to manage users and groups in the system. To create an instance of this class and assign it to a variable named acc, type:
> acc = ns.LMI_AccountManagementService.first_instance()
>
To create a new user named lmishell-user in the system, use the CreateAccount() method as follows:
> acc.CreateAccount(Name="lmishell-user", System=sys)
LMIReturnValue(rval=0, rparams=NocaseDict({u'Account': LMIInstanceName(classname="LMI_Account"...), u'Identities': [LMIInstanceName(classname="LMI_Identity"...), LMIInstanceName(classname="LMI_Identity"...)]}), errorstr='')
LMIShell is able to process methods asynchronously by waiting on a returned job. Methods that return a job of one of the following classes can be called synchronously:
- LMI_StorageJob
- LMI_SoftwareInstallationJob
- LMI_NetworkJob
To wait for such a job to complete, call the method synchronously:
instance_object.Syncmethod_name(
   parameter=value,
   ...)
Synchronous methods have the Sync prefix in their name and return a three-item tuple consisting of the job's return value, the job's return value parameters, and the job's error string.
You can also force LMIShell to poll the job for its state instead of waiting for an indication by passing the PreferPolling parameter as follows:
instance_object.Syncmethod_name(
   PreferPolling=True,
   parameter=value,
   ...)
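For example, a polling-based synchronous call of the network configuration method used later in Example 21.46 might look as follows. This is a sketch; setting, port, and service are assumed to be the instances created in that example, and the return value unpacks into the three-item tuple described above:
(rval, rparams, errorstr) = service.SyncApplySettingToIPNetworkConnection(
    PreferPolling=True,
    SettingData=setting,
    IPNetworkConnection=port,
    Mode=32768)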
Listing and Viewing ValueMap Parameters
To list all available ValueMap parameters of a particular method, use the print_valuemap_parameters() method as follows:
instance_object.method_name.print_valuemap_parameters()
To assign a list of these ValueMap parameters to a variable, use the valuemap_parameters() method:
instance_object.method_name.valuemap_parameters()
To inspect the acc instance object created in Example 21.21, “Using Methods” and list all available ValueMap parameters of the CreateAccount() method, type the following at the interactive prompt:
> acc.CreateAccount.print_valuemap_parameters()
CreateAccount
>
To assign a list of these ValueMap parameters to a variable named create_account_parameters, type:
> create_account_parameters = acc.CreateAccount.valuemap_parameters()
To access a particular ValueMap parameter, use the following syntax:
instance_object.method_name.valuemap_parameterValues
To list all available constant values of a ValueMap parameter, use the print_values() method as follows:
instance_object.method_name.valuemap_parameterValues.print_values()
To assign a list of these values to a variable, use the values() method:
instance_object.method_name.valuemap_parameterValues.values()
Example 21.23. Accessing ValueMap Parameters
Example 21.22, “Listing ValueMap Parameters” mentions a ValueMap parameter named CreateAccount. To inspect this parameter and list available constant values, type the following at the interactive prompt:
> acc.CreateAccount.CreateAccountValues.print_values()
Operationunsupported
Failed
Unabletosetpasswordusercreated
Unabletocreatehomedirectoryusercreatedandpasswordset
Operationcompletedsuccessfully
>
To assign a list of these constant values to a variable named create_account_values, type:
> create_account_values = acc.CreateAccount.CreateAccountValues.values()
To access a particular constant value, use the following syntax:
instance_object.method_name.valuemap_parameterValues.constant_value_name
To access a constant value by its name given as a string, use the value() method as follows:
instance_object.method_name.valuemap_parameterValues.value("constant_value_name")
To determine the name that corresponds to a particular constant value, use the value_name() method:
instance_object.method_name.valuemap_parameterValues.value_name("constant_value")
Example 21.24. Accessing Constant Values
Example 21.23, “Accessing ValueMap Parameters” shows that the CreateAccount ValueMap parameter provides a constant value named Failed. To access this named constant value, type the following at the interactive prompt:
> acc.CreateAccount.CreateAccountValues.Failed
2
> acc.CreateAccount.CreateAccountValues.value("Failed")
2
>
To determine the name that corresponds to the constant value 2, type:
> acc.CreateAccount.CreateAccountValues.value_name(2)
u'Failed'
>
Refreshing Instance Objects
Local objects used by LMIShell, which represent CIM objects on the CIMOM side, can get outdated if such objects change while you work with LMIShell's local copies. To update the properties and methods of a particular instance object, use the refresh() method as follows:
instance_object.refresh()
To update the properties and methods of the device instance object created in Example 21.14, “Accessing Instances”, type the following at the interactive prompt:
> device.refresh()
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>
Displaying MOF Representation
To display the Managed Object Format (MOF) representation of an instance object, use the tomof() method as follows:
instance_object.tomof()
To display the MOF representation of the device instance object created in Example 21.14, “Accessing Instances”, type the following at the interactive prompt:
> device.tomof()
instance of LMI_IPNetworkConnection {
RequestedState = 12;
HealthState = NULL;
StatusDescriptions = NULL;
TransitioningToState = 12;
...
21.4.6. Working with Instance Names
Accessing Instance Names
CIMInstance objects are identified by CIMInstanceName objects. To get a list of all available instance name objects, use the instance_names() method as follows:
class_object.instance_names()
This method returns a list of LMIInstanceName objects.
To access the first instance name of a class object, use the first_instance_name() method:
class_object.first_instance_name()
This method returns an LMIInstanceName object.
Both instance_names() and first_instance_name() support an optional argument that allows you to filter the results:
class_object.instance_names(criteria)
class_object.first_instance_name(criteria)
Replace criteria with a dictionary of key-value pairs, where keys represent key properties and values represent required values of these key properties.
Example 21.27. Accessing Instance Names
To find the first instance name of the cls class object created in Example 21.7, “Accessing Class Objects” that has the Name key property equal to eth0 and assign it to a variable named device_name, type the following at the interactive prompt:
> device_name = cls.first_instance_name({"Name": "eth0"})
>
Examining Instance Names
All instance name objects store information about their class name and the namespace they belong to. To get the class name of an instance name object, use:
instance_name_object.classname
To get the namespace the instance name object belongs to, use:
instance_name_object.namespace
To inspect the device_name instance name object created in Example 21.27, “Accessing Instance Names” and display its class name and the corresponding namespace, type the following at the interactive prompt:
> device_name.classname
u'LMI_IPNetworkConnection'
> device_name.namespace
'root/cimv2'
>
Creating New Instance Names
You can compose a CIMInstanceName object yourself if you know all primary keys of a remote object. This instance name object can then be used to retrieve the whole instance object.
To create a new instance name of a class object, use the new_instance_name() method as follows:
class_object.new_instance_name(key_properties)
Replace key_properties with a dictionary of key-value pairs, where keys represent key properties and values represent the key property values. This method returns an LMIInstanceName object.
Example 21.29. Creating New Instance Names
The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 21.5, “Accessing Namespace Objects” and create a new instance name of the LMI_Account class representing the lmishell-user user on the managed system, type the following at the interactive prompt:
> instance_name = ns.LMI_Account.new_instance_name({
...     "CreationClassName" : "LMI_Account",
...     "Name" : "lmishell-user",
...     "SystemCreationClassName" : "PG_ComputerSystem",
...     "SystemName" : "server"})
>
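The composed instance name can then be turned into a full instance object, a minimal sketch assuming the lmishell-user account actually exists on the managed system (see “Converting Instance Names to Instances” below):
> account = instance_name.to_instance()
> print account.Name
lmishell-user
>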
Listing and Accessing Key Properties
To list all available key properties of an instance name object, use the print_key_properties() method as follows:
instance_name_object.print_key_properties()
To assign a list of these key properties to a variable, use the key_properties() method:
instance_name_object.key_properties()
To inspect the device_name instance name object created in Example 21.27, “Accessing Instance Names” and list all available key properties, type the following at the interactive prompt:
> device_name.print_key_properties()
CreationClassName
SystemName
Name
SystemCreationClassName
>
To assign a list of these key properties to a variable named device_name_properties, type:
> device_name_properties = device_name.key_properties()
>
To access the current value of a particular key property, use the following syntax:
instance_name_object.key_property_name
Example 21.31. Accessing Individual Key Properties
To inspect the device_name instance name object created in Example 21.27, “Accessing Instance Names” and display the value of the key property named SystemName, type the following at the interactive prompt:
> device_name.SystemName
u'server.example.com'
>
Converting Instance Names to Instances
Each instance name can be converted to an instance. To convert an instance name object to an instance object, use the to_instance() method as follows:
instance_name_object.to_instance()
This method returns an LMIInstance object.
Example 21.32. Converting Instance Names to Instances
To convert the device_name instance name object created in Example 21.27, “Accessing Instance Names” to an instance object and assign it to a variable named device, type the following at the interactive prompt:
> device = device_name.to_instance()
>
21.4.7. Working with Associated Objects
Accessing Associated Instances
The Common Information Model defines an association relationship between managed objects. To get a list of all objects associated with a particular instance object, use the associators() method as follows:
instance_object.associators(
   AssocClass=class_name,
   ResultClass=class_name,
   ResultRole=role,
   IncludeQualifiers=include_qualifiers,
   IncludeClassOrigin=include_class_origin,
   PropertyList=property_list)
To access the first object associated with a particular instance object, use the first_associator() method:
instance_object.first_associator(
   AssocClass=class_name,
   ResultClass=class_name,
   ResultRole=role,
   IncludeQualifiers=include_qualifiers,
   IncludeClassOrigin=include_class_origin,
   PropertyList=property_list)
These parameters have the following meaning:
- AssocClass — Each returned object must be associated with the source object through an instance of this class or one of its subclasses. The default value is None.
- ResultClass — Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None.
- Role — Each returned object must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None.
- ResultRole — Each returned object must be associated with the source object through an association in which the returned object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None.
- IncludeQualifiers — A boolean indicating whether all qualifiers of each object (including qualifiers on the object and on any returned properties) should be included as QUALIFIER elements in the response. The default value is False.
- IncludeClassOrigin — A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False.
- PropertyList — The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None, no additional filtering is defined. The default value is None.
Example 21.33. Accessing Associated Instances
The LMI_StorageExtent class represents block devices available in the system. To use the ns namespace object created in Example 21.5, “Accessing Namespace Objects”, create an instance of the LMI_StorageExtent class for the block device named /dev/vda, and assign it to a variable named vda, type the following at the interactive prompt:
> vda = ns.LMI_StorageExtent.first_instance({
...     "DeviceID" : "/dev/vda"})
>
To get a list of all disk partitions on this block device and assign it to a variable named vda_partitions, use the associators() method as follows:
> vda_partitions = vda.associators(ResultClass="LMI_DiskPartition")
>
Accessing Associated Instance Names
To get a list of associated instance names of a particular instance object, use the associator_names() method as follows:
instance_object.associator_names(
   AssocClass=class_name,
   ResultClass=class_name,
   Role=role,
   ResultRole=role)
To access the first associated instance name of a particular instance object, use the first_associator_name() method:
instance_object.first_associator_name(
   AssocClass=class_object,
   ResultClass=class_object,
   Role=role,
   ResultRole=role)
These parameters have the following meaning:
- AssocClass — Each returned name identifies an object that must be associated with the source object through an instance of this class or one of its subclasses. The default value is None.
- ResultClass — Each returned name identifies an object that must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None.
- Role — Each returned name identifies an object that must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None.
- ResultRole — Each returned name identifies an object that must be associated with the source object through an association in which the returned named object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None.
Example 21.34. Accessing Associated Instance Names
To use the vda instance object created in Example 21.33, “Accessing Associated Instances”, get a list of its associated instance names, and assign it to a variable named vda_partitions, type:
> vda_partitions = vda.associator_names(ResultClass="LMI_DiskPartition")
>
21.4.8. Working with Association Objects
Accessing Association Instances
Association objects define the relationship between two other objects. To get a list of association objects that refer to a particular target object, use the references() method as follows:
instance_object.references(
   ResultClass=class_name,
   Role=role,
   IncludeQualifiers=include_qualifiers,
   IncludeClassOrigin=include_class_origin,
   PropertyList=property_list)
To access the first association object that refers to a particular target object, use the first_reference() method:
instance_object.first_reference(
   ResultClass=class_name,
   Role=role,
   IncludeQualifiers=include_qualifiers,
   IncludeClassOrigin=include_class_origin,
   PropertyList=property_list)
These parameters have the following meaning:
- ResultClass — Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None.
- Role — Each returned object must refer to the target object through a property with a name that matches the value of this parameter. The default value is None.
- IncludeQualifiers — A boolean indicating whether each object (including qualifiers on the object and on any returned properties) should be included as a QUALIFIER element in the response. The default value is False.
- IncludeClassOrigin — A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False.
- PropertyList — The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None, no additional filtering is defined. The default value is None.
Example 21.35. Accessing Association Instances
The LMI_LANEndpoint class represents a communication endpoint associated with a certain network interface device. To use the ns namespace object created in Example 21.5, “Accessing Namespace Objects”, create an instance of the LMI_LANEndpoint class for the network interface device named eth0, and assign it to a variable named lan_endpoint, type the following at the interactive prompt:
> lan_endpoint = ns.LMI_LANEndpoint.first_instance({
...     "Name" : "eth0"})
>
To access the first association object that refers to an LMI_BindsToLANEndpoint object and assign it to a variable named bind, type:
> bind = lan_endpoint.first_reference(
...     ResultClass="LMI_BindsToLANEndpoint")
>
You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device:
> ip = bind.Dependent.to_instance()
> print ip.IPv4Address
192.168.122.1
>
Accessing Association Instance Names
To get a list of association instance names of a particular instance object, use the reference_names() method as follows:
instance_object.reference_names(
   ResultClass=class_name,
   Role=role)
To access the first association instance name of a particular instance object, use the first_reference_name() method:
instance_object.first_reference_name(
   ResultClass=class_name,
   Role=role)
These parameters have the following meaning:
- ResultClass — Each returned object name identifies either an instance of this class or one of its subclasses, or this class or one of its subclasses. The default value is None.
- Role — Each returned object identifies an object that refers to the target instance through a property with a name that matches the value of this parameter. The default value is None.
Example 21.36. Accessing Association Instance Names
To use the lan_endpoint instance object created in Example 21.35, “Accessing Association Instances”, access the first association instance name that refers to an LMI_BindsToLANEndpoint object, and assign it to a variable named bind, type:
> bind = lan_endpoint.first_reference_name(
...     ResultClass="LMI_BindsToLANEndpoint")
>
You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device:
> ip = bind.Dependent.to_instance()
> print ip.IPv4Address
192.168.122.1
>
21.4.9. Working with Indications
Subscribing to Indications
To subscribe to an indication, use the subscribe_indication() method as follows:
connection_object.subscribe_indication(
   QueryLanguage="WQL",
   Query='SELECT * FROM CIM_InstModification',
   Name="cpu",
   CreationNamespace="root/interop",
   SubscriptionCreationClassName="CIM_IndicationSubscription",
   FilterCreationClassName="CIM_IndicationFilter",
   FilterSystemCreationClassName="CIM_ComputerSystem",
   FilterSourceNamespace="root/cimv2",
   HandlerCreationClassName="CIM_IndicationHandlerCIMXML",
   HandlerSystemCreationClassName="CIM_ComputerSystem",
   Destination="http://host_name:5988")
Alternatively, you can use a shorter version of the method call as follows:
connection_object.subscribe_indication(
   Query='SELECT * FROM CIM_InstModification',
   Name="cpu",
   Destination="http://host_name:5988")
By default, LMIShell deletes all its indication subscriptions when it quits. To preserve a subscription, add the Permanent=True keyword parameter to the subscribe_indication() method call. This will prevent LMIShell from deleting the subscription.
Example 21.37. Subscribing to Indications
To use the c connection object created in Example 21.1, “Connecting to a Remote CIMOM” and subscribe to an indication named cpu, type the following at the interactive prompt:
> c.subscribe_indication(
...     QueryLanguage="WQL",
...     Query='SELECT * FROM CIM_InstModification',
...     Name="cpu",
...     CreationNamespace="root/interop",
...     SubscriptionCreationClassName="CIM_IndicationSubscription",
...     FilterCreationClassName="CIM_IndicationFilter",
...     FilterSystemCreationClassName="CIM_ComputerSystem",
...     FilterSourceNamespace="root/cimv2",
...     HandlerCreationClassName="CIM_IndicationHandlerCIMXML",
...     HandlerSystemCreationClassName="CIM_ComputerSystem",
...     Destination="http://server.example.com:5988")
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>
Listing Subscribed Indications
To list all subscribed indications of a connection object, use the print_subscribed_indications() method as follows:
connection_object.print_subscribed_indications()
To assign a list of these indications to a variable, use the subscribed_indications() method:
connection_object.subscribed_indications()
To inspect the c connection object created in Example 21.1, “Connecting to a Remote CIMOM” and list all subscribed indications, type the following at the interactive prompt:
> c.print_subscribed_indications()
>
To assign a list of these indications to a variable named indications, type:
> indications = c.subscribed_indications()
>
Unsubscribing from Indications
By default, LMIShell deletes all subscribed indications when it quits. To delete an individual indication subscription sooner, use the unsubscribe_indication() method as follows:
connection_object.unsubscribe_indication(indication_name)
Replace indication_name with the name of the indication. To unsubscribe from all indications, use the unsubscribe_all_indications() method:
connection_object.unsubscribe_all_indications()
To use the c connection object created in Example 21.1, “Connecting to a Remote CIMOM” and unsubscribe from the indication created in Example 21.37, “Subscribing to Indications”, type the following at the interactive prompt:
> c.unsubscribe_indication('cpu')
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>
Implementing an Indication Handler
The subscribe_indication() method allows you to specify the host name of the system you want to deliver the indications to. The following example shows how to implement an indication handler:
> def handler(ind, arg1, arg2, **kwargs):
...     exported_objects = ind.exported_objects()
...     do_something_with(exported_objects)
> listener = LmiIndicationListener("0.0.0.0", listening_port)
> listener.add_handler("indication-name-XXXXXXXX", handler, arg1, arg2, **kwargs)
> listener.start()
>
The first parameter of the handler is an LmiIndication object, which contains a list of methods and objects exported by the indication. The other parameters are user specific: those arguments need to be specified when adding a handler to the listener.
In the example above, the add_handler() method call uses a special string with eight “X” characters. These characters are replaced with a random string that is generated by listeners in order to avoid a possible handler name collision. To use the random string, start the indication listener first and then subscribe to an indication so that the Destination property of the handler object contains the following value: schema://host_name/random_string.
Example 21.40. Implementing an Indication Handler
The following script illustrates how to write a handler that monitors a managed system located at 192.168.122.1 and calls the indication_callback() function whenever a new user account is created:
#!/usr/bin/lmishell
import sys
from time import sleep
from lmi.shell.LMIUtil import LMIPassByRef
from lmi.shell.LMIIndicationListener import LMIIndicationListener
# These are passed by reference to indication_callback
var1 = LMIPassByRef("some_value")
var2 = LMIPassByRef("some_other_value")
def indication_callback(ind, var1, var2):
# Do something with ind, var1 and var2
print ind.exported_objects()
print var1.value
print var2.value
c = connect("hostname", "username", "password")
listener = LMIIndicationListener("0.0.0.0", 65500)
unique_name = listener.add_handler(
"demo-XXXXXXXX", # Creates a unique name for me
indication_callback, # Callback to be called
var1, # Variable passed by ref
var2 # Variable passed by ref
)
listener.start()
print c.subscribe_indication(
Name=unique_name,
Query="SELECT * FROM LMI_AccountInstanceCreationIndication WHERE SOURCEINSTANCE ISA LMI_Account",
Destination="192.168.122.1:65500"
)
try:
while True:
sleep(60)
except KeyboardInterrupt:
sys.exit(0)
21.4.10. Example Usage
This section provides a number of examples for various CIM providers distributed with the OpenLMI packages. All examples in this section use the following two variable definitions:
c = connect("host_name", "user_name", "password")
ns = c.root.cimv2
Using the OpenLMI Service Provider
Example 21.41. Listing Available Services
To list all available service instances together with information on whether the service has been started (TRUE) or stopped (FALSE) and its status string, use the following code snippet:
for service in ns.LMI_Service.instances():
    print "%s:\t%s" % (service.Name, service.Status)
To list only the services that are enabled by default, use this code snippet:
cls = ns.LMI_Service
for service in cls.instances():
    if service.EnabledDefault == cls.EnabledDefaultValues.Enabled:
        print service.Name
Note that the value of the EnabledDefault property is equal to 2 for enabled services and 3 for disabled services.
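Conversely, a minimal sketch that lists the services disabled by default, under the assumption that the ValueMap name Disabled corresponds to the value 3:
cls = ns.LMI_Service
for service in cls.instances():
    # Disabled (value 3) marks services that do not start by default.
    if service.EnabledDefault == cls.EnabledDefaultValues.Disabled:
        print service.Name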
To display information about the cups service, use the following code snippet:
cups = ns.LMI_Service.first_instance({"Name": "cups.service"})
cups.doc()Example 21.42. Starting and Stopping Services
To start and stop the cups service and to see its current status, use the following code snippet:
cups = ns.LMI_Service.first_instance({"Name": "cups.service"})
cups.StartService()
print cups.Status
cups.StopService()
print cups.StatusExample 21.43. Enabling and Disabling Services
To enable and disable the cups service and to display its EnabledDefault property, use the following code snippet:
cups = ns.LMI_Service.first_instance({"Name": "cups.service"})
cups.TurnServiceOff()
print cups.EnabledDefault
cups.TurnServiceOn()
print cups.EnabledDefaultUsing the OpenLMI Networking Provider
Example 21.44. Listing IP Addresses Associated with a Given Port Number
To get a list of IP addresses associated with a given network interface device, use the following code snippet:
device = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'eth0'})
for endpoint in device.associators(AssocClass="LMI_NetworkSAPSAPDependency", ResultClass="LMI_IPProtocolEndpoint"):
    if endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv4:
        print "IPv4: %s/%s" % (endpoint.IPv4Address, endpoint.SubnetMask)
    elif endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv6:
        print "IPv6: %s/%d" % (endpoint.IPv6Address, endpoint.IPv6SubnetPrefixLength)
This code snippet uses the LMI_IPProtocolEndpoint class associated with a given LMI_IPNetworkConnection class.
To display the default gateway, use this code snippet:
for rsap in device.associators(AssocClass="LMI_NetworkRemoteAccessAvailableToElement", ResultClass="LMI_NetworkRemoteServiceAccessPoint"):
    if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DefaultGateway:
        print "Default Gateway: %s" % rsap.AccessInfo
The default gateway is represented by an LMI_NetworkRemoteServiceAccessPoint instance with the AccessContext property equal to DefaultGateway.
Obtaining a list of DNS servers associated with a given device requires traversing the model as follows:
- Get the LMI_IPProtocolEndpoint instances associated with a given LMI_IPNetworkConnection using LMI_NetworkSAPSAPDependency.
- Use the same association for the LMI_DNSProtocolEndpoint instances.
The LMI_NetworkRemoteServiceAccessPoint instances with the AccessContext property equal to the DNS Server associated through LMI_NetworkRemoteAccessAvailableToElement have the DNS server address in the AccessInfo property.
There can be more possible paths to get to the RemoteServiceAccessPath, and entries can be duplicated. The following code snippet uses the set() function to remove duplicate entries from the list of DNS servers:
dnsservers = set()
for ipendpoint in device.associators(AssocClass="LMI_NetworkSAPSAPDependency", ResultClass="LMI_IPProtocolEndpoint"):
    for dnsedpoint in ipendpoint.associators(AssocClass="LMI_NetworkSAPSAPDependency", ResultClass="LMI_DNSProtocolEndpoint"):
        for rsap in dnsedpoint.associators(AssocClass="LMI_NetworkRemoteAccessAvailableToElement", ResultClass="LMI_NetworkRemoteServiceAccessPoint"):
            if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer:
                dnsservers.add(rsap.AccessInfo)
print "DNS:", ", ".join(dnsservers)
Example 21.45. Creating a New Connection and Configuring a Static IP Address
To create a new setting with a static IPv4 and stateless IPv6 configuration for the eth0 network interface, use the following code snippet:
capability = ns.LMI_IPNetworkConnectionCapabilities.first_instance({ 'ElementName': 'eth0' })
result = capability.LMI_CreateIPSetting(Caption='eth0 Static',
IPv4Type=capability.LMI_CreateIPSetting.IPv4TypeValues.Static,
IPv6Type=capability.LMI_CreateIPSetting.IPv6TypeValues.Stateless)
setting = result.rparams["SettingData"].to_instance()
for settingData in setting.associators(AssocClass="LMI_OrderedIPAssignmentComponent"):
    if setting.ProtocolIFType == ns.LMI_IPAssignmentSettingData.ProtocolIFTypeValues.IPv4:
        # Set static IPv4 address
        settingData.IPAddresses = ["192.168.1.100"]
        settingData.SubnetMasks = ["255.255.0.0"]
        settingData.GatewayAddresses = ["192.168.1.1"]
        settingData.push()
This code snippet creates a new setting by calling the LMI_CreateIPSetting() method on the instance of LMI_IPNetworkConnectionCapabilities, which is associated with LMI_IPNetworkConnection through LMI_IPNetworkConnectionElementCapabilities. It also uses the push() method to modify the setting.
Example 21.46. Activating a Connection
Network settings can be applied by calling the ApplySettingToIPNetworkConnection() method of the LMI_IPConfigurationService class. This method is asynchronous and returns a job. The following code snippet illustrates how to call this method synchronously:
setting = ns.LMI_IPAssignmentSettingData.first_instance({ "Caption": "eth0 Static" })
port = ns.LMI_IPNetworkConnection.first_instance({ 'ElementName': 'ens8' })
service = ns.LMI_IPConfigurationService.first_instance()
service.SyncApplySettingToIPNetworkConnection(SettingData=setting, IPNetworkConnection=port, Mode=32768)
The Mode parameter affects how the setting is applied. The most commonly used values of this parameter are as follows:
- 1 — apply the setting now and make it auto-activated.
- 2 — make the setting auto-activated and do not apply it now.
- 4 — disconnect and disable auto-activation.
- 5 — do not change the setting state, only disable auto-activation.
- 32768 — apply the setting.
- 32769 — disconnect.
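For instance, reusing the setting, port, and service objects from the snippet above, a minimal sketch of disconnecting the port by applying the setting with Mode=32769:
# Disconnect the network port synchronously.
service.SyncApplySettingToIPNetworkConnection(
    SettingData=setting, IPNetworkConnection=port, Mode=32769)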
Using the OpenLMI Storage Provider
c and ns variables, these examples use the following variable definitions:
MEGABYTE = 1024*1024
storage_service = ns.LMI_StorageConfigurationService.first_instance()
filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance()
Example 21.47. Creating a Volume Group
To create a new volume group located in /dev/myGroup/ that has three members and the default extent size of 4 MB, use the following code snippet:
# Find the devices to add to the volume group
# (filtering the CIM_StorageExtent.instances()
# call would be faster, but this is easier to read):
sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sda1"})
sdb1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdb1"})
sdc1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdc1"})
# Create a new volume group:
(ret, outparams, err) = storage_service.SyncCreateOrModifyVG(
ElementName="myGroup",
InExtents=[sda1, sdb1, sdc1])
vg = outparams['Pool'].to_instance()
print "VG", vg.PoolID, \
"with extent size", vg.ExtentSize, \
"and", vg.RemainingExtents, "free extents created."Example 21.48. Creating a Logical Volume
# Find the volume group:
vg = ns.LMI_VGStoragePool.first_instance({"Name": "/dev/mapper/myGroup"})
# Create the first logical volume:
(ret, outparams, err) = storage_service.SyncCreateOrModifyLV(
ElementName="Vol1",
InPool=vg,
Size=100 * MEGABYTE)
lv = outparams['TheElement'].to_instance()
print "LV", lv.DeviceID, \
"with", lv.BlockSize * lv.NumberOfBlocks,\
"bytes created."
# Create the second logical volume:
(ret, outparams, err) = storage_service.SyncCreateOrModifyLV(
ElementName="Vol2",
InPool=vg,
Size=100 * MEGABYTE)
lv = outparams['TheElement'].to_instance()
print "LV", lv.DeviceID, \
"with", lv.BlockSize * lv.NumberOfBlocks, \
"bytes created."Example 21.49. Creating a File System
To create an ext3 file system on logical volume lv from Example 21.48, "Creating a Logical Volume", use the following code snippet:
(ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem(
FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3,
InExtents=[lv])

Example 21.50. Mounting a File System

To mount the file system created in Example 21.49, "Creating a File System", use the following code snippet:
# Find the file system on the logical volume:
fs = lv.first_associator(ResultClass="LMI_LocalFileSystem")
mount_service = ns.LMI_MountConfigurationService.first_instance()
(rc, out, err) = mount_service.SyncCreateMount(
FileSystemType='ext3',
Mode=32768, # just mount
FileSystem=fs,
MountPoint='/mnt/test',
FileSystemSpec=lv.Name)

Example 21.51. Listing Block Devices

To list all block devices known to the system, use the following code snippet:
devices = ns.CIM_StorageExtent.instances()
for device in devices:
if lmi_isinstance(device, ns.CIM_Memory):
# Memory and CPU caches are StorageExtents too, do not print them
continue
print device.classname,
print device.DeviceID,
print device.Name,
print device.BlockSize*device.NumberOfBlocks

Using the OpenLMI Hardware Provider
Example 21.52. Viewing CPU Information
cpu = ns.LMI_Processor.first_instance()
cpu_cap = cpu.associators(ResultClass="LMI_ProcessorCapabilities")[0]
print cpu.Name
print cpu_cap.NumberOfProcessorCores
print cpu_cap.NumberOfHardwareThreads
Example 21.53. Viewing Memory Information
mem = ns.LMI_Memory.first_instance()
for i in mem.associators(ResultClass="LMI_PhysicalMemory"):
print i.Name

Example 21.54. Viewing Chassis Information
chassis = ns.LMI_Chassis.first_instance()
print chassis.Manufacturer
print chassis.Model
Example 21.55. Listing PCI Devices
for pci in ns.LMI_PCIDevice.instances():
print pci.Name

21.5. Using OpenLMI Scripts
The OpenLMI Scripts project provides a number of Python libraries for interfacing with OpenLMI providers, and is distributed with lmi, an extensible utility that can be used to interact with these libraries from the command line.
To install OpenLMI Scripts, type the following at a shell prompt:

easy_install --user openlmi-scripts

This command installs the Python modules and the lmi utility in the ~/.local/ directory. To extend the functionality of the lmi utility, install additional OpenLMI modules by using the following command:
easy_install --user package_name

21.6. Additional Resources
Installed Documentation
- lmishell(1) — The manual page for the
lmishell client and interpreter provides detailed information about its execution and usage.
Online Documentation
- Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on the system.
- Red Hat Enterprise Linux 7 Storage Administration Guide — The Storage Administration Guide for Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems on the system.
- Red Hat Enterprise Linux 7 Power Management Guide — The Power Management Guide for Red Hat Enterprise Linux 7 explains how to manage power consumption of the system effectively. It discusses different techniques that lower power consumption for both servers and laptops, and explains how each technique affects the overall performance of the system.
- Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide — The Linux Domain Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7 covers all aspects of installing, configuring, and managing IPA domains, including both servers and clients. The guide is intended for IT and systems administrators.
- FreeIPA Documentation — The FreeIPA Documentation serves as the primary user documentation for using the FreeIPA Identity Management project.
- OpenSSL Home Page — The OpenSSL home page provides an overview of the OpenSSL project.
- Mozilla NSS Documentation — The Mozilla NSS Documentation serves as the primary user documentation for using the Mozilla NSS project.
See Also
- Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line.
- Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line.
- Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands.
- Chapter 12, OpenSSH describes how to configure an SSH server and how to use the ssh, scp, and sftp client utilities to access it.
Chapter 22. Viewing and Managing Log Files
Some log files are controlled by a daemon called rsyslogd. The rsyslogd daemon is an enhanced replacement for sysklogd, and provides extended filtering, encryption-protected relaying of messages, various configuration options, input and output modules, and support for transport over the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd.
Log files can also be managed by the journald daemon, a component of systemd. The journald daemon captures Syslog messages, kernel log messages, initial RAM disk and early boot messages, as well as messages written to standard output and standard error output of all services; it indexes them and makes them available to the user. The native journal file format, which is a structured and indexed binary file, improves searching and provides faster operation, and it also stores metadata information such as time stamps or user IDs. Log files produced by journald are not persistent by default; they are stored only in memory or in a small ring buffer in the /run/log/journal/ directory. The amount of logged data depends on free memory; when the capacity limit is reached, the oldest entries are deleted. However, this setting can be altered; see Section 22.10.5, "Enabling Persistent Storage". For more information on Journal, see Section 22.10, "Using the Journal".
By default, these two logging tools coexist on your system. The journald daemon is the primary tool for troubleshooting. It also provides additional data necessary for creating structured log messages. Data acquired by journald is forwarded into the /run/systemd/journal/syslog socket that may be used by rsyslogd to process the data further. However, rsyslog performs the actual integration by default via the imjournal input module, thus avoiding the aforementioned socket. You can also transfer data in the opposite direction, from rsyslogd to journald, with use of the omjournal module. See Section 22.7, "Interaction of Rsyslog and Journal" for further information. The integration enables maintaining text-based logs in a consistent format to ensure compatibility with possible applications or configurations dependent on rsyslogd. Also, you can maintain rsyslog messages in a structured format (see Section 22.8, "Structured Logging with Rsyslog").
22.1. Locating Log Files
A list of log files maintained by rsyslogd can be found in the /etc/rsyslog.conf configuration file. Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba, have a directory within /var/log/ for their log files.
You may notice multiple files in the /var/log/ directory with numbers after them (for example, cron-20100906). These numbers represent a time stamp that has been added to a rotated log file. Log files are rotated so their file sizes do not become too large. The logrotate package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf configuration file and the configuration files in the /etc/logrotate.d/ directory.
22.2. Basic Configuration of Rsyslog
The main configuration file for rsyslog is /etc/rsyslog.conf. Here, you can specify global directives, modules, and rules that consist of filter and action parts. Also, you can add comments in the form of text following a hash sign (#).
22.2.1. Filters
A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which specifies what to do with the selected messages. To define a rule in your /etc/rsyslog.conf configuration file, define both a filter and an action on one line and separate them with one or more spaces or tabs.
- Facility/Priority-based filters
- The most used and well-known way to filter syslog messages is to use the facility/priority-based filters which filter syslog messages based on two conditions: facility and priority separated by a dot. To create a selector, use the following syntax:
FACILITY.PRIORITY
where:
- FACILITY specifies the subsystem that produces a specific syslog message. For example, the
mail subsystem handles all mail-related syslog messages. FACILITY can be represented by one of the following keywords (or by a numerical code): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), uucp (8), cron (9), authpriv (10), ftp (11), and local0 through local7 (16 - 23).
- PRIORITY specifies a priority of a syslog message. PRIORITY can be represented by one of the following keywords (or by a number):
debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0).
The aforementioned syntax selects syslog messages with the defined or higher priority. By preceding any priority keyword with an equal sign (=), you specify that only syslog messages with the specified priority will be selected. All other priorities will be ignored. Conversely, preceding a priority keyword with an exclamation mark (!) selects all syslog messages except those with the defined priority.
In addition to the keywords specified above, you may also use an asterisk (*) to define all facilities or priorities (depending on where you place the asterisk, before or after the dot). Specifying the priority keyword none serves for facilities with no given priorities. Both facility and priority conditions are case-insensitive.
To define multiple facilities and priorities, separate them with a comma (,). To define multiple selectors on one line, separate them with a semi-colon (;). Note that each selector in the selector field is capable of overwriting the preceding ones, which can exclude some priorities from the pattern.

Example 22.1. Facility/Priority-based Filters
The following are a few examples of simple facility/priority-based filters that can be specified in /etc/rsyslog.conf. To select all kernel syslog messages with any priority, add the following text into the configuration file:

kern.*
To select all mail syslog messages with priority crit and higher, use this form:

mail.crit
To select all cron syslog messages except those with the info or debug priority, set the configuration in the following form:

cron.!info,!debug
- Property-based filters
- Property-based filters let you filter syslog messages by any property, such as
timegenerated or syslogtag. For more information on properties, see the section called "Properties". You can compare each of the specified properties to a particular value using one of the compare-operations listed in Table 22.1, "Property-based compare-operations". Both property names and compare-operations are case-sensitive.
A property-based filter must start with a colon (:). To define the filter, use the following syntax:

:PROPERTY, [!]COMPARE_OPERATION, "STRING"

where:
- The PROPERTY attribute specifies the desired property.
- The optional exclamation point (
!) negates the output of the compare-operation. Other Boolean operators are currently not supported in property-based filters.
- The COMPARE_OPERATION attribute specifies one of the compare-operations listed in Table 22.1, "Property-based compare-operations".
- The STRING attribute specifies the value that the text provided by the property is compared to. This value must be enclosed in quotation marks. To escape a certain character inside the string (for example, a quotation mark (")), use the backslash character (\).
Table 22.1. Property-based compare-operations
- contains — Checks whether the provided string matches any part of the text provided by the property. To perform case-insensitive comparisons, use contains_i.
- isequal — Compares the provided string against all of the text provided by the property. These two values must be exactly equal to match.
- startswith — Checks whether the provided string is found exactly at the beginning of the text provided by the property. To perform case-insensitive comparisons, use startswith_i.
- regex — Compares the provided POSIX BRE (Basic Regular Expression) against the text provided by the property.
- ereregex — Compares the provided POSIX ERE (Extended Regular Expression) against the text provided by the property.
- isempty — Checks if the property is empty. The value is discarded. This is especially useful when working with normalized data, where some fields may be populated based on normalization result.

Example 22.2. Property-based Filters
The following are a few examples of property-based filters that can be specified in /etc/rsyslog.conf. To select syslog messages which contain the string error in their message text, use:

:msg, contains, "error"
The following filter selects syslog messages received from the host name host1:

:hostname, isequal, "host1"
To select syslog messages which do not contain any mention of the words fatal and error with any or no text between them (for example, fatal lib error), type:

:msg, !regex, "fatal .* error"
- Expression-based filters
- Expression-based filters select syslog messages according to defined arithmetic, Boolean, or string operations. Expression-based filters use rsyslog's own scripting language called RainerScript to build complex filters. The basic syntax of an expression-based filter looks as follows:
if EXPRESSION then ACTION else ACTION
where:- The EXPRESSION attribute represents an expression to be evaluated, for example:
$msg startswith 'DEVNAME' or $syslogfacility-text == 'local0'. You can specify more than one expression in a single filter by using the and and or operators.
- The ACTION attribute represents an action to be performed if the expression returns the value
true. This can be a single action, or an arbitrary complex script enclosed in curly braces.
- Expression-based filters are indicated by the keyword if at the start of a new line. The then keyword separates the EXPRESSION from the ACTION. Optionally, you can employ the else keyword to specify what action is to be performed in case the condition is not met.
With expression-based filters, you can nest the conditions by using a script enclosed in curly braces as in Example 22.3, "Expression-based Filters". The script allows you to use facility/priority-based filters inside the expression. On the other hand, property-based filters are not recommended here. RainerScript supports regular expressions with the specialized functions re_match() and re_extract().

Example 22.3. Expression-based Filters
The following expression contains two nested conditions. The log files created by a program called prog1 are split into two files based on the presence of the "test" string in the message.

if $programname == 'prog1' then {
    action(type="omfile" file="/var/log/prog1.log")
    if $msg contains 'test' then
        action(type="omfile" file="/var/log/prog1test.log")
    else
        action(type="omfile" file="/var/log/prog1notest.log")
}
22.2.2. Actions
- Saving syslog messages to log files
- The majority of actions specify to which log file a syslog message is saved. This is done by specifying a file path after your already-defined selector:
FILTER PATH
where FILTER stands for a user-specified selector and PATH is a path of a target file.
For instance, the following rule is comprised of a selector that selects all cron syslog messages and an action that saves them into the /var/log/cron.log log file:

cron.* /var/log/cron.log
By default, the log file is synchronized every time a syslog message is generated. Use a dash mark (-) as a prefix of the file path to omit syncing:

FILTER -PATH
Note that you might lose information if the system terminates right after a write attempt. However, this setting can improve performance, especially if you run programs that produce very verbose log messages. Your specified file path can be either static or dynamic. Static files are represented by a fixed file path as shown in the example above. Dynamic file paths can differ according to the received message. Dynamic file paths are represented by a template and a question mark (?) prefix:

FILTER ?DynamicFile
where DynamicFile is a name of a predefined template that modifies output paths. You can use the dash prefix (-) to disable syncing, and you can use multiple templates separated by a semi-colon (;). For more information on templates, see the section called "Generating Dynamic File Names".
If the file you specified is an existing terminal or the /dev/console device, syslog messages are sent to standard output (using special terminal handling) or your console (using special /dev/console handling) when using the X Window System, respectively.
- rsyslog allows you to send and receive syslog messages over the network. This feature allows you to administer syslog messages of multiple hosts on one machine. To forward syslog messages to a remote machine, use the following syntax:
@[(zNUMBER)]HOST:[PORT]

where:
- The at sign (
@) indicates that the syslog messages are forwarded to a host using theUDPprotocol. To use theTCPprotocol, use two at signs with no space between them (@@). - The optional
zNUMBER setting enables zlib compression for syslog messages. The NUMBER attribute specifies the level of compression (from 1, lowest, to 9, maximum). Compression gain is automatically checked by rsyslogd; messages are compressed only if there is any compression gain, and messages below 60 bytes are never compressed.
- The PORT attribute specifies the host machine's port.
When specifying an IPv6 address as the host, enclose the address in square brackets ([, ]).

Example 22.4. Sending syslog Messages over the Network
The following are some examples of actions that forward syslog messages over the network (note that all actions are preceded with a selector that selects all messages with any priority). To forward messages to 192.168.0.1 via the UDP protocol, type:

*.* @192.168.0.1
To forward messages to "example.com" using port 6514 and the TCP protocol, use:

*.* @@example.com:6514
The following compresses messages with zlib (level 9 compression) and forwards them to 2001:db8::1 using the UDP protocol:

*.* @(z9)[2001:db8::1]
- Output channels
- Output channels are primarily used to specify the maximum size a log file can grow to. This is very useful for log file rotation (for more information see Section 22.2.5, “Log Rotation”). An output channel is basically a collection of information about the output action. Output channels are defined by the
$outchannel directive. To define an output channel in /etc/rsyslog.conf, use the following syntax:

$outchannel NAME, FILE_NAME, MAX_SIZE, ACTION

where:
- The NAME attribute specifies the name of the output channel.
- The FILE_NAME attribute specifies the name of the output file. Output channels can write only into files, not pipes, terminals, or other kinds of output.
- The MAX_SIZE attribute represents the maximum size the specified file (in FILE_NAME) can grow to. This value is specified in bytes.
- The ACTION attribute specifies the action that is taken when the maximum size, defined in MAX_SIZE, is hit.
To use the defined output channel as an action inside a rule, type:

FILTER :omfile:$NAME
Example 22.5. Output channel log rotation
The following output shows a simple log rotation through the use of an output channel. First, the output channel is defined via the $outchannel directive:

$outchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script
and then it is used in a rule that selects every syslog message with any priority and executes the previously defined output channel on the acquired syslog messages:

*.* :omfile:$log_rotation
Once the limit (in the example 100 MB) is hit, the /home/joe/log_rotation_script is executed. This script can do anything from moving the file into a different directory to editing specific content out of it, or simply removing it.
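As an illustration only, such a rotation script might archive the full log and then signal rsyslog to reopen its output files; the archive naming below is an assumption, not part of the example:

#!/bin/bash
# Hypothetical /home/joe/log_rotation_script for the output channel above:
# move the full log aside, then make rsyslog reopen its output files.
mv /var/log/test_log.log "/var/log/test_log.log.$(date +%Y%m%d%H%M%S)"
systemctl kill -s HUP rsyslog.service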
- rsyslog can send syslog messages to specific users by specifying a user name of the user you want to send the messages to (as in Example 22.7, “Specifying Multiple Actions”). To specify more than one user, separate each user name with a comma (
,). To send messages to every user that is currently logged on, use an asterisk (*). - Executing a program
- rsyslog lets you execute a program for selected syslog messages and uses the
system() call to execute the program in the shell. To specify a program to be executed, prefix it with a caret character (^). Consequently, specify a template that formats the received message and passes it to the specified executable as a one-line parameter (for more information on templates, see Section 22.2.3, "Templates").

FILTER ^EXECUTABLE; TEMPLATE
Here, the output of the FILTER condition is processed by a program represented by EXECUTABLE. This program can be any valid executable. Replace TEMPLATE with the name of the formatting template.

Example 22.6. Executing a Program
In the following example, any syslog message with any priority is selected, formatted with the template template and passed as a parameter to the test-program program, which is then executed with the provided parameter:

*.* ^test-program;template
Warning
When accepting messages from any host, and using the shell execute action, you may be vulnerable to command injection. An attacker may try to inject and execute commands in the program you specified to be executed in your action. To avoid any possible security threats, thoroughly consider the use of the shell execute action. - Storing syslog messages in a database
- Selected syslog messages can be directly written into a database table using the database writer action. The database writer uses the following syntax:
:PLUGIN:DB_HOST,DB_NAME,DB_USER,DB_PASSWORD;[TEMPLATE]

where:
- The PLUGIN calls the specified plug-in that handles the database writing (for example, the
ommysqlplug-in). - The DB_HOST attribute specifies the database host name.
- The DB_NAME attribute specifies the name of the database.
- The DB_USER attribute specifies the database user.
- The DB_PASSWORD attribute specifies the password used with the aforementioned database user.
- The TEMPLATE attribute specifies an optional use of a template that modifies the syslog message. For more information on templates, see Section 22.2.3, “Templates”.
Important
Currently, rsyslog provides support for MySQL and PostgreSQL databases only. In order to use the MySQL and PostgreSQL database writer functionality, install the rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the appropriate modules in your /etc/rsyslog.conf configuration file:

module(load="ommysql")    # Output module for MySQL support
module(load="ompgsql")    # Output module for PostgreSQL support
For more information on rsyslog modules, see Section 22.6, "Using Rsyslog Modules". Alternatively, you may use a generic database interface provided by the omlibdbi module (supported databases include Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, and mSQL).
- Discarding syslog messages
- To discard your selected messages, use
stop. The discard action is mostly used to filter out messages before carrying on any further processing. It can be effective if you want to omit some repeating messages that would otherwise fill the log files. The results of the discard action depend on where in the configuration file it is specified; for the best results, place these actions at the top of the actions list. Note that once a message has been discarded, there is no way to retrieve it in later configuration file lines.
For instance, the following rule discards all messages that match the local5.* filter:

local5.* stop
In the following example, any cron syslog messages are discarded:

cron.* stop
Note
With versions prior to rsyslog 7, the tilde character (~) was used instead of stop to discard syslog messages.
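For example, the following two rules are equivalent; the first uses the legacy tilde syntax and the second the current stop keyword:

# rsyslog versions prior to 7:
cron.* ~
# rsyslog 7:
cron.* stop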
Specifying Multiple Actions
FILTER ACTION & ACTION & ACTION
Example 22.7. Specifying Multiple Actions
In the following example, all kernel syslog messages with the critical priority (crit) are sent to user user1, processed by the template temp and passed on to the test-program executable, and forwarded to 192.168.0.1 via the UDP protocol.
kern.=crit user1 & ^test-program;temp & @192.168.0.1
Any action can be followed by a template that formats the message. To specify a template, suffix an action with a semicolon (;) and specify the name of the template. For more information on templates, see Section 22.2.3, "Templates".
Warning
A template must be defined before it is used in an action, otherwise it is ignored. In other words, template definitions should always precede rule definitions in /etc/rsyslog.conf.
22.2.3. Templates
Any output that is generated by rsyslog can be modified and formatted according to your needs with the use of templates. To create a template, use the following syntax in /etc/rsyslog.conf:
template(name="TEMPLATE_NAME" type="string" string="text %PROPERTY% more text" [option.OPTION="on"])

where:
- template() is the directive introducing a block that defines a template.
TEMPLATE_NAME mandatory argument is used to refer to the template. Note that TEMPLATE_NAME should be unique.
- The
type mandatory argument can acquire one of these values: "list", "subtree", "string", or "plugin".
- The
string argument is the actual template text. Within this text, special characters, such as \n for newline or \r for carriage return, can be used. Other characters, such as % or ", have to be escaped if you want to use those characters literally.
- The text specified between two percent signs (
%) specifies a property that allows you to access specific contents of a syslog message. For more information on properties, see the section called “Properties”. - The
OPTION attribute specifies any options that modify the template functionality. The currently supported template options are sql and stdsql, which are used for formatting the text as an SQL query, json, which formats the text to be suitable for JSON processing, and casesensitive, which sets case sensitivity of property names.

Note
Note that the database writer checks whether the sql or stdsql options are specified in the template. If they are not, the database writer does not perform any action. This is to prevent any possible security threats, such as SQL injection. See section Storing syslog messages in a database in Section 22.2.2, "Actions" for more information.
Generating Dynamic File Names
Templates can be used to generate dynamic file names. By specifying a property as a part of the file path, a new file will be created for each unique property, which is a convenient way to classify syslog messages. For example, the following template uses the timegenerated property, which extracts a time stamp from the message, to generate a unique file name for each syslog message:
template(name="DynamicFile" type="list") {
    constant(value="/var/log/test_logs/")
    property(name="timegenerated")
    constant(value="-test.log")
}
Keep in mind that a template directive only specifies the template; you must use it inside a rule for it to take effect. In /etc/rsyslog.conf, use the question mark (?) in an action definition to mark the dynamic file name template:
*.* ?DynamicFile
Properties
Properties defined inside a template (between two percent signs (%)) enable access to various contents of a syslog message through the use of a property replacer. To define a property inside a template (between the two quotation marks ("…")), use the following syntax:
%PROPERTY_NAME[:FROM_CHAR:TO_CHAR:OPTION]%

- The PROPERTY_NAME attribute specifies the name of a property. A list of all available properties and their detailed description can be found in the
rsyslog.conf(5) manual page under the section Available Properties.
- FROM_CHAR and TO_CHAR attributes denote a range of characters that the specified property will act upon. Alternatively, regular expressions can be used to specify a range of characters. To do so, set the letter
R as the FROM_CHAR attribute and specify your desired regular expression as the TO_CHAR attribute.
- The OPTION attribute specifies any property options, such as the
lowercase option to convert the input to lowercase. A list of all available property options and their detailed description can be found in the rsyslog.conf(5) manual page under the section Property Options.
- The following property obtains the whole message text of a syslog message:
%msg%
- The following property obtains the first two characters of the message text of a syslog message:
%msg:1:2%
- The following property obtains the whole message text of a syslog message and drops its last line feed character:
%msg:::drop-last-lf%
- The following property obtains the first 10 characters of the time stamp that is generated when the syslog message is received and formats it according to the RFC 3339 date standard:
%timegenerated:1:10:date-rfc3339%
Template Examples
Example 22.8. A verbose syslog message template
template(name="verbose" type="list") {
    property(name="syslogseverity")
    property(name="syslogfacility")
    property(name="timegenerated")
    property(name="HOSTNAME")
    property(name="syslogtag")
    property(name="msg")
    constant(value="\n")
}
The following template emulates a wall message (a message that is sent to every user that is logged in and has their mesg(1) permission set to yes). This template outputs the message text, along with a host name, message tag and a time stamp, on a new line (using \r and \n) and rings the bell (using \7).
Example 22.9. A wall message template
template(name="wallmsg" type="list") {
    constant(value="\r\n\7Message from syslogd@")
    property(name="HOSTNAME")
    constant(value=" at ")
    property(name="timegenerated")
    constant(value=" ...\r\n ")
    property(name="syslogtag")
    constant(value=" ")
    property(name="msg")
    constant(value="\r\n")
}
The following template formats a syslog message as an SQL query. Notice the sql option at the end of the template specified as the template option. It tells the database writer to format the message as a MySQL SQL query.
Example 22.10. A database formatted message template
template(name="dbFormat" type="list" option.sql="on") {
constant(value="insert into SystemEvents (Message, Facility, FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag)")
constant(value=" values ('")
property(name="msg")
constant(value="', ")
property(name="syslogfacility")
constant(value=", '")
property(name="hostname")
constant(value="', ")
property(name="syslogpriority")
constant(value=", '")
property(name="timereported" dateFormat="mysql")
constant(value="', '")
property(name="timegenerated" dateFormat="mysql")
constant(value="', ")
property(name="iut")
constant(value=", '")
property(name="syslogtag")
constant(value="')")
}
rsyslog also contains a set of predefined templates identified by the RSYSLOG_ prefix. These are reserved for the syslog's use and it is advisable to not create a template using this prefix to avoid conflicts. The following list shows these predefined templates along with their definitions.
- RSYSLOG_DebugFormat — A special format used for troubleshooting property problems.

template(name="RSYSLOG_DebugFormat" type="string" string="Debug line with all properties:\nFROMHOST: '%FROMHOST%', fromhost-ip: '%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\nsyslogtag '%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%', PROCID: '%PROCID%', MSGID: '%MSGID%',\nTIMESTAMP: '%TIMESTAMP%', STRUCTURED-DATA: '%STRUCTURED-DATA%',\nmsg: '%msg%'\nescaped msg: '%msg:::drop-cc%'\nrawmsg: '%rawmsg%'\n\n")

- RSYSLOG_SyslogProtocol23Format — The format specified in IETF's internet-draft ietf-syslog-protocol-23, which is assumed to become the new syslog standard RFC.

template(name="RSYSLOG_SyslogProtocol23Format" type="string" string="<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n")
- RSYSLOG_FileFormat — A modern-style logfile format similar to TraditionalFileFormat, but with high-precision time stamps and time zone information.

template(name="RSYSLOG_FileFormat" type="list") {
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag")
    property(name="msg" spifno1stsp="on" )
    property(name="msg" droplastlf="on" )
    constant(value="\n")
}

- RSYSLOG_TraditionalFileFormat — The older default log file format with low-precision time stamps.

template(name="RSYSLOG_TraditionalFileFormat" type="list") {
    property(name="timestamp")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag")
    property(name="msg" spifno1stsp="on" )
    property(name="msg" droplastlf="on" )
    constant(value="\n")
}

- RSYSLOG_ForwardFormat — A forwarding format with high-precision time stamps and time zone information.

template(name="ForwardFormat" type="list") {
    constant(value="<")
    property(name="pri")
    constant(value=">")
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag" position.from="1" position.to="32")
    property(name="msg" spifno1stsp="on" )
    property(name="msg")
}

- RSYSLOG_TraditionalForwardFormat — The traditional forwarding format with low-precision time stamps.

template(name="TraditionalForwardFormat" type="list") {
    constant(value="<")
    property(name="pri")
    constant(value=">")
    property(name="timestamp")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag" position.from="1" position.to="32")
    property(name="msg" spifno1stsp="on" )
    property(name="msg")
}
22.2.4. Global Directives
Global directives are configuration options that apply to the rsyslogd daemon. They usually specify a value for a specific predefined variable that affects the behavior of the rsyslogd daemon or a rule that follows. All of the global directives are enclosed in a global configuration block. The following is an example of a global directive that overrides the local host name for log messages:
global(localHostname="machineXY")
The default value of such a predefined variable (for example, 10,000 messages for the main message queue size) can be overridden by specifying a different value (as shown in the example above).
You can define multiple directives in your /etc/rsyslog.conf configuration file. A directive affects the behavior of all configuration options until another occurrence of that same directive is detected. Global directives can be used to configure actions, queues, and debugging. A comprehensive list of all available configuration directives can be found in the section called "Online Documentation". Currently, a new configuration format has been developed that replaces the $-based syntax (see Section 22.3, "Using the New Configuration Format"). However, classic global directives remain supported as a legacy format.
22.2.5. Log Rotation
The following is a sample excerpt of the /etc/logrotate.conf configuration file:
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# uncomment this if you want your log files compressed
compress
All of the lines in the sample configuration file define global options that apply to every log file. In our example, log files are rotated weekly, rotated log files are kept for four weeks, and all rotated log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are comments and are not processed.
You may define configuration options for a specific log file and place them under the global options. However, it is advisable to create a separate configuration file for any specific log file in the /etc/logrotate.d/ directory and define any configuration options there.
The following is an example of a configuration file placed in the /etc/logrotate.d/ directory:
/var/log/messages {
rotate 5
weekly
postrotate
/usr/bin/killall -HUP syslogd
endscript
}
The configuration options in this file are specific for the /var/log/messages log file only. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages log file will be kept for five weeks instead of four weeks as was defined in the global options.
The following is a list of some of the directives you can specify in your logrotate configuration file (a combined example follows the list):
- weekly — Specifies the rotation of log files to be done weekly. Similar directives include:
  - daily
  - monthly
  - yearly
- compress — Enables compression of rotated log files. Similar directives include:
  - nocompress
  - compresscmd — Specifies the command to be used for compressing.
  - uncompresscmd
  - compressext — Specifies what extension is to be used for compressing.
  - compressoptions — Specifies any options to be passed to the compression program used.
  - delaycompress — Postpones the compression of log files to the next rotation of log files.
- rotate INTEGER — Specifies the number of rotations a log file undergoes before it is removed or mailed to a specific address. If the value 0 is specified, old log files are removed instead of rotated.
- mail ADDRESS — This option enables mailing of log files that have been rotated as many times as is defined by the rotate directive to the specified address. Similar directives include:
  - nomail
  - mailfirst — Specifies that the just-rotated log files are to be mailed, instead of the about-to-expire log files.
  - maillast — Specifies that the about-to-expire log files are to be mailed, instead of the just-rotated log files. This is the default option when mail is enabled.
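Putting several of these directives together, a hypothetical configuration file in /etc/logrotate.d/ for an application log might look as follows; the log path and mail address are illustrative only:

/var/log/myapp.log {
    monthly
    rotate 6
    compress
    delaycompress
    mail admin@example.com
    maillast
}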
For the full list of directives and various configuration options, see the logrotate(5) manual page.
22.2.6. Increasing the Limit of Open Files
In some cases, rsyslog exceeds the limit for the maximum number of open files. Consequently, rsyslog cannot open new files.
To increase the limit of open files in rsyslog:
Create the /etc/systemd/system/rsyslog.service.d/increase_nofile_limit.conf file with the following content:
[Service]
LimitNOFILE=16384
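Note that systemd reads drop-in files only after its configuration is reloaded. A minimal sequence to create the drop-in directory and apply the change might look as follows, assuming the rsyslog service name:

~]# mkdir -p /etc/systemd/system/rsyslog.service.d/
~]# systemctl daemon-reload
~]# systemctl restart rsyslog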
22.3. Using the New Configuration Format
In rsyslog version 7, installed by default in Red Hat Enterprise Linux 7 in the rsyslog package, a new configuration syntax is introduced. The legacy format is still fully supported and it is used by default in the /etc/rsyslog.conf configuration file.
RainerScript, the scripting language that the new rsyslog configuration processor is based on, implements the input() and ruleset() statements, which permit the /etc/rsyslog.conf configuration file to be written in the new syntax. The new syntax differs mainly in that it is much more structured; parameters are passed as arguments to statements, such as input, action, template, and module load. The scope of options is limited by blocks. This enhances readability and reduces the number of bugs caused by misconfiguration. There is also a significant performance gain. Some functionality is exposed in both syntaxes, some only in the new one.
For example, the following file input configured with legacy-format directives:

$InputFileName /tmp/inputfile
$InputFileTag tag1:
$InputFileStateFile inputfile-state
$InputRunFileMonitor
input(type="imfile" file="/tmp/inputfile" tag="tag1:" statefile="inputfile-state")
22.3.1. Rulesets
Leaving special directives aside, rsyslog handles messages as defined by rules that consist of a filter condition and an action to be performed if the condition is true. With a traditionally written /etc/rsyslog.conf file, all rules are evaluated in order of appearance for every input message. This process starts with the first rule and continues until all rules have been processed or until the message is discarded by one of the rules.
Rulesets allow you to group rules so that they are evaluated only for selected inputs. The legacy ruleset definition in /etc/rsyslog.conf can look as follows:
$RuleSet rulesetname
rule
rule2
The ruleset ends when another ruleset is defined, or the default ruleset is invoked as follows:

$RuleSet RSYSLOG_DefaultRuleset
With the new configuration format, the input() and ruleset() statements are reserved for this operation. The new format ruleset definition in /etc/rsyslog.conf can look as follows:
ruleset(name="rulesetname") {
rule
rule2
call rulesetname2
…
}
Replace rulesetname with a name for your ruleset. The ruleset name cannot start with RSYSLOG_ since this namespace is reserved for use by rsyslog. RSYSLOG_DefaultRuleset defines the default set of rules to be performed if the message has no other ruleset assigned. With rule and rule2 you can define rules in the filter-action format mentioned above. With the call parameter, you can nest rulesets by calling them from inside other ruleset blocks.
input(type="input_type" port="port_num" ruleset="rulesetname");
Here, a ruleset is bound to an input with the ruleset parameter of input(). Replace rulesetname with the name of the ruleset to be evaluated against the messages from this input. In case an input message is not explicitly bound to a ruleset, the default ruleset is triggered.
Example 22.11. Using rulesets
The following rulesets ensure different handling of remote messages coming from different ports. Add the following into /etc/rsyslog.conf:
ruleset(name="remote-6514") {
action(type="omfile" file="/var/log/remote-6514")
}
ruleset(name="remote-601") {
cron.* action(type="omfile" file="/var/log/remote-601-cron")
mail.* action(type="omfile" file="/var/log/remote-601-mail")
}
input(type="imtcp" port="6514" ruleset="remote-6514");
input(type="imtcp" port="601" ruleset="remote-601");
The rulesets in the above example define log destinations for the remote input from two ports; in the case of port 601, messages are sorted according to the facility. Then, the TCP input is enabled and bound to rulesets. Note that you must load the required modules (imtcp) for this configuration to work.
22.3.2. Compatibility with sysklogd
The compatibility mode specified via the -c option exists in rsyslog version 5 but not in version 7. Also, the sysklogd-style command-line options are deprecated and configuring rsyslog through these command-line options should be avoided. However, you can use several templates and directives to configure rsyslogd to emulate sysklogd-like behavior.
For more information on various rsyslogd options, see the rsyslogd(8) manual page.
22.4. Working with Queues in Rsyslog
Figure 22.1. Message Flow in Rsyslog
When rsyslog receives a message, it passes it to the preprocessor and then places it into the main message queue. Messages wait there to be dequeued and passed to the rule processor. The rule processor is a parsing and filtering engine; here, the rules defined in /etc/rsyslog.conf are applied. Based on these rules, the rule processor evaluates which actions are to be performed. Each action has its own action queue. Messages are passed through this queue to the respective action processor which creates the final output. Note that at this point, several actions can run simultaneously on one message. For this purpose, a message is duplicated and passed to multiple action processors.
In rsyslog, queues serve two main purposes:
- they serve as buffers that decouple producers and consumers in the structure of rsyslog
- they allow for parallelization of actions performed on messages
Warning
If an output plug-in is unable to deliver a message, it is stored in the preceding message queue. If the queue fills up, the inputs block until it is no longer full. This can, for example, block SSH logging, which in turn can prevent SSH access. Therefore it is advised to use dedicated action queues for outputs which are forwarded over a network or to a database.
22.4.1. Defining Queues
Based on where the messages are stored, rsyslog distinguishes several types of queues: direct, in-memory, disk, and disk-assisted in-memory queues, which are most widely used. To define a queue, use the following syntax in /etc/rsyslog.conf:
object(queue.type="queue_type")
By adding this setting, you can apply it to:
- the main message queue: replace object with main_queue
- an action queue: replace object with action
- a ruleset: replace object with ruleset
Replace queue_type with one of direct, linkedlist or fixedarray (which are in-memory queues), or disk.
Direct Queues
For many simple operations, such as writing output to a local file, building a queue in front of an action is not needed. To avoid queuing, use:

object(queue.type="Direct")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively. With direct queues, messages are passed directly and immediately from the producer to the consumer.
Disk Queues
Disk queues store messages strictly on a hard drive, which makes them highly reliable but also the slowest of all possible queuing modes. This mode can be used to prevent the loss of highly important log data. To create a disk queue, use the following syntax in /etc/rsyslog.conf:
object(queue.type="Disk")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively. Disk queues are written in parts, with a default size of 10 MB. This default size can be modified with the following configuration directive:
object(queue.size="size")
To specify a name prefix for the files that the disk queue creates, use:

object(queue.filename="name")
In-memory Queues
With an in-memory queue, the enqueued messages are held in memory, which makes the process very fast. The queued data is lost if the computer is power cycled or shut down. However, you can use the action(queue.saveonshutdown="on") setting to save the data before shutdown. There are two types of in-memory queues:
- FixedArray queue — the default mode for the main message queue, with a limit of 10,000 elements. This type of queue uses a fixed, pre-allocated array that holds pointers to queue elements. Due to these pointers, even if the queue is empty a certain amount of memory is consumed. However, FixedArray offers the best run time performance and is optimal when you expect a relatively low number of queued messages and high performance.
- LinkedList queue — here, all structures are dynamically allocated in a linked list, thus the memory is allocated only when needed. LinkedList queues handle occasional message bursts very well.
To define a LinkedList queue, use:

object(queue.type="LinkedList")
To define a FixedArray queue, use:

object(queue.type="FixedArray")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively.
Disk-Assisted In-memory Queues
Both disk and in-memory queues have their advantages, and rsyslog lets you combine them in disk-assisted in-memory queues. To do so, configure a normal in-memory queue and then add the queue.filename="file_name" directive to its block to define a file name for disk assistance. This queue then becomes disk-assisted, which means it couples an in-memory queue with a disk queue to work in tandem.
To control when the queue starts and stops writing to disk, set the watermarks:

object(queue.highwatermark="number")
object(queue.lowwatermark="number")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively. Replace number with a number of enqueued messages. When an in-memory queue reaches the number defined by the high watermark, it starts writing messages to disk and continues until the in-memory queue size drops to the number defined with the low watermark. Correctly set watermarks minimize unnecessary disk writes, but also leave memory space for message bursts, since writing to disk files is rather lengthy. Therefore, the high watermark must be lower than the whole queue capacity set with queue.size. The difference between the high watermark and the overall queue size is a spare memory buffer reserved for message bursts. On the other hand, setting the high watermark too low will turn on disk assistance unnecessarily often.
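For illustration, a disk-assisted action queue combining the directives above might be declared as follows; the file path and sizes are example values only:

*.* action(type="omfile" file="/var/log/all.log"
           queue.type="linkedlist"
           queue.filename="example_da"
           queue.size="10000"
           queue.highwatermark="8000"
           queue.lowwatermark="2000")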
Example 22.12. Reliable Forwarding of Log Messages to a Server
Rsyslog is often used to maintain a centralized logging system, where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, it is advisable to configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues are not configurable for connections using the UDP protocol.
Procedure 22.1. Forwarding To a Single Server
- Use the following configuration in
/etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory:

*.* action(type="omfwd"
           queue.type="LinkedList"
           queue.filename="example_fwd"
           action.resumeRetryCount="-1"
           queue.saveonshutdown="on"
           Target="example.com" Port="6514" Protocol="tcp")
Where:
- queue.type enables a LinkedList in-memory queue,
- queue.filename defines a disk storage; in this case, the backup files are created in the /var/lib/rsyslog/ directory with the example_fwd prefix,
- the
action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding,
- the enabled
queue.saveonshutdown setting saves in-memory data if rsyslog shuts down,
With the above configuration, rsyslog keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance.
Procedure 22.2. Forwarding To Multiple Servers
- Each destination server requires a separate forwarding rule, action queue specification, and backup file on disk. For example, use the following configuration in
/etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory:

*.* action(type="omfwd"
           queue.type="LinkedList"
           queue.filename="example_fwd1"
           action.resumeRetryCount="-1"
           queue.saveonshutdown="on"
           Target="example1.com" Protocol="tcp")
*.* action(type="omfwd"
           queue.type="LinkedList"
           queue.filename="example_fwd2"
           action.resumeRetryCount="-1"
           queue.saveonshutdown="on"
           Target="example2.com" Protocol="tcp")
22.4.2. Creating a New Directory for rsyslog Log Files
Rsyslog runs as the syslogd daemon and is managed by SELinux. Therefore all files to which rsyslog is required to write must have the appropriate SELinux file context.
Procedure 22.3. Creating a New Working Directory
- If required to use a different directory to store working files, create a directory as follows:
~]#
mkdir /rsyslog
- Install utilities to manage SELinux policy:
~]#
yum install policycoreutils-python
- Set the SELinux directory context type to be the same as the
/var/lib/rsyslog/ directory:
~]#
semanage fcontext -a -t syslogd_var_lib_t /rsyslog
- Apply the SELinux context:
~]#
restorecon -R -v /rsyslog
restorecon reset /rsyslog context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:syslogd_var_lib_t:s0
- If required, check the SELinux context as follows:
~]#
ls -Zd /rsyslog
drwxr-xr-x. root root system_u:object_r:syslogd_var_lib_t:s0 /rsyslog
- Create subdirectories as required. For example:
~]#
mkdir /rsyslog/work/
The subdirectories will be created with the same SELinux context as the parent directory.
/etc/rsyslog.confimmediately before it is required to take effect:global(workDirectory=”/rsyslog/work”)
This setting will remain in effect until the next workDirectory directive is encountered while parsing the configuration files.
22.4.3. Managing Queues
Limiting Queue Size
To limit the size a queue can grow to, use:

object(queue.highwatermark="number")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively. Replace number with a number of enqueued messages. You can set the queue size only as the number of messages, not as their actual memory size. The default queue size is 10,000 messages for the main message queue and ruleset queues, and 1,000 for action queues.
It is possible to set a hard limit on the amount of disk space a disk queue can use:

object(queue.maxdiskspace="number")
Replace object with main_queue, action or ruleset. When the size limit specified by number is hit, messages are discarded until a sufficient amount of space is freed by dequeued messages.
Discarding Messages
When a queue reaches a certain number of messages, less important messages can be discarded in order to save space. To define the discard mark, use:

object(queue.discardmark="number")
Replace object with main_queue, action or ruleset to apply this option to the main message queue, an action queue, or a ruleset, respectively. Here, number stands for the number of messages that have to be in the queue to start the discarding process. To define which messages to discard, use:
object(queue.discardseverity="number")
Replace number with one of the following priority numbers: 7 (debug), 6 (info), 5 (notice), 4 (warning), 3 (err), 2 (crit), 1 (alert), or 0 (emerg). With this setting, both newly incoming and already queued messages with a priority lower than the defined value are erased from the queue immediately after the discard mark is reached.
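For example, the following sketch makes the main message queue discard messages with the info (6) and debug (7) priorities once 9,000 messages are enqueued; the numbers are illustrative:

main_queue(queue.discardmark="9000" queue.discardseverity="6")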
Using Timeframes
Rsyslog can be configured to process queues only during a specific time frame. To define the beginning of the time frame, use:

object(queue.dequeuetimebegin="hour")
To define the end of the time frame, use:

object(queue.dequeuetimeend="hour")

Replace hour with an hour of the day; only hour granularity (values 0 to 23) is supported.
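Following this syntax, the sketch below restricts dequeuing from the main message queue to the period between 2 a.m. and 6 a.m.; the hours are illustrative:

main_queue(queue.dequeuetimebegin="2" queue.dequeuetimeend="6")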
Configuring Worker Threads
A worker thread performs a specified action on the enqueued message. To specify the minimum number of messages that invoke an additional worker thread, use:

object(queue.workerthreadminimummessages="number")
Replace number with a number of messages. To set the maximum number of worker threads that can run in parallel, use:

object(queue.workerthreads="number")
where number stands for the maximum number of worker threads. In addition, you can specify the time after which an inactive worker thread is shut down:

object(queue.timeoutworkerthreadshutdown="time")

Replace time with the duration in milliseconds.
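As a sketch, the following action queue starts a second worker thread only after 1,000 messages are enqueued and shuts idle workers down after one minute; the file path and values are illustrative:

*.* action(type="omfile" file="/var/log/all.log"
           queue.type="linkedlist"
           queue.workerthreads="2"
           queue.workerthreadminimummessages="1000"
           queue.timeoutworkerthreadshutdown="60000")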
Batch Dequeuing
To increase performance, you can configure rsyslog to dequeue multiple messages at once. To set the upper limit for such dequeueing, use:

object(queue.dequeuebatchsize="number")

Replace number with the maximum number of messages that can be dequeued at once.
Terminating Queues
When terminating a queue that still contains messages, you can try to minimize the data loss by specifying a time interval for worker threads to finish the queue processing:

object(queue.timeoutshutdown="time")

Specify time in milliseconds.
If, after that period, there are still some enqueued messages, workers finish the current data element and then terminate; unprocessed messages are therefore lost. Another time interval can be set for workers to finish the final element:

object(queue.timeoutactioncompletion="time")
In case these timeouts expire, you can save the remaining data by specifying:

object(queue.saveonshutdown="on")

If set, all queue elements are saved to disk before rsyslog terminates.
22.4.4. Using the New Syntax for rsyslog queues
In the new syntax available in rsyslog 7, queues are defined inside the action() object, which can be used both separately and inside a ruleset in /etc/rsyslog.conf. The format of an action queue is as follows:
action(type="action_type "queue.size="queue_size" queue.type="queue_type" queue.filename="file_name"
Replace action_type with the name of the module that is to perform the action, and replace queue_size with the maximum number of messages the queue can contain. For queue_type, choose disk or select from one of the in-memory queues: direct, linkedlist or fixedarray. For file_name specify only a file name, not a path. Note that if creating a new directory to hold log files, the SELinux context must be set. See Section 22.4.2, “Creating a New Directory for rsyslog Log Files” for an example.
Example 22.13. Defining an Action Queue
action(type="omfile" queue.size="10000" queue.type="linkedlist" queue.filename="logfile")
*.* action(type="omfile" file="/var/lib/rsyslog/log_file
)
*.* action(type="omfile"
queue.filename="log_file"
queue.type="linkedlist"
queue.size="10000"
)
The default work directory, or the last work directory to be set, will be used. If required to use a different work directory, add a line as follows before the action queue:

global(workDirectory="/directory")
Example 22.14. Forwarding To a Single Server Using the New Syntax
The omfwd plug-in is used to provide forwarding over UDP or TCP. The default is UDP. As the plug-in is built in, it does not have to be loaded.
Use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory:
*.* action(type="omfwd"
queue.type="linkedlist"
queue.filename="example_fwd"
action.resumeRetryCount="-1"
queue.saveOnShutdown="on"
target="example.com" port="6514" protocol="tcp"
)
queue.type="linkedlist"enables a LinkedList in-memory queue,queue.filenamedefines a disk storage. The backup files are created with the example_fwd prefix, in the working directory specified by the preceding globalworkDirectorydirective,- the
action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding,
- the enabled
queue.saveOnShutdown="on"saves in-memory data if rsyslog shuts down, - the last line forwards all received messages to the logging server, port specification is optional.
22.5. Configuring rsyslog on a Logging Server
The rsyslog service provides facilities both for running a logging server and for configuring individual systems to send their log files to the logging server. See Example 22.12, "Reliable Forwarding of Log Messages to a Server" for information on client rsyslog configuration.
The rsyslog service must be installed on the system that you intend to use as a logging server and on all systems that will be configured to send logs to it. Rsyslog is installed by default in Red Hat Enterprise Linux 7. If required, to ensure that it is, enter the following command as root:
~]# yum install rsyslog

The default protocol and port for syslog traffic is UDP and 514, as listed in the /etc/services file. However, rsyslog defaults to using TCP on port 514. In the configuration file, /etc/rsyslog.conf, TCP is indicated by @@.
To view the ports currently defined for syslog in the SELinux policy, enter the following command:
~]# semanage port -l | grep syslog
syslog_tls_port_t tcp 6514, 10514
syslog_tls_port_t udp 6514, 10514
syslogd_port_t tcp 601, 20514
syslogd_port_t udp 514, 601, 20514
The semanage utility is provided as part of the policycoreutils-python package. If required, install the package as follows:
~]# yum install policycoreutils-python
The SELinux type for rsyslog, rsyslogd_t, is configured to permit sending and receiving to the remote shell (rsh) port with SELinux type rsh_port_t, which defaults to TCP on port 514. Therefore it is not necessary to use semanage to explicitly permit TCP on port 514. For example, to check what SELinux is set to permit on port 514, enter a command as follows:
~]# semanage port -l | grep 514
output omitted
rsh_port_t tcp 514
syslogd_port_t tcp 6514, 601
syslogd_port_t udp 514, 6514, 601
All of the commands in the following procedures must be entered as the root user.
Procedure 22.4. Configure SELinux to Permit rsyslog Traffic on a Port
To configure SELinux to allow rsyslog traffic on a new port, follow this procedure on the logging server and the clients. For example, to send and receive TCP traffic on port 10514, proceed as follows:
~]#
semanage port -a -t syslogd_port_t -p tcp 10514
- Review the SELinux ports by entering the following command:
~]#
semanage port -l | grep syslog
- If the new port was already configured in
/etc/rsyslog.conf, restart rsyslog now for the change to take effect:
~]#
service rsyslog restart
- Verify which ports
rsyslog is now listening to:
~]#
netstat -tnlp | grep rsyslog
tcp 0 0 0.0.0.0:10514 0.0.0.0:* LISTEN 2528/rsyslogd
tcp 0 0 :::10514 :::* LISTEN 2528/rsyslogd
See the semanage-port(8) manual page for more information on the semanage port command.
Procedure 22.5. Configuring firewalld
Configure firewalld to allow incoming rsyslog traffic. For example, to allow TCP traffic on port 10514, proceed as follows:
~]#
firewall-cmd --zone=zone --add-port=10514/tcp
success
Where zone is the zone of the interface to use. Note that these changes will not persist after the next system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option. For more information on opening and closing ports in firewalld, see the Red Hat Enterprise Linux 7 Security Guide.
- To verify the above settings, use a command as follows:
~]#
firewall-cmd --list-all
public (default, active)
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports: 10514/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
Procedure 22.6. Configuring rsyslog to Receive and Sort Remote Log Messages
- Open the
/etc/rsyslog.conf file in a text editor and proceed as follows:
- Add these lines below the modules section but above the
Provides UDP syslog reception section:

# Define templates before the rules that use them
### Per-Host Templates for Remote Systems ###
$template TmplAuthpriv, "/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
$template TmplMsg, "/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
- Replace the default
Provides TCP syslog reception section with the following:

# Provides TCP syslog reception
$ModLoad imtcp
# Adding this ruleset to process remote messages
$RuleSet remote1
authpriv.*   ?TmplAuthpriv
*.info;mail.none;authpriv.none;cron.none   ?TmplMsg
$RuleSet RSYSLOG_DefaultRuleset   #End the rule set by switching back to the default rule set
$InputTCPServerBindRuleset remote1  #Define a new input and bind it to the "remote1" rule set
$InputTCPServerRun 10514
Save the changes to the /etc/rsyslog.conf file.
- The
rsyslog service must be running on both the logging server and the systems attempting to log to it.
- Use the
systemctl command to start the rsyslog service:
~]#
systemctl start rsyslog
- To ensure the
rsyslog service starts automatically in future, enter the following command as root:
~]#
systemctl enable rsyslog
22.5.1. Using The New Template Syntax on a Logging Server
template(name="TmplAuthpriv" type="string"
string="/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
)
template(name="TmplMsg" type="string"
string="/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
)
template(name="TmplAuthpriv" type="list") {
constant(value="/var/log/remote/auth/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}
template(name="TmplMsg" type="list") {
constant(value="/var/log/remote/msg/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}
This template text format might be easier to read for those new to rsyslog and therefore can be easier to adapt as requirements change.
module(load="imtcp")
ruleset(name="remote1"){
authpriv.* action(type="omfile" DynaFile="TmplAuthpriv")
*.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}
input(type="imtcp" port="10514" ruleset="remote1")22.6. Using Rsyslog Modules
Due to its modular design, rsyslog offers a variety of modules which provide additional functionality. To load a module, use the following syntax:

module(load="MODULE")
For example, to load the Text File Input Module (imfile) that enables rsyslog to convert any standard text files into syslog messages, specify the following line in the /etc/rsyslog.conf configuration file:
module(load="imfile")
- Input Modules — Input modules gather messages from various sources. The name of an input module always starts with the im prefix, such as imfile and imjournal.
- Output Modules — Output modules provide a facility to issue messages to various targets, such as sending them across a network, storing them in a database, or encrypting them. The name of an output module always starts with the om prefix, such as omsnmp, omrelp, and so on.
- Parser Modules — These modules are useful in creating custom parsing rules or to parse malformed messages. With moderate knowledge of the C programming language, you can create your own message parser. The name of a parser module always starts with the pm prefix, such as pmrfc5424, pmrfc3164, and so on.
- Message Modification Modules — Message modification modules change the content of syslog messages. Names of these modules start with the mm prefix. Message modification modules such as mmanon, mmnormalize, or mmjsonparse are used for anonymization or normalization of messages.
- String Generator Modules — String generator modules generate strings based on the message content and cooperate closely with the template feature provided by rsyslog. For more information on templates, see Section 22.2.3, “Templates”. The name of a string generator module always starts with the sm prefix, such as smfile or smtradfile.
- Library Modules — Library modules provide functionality for other loadable modules. These modules are loaded automatically by rsyslog when needed and cannot be configured by the user.
Warning
22.6.1. Importing Text Files
The Text File Input Module, imfile, enables rsyslog to convert any text file into a stream of syslog messages. You can use imfile to import log messages from applications that create their own text file logs. To load imfile, add the following into /etc/rsyslog.conf:
module(load="imfile"
       PollingInterval="int")
It is sufficient to load imfile once, even when importing multiple files. The PollingInterval module argument specifies how often rsyslog checks for changes in connected text files. The default interval is 10 seconds; to change it, replace int with a time interval specified in seconds.
Then, specify the text files to be imported in /etc/rsyslog.conf:
# File 1
input(type="imfile"
File="path_to_file"
Tag="tag:"
Severity="severity"
Facility="facility")
# File 2
input(type="imfile"
File="path_to_file2")
...
- Replace path_to_file with a path to the text file.
- Replace tag: with a tag name for this message.
Example 22.15. Importing Text Files
To apply the processing capabilities of rsyslog to messages that the Apache HTTP Server logs in text format, use the imfile module to import the messages. Add the following into /etc/rsyslog.conf:
module(load="imfile")
input(type="imfile"
File="/var/log/httpd/error_log"
Tag="apache-error:")
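The loaded module can serve several inputs; an illustrative sketch that additionally imports the Apache access log under its own (hypothetical) tag:
input(type="imfile"
      File="/var/log/httpd/access_log"
      Tag="apache-access:")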
22.6.2. Exporting Messages to a Database
Rsyslog supports exporting syslog messages to databases through output modules such as ommysql, ompgsql, omoracle, or ommongodb. As an alternative, use the generic omlibdbi output module that relies on the libdbi library. The omlibdbi module supports the Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, mSQL, MySQL, and PostgreSQL database systems.
Example 22.16. Exporting Rsyslog Messages to a Database
To store the rsyslog messages in a MySQL database, add the following into /etc/rsyslog.conf:
module(load="ommysql")
*.* action(type="ommysql"
           server="database-server"
           db="database-name"
           uid="database-userid"
           pwd="database-password"
           serverport="1234")
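You do not have to export every message; an ordinary selector can narrow what is written to the database. An illustrative sketch, reusing the placeholders above, that exports only mail-facility messages:
mail.* action(type="ommysql"
              server="database-server"
              db="database-name"
              uid="database-userid"
              pwd="database-password")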
22.6.3. Enabling Encrypted Transport
Configuring Encrypted Message Transfer with TLS
- Create public key, private key and certificate file, see Section 14.1.11, “Generating a New Key and Certificate”.
- On the server side, configure the following in the /etc/rsyslog.conf configuration file:
- Set the gtls netstream driver as the default driver:
global(defaultnetstreamdriver="gtls")
- Provide paths to certificate files:
global(defaultnetstreamdrivercafile="path_ca.pem" defaultnetstreamdrivercertfile="path_cert.pem" defaultnetstreamdriverkeyfile="path_key.pem")
You can merge all global directives into a single block if you prefer a less cluttered configuration file.
Replace:
- path_ca.pem with a path to your public key
- path_cert.pem with a path to the certificate file
- path_key.pem with a path to the private key
- Load the imtcp module and set driver options:
module(load="imtcp" StreamDriver.Mode="number" StreamDriver.AuthMode="anon")
- Start a server:
input(type="imtcp" port="port")
Replace:
- number to specify the driver mode. To enable TCP-only mode, use 1.
- port with the port number at which to start a listener, for example 10514.
The anon setting means that the client is not authenticated.
- On the client side, configure the following in the /etc/rsyslog.conf configuration file:
- Load the public key:
global(defaultnetstreamdrivercafile="path_ca.pem")
Replace path_ca.pem with a path to the public key. - Set the gtls netstream driver as the default driver:
global(defaultnetstreamdriver="gtls")
- Configure the driver and specify what action will be performed:
module(load="imtcp" streamdrivermode="number" streamdriverauthmode="anon")
input(type="imtcp" address="server.net" port="port")
Replace number, anon, and port with the same values as on the server.
On the last line in the above listing, an example input receives messages from the server at the specified TCP port.
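In many deployments the client forwards its own messages to the server instead of opening a listener; a minimal sketch of such a forwarding action, assuming a hypothetical server name and the anon mode used above (stream-driver parameter support in omfwd depends on the rsyslog version):
action(type="omfwd" protocol="tcp" target="server.example.com" port="10514"
       StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="anon")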
Configuring Encrypted Message Transfer with GSSAPI
- Put the following configuration in /etc/rsyslog.conf:
$ModLoad imgssapi
This directive loads the imgssapi module.
This directive loads the imgssapi module. - Specify the input as follows:
$InputGSSServerServiceName name
$InputGSSServerPermitPlainTCP on
$InputGSSServerMaxSessions number
$InputGSSServerRun port
- Replace name with the name of the GSS server.
- Replace number with the maximum number of sessions supported. This number is not limited by default.
- Replace port with a selected port on which you want to start a GSS server.
The $InputGSSServerPermitPlainTCP on setting permits the server to also receive plain TCP messages on the same port. This is off by default.
Note
The imgssapi module is initialized as soon as the configuration file reader encounters the $InputGSSServerRun directive in the /etc/rsyslog.conf configuration file. The supplementary options configured after $InputGSSServerRun are therefore ignored. For the configuration to take effect, all imgssapi configuration options must be placed before $InputGSSServerRun.
Example 22.17. Using GSSAPI
$ModLoad imgssapi
$InputGSSServerPermitPlainTCP on
$InputGSSServerRun 1514
22.6.4. Using RELP
Configuring RELP
To configure RELP, configure both the server and the client in the /etc/rsyslog.conf file.
- To configure the client:
- Load the required modules:
module(load="imuxsock")
module(load="omrelp")
module(load="imtcp")
- Configure the TCP input as follows:
input(type="imtcp" port="port")
Replace port with the port at which to start the listener.
- Configure the transport settings:
action(type="omrelp" target="target_IP" port="target_port")
Replace target_IP and target_port with the IP address and port that identify the target server.
- To configure the server:
- Configure loading the module:
module(load="imuxsock")
module(load="imrelp" ruleset="relp")
- Configure the TCP input similarly to the client configuration:
input(type="imrelp" port="target_port")
Replace target_port with the same value as on the clients. - Configure the rules and choose an action to be performed. In the following example, log_path specifies the path for storing messages:
ruleset (name="relp") { action(type="omfile" file="log_path") }
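To verify the transport end to end, you can generate a test message on the client and confirm that it reaches the file configured on the server; a minimal sketch, assuming log_path was replaced with /var/log/relp.log (the prompts indicate on which machine each command runs):
client]$ logger "RELP test message"
server]# grep "RELP test message" /var/log/relp.log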
Configuring RELP with TLS
To configure RELP with TLS, configure both the server and the client in the /etc/rsyslog.conf file, and set up certificates as described below.
- Create public key, private key and certificate file. For instructions, see Section 14.1.11, “Generating a New Key and Certificate”.
- To configure the client:
- Load the required modules:
module(load="imuxsock")
module(load="omrelp")
module(load="imtcp")
- Configure the TCP input as follows:
input(type="imtcp" port="port")
Replace port with the port at which to start the listener.
- Configure the transport settings:
action(type="omrelp" target="target_IP" port="target_port" tls="on"
       tls.caCert="path_ca.pem"
       tls.myCert="path_cert.pem"
       tls.myPrivKey="path_key.pem"
       tls.authmode="mode"
       tls.permittedpeer=["peer_name"]
       )
Replace:- target_IP and target_port with the IP address and port that identify the target server.
- path_ca.pem, path_cert.pem, and path_key.pem with paths to the certificate files
- mode with the authentication mode for the transaction. Use either "name" or "fingerprint".
- peer_name with a certificate fingerprint of the permitted peer. If you specify this, tls.permittedpeer restricts the connection to the selected group of peers.
The tls="on" setting enables the TLS protocol.
- To configure the server:
- Configure loading the module:
module(load="imuxsock")
module(load="imrelp" ruleset="relp")
- Configure the TCP input similarly to the client configuration:
input(type="imrelp" port="target_port" tls="on"
      tls.caCert="path_ca.pem"
      tls.myCert="path_cert.pem"
      tls.myPrivKey="path_key.pem"
      tls.authmode="name"
      tls.permittedpeer=["peer_name","peer_name1","peer_name2"]
      )
Replace these values with the same ones used on the client.
ruleset (name="relp") { action(type="omfile" file="log_path") }
22.7. Interaction of Rsyslog and Journal
rsyslogd uses the imjournal module as a default input mode for journal files. With this module, you import not only the messages but also the structured data provided by journald. Also, older data can be imported from journald (unless forbidden with the IgnorePreviousMessages option). See Section 22.8.1, “Importing Data from Journal” for basic configuration of imjournal.
Alternatively, you can configure rsyslogd to read from the socket provided by journal as an output for syslog-based applications. The path to the socket is /run/systemd/journal/syslog. Use this option when you want to maintain plain rsyslog messages. Compared to imjournal, the socket input currently offers more features, such as ruleset binding or filtering. To import Journal data through the socket, use the following configuration in /etc/rsyslog.conf:
module(load="imuxsock"
SysSock.Use="on"
SysSock.Name="/run/systemd/journal/syslog")
To output messages from rsyslog to Journal, load the omjournal module. Configure the output in /etc/rsyslog.conf as follows:
module(load="omjournal")
action(type="omjournal")
For example, the following configuration forwards all messages received on TCP port 10514 to the Journal:
module(load="imtcp")
module(load="omjournal")
ruleset(name="remote") {
action(type="omjournal")
}
input(type="imtcp" port="10514" ruleset="remote")
22.8. Structured Logging with Rsyslog
Oct 25 10:20:37 localhost anacron[1395]: Jobs will be executed sequentially
{"timestamp":"2013-10-25T10:20:37", "host":"localhost", "program":"anacron", "pid":"1395", "msg":"Jobs will be executed sequentially"}imjournal module. With the mmjsonparse module, you can parse data imported from Journal and from other sources and process them further, for example as a database output. For parsing to be successful, mmjsonparse requires input messages to be structured in a way that is defined by the Lumberjack project.
@cee: {"pid":17055, "uid":1000, "gid":1000, "appname":"logger", "msg":"Message text."} libumberlog library to generate messages in the lumberjack-compliant form. For more information on libumberlog, see the section called “Online Documentation”.
22.8.1. Importing Data from Journal
The imjournal module is Rsyslog's input module to natively read the journal files (see Section 22.7, “Interaction of Rsyslog and Journal”). Journal messages are then logged in text format like other rsyslog messages. However, with further processing, it is possible to translate the meta data provided by Journal into a structured message.
To import Journal data with imjournal, use the following configuration in /etc/rsyslog.conf:
module(load="imjournal"
       PersistStateInterval="number_of_messages"
       StateFile="path"
       ratelimit.interval="seconds"
       ratelimit.burst="burst_number"
       IgnorePreviousMessages="off/on")
- With number_of_messages, you can specify how often the journal data must be saved. This will happen each time the specified number of messages is reached.
- Replace path with a path to the state file. This file tracks the journal entry that was the last one processed.
- With seconds, you set the length of the rate limit interval. The number of messages processed during this interval cannot exceed the value specified in burst_number. The default setting is 20,000 messages per 600 seconds. Rsyslog discards messages that come after the maximum burst within the time frame specified.
- With IgnorePreviousMessages, you can ignore messages that are currently in Journal and import only new messages, which is used when there is no state file specified. The default setting is off. Note that if this setting is off and there is no state file, all messages in the Journal are processed, even if they were already processed in a previous rsyslog session.
Note
You can use imjournal simultaneously with the imuxsock module, which is the traditional system log input. However, to avoid message duplication, you must prevent imuxsock from reading the Journal's system socket. To do so, use the SysSock.Use directive:
module(load="imjournal")
module(load="imuxsock"
       SysSock.Use="off"
       Socket="/run/systemd/journal/syslog")
For the list of meta data fields provided by Journal, see the systemd.journal-fields(7) manual page. For example, it is possible to focus on kernel journal fields, which are used by messages originating in the kernel.
22.8.2. Filtering Structured Messages
To create a lumberjack-formatted message, use the following template:
template(name="CEETemplate" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag% @cee: %$!all-json%\n")
This template prepends the @cee: string to the JSON string and can be applied, for example, when creating an output file with the omfile module. To access JSON field names, use the $! prefix. For example, the following filter condition searches for messages with a specific host name and UID:
($!hostname == "hostname" && $!UID == "UID")
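Putting these pieces together, the following sketch parses incoming lumberjack-formatted messages and writes them out through the template above; the output file path is illustrative:
module(load="mmjsonparse")
*.* :mmjsonparse:
template(name="CEETemplate" type="string"
         string="%TIMESTAMP% %HOSTNAME% %syslogtag% @cee: %$!all-json%\n")
*.* action(type="omfile" file="/var/log/ceelog" template="CEETemplate")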
22.8.3. Parsing JSON
The mmjsonparse module is used for parsing structured messages. These messages can come from Journal or from other input sources, and must be formatted in a way defined by the Lumberjack project. Such messages are identified by the presence of the @cee: string. mmjsonparse then checks whether the JSON structure is valid, and the message is parsed.
To use mmjsonparse, add the following configuration to /etc/rsyslog.conf:
module(load="mmjsonparse")
*.* :mmjsonparse:
In this example, the mmjsonparse module is loaded on the first line, and all messages are then forwarded to it. Currently, there are no configuration parameters available for mmjsonparse.
22.8.4. Storing Messages in the MongoDB
To store rsyslog messages in a MongoDB database, use the following configuration in /etc/rsyslog.conf (configuration parameters for ommongodb are available only in the new configuration format; see Section 22.3, “Using the New Configuration Format”):
module(load="ommongodb")
*.* action(type="ommongodb"
           server="DB_server"
           serverport="port"
           db="DB_name"
           collection="collection_name"
           uid="UID"
           pwd="password")
- Replace DB_server with the name or address of the MongoDB server. Specify port to select a non-standard port from the MongoDB server. The default port value is 0 and usually there is no need to change this parameter.
- With DB_name, you identify to which database on the MongoDB server you want to direct the output. Replace collection_name with the name of a collection in this database. In MongoDB, a collection is a group of documents, the equivalent of an RDBMS table.
- You can set your login details by replacing UID and password.
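As an illustration with concrete, hypothetical values, the following sketch stores all messages in the log collection of a syslog database on a local MongoDB server; uid and pwd are omitted here on the assumption that the server does not require authentication:
module(load="ommongodb")
*.* action(type="ommongodb"
           server="127.0.0.1"
           db="syslog"
           collection="log")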
22.9. Debugging Rsyslog
To run rsyslogd in debugging mode, use the following command:
rsyslogd -dn
With this command, rsyslogd produces debugging information and prints it to the standard output. The -n stands for "no fork". You can modify debugging with environment variables; for example, you can store the debug output in a log file. Before starting rsyslogd, type the following on the command line:
export RSYSLOG_DEBUGLOG="path"
export RSYSLOG_DEBUG="Debug"
For more information on debugging options, see the rsyslogd(8) manual page.
To check whether the syntax used in the /etc/rsyslog.conf file is valid, use:
rsyslogd -N 1
Where 1 represents the level of verbosity of the output message. This is a forward-compatibility option; currently, only one level is provided. However, you must add this argument to run the validation.
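A successful validation run prints output similar to the following; the exact version string depends on the installed rsyslog package:
~]# rsyslogd -N 1
rsyslogd: version 8.24.0, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.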
22.10. Using the Journal
The Journal is a component of systemd that can be used in parallel with, or in place of, rsyslogd. The Journal was developed to address problems connected with traditional logging. It is closely integrated with the rest of the system, supports various logging technologies, and manages access to the log files.
Logging data is collected, stored, and processed by the Journal's journald service. It creates and maintains binary files called journals based on logging information that is received from the kernel, from user processes, from the standard output and standard error output of system services, or via its native API. These journals are structured and indexed, which provides relatively fast seek times. Journal entries can carry a unique identifier. The journald service collects numerous meta data fields for each log message. The actual journal files are secured, and therefore cannot be manually edited.
22.10.1. Viewing Log Files
To access the journal logs, use the journalctl command. Type as root:
~]# journalctl
The output of this command is a list of all log entries, similar to the content of /var/log/messages/ but with certain improvements:
- the priority of entries is marked visually. Lines of error priority and higher are highlighted in red, and a bold font is used for lines with notice and warning priority
- the time stamps are converted for the local time zone of your system
- all logged data is shown, including rotated logs
- the beginning of a boot is tagged with a special line
Example 22.18. Example Output of journalctl
# journalctl
-- Logs begin at Thu 2013-08-01 15:42:12 CEST, end at Thu 2013-08-01 15:48:48 CEST. --
Aug 01 15:42:12 localhost systemd-journal[54]: Allowing runtime journal files to grow to 49.7M.
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpuset
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpu
[...]
A convenient way to reduce the journalctl output is to use the -n option, which lists only the specified number of most recent log entries:
journalctl -n Number
When no Number is specified, journalctl displays the ten most recent entries.
The journalctl command also allows controlling the form of the output with the following syntax:
journalctl -o form
Replace form with a keyword specifying a desired form of output, such as verbose, which returns full-structured entry items with all fields; export, which creates a binary stream suitable for backups and network transfer; and json, which formats entries as JSON data structures. For the full list of keywords, see the journalctl(1) manual page.
Example 22.19. Verbose journalctl Output
~]# journalctl -o verbose
[...]
Fri 2013-08-02 14:41:22 CEST [s=e1021ca1b81e4fc688fad6a3ea21d35b;i=55c;b=78c81449c920439da57da7bd5c56a770;m=27cc
    _BOOT_ID=78c81449c920439da57da7bd5c56a770
    PRIORITY=5
    SYSLOG_FACILITY=3
    _TRANSPORT=syslog
    _MACHINE_ID=69d27b356a94476da859461d3a3bc6fd
    _HOSTNAME=localhost.localdomain
    _PID=562
    _COMM=dbus-daemon
    _EXE=/usr/bin/dbus-daemon
    _CMDLINE=/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
    _SYSTEMD_CGROUP=/system/dbus.service
    _SYSTEMD_UNIT=dbus.service
    SYSLOG_IDENTIFIER=dbus
    SYSLOG_PID=562
    _UID=81
    _GID=81
    _SELINUX_CONTEXT=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023
    MESSAGE=[system] Successfully activated service 'net.reactivated.Fprint'
    _SOURCE_REALTIME_TIMESTAMP=1375447282839181
[...]
This example lists the meta data fields stored for a single log entry. For a complete description of these fields, see the systemd.journal-fields(7) manual page.
22.10.2. Access Control
By default, Journal users without root privileges can only see log files generated by them. The system administrator can add selected users to the adm group, which grants them access to complete log files. To do so, type as root:
usermod -a -G adm username
Here, replace username with the name of the user to be added to the adm group. This user then receives the same output of the journalctl command as the root user. Note that access control only works when persistent storage is enabled for Journal.
22.10.3. Using The Live View
When called without parameters, journalctl shows the full list of entries, starting with the oldest entry collected. With the live view, you can supervise the log messages in real time as new entries are continuously printed as they appear. To start journalctl in live view mode, type:
journalctl -f
22.10.4. Filtering Messages
The output of the journalctl command executed without parameters is often extensive; therefore, you can use various filtering methods to extract information to meet your needs.
Filtering by Priority
To view entries with a selected or higher priority, use the following command:
journalctl -p priority
Here, replace priority with one of the following keywords (or with a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0).
Example 22.20. Filtering by Priority
journalctl -p err
Filtering by Time
To view log entries from the current boot only, use:
journalctl -b
If you reboot your system just occasionally, the -b option will not significantly reduce the output of journalctl. In such cases, time-based filtering is more helpful:
journalctl --since=value --until=value
With --since and --until, you can view only log messages created within a specified time range. You can pass values to these options in the form of date or time or both, as shown in the following example.
Example 22.21. Filtering by Time and Priority
journalctl -p warning --since="2013-3-16 23:59:59"
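Both boundaries can be combined in a single command; for example, to limit the output to a particular window (the values are illustrative):
journalctl --since="2013-3-16 23:59:59" --until="2013-3-17 12:00:00"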
Advanced Filtering
The Journal stores a number of meta data fields for each log message; for the complete list of fields systemd can store, see the systemd.journal-fields(7) manual page. This meta data is collected for each log message, without user intervention. Values are usually text-based, but can take binary and large values; fields can have multiple values assigned, though it is not very common.
To view a list of unique values that a certain field can take, use:
journalctl -F fieldname
To show only log entries matching a specific condition, use:
journalctl fieldname=value
Note
Since the number of meta data fields stored by systemd is quite large, it is easy to forget the exact name of the field of interest. When unsure, type:
journalctl
and press the Tab key two times. This shows a list of available field names. Similarly, type journalctl fieldname= and press Tab twice to list the unique values of that field; this serves as an alternative to journalctl -F fieldname.
journalctl fieldname=value1 fieldname=value2 ...
Specifying two matches for the same field results in a logical OR combination of the matches. Entries matching value1 or value2 are displayed.
journalctl fieldname1=value fieldname2=value ...
Specifying matches for two different field names combines them with a logical AND. Entries have to match both conditions to be shown.
You can also use the + symbol to create an explicit OR combination of matches for multiple fields:
journalctl fieldname1=value + fieldname2=value ...
Example 22.22. Advanced filtering
To display entries created by avahi-daemon.service or crond.service under the user with UID 70, use the following command:
journalctl _UID=70 _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=crond.service
Since two values are specified for the _SYSTEMD_UNIT field, both results will be displayed, but only when matching the _UID=70 condition. This can be expressed simply as: (UID=70 and (avahi or cron)).
You can apply the aforementioned filtering also in the live-view mode to keep track of the latest changes in a selected group of log entries:
journalctl -f fieldname=value ...
22.10.5. Enabling Persistent Storage
By default, Journal stores log files only in the /run/log/journal/ directory. This is sufficient to show recent log history with journalctl, but this directory is volatile; log data is not saved permanently. With the default configuration, syslog reads the journal logs and stores them in the /var/log/ directory. With persistent logging enabled, journal files are stored in /var/log/journal, which means they persist after reboot. Journal can then replace rsyslog for some users (but see the chapter introduction).
Enabled persistent storage has the following advantages:
- Richer data is recorded for troubleshooting over a longer period of time
- For immediate troubleshooting, richer data is available after a reboot
- The server console currently reads data from journal, not from log files
Persistent storage also has certain disadvantages:
- Even with persistent storage, the amount of data stored depends on free memory; there is no guarantee to cover a specific time span
- More disk space is needed for logs
To enable persistent storage for Journal, create the journal directory manually. As root, type:
mkdir -p /var/log/journal/
Then, restart journald to apply the change:
systemctl restart systemd-journald
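To confirm that journal data is now kept on disk, you can query how much space the journals occupy; the output shown is illustrative:
~]# journalctl --disk-usage
Archived and active journals take up 56.0M in the file system.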
22.11. Managing Log Files in a Graphical Environment
22.11.1. Viewing Log Files
Most log files are stored in plain text format, so you can view them with any text editor such as Vi or Emacs. Some log files are readable by all users on the system; however, root privileges are required to read most log files.
Note
To install the System Log graphical application, enter the following command as root:
~]# yum install gnome-system-log
After installation, you can start the application by typing the following at a shell prompt:
~]$ gnome-system-log
Figure 22.2. System Log
Figure 22.3. System Log - Filters
Figure 22.4. System Log - defining a filter
- Name — Specifies the name of the filter.
- Regular Expression — Specifies the regular expression that will be applied to the log file and will attempt to match any possible strings of text in it.
- Effect
- Highlight — If checked, the found results will be highlighted with the selected color. You may select whether to highlight the background or the foreground of the text.
- Hide — If checked, the found results will be hidden from the log file you are viewing.
Figure 22.5. System Log - enabling a filter
22.11.2. Adding a Log File
Figure 22.6. System Log - adding a log file
Note
The System Log application can also open log files compressed in the .gz format.
22.11.3. Monitoring Log Files
Figure 22.7. System Log - new log alert
22.12. Additional Resources
For more information on the rsyslog daemon and on how to locate, view, and monitor log files, see the resources listed below.
Installed Documentation
- rsyslogd(8) — The manual page for the rsyslogd daemon documents its usage.
- rsyslog.conf(5) — The manual page named rsyslog.conf documents available configuration options.
- logrotate(8) — The manual page for the logrotate utility explains in greater detail how to configure and use it.
- journalctl(1) — The manual page for the journalctl utility documents its usage.
- journald.conf(5) — This manual page documents available configuration options.
- systemd.journal-fields(7) — This manual page lists special Journal fields.
Installable Documentation
/usr/share/doc/rsyslog-version/html/index.html — This file, which is provided by the rsyslog-doc package from the Optional channel, contains information on rsyslog. See Section 9.5.7, “Adding the Optional and Supplementary Repositories” for more information on Red Hat additional channels. Before accessing the documentation, you must run the following command as root:
~]# yum install rsyslog-doc
Online Documentation
- RainerScript documentation on the rsyslog Home Page — Commented summary of data types, expressions, and functions available in RainerScript.
- rsyslog version 7 documentation on the rsyslog home page — Version 7 of rsyslog is available for Red Hat Enterprise Linux 7 in the rsyslog package.
- Description of queues on the rsyslog Home Page — General information on various types of message queues and their usage.
See Also
- Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the
suandsudocommands. - Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the
systemctlcommand to manage system services.
Chapter 23. Automating System Tasks
You can configure a task to run:
- regularly at a specified time using cron, see Section 23.1, “Scheduling a Recurring Job Using Cron”
- asynchronously at certain days using anacron, see Section 23.2, “Scheduling a Recurring Asynchronous Job Using Anacron”
- once at a specific time using at, see Section 23.3, “Scheduling a Job to Run at a Specific Time Using at”
- once when system load average drops to a specified value using batch, see Section 23.4, “Scheduling a Job to Run on System Load Drop Using batch”
- once on the next boot, see Section 23.5, “Scheduling a Job to Run on Next Boot Using a systemd Unit File”
23.1. Scheduling a Recurring Job Using Cron
Cron is a service that enables you to schedule running a task, often called a job, at regular times. A cron job is only executed if the system is running at the scheduled time. For scheduling jobs that can postpone their execution to when the system boots up, so a job is not "lost" if the system is not running, see Section 23.2, “Scheduling a Recurring Asynchronous Job Using Anacron”.
Cron jobs are specified in crontab files. These files are then read by the crond service, which executes the jobs.
23.1.1. Prerequisites for Cron Jobs
Before scheduling a cron job:
- Install the cronie package:
~]#
yum install cronie - The
crondservice is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it:~]#
systemctl enable crond.service - Start the
crondservice for the current session:~]#
systemctl start crond.service - (optional) Configure cron. For example, you can change:
- shell to be used when executing jobs
- the
PATHenvironment variable - mail addressee if a job sends emails.
See the crontab(5) manual page for information on configuringcron.
23.1.2. Scheduling a Cron Job
Scheduling a Job as root User
The root user uses the cron table in /etc/crontab, or, preferably, creates a cron table file in /etc/cron.d/. Use this procedure to schedule a job as root:
- Choose:
- in which minutes of an hour to execute the job. For example, use
0,10,20,30,40,50or0/10to specify every 10 minutes of an hour. - in which hours of a day to execute the job. For example, use
17-20to specify time from 17:00 to 20:59. - in which days of a month to execute the job. For example, use
15to specify 15th day of a month. - in which months of a year to execute the job. For example, use
Jun,Jul,Augor6,7,8to specify the summer months of the year. - in which days of the week to execute the job. For example, use
*for the job to execute independently of the day of week.
Combine the chosen values into the time specification. The above example values result in this specification:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug *
root. - Specify the command to execute. For example, use
/usr/local/bin/my-script.sh - Put the above specifications into a single line:
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh
- Add the resulting line to
/etc/crontab, or, preferably, create a cron table file in/etc/cron.d/and add the line there.
The following is an example of the /etc/crontab file:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name  command to be executed
Scheduling a Job as Non-root User
- From the user's shell, run:
[bob@localhost ~]$
crontab -e
This will start editing of the user's own crontab file using the editor specified by the VISUAL or EDITOR environment variable.
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * bob /home/bob/bin/script.sh
add:0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * /home/bob/bin/script.sh
- Save the file and exit the editor.
- (optional) To verify the new job, list the contents of the current user's crontab file by running:
[bob@localhost ~]$
crontab -l
@daily /home/bob/bin/script.sh
Scheduling Hourly, Daily, Weekly, and Monthly Jobs
- Put the actions you want your job to execute into a shell script.
- Put the shell script into one of the following directories:
/etc/cron.hourly//etc/cron.daily//etc/cron.weekly//etc/cron.monthly/
The crond service automatically executes any scripts present in the /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories at their corresponding times.
23.2. Scheduling a Recurring Asynchronous Job Using Anacron
Anacron, like cron, is a service that enables you to schedule running a task, often called a job, at regular times. However, anacron differs from cron in two ways:
- If the system is not running at the scheduled time, an
anacronjob is postponed until the system is running; - An
anacronjob can run once per day at most.
Anacron jobs are specified in anacrontab files. These files are then read by the crond service, which executes the jobs.
23.2.1. Prerequisites for Anacron Jobs
Before scheduling an anacron job:
- Verify that you have the cronie-anacron package installed:
~]#
rpm -q cronie-anacron
The cronie-anacron package is likely to be installed already, because it is a sub-package of the cronie package. If it is not installed, use this command:
~]#
yum install cronie-anacron - The
crondservice is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it:~]#
systemctl enable crond.service - Start the
crondservice for the current session:~]#
systemctl start crond.service - (optional) Configure anacron. For example, you can change:
- shell to be used when executing jobs
- the
PATHenvironment variable - mail addressee if a job sends emails.
See the anacrontab(5) manual page for information on configuringanacron.
23.2.2. Scheduling an Anacron Job
Scheduling an anacron Job as root User
The root user uses the anacron table in /etc/anacrontab. Use the following procedure to schedule a job as root.
Procedure 23.1. Scheduling an anacron Job as root User
- Choose:
- Frequency of executing the job. For example, use
1to specify every day or3to specify once in 3 days. - The delay of executing the job. For example, use
0to specify no delay or60to specify 1 hour of delay. - The job identifier, which will be used for logging. For example, use
my.anacron.jobto log the job with themy.anacron.jobstring. - The command to execute. For example, use
/usr/local/bin/my-script.sh
Combine the chosen values into the job specification. Here is an example specification:
3 60 cron.daily /usr/local/bin/my-script.sh
- Add the resulting line to
/etc/anacrontab.
See the /etc/anacrontab file for more examples. For a full reference on how to specify a job, see the anacrontab(5) manual page.
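For orientation, the default /etc/anacrontab shipped with the cronie-anacron package looks similar to the following; exact values can differ between releases:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22

# period in days   delay in minutes   job-identifier   command
1        5      cron.daily      nice run-parts /etc/cron.daily
7        25     cron.weekly     nice run-parts /etc/cron.weekly
@monthly 45     cron.monthly    nice run-parts /etc/cron.monthly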
Scheduling Hourly, Daily, Weekly, and Monthly Jobs
23.3. Scheduling a Job to Run at a Specific Time Using at
To schedule a one-time job for a specific time, use the at utility.
Users schedule at jobs using the at utility. The jobs are then executed by the atd service.
23.3.1. Prerequisites for At Jobs
Before scheduling an at job:
- Install the at package:
~]#
yum install at - The
atdservice is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it:~]#
systemctl enable atd.service - Start the
atdservice for the current session:~]#
systemctl start atd.service
23.3.2. Scheduling an At Job
- A job is always run by some user. Log in as the desired user and run:
~]#
at time
Replace time with the time specification. For details on specifying time, see the at(1) manual page and the /usr/share/doc/at/timespec file.
Example 23.1. Specifying Time for At
To execute the job at 15:00, run:~]#
at 15:00
If the specified time has passed, the job is executed at the same time the next day. To execute the job on August 20 2017, run:
~]# at August 20 2017
or
~]# at 082017
To execute the job 5 days from now, run:
~]# at now + 5 days
at>prompt, enter the command to execute and press Enter:~]#
at 15:00
at> sh /usr/local/bin/my-script.sh
at>
Repeat this step for every command you want to execute.
Note
The at> prompt shows which shell it will use:
warning: commands will be executed using /bin/sh
The at utility uses the shell set in the user's SHELL environment variable, or the user's login shell, or /bin/sh, whichever is found first.
- Press Ctrl+D on an empty line to finish specifying the job.
Note
Viewing Pending Jobs
To view pending jobs, use the atq command:
~]# atq
26 Thu Feb 23 15:00:00 2017 a root
28 Thu Feb 24 17:30:00 2017 a root
Each job is listed on a separate line in the following format:
job_number scheduled_date scheduled_hour job_queue user_name
The job_queue column specifies whether a job is an at or a batch job: a stands for at, b stands for batch.
Deleting a Scheduled Job
- List pending jobs with the
atq command:
~]# atq
26 Thu Feb 23 15:00:00 2017 a root
28 Thu Feb 24 17:30:00 2017 a root
- Run the
atrm command, specifying the job by its number:
~]# atrm 26
23.3.2.1. Controlling Access to At and Batch
You can restrict access to the at and batch commands for specific users. To do this, put user names into /etc/at.allow or /etc/at.deny according to these rules:
- Both access control files use the same format: one user name on each line.
- No white space is permitted in either file.
- If the
at.allowfile exists, only users listed in the file are allowed to useatorbatch, and theat.denyfile is ignored. - If
at.allowdoes not exist, users listed inat.denyare not allowed to useatorbatch. - The
rootuser is not affected by the access control files and can always execute theatandbatchcommands.
The at daemon (atd) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at or batch commands.
23.4. Scheduling a Job to Run on System Load Drop Using batch
To execute a one-time task when the system load average drops below a specified value, use the batch utility. This can be useful for performing resource-demanding tasks or for preventing the system from being idle.
Users schedule batch jobs using the batch utility. The jobs are then executed by the atd service.
23.4.1. Prerequisites for Batch Jobs
The batch utility is provided in the at package, and batch jobs are managed by the atd service. Hence, the prerequisites for batch jobs are the same as for at jobs. See Section 23.3.1, “Prerequisites for At Jobs”.
23.4.2. Scheduling a Batch Job
- A job is always run by some user. Log in as the desired user and run:
~]#
batch - At the displayed
at>prompt, enter the command to execute and press Enter:~]#
batch
at> sh /usr/local/bin/my-script.sh
Repeat this step for every command you want to execute.
Note
The at> prompt shows which shell it will use:
warning: commands will be executed using /bin/sh
The batch utility uses the shell set in the user's SHELL environment variable, or the user's login shell, or /bin/sh, whichever is found first.
- Press Ctrl+D on an empty line to finish specifying the job.
Note
Changing the Default System Load Average Limit
By default, batch jobs start when the system load average drops below 0.8. This setting is kept in the atd service. To change the system load limit:
- To the
/etc/sysconfig/atd file, add this line:
OPTS='-l x'
Substitute x with the new load average. For example:
OPTS='-l 0.5'
- Restart the atd service:
# systemctl restart atd
Viewing Pending Jobs
View pending batch jobs using the atq command. See the section called “Viewing Pending Jobs”.
Deleting a Scheduled Job
Delete a scheduled batch job using the atrm command. See the section called “Deleting a Scheduled Job”.
Controlling Access to Batch
It is possible to restrict access to the batch utility. This is done for the batch and at utilities together. See Section 23.3.2.1, “Controlling Access to At and Batch”.
23.5. Scheduling a Job to Run on Next Boot Using a systemd Unit File
To schedule a job to run once on the next boot, create a systemd unit file that specifies the script to run and its dependencies.
- Create the
systemd unit file that specifies at which stage of the boot process to run the script. This example shows a unit file with a reasonable set of Wants= and After= dependencies:
~]#
cat /etc/systemd/system/one-time.service
[Unit]
# The script needs to execute after:
# network interfaces are configured
Wants=network-online.target
After=network-online.target
# all remote filesystems (NFS/_netdev) are mounted
After=remote-fs.target
# name (DNS) and user resolution from remote databases (AD/LDAP) are available
After=nss-user-lookup.target nss-lookup.target
# the system clock has synchronized
After=time-sync.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/foobar.sh

[Install]
WantedBy=multi-user.target
If you use this example:
- substitute /usr/local/bin/foobar.sh with the name of your script
- modify the set of After= entries if necessary
For information on specifying the stage of boot, see Section 10.6, “Creating and Modifying systemd Unit Files”.
- If you want the systemd service to stay active after executing the script, add the RemainAfterExit=yes line to the [Service] section:
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/foobar.sh
systemddaemon:~]#
systemctl daemon-reload - Enable the
systemdservice:~]#
systemctl enable one-time.service - Create the script to execute:
~]#
cat /usr/local/bin/foobar.sh
#!/bin/bash
touch /root/test_file
- If you want the script to run during the next boot only, and not on every boot, add a line that disables the systemd unit:
#!/bin/bash
touch /root/test_file
systemctl disable one-time.service
- Make the script executable:
~]#
chmod +x /usr/local/bin/foobar.sh
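After the next reboot, you can check that the unit ran and that the example script produced its file; a quick sketch matching the example above:
~]# systemctl status one-time.service
~]# ls -l /root/test_file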
23.6. Additional Resources
Installed Documentation
- cron - The manual page for the crond daemon documents how
crondworks and how to change its behavior. - crontab - The manual page for the crontab utility provides a complete list of supported options.
- crontab(5) - This section of the manual page for the crontab utility documents the format of
crontabfiles.
Chapter 24. Automatic Bug Reporting Tool (ABRT)
24.1. Introduction to ABRT
ABRT consists of the abrtd daemon and a number of system services and utilities for processing, analyzing, and reporting detected problems. The daemon runs silently in the background most of the time and springs into action when an application crashes or a kernel oops is detected. The daemon then collects the relevant problem data, such as a core file if there is one, the crashing application's command line parameters, and other data of forensic utility.
A problem report can be uploaded using FTP or SCP, sent as an email, or written to a file.
Note
24.2. Installing ABRT and Starting its Services
Warning
Installing ABRT replaces the content of the /proc/sys/kernel/core_pattern file, which can contain a template used to name core-dump files. The content of this file will be overwritten to:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
24.2.1. Installing the ABRT GUI
To install the ABRT GUI, run the following command as the root user:
~]# yum install abrt-desktop
Upon installation, the ABRT notification applet should start automatically in a graphical desktop session. You can verify that it is running:
~]$ ps -el | grep abrt-applet
0 S   500  2036  1824  0  80   0 - 61604 poll_s ?        00:00:00 abrt-applet
If the applet is not running, you can start it manually in your current desktop session by running the abrt-applet program:
~]$ abrt-applet &
[1] 2261
24.2.2. Installing ABRT for the Command Line
To install ABRT for use on the command line, run the following command as the root user:
~]# yum install abrt-cli
24.2.3. Installing Supplementary ABRT Tools
To receive email notifications about problems caught by ABRT, install the libreport-plugin-mailx package. Run as root:
~]# yum install libreport-plugin-mailx
By default, the notification is sent to the root user at the local machine. The email destination can be configured in the /etc/libreport/plugins/mailx.conf file.
To detect exceptions in Java programs, install the ABRT Java connector. Run as root:
~]# yum install abrt-java-connector
24.2.4. Starting the ABRT Services
The abrtd daemon requires the abrt user to exist for file system operations in the /var/spool/abrt directory. When the abrt package is installed, it automatically creates the abrt user whose UID and GID is 173, if such user does not already exist. Otherwise, the abrt user can be created manually. In that case, any UID and GID can be chosen, because abrtd does not require a specific UID and GID.
The abrtd daemon is configured to start at boot time. You can use the following command to verify its current status:
~]$ systemctl is-active abrtd.service
active
If systemctl returns inactive or unknown, the daemon is not running. You can start it for the current session by entering the following command as root:
~]# systemctl start abrtd.service
Similarly, you can ensure that the abrt-ccpp service is running if you want ABRT to detect C or C++ crashes. See Section 24.4, “Detecting Software Problems” for a list of all available ABRT detection services and their respective packages.
With the exception of the abrt-vmcore and abrt-pstoreoops services, which are only started when a kernel panic or kernel oops occurs, all ABRT services are automatically enabled and started at boot time when their respective packages are installed. You can disable or enable any ABRT service by using the systemctl utility as described in Chapter 10, Managing Services with systemd.
24.2.5. Testing ABRT Crash Detection
You can test ABRT's crash detection by using the kill command to send the SEGV signal to terminate a process. For example, start a sleep process and terminate it with the kill command in the following way:
~]$ sleep 100 &
[1] 2823
~]$ kill -s SIGSEGV 2823
ABRT detects a crash shortly after executing the kill command, and, provided a graphical session is running, the user is notified of the detected problem by the GUI notification applet. On the command line, you can check that the crash was detected by running the abrt-cli list command or by examining the crash dump created in the /var/tmp/abrt/ directory. See Section 24.5, “Handling Detected Problems” for more information on how to work with detected crashes.
24.3. Configuring ABRT
- Event #1 — a problem-data directory is created.
- Event #2 — problem data is analyzed.
- Event #3 — the problem is reported to Bugzilla.
A problem-data directory typically contains files such as: analyzer, architecture, coredump, cmdline, executable, kernel, os_release, reason, time, and uid.
Other files, such as backtrace, can be created during the analysis of the problem, depending on which analyzer method is used and its configuration settings. Each of these files holds specific information about the system and the problem itself. For example, the kernel file records the version of a crashed kernel.
24.3.1. Configuring Events
Events can have parameters configured in files such as report_Bugzilla.conf, located in the /etc/libreport/events/ or $HOME/.cache/abrt/events/ directories for system-wide or user-specific settings respectively. The configuration files contain pairs of directives and values.
The gnome-abrt and abrt-cli tools read the configuration data from these files and pass it to the events they run.
Additional information about events is stored in event_name.xml files in the /usr/share/libreport/events/ directory. These files are used by both gnome-abrt and abrt-cli to make the user interface more friendly. Do not edit these files unless you want to modify the standard installation. If you intend to do that, copy the file to be modified to the /etc/libreport/events/ directory and modify the new file. These files can contain the following information:
- a user-friendly event name and description (Bugzilla, Report to Bugzilla bug tracker),
- a list of items in a problem-data directory that are required for the event to succeed,
- a default and mandatory selection of items to send or not send,
- whether the GUI should prompt for data review,
- what configuration options exist, their types (string, Boolean, and so on), default value, prompt string, and so on; this lets the GUI build appropriate configuration dialogs.
For example, the report_Logger event accepts an output filename as a parameter. Using the respective event_name.xml file, the ABRT GUI determines which parameters can be specified for a selected event and allows the user to set the values for these parameters. The values are saved by the ABRT GUI and reused on subsequent invocations of these events. Note that the ABRT GUI saves configuration options using the GNOME Keyring tool; by passing them to events, it overrides data from text configuration files.
Figure 24.1. Configuring ABRT Events
Important
All files in the /etc/libreport/ directory hierarchy are world-readable and are meant to be used as global settings. Thus, it is not advisable to store user names, passwords, or any other sensitive data in them. The per-user settings (set in the GUI application and readable by the owner of $HOME only) are safely stored in GNOME Keyring, or they can be stored in a text configuration file in $HOME/.abrt/ for use with abrt-cli.
The following table shows a selection of the standard ABRT events, including each event's name, identifier, configuration file from the /etc/libreport/events.d/ directory, and a brief description. Note that while the configuration files use the event identifiers, the ABRT GUI refers to the individual events using their names. Note also that not all of the events can be set up using the GUI. For information on how to define a custom event, see Section 24.3.2, “Creating Custom Events”.
Table 24.1. Standard ABRT Events
| Name | Identifier and Configuration File | Description |
|---|---|---|
| uReport | report_uReport | Uploads a μReport to the FAF server. |
| Mailx | report_Mailx (mailx_event.conf) | Sends the problem report via the Mailx utility to a specified email address. |
| Bugzilla | report_Bugzilla (bugzilla_event.conf) | Reports the problem to the specified installation of the Bugzilla bug tracker. |
| Red Hat Customer Support | report_RHTSupport (rhtsupport_event.conf) | Reports the problem to the Red Hat Technical Support system. |
| Analyze C or C++ Crash | analyze_CCpp (ccpp_event.conf) | Sends the core dump to a remote retrace server for analysis or performs a local analysis if the remote one fails. |
| Report uploader | report_Uploader (uploader_event.conf) | Uploads a tarball (.tar.gz) archive with problem data to the chosen destination using the FTP or the SCP protocol. |
| Analyze VM core | analyze_VMcore (vmcore_event.conf) | Runs GDB (the GNU debugger) on the problem data of a kernel oops and generates a backtrace of the kernel. |
| Local GNU Debugger | analyze_LocalGDB (ccpp_event.conf) | Runs GDB (the GNU debugger) on the problem data of an application and generates a backtrace of the program. |
| Collect .xsession-errors | analyze_xsession_errors (ccpp_event.conf) | Saves relevant lines from the ~/.xsession-errors file to the problem report. |
| Logger | report_Logger (print_event.conf) | Creates a problem report and saves it to a specified local file. |
| Kerneloops.org | report_Kerneloops (koops_event.conf) | Sends a kernel problem to the oops tracker at kerneloops.org. |
24.3.2. Creating Custom Events
To define a custom event, create a rule in a configuration file in the /etc/libreport/events.d/ directory. These configuration files are loaded by the main configuration file, /etc/libreport/report_event.conf; there is no need to edit the default configuration files, because abrt runs the scripts contained in /etc/libreport/events.d/. This file accepts shell metacharacters (for example, *, $, ?) and interprets relative paths relative to its location.
Each rule starts with a condition line; every consecutive line starting with the space character or the tab character is considered a part of this rule. Each rule consists of two parts, a condition part and a program part. The condition part contains conditions in one of the following forms:
- VAR=VAL
- VAR!=VAL
- VAL~=REGEX
- VAR is either the EVENT key word or a name of a problem-data directory element (such as executable, package, hostname, and so on),
EVENTkey word or a name of a problem-data directory element (such asexecutable,package,hostname, and so on), - VAL is either a name of an event or a problem-data element, and
- REGEX is a regular expression.
EVENT=post-create date > /tmp/dt
        echo $HOSTNAME `uname -r`
This rule overwrites the /tmp/dt file with the current date and time, and prints the host name of the machine and its kernel version on the standard output.
The following rule is more complex; it saves relevant lines from the ~/.xsession-errors file to the problem report of any problem for which the abrt-ccpp service has been used, provided the crashed application had any X11 libraries loaded at the time of the crash:
EVENT=analyze_xsession_errors analyzer=CCpp dso_list~=.*/libX11.*
test -f ~/.xsession-errors || { echo "No ~/.xsession-errors"; exit 1; }
test -r ~/.xsession-errors || { echo "Can't read ~/.xsession-errors"; exit 1; }
executable=`cat executable` &&
base_executable=${executable##*/} &&
grep -F -e "$base_executable" ~/.xsession-errors | tail -999 >xsession_errors &&
echo "Element 'xsession_errors' saved"
The standard events are defined in configuration files stored in the /etc/libreport/events.d/ directory.
-
post-create - This event is run by
abrtdto process newly created problem-data directories. When thepost-createevent is run,abrtdchecks whether the new problem data matches any of the already existing problem directories. If such a problem directory exists, it is updated and the new problem data is discarded. Note that if the script in any definition of thepost-createevent exits with a non-zero value,abrtdwill terminate the process and will drop the problem data. -
notify,notify-dup - The
notifyevent is run following the completion ofpost-create. When the event is run, the user can be sure that the problem deserves their attention. Thenotify-dupis similar, except it is used for duplicate occurrences of the same problem. -
analyze_name_suffix - where name_suffix is the replaceable part of the event name. This event is used to process collected data. For example, the
analyze_LocalGDBevent uses the GNU Debugger (GDB) utility to process the core dump of an application and produce a backtrace of the crash. -
collect_name_suffix - …where name_suffix is the adjustable part of the event name. This event is used to collect additional information on problems.
-
report_name_suffix - …where name_suffix is the adjustable part of the event name. This event is used to report a problem.
24.3.3. Setting Up Automatic Reporting
To enable autoreporting, run the following command as root:
~]# abrt-auto-reporting enabled
This command sets the AutoreportingEnabled directive in the /etc/abrt/abrt.conf configuration file to yes. This system-wide setting applies to all users of the system. Note that by enabling this option, automatic reporting will also be enabled in the graphical desktop environment. To only enable autoreporting in the ABRT GUI, switch the Automatically send uReport option to YES in the Problem Reporting Configuration window. You can open this window from the application menu of a running instance of the gnome-abrt application.
Figure 24.2. Configuring ABRT Problem Reporting
Note
The default event used for automatic reporting can be changed by setting the AutoreportingEvent directive in the /etc/abrt/abrt.conf configuration file to point to a different ABRT event. See Table 24.1, “Standard ABRT Events” for an overview of the standard events.
24.4. Detecting Software Problems
Table 24.2. Supported Programming Languages and Software Projects
24.4.1. Detecting C and C++ Crashes
The abrt-ccpp service installs its own core-dump handler, which, when started, overrides the default value of the kernel's core_pattern variable, so that C and C++ crashes are handled by abrtd. If you stop the abrt-ccpp service, the previously specified value of core_pattern is reinstated.
By default, the /proc/sys/kernel/core_pattern file contains the string core, which means that the kernel produces files with the core. prefix in the current directory of the crashed process. The abrt-ccpp service overwrites the core_pattern file to contain the following command:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
This command instructs the kernel to pipe the core dump to the abrt-hook-ccpp program, which stores it in ABRT's dump location and notifies the abrtd daemon of the new crash. It also stores the following files from the /proc/PID/ directory (where PID is the ID of the crashed process) for debugging purposes: maps, limits, cgroup, status. See proc(5) for a description of the format and the meaning of these files.
24.4.2. Detecting Python Exceptions
ABRT detects Python exceptions using the abrt.pth file installed in /usr/lib64/python2.7/site-packages/, which in turn imports abrt_exception_handler.py. This overrides Python's default sys.excepthook with a custom handler, which forwards unhandled exceptions to abrtd via its Socket API.
To disable the custom exception handler when running a Python application, pass the -S option to the Python interpreter:
~]$ python -S file.py
24.4.3. Detecting Ruby Exceptions
ABRT registers a custom handler using Ruby's at_exit feature, which is executed when a program ends. This allows for checking for possible unhandled exceptions. Every time an unhandled exception is captured, the ABRT handler prepares a bug report, which can be submitted to Red Hat Bugzilla using standard ABRT tools.
24.4.4. Detecting Java Exceptions
ABRT provides a JVMTI agent that reports uncaught Java exceptions to abrtd. The agent registers several JVMTI event callbacks and has to be loaded into the JVM using the -agentlib command line parameter. Note that the processing of the registered callbacks negatively impacts the performance of the application. Use the following command to have ABRT catch exceptions from a Java class:
~]$ java -agentlib:abrt-java-connector[=abrt=on] $MyClass -platform.jvmtiSupported true
By providing the abrt=on option to the connector, you ensure that the exceptions are handled by abrtd. In case you want to have the connector output the exceptions to standard output, omit this option.
24.4.5. Detecting X.Org Crashes
abrt-xorg service collects and processes information about crashes of the X.Org server from the /var/log/Xorg.0.log file. Note that no report is generated if a blacklisted X.org module is loaded. Instead, a not-reportable file is created in the problem-data directory with an appropriate explanation. You can find the list of offending modules in the /etc/abrt/plugins/xorg.conf file. Only proprietary graphics-driver modules are blacklisted by default.
24.4.6. Detecting Kernel Oopses and Panics
Kernel oopses are collected and processed by the abrt-oops service.
Kernel panics are detected by the abrt-vmcore service. The service only starts when a vmcore file (a kernel-core dump) appears in the /var/crash/ directory. When a core-dump file is found, abrt-vmcore creates a new problem-data directory in the /var/tmp/abrt/ directory and moves the core-dump file to the newly created problem-data directory. After the /var/crash/ directory is searched, the service is stopped.
For ABRT to be able to detect a kernel panic, the kdump service must be enabled on the system. The amount of memory that is reserved for the kdump kernel has to be set correctly. You can set it using the system-config-kdump graphical tool or by specifying the crashkernel parameter in the list of kernel options in the GRUB 2 menu. For details on how to enable and configure kdump, see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide. For information on making changes to the GRUB 2 menu, see Chapter 25, Working with GRUB 2.
Using the abrt-pstoreoops service, ABRT is capable of collecting and processing information about kernel panics which, on systems that support pstore, is stored in the automatically-mounted /sys/fs/pstore/ directory. The platform-dependent pstore interface (persistent storage) provides a mechanism for storing data across system reboots, thus allowing for preserving kernel panic information. The service starts automatically when kernel crash-dump files appear in the /sys/fs/pstore/ directory.
24.5. Handling Detected Problems
Problems detected by abrtd can be viewed, reported, and deleted using either the command line tool, abrt-cli, or the graphical tool, gnome-abrt.
Note
24.5.1. Using the Command Line Tool
When a problem is detected, ABRT prints a notification similar to the following in a command line environment:
ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1398783164
To view detected problems, enter the abrt-cli list command:
~]$ abrt-cli list
id 6734c6f1a1ed169500a7bfc8bd62aabaf039f9aa
Directory: /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430
count: 1
executable: /usr/bin/sleep
package: coreutils-8.22-11.el7
time: Mon 21 Apr 2014 09:47:51 AM EDT
uid: 1000
Run 'abrt-cli report /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430' for creating a case in Red Hat Customer Portal
Each crash listed in the output of the abrt-cli list command has a unique identifier and a directory that can be used for further manipulation using abrt-cli.
To view information about a particular problem, use the abrt-cli info command:
abrt-cli info [-d] directory_or_id
To increase the amount of information displayed by the list and info sub-commands, pass them the -d (--detailed) option, which shows all stored information about the problems listed, including respective backtrace files if they have already been generated.
To report a particular problem, use the abrt-cli report command:
abrt-cli report directory_or_id
Upon invocation of this command, abrt-cli opens a text editor with the content of the report. You can see what is being reported, and you can fill in instructions on how to reproduce the crash and other comments. You should also check the backtrace because the backtrace might be sent to a public server and viewed by anyone, depending on the problem-reporter event settings.
Note
abrt-cli uses the editor defined in the ABRT_EDITOR environment variable. If the variable is not defined, it checks the VISUAL and EDITOR variables. If none of these variables is set, the vi editor is used. You can set the preferred editor in your .bashrc configuration file. For example, if you prefer GNU Emacs, add the following line to the file:
export VISUAL=emacs
To delete a particular problem, use the abrt-cli rm command:

abrt-cli rm directory_or_id

To display help about a particular abrt-cli command, use the --help option:
abrt-cli command --help

24.5.2. Using the GUI
The ABRT daemon broadcasts a D-Bus message whenever a problem report is created. If the ABRT applet is running in a graphical desktop environment, it catches this message and displays a notification dialog on the desktop. You can open the ABRT GUI using this dialog by clicking on the button. You can also open the ABRT GUI from the desktop application menu.
Alternatively, you can run the ABRT GUI from the command line as follows:

~]$ gnome-abrt &
Figure 24.3. ABRT GUI
24.6. Additional Resources
Installed Documentation
- abrtd(8) — The manual page for the abrtd daemon provides information about options that can be used with the daemon.
- abrt_event.conf(5) — The manual page for the abrt_event.conf configuration file describes the format of its directives and rules and provides reference information about event meta-data configuration in XML files.
Online Documentation
- Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on this system.
- Red Hat Enterprise Linux 7 Kernel Crash Dump Guide — The Kernel Crash Dump Guide for Red Hat Enterprise Linux 7 documents how to configure, test, and use the kdump crash recovery service and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility.
See Also
- Chapter 22, Viewing and Managing Log Files describes the configuration of the rsyslog daemon and the systemd journal and explains how to locate, view, and monitor system logs.
- Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line.
- Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands.
Part VII. Kernel Customization with Bootloader
Chapter 25. Working with GRUB 2
25.1. Introduction to GRUB 2
GRUB 2 reads its configuration from the /boot/grub2/grub.cfg file on traditional BIOS-based machines and from the /boot/efi/EFI/redhat/grub.cfg file on UEFI machines. This file contains menu information.
The GRUB 2 configuration file, grub.cfg, is generated during installation, or by invoking the /usr/sbin/grub2-mkconfig utility, and is automatically updated by grubby each time a new kernel is installed. When regenerated manually using grub2-mkconfig, the file is generated according to the template files located in /etc/grub.d/ and the custom settings in the /etc/default/grub file. Edits of grub.cfg will be lost any time grub2-mkconfig is used to regenerate the file, so care must be taken to reflect any manual changes in /etc/default/grub as well.
Changes to grub.cfg, such as the removal and addition of new kernels, should be done using the grubby tool and, for scripts, using the new-kernel-pkg tool. If you use grubby to modify the default kernel, the changes will be inherited when new kernels are installed. For more information on grubby, see Section 25.4, “Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool”.
The /etc/default/grub file is used by the grub2-mkconfig tool, which is used by anaconda when creating grub.cfg during the installation process, and can be used in the event of a system failure, for example if the boot loader configuration needs to be recreated. In general, it is not recommended to replace the grub.cfg file by manually running grub2-mkconfig except as a last resort. Note that any manual changes to /etc/default/grub require rebuilding the grub.cfg file.
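As a sketch of the typical safe workflow described above, edit the defaults and then regenerate the file; the edit itself is whatever change you need, for example appending a parameter to GRUB_CMDLINE_LINUX:
~]# vi /etc/default/grub
~]# grub2-mkconfig -o /boot/grub2/grub.cfg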
Menu Entries in grub.cfg
The grub.cfg configuration file contains one or more menuentry blocks, each representing a single GRUB 2 boot menu entry. These blocks always start with the menuentry keyword followed by a title, a list of options, and an opening curly bracket, and end with a closing curly bracket. Anything between the opening and closing brackets should be indented. For example, the following is a sample menuentry block for Red Hat Enterprise Linux 7 with Linux kernel 3.8.0-0.40.el7.x86_64:
menuentry 'Red Hat Enterprise Linux Server' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-c60731dc-9046-4000-9182-64bdcce08616' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 19d9e294-65f8-4e37-8e73-d41d6daa6e58
else
search --no-floppy --fs-uuid --set=root 19d9e294-65f8-4e37-8e73-d41d6daa6e58
fi
echo 'Loading Linux 3.8.0-0.40.el7.x86_64 ...'
linux16 /vmlinuz-3.8.0-0.40.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root rhgb quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-3.8.0-0.40.el7.x86_64.img
}
Each menuentry block that represents an installed Linux kernel contains the linux directive on 64-bit IBM POWER Series, linux16 on x86_64 BIOS-based systems, or linuxefi on UEFI-based systems, together with the initrd directive, each followed by the path to the kernel and the initramfs image respectively. If a separate /boot partition was created, the paths to the kernel and the initramfs image are relative to /boot. In the example above, the initrd /initramfs-3.8.0-0.40.el7.x86_64.img line means that the initramfs image is actually located at /boot/initramfs-3.8.0-0.40.el7.x86_64.img when the root file system is mounted, and likewise for the kernel path.
The kernel version number given on the linux16 /vmlinuz-kernel_version line must match the version number of the initramfs image given on the initrd /initramfs-kernel_version.img line of each menuentry block. For more information on how to verify the initial RAM disk image, see the Red Hat Enterprise Linux 7 Kernel Administration Guide.
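As a quick, informal check that each installed kernel has a matching initramfs image, you can list both file sets and compare the version strings:
~]$ ls /boot/vmlinuz-*
~]$ ls /boot/initramfs-*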
Note
In menuentry blocks, the initrd directive must point to the location (relative to the /boot/ directory if it is on a separate partition) of the initramfs file corresponding to the same kernel version. This directive is called initrd because the previous tool which created initial RAM disk images, mkinitrd, created what were known as initrd files. The grub.cfg directive remains initrd to maintain compatibility with other tools. The file-naming convention of systems using the dracut utility to create the initial RAM disk image is initramfs-kernel_version.img.
25.2. Configuring GRUB 2
- To make non-persistent changes to the GRUB 2 menu, see Section 25.3, “Making Temporary Changes to a GRUB 2 Menu”.
- To make persistent changes to a running system, see Section 25.4, “Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool”.
- For information on making and customizing a GRUB 2 configuration file, see Section 25.5, “Customizing the GRUB 2 Configuration File”.
25.3. Making Temporary Changes to a GRUB 2 Menu
Procedure 25.1. Making Temporary Changes to a Kernel Menu Entry
- Start the system and, on the GRUB 2 boot screen, move the cursor to the menu entry you want to edit, and press the e key for edit.
- Move the cursor down to find the kernel command line. The kernel command line starts with linux on 64-bit IBM Power Series, linux16 on x86-64 BIOS-based systems, or linuxefi on UEFI systems.
- Move the cursor to the end of the line. Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some systems, Home and End might also work.
- Edit the kernel parameters as required. For example, to run the system in emergency mode, add the emergency parameter at the end of the linux16 line:

linux16 /vmlinuz-3.10.0-0.rc4.59.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root rhgb quiet emergency

The rhgb and quiet parameters can be removed in order to enable system messages. These settings are not persistent and apply only for a single boot. To make persistent changes to a menu entry on a system, use the grubby tool. See the section called “Adding and Removing Arguments from a GRUB 2 Menu Entry” for more information on using grubby.
25.4. Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool
The grubby tool can be used to read information from, and make persistent changes to, the grub.cfg file. It enables, for example, changing GRUB 2 menu entries to specify what arguments to pass to a kernel on system start and changing the default kernel.
When grubby is invoked manually without specifying a GRUB 2 configuration file, it defaults to searching for /etc/grub2.cfg, which is a symbolic link to the grub.cfg file, whose location is architecture dependent. If that file cannot be found, it will search for an architecture-dependent default.
Listing the Default Kernel
~]# grubby --default-kernel
/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64

~]# grubby --default-index
0

Changing the Default Boot Entry
To make a persistent change in the kernel designated as the default kernel, use the grubby command as follows:
~]# grubby --set-default /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
Viewing the GRUB 2 Menu Entry for a Kernel
To view the GRUB 2 menu entries for all installed kernels, enter:

~]$ grubby --info=ALL
On UEFI systems, all grubby commands must be entered as root.
To view the GRUB 2 menu entry for a specific kernel, enter a command as follows:

~]$ grubby --info /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
args="ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet LANG=en_US.UTF-8"
root=/dev/mapper/rhel-root
initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)
Try tab completion to see the available kernels within the /boot/ directory.
Adding and Removing Arguments from a GRUB 2 Menu Entry
The --update-kernel option can be used to update a menu entry when used in combination with --args to add new arguments and --remove-args to remove existing arguments. These options accept a quoted space-separated list. The command to simultaneously add and remove arguments from a GRUB 2 menu entry has the following format:

grubby --remove-args="argX argY" --args="argA argB" --update-kernel /boot/kernel
~]# grubby --remove-args="rhgb quiet" --args=console=ttyS0,115200 --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
This command removes the Red Hat graphical boot argument, enables boot messages to be seen, and adds a serial console. As the console arguments will be added at the end of the line, the new console will take precedence over any other configured consoles.
Verify the changes using the --info command option as follows:
~]# grubby --info /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
args="ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us LANG=en_US.UTF-8 ttyS0,115200"
root=/dev/mapper/rhel-root
initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)

Updating All Kernel Menus with the Same Arguments

To add the same kernel boot arguments to all the kernel menu entries, enter a command as follows:
~]# grubby --update-kernel=ALL --args=console=ttyS0,115200
The --update-kernel parameter also accepts DEFAULT or a comma separated list of kernel index numbers.
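For example, to apply a serial console argument only to the default kernel entry, a command such as the following could be used; the argument value is illustrative:
~]# grubby --args=console=ttyS0,115200 --update-kernel=DEFAULT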
Changing a Kernel Argument
To change the value of an existing kernel argument, specify the argument again with the new value. For example, to change the virtual console font to latarcyrheb-sun32, enter:

~]# grubby --args=vconsole.font=latarcyrheb-sun32 --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-229.4.2.el7.x86_64
args="ro rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun32 vconsole.keymap=us LANG=en_US.UTF-8"
root=/dev/mapper/rhel-root
initrd=/boot/initramfs-3.10.0-229.4.2.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)
See the grubby(8) manual page for more command options.
25.5. Customizing the GRUB 2 Configuration File
The GRUB 2 configuration scripts are stored in the /etc/grub.d/ directory. The following files are included:
- 00_header, which loads GRUB 2 settings from the /etc/default/grub file.
- 01_users, which reads the superuser password from the user.cfg file. In Red Hat Enterprise Linux 7.0 and 7.1, this file was only created when a boot password was defined in the kickstart file during installation, and it included the defined password in plain text.
- 10_linux, which locates kernels in the default partition of Red Hat Enterprise Linux.
- 30_os-prober, which builds entries for operating systems found on other partitions.
- 40_custom, a template, which can be used to create additional menu entries.
Scripts from the /etc/grub.d/ directory are read in alphabetical order and can therefore be renamed to change the boot order of specific menu entries.
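For example, to have the entries built by the os-prober script sorted before the Linux entries, the script could be copied to a name that sorts earlier and the original disabled; this is a hypothetical illustration, so back up the original file first:
~]# cp /etc/grub.d/30_os-prober /etc/grub.d/09_os-prober
~]# chmod a-x /etc/grub.d/30_os-prober
~]# grub2-mkconfig -o /boot/grub2/grub.cfg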
Important
With the GRUB_TIMEOUT key set to 0 in the /etc/default/grub file, GRUB 2 does not display the list of bootable kernels when the system starts up. In order to display this list when booting, press and hold any alphanumeric key when the BIOS information is displayed; GRUB 2 will present you with the GRUB 2 menu.
25.5.1. Changing the Default Boot Entry
By default, the value of the GRUB_DEFAULT directive in the /etc/default/grub file is the word saved. This instructs GRUB 2 to load the kernel specified by the saved_entry directive in the GRUB 2 environment file, located at /boot/grub2/grubenv. You can set another GRUB 2 record to be the default, using the grub2-set-default command, which will update the GRUB 2 environment file.
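To inspect the current contents of the environment file, you can use the grub2-editenv command; the output shown here is only an example:
~]# grub2-editenv list
saved_entry=Red Hat Enterprise Linux Server (3.10.0-229.4.2.el7.x86_64) 7.0 (Maipo)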
By default, the saved_entry value is set to the name of the latest installed kernel of package type kernel. This is defined in /etc/sysconfig/kernel by the UPDATEDEFAULT and DEFAULTKERNEL directives. The file can be viewed by the root user as follows:
~]# cat /etc/sysconfig/kernel
# UPDATEDEFAULT specifies if new-kernel-pkg should make
# new kernels the default
UPDATEDEFAULT=yes
# DEFAULTKERNEL specifies the default kernel package type
DEFAULTKERNEL=kernel
The DEFAULTKERNEL directive specifies what package type will be used as the default. Installing a package of type kernel-debug will not change the default kernel while DEFAULTKERNEL is set to package type kernel.
GRUB 2 supports using a numeric value as the key for the saved_entry directive to change the default order in which the operating systems are loaded. To specify which operating system should be loaded first, pass its number to the grub2-set-default command. For example:
~]# grub2-set-default 2
Note that the position of a menu entry in the list is denoted by a number starting with zero. The default entry can also be set directly using the GRUB_DEFAULT directive in the /etc/default/grub file. To list the available menu entries, run the following command as root:
~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
The file name /etc/grub2.cfg is a symbolic link to the grub.cfg file, whose location is architecture dependent. For reliability reasons, the symbolic link is not used in other examples in this chapter. It is better to use absolute paths when writing to a file, especially when repairing a system.
Changes to /etc/default/grub require rebuilding the grub.cfg file as follows:
- On BIOS-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
- On UEFI-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
25.5.2. Editing a Menu Entry
To add or remove boot parameters for all kernel entries, edit the GRUB_CMDLINE_LINUX key in the /etc/default/grub file. Note that you can specify multiple parameters for the GRUB_CMDLINE_LINUX key, similarly to adding the parameters in the GRUB 2 boot menu. For example:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,9600n8"

Where console=tty0 is the first virtual terminal and console=ttyS0 is the serial terminal to be used.
Changes to /etc/default/grub require rebuilding the grub.cfg file as follows:
- On BIOS-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
- On UEFI-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
25.5.3. Adding a New Entry
When executing the grub2-mkconfig command, GRUB 2 searches for Linux kernels and other operating systems based on the files located in the /etc/grub.d/ directory. The /etc/grub.d/10_linux script searches for installed Linux kernels on the same partition. The /etc/grub.d/30_os-prober script searches for other operating systems. Menu entries are also automatically added to the boot menu when updating the kernel.
The 40_custom file located in the /etc/grub.d/ directory is a template for custom entries and looks as follows:
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
A valid menu entry appended to this file must include at least the following:

menuentry "<Title>" {
<Data>
}
25.5.4. Creating a Custom Menu
Important
Back up the contents of the /etc/grub.d/ directory in case you need to revert the changes later.
Note
The settings in the /etc/default/grub file do not have any effect on creating custom menus.
- On BIOS-based machines, copy the contents of /boot/grub2/grub.cfg, or, on UEFI machines, copy the contents of /boot/efi/EFI/redhat/grub.cfg. Put the content of the grub.cfg into the /etc/grub.d/40_custom file below the existing header lines. The executable part of the 40_custom script has to be preserved.
/etc/grub.d/40_customfile, only themenuentryblocks are needed to create the custom menu. The/boot/grub2/grub.cfgand/boot/efi/EFI/redhat/grub.cfgfiles might contain function specifications and other content above and below themenuentryblocks. If you put these unnecessary lines into the40_customfile in the previous step, erase them.This is an example of a custom40_customscript:#!/bin/sh exec tail -n +3 $0 # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. menuentry 'First custom entry' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-67.el7.x86_64-advanced-32782dd0-4b47-4d56-a740-2076ab5e5976' { load_video set gfxpayload=keep insmod gzio insmod part_msdos insmod xfs set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 7885bba1-8aa7-4e5d-a7ad-821f4f52170a else search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-821f4f52170a fi linux16 /vmlinuz-3.10.0-67.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet LANG=en_US.UTF-8 initrd16 /initramfs-3.10.0-67.el7.x86_64.img } menuentry 'Second custom entry' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c-advanced-32782dd0-4b47-4d56-a740-2076ab5e5976' { load_video insmod gzio insmod part_msdos insmod xfs set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 7885bba1-8aa7-4e5d-a7ad-821f4f52170a else search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-821f4f52170a fi linux16 /vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet initrd16 /initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img } - Remove all files from the
/etc/grub.d/directory except the following:00_header,40_custom,01_users(if it exists),- and
README.
Alternatively, if you want to keep the files in the/etc/grub2.d/directory, make them unexecutable by running thechmodcommand.a-x<file_name> - Edit, add, or remove menu entries in the
40_customfile as desired. - Rebuild the
grub.cfgfile by running thegrub2-mkconfigcommand as follows:-o- On BIOS-based machines, issue the following command as
root:~]#
grub2-mkconfig -o /boot/grub2/grub.cfg - On UEFI-based machines, issue the following command as
root:~]#
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
25.6. Protecting GRUB 2 with a Password
GRUB 2 offers two types of password protection:
- Password is required for modifying menu entries but not for booting existing menu entries;
- Password is required for modifying menu entries and for booting one, several, or all menu entries.
Configuring GRUB 2 to Require a Password only for Modifying Entries
- Run the grub2-setpassword command as root:
~]# grub2-setpassword
- Enter and confirm the password:
Enter password:
Confirm password:
This creates the /boot/grub2/user.cfg file that contains the hash of the password. The user for this password, root, is defined in the /boot/grub2/grub.cfg file. With this change, modifying a boot entry during booting requires you to specify the root user name and your password.
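You can confirm that the file was created by viewing it; the exact hash format shown below is an assumption and will differ on your system:
~]# cat /boot/grub2/user.cfg
GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.19074739ED80F1...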
Configuring GRUB 2 to Require a Password for Modifying and Booting Entries
Setting a password with grub2-setpassword prevents menu entries from unauthorized modification but not from unauthorized booting. To also require a password for booting an entry, follow these steps after setting the password with grub2-setpassword:
Warning
If you forget your GRUB 2 password, you will not be able to boot the entries you reconfigure in the following procedure.
- Open the /boot/grub2/grub.cfg file.
- Find the boot entry that you want to protect with a password by searching for lines beginning with menuentry.
- Delete the --unrestricted parameter from the menu entry block, for example:

[file contents truncated]

menuentry 'Red Hat Enterprise Linux Server (3.10.0-327.18.2.rt56.223.el7_2.x86_64) 7.2 (Maipo)' --class red --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.el7.x86_64-advanced-c109825c-de2f-4340-a0ef-4f47d19fe4bf' {
	load_video
	set gfxpayload=keep

[file contents truncated]
- Save and close the file.
After these changes, booting the entry requires entering the root user name and password.
Note
Manual changes to /boot/grub2/grub.cfg persist when new kernel versions are installed, but are lost when re-generating grub.cfg using the grub2-mkconfig command. Therefore, to retain password protection, use the above procedure after every use of grub2-mkconfig.
Note
If you delete the --unrestricted parameter from every menu entry in the /boot/grub2/grub.cfg file, all newly installed kernels will have their menu entry created without --unrestricted and hence automatically inherit the password protection.
Passwords Set Before Updating to Red Hat Enterprise Linux 7.2
The grub2-setpassword tool was added in Red Hat Enterprise Linux 7.2 and is now the standard method of setting GRUB 2 passwords. This is in contrast to previous versions of Red Hat Enterprise Linux, where boot entries needed to be manually specified in the /etc/grub.d/40_custom file, and super users in the /etc/grub.d/01_users file.
Additional GRUB 2 Users
Booting any menu entry that lacks the --unrestricted parameter requires the root password. However, GRUB 2 also enables creating additional non-root users that can boot such entries without providing a password. Modifying the entries still requires the root password. For information on creating such users, see the GRUB 2 Manual.
25.7. Reinstalling GRUB 2
Reinstalling GRUB 2 is a convenient way to fix certain problems usually caused by an incorrect installation of GRUB 2, missing files, or a broken system. Other reasons to reinstall GRUB 2 include the following:
- Upgrading from the previous version of GRUB.
- The user requires the GRUB 2 boot loader to control installed operating systems. However, some operating systems are installed with their own boot loaders. Reinstalling GRUB 2 returns control to the desired operating system.
- Adding the boot information to another drive.
25.7.1. Reinstalling GRUB 2 on BIOS-Based Machines
When the grub2-install command is run, the boot information is updated and missing files are restored. Note that the files are restored only if they are not corrupted.
Use the grub2-install device command to reinstall GRUB 2 if the system is operating normally. For example, if sda is your device:
~]# grub2-install /dev/sda
25.7.2. Reinstalling GRUB 2 on UEFI-Based Machines
When the yum reinstall grub2-efi shim command is run, the boot information is updated and missing files are restored. Note that the files are restored only if they are not corrupted.
Use the yum reinstall grub2-efi shim command to reinstall GRUB 2 if the system is operating normally. For example:
~]# yum reinstall grub2-efi shim

25.7.3. Resetting and Reinstalling GRUB 2
This method completely removes all GRUB 2 configuration files and system settings. To reset all the configuration settings to their default values and reinstall GRUB 2, as root, follow these steps:
- Run the rm /etc/grub.d/* command;
- Run the rm /etc/sysconfig/grub command;
- For EFI systems only, run the following command:
~]# yum reinstall grub2-efi shim grub2-tools
- For BIOS and EFI systems, run this command:
~]# yum reinstall grub2-tools
- Rebuild the grub.cfg file by running the grub2-mkconfig -o command as follows:
- On BIOS-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
- On UEFI-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
- Now follow the procedure in Section 25.7, “Reinstalling GRUB 2” to restore GRUB 2 on the /boot/ partition.
25.8. Upgrading from GRUB Legacy to GRUB 2
The upgrade from GRUB Legacy to GRUB 2 does not happen automatically, but it can be done manually. Perform the GRUB upgrade for these reasons:
- In RHEL 7 and later versions, GRUB Legacy is no longer maintained and does not receive updates.
- GRUB Legacy is unable to boot on systems without the /boot/ directory.
- GRUB 2 has more features and is more reliable.
- GRUB 2 supports more hardware configurations, file systems, and drive layouts.
Prerequisites for upgrading
Before upgrading, create a backup of GRUB Legacy. Note that GRUB Legacy is available through the grub package.
Procedure 25.2. Creating a manual backup of GRUB Legacy
- Download the grub package:
~]# yum reinstall -y --downloadonly grub
- Locate the downloaded package:
~]# find /var/cache/yum/ | grep "grub"
Note
If you did not change the default cache location of yum, its cache is located in the /var/cache/yum/ directory. If you changed the default cache location of yum, consult its configuration to find it. For further information, see Working with Yum Cache and Configuring Yum and Yum Repositories.
- Copy the package to a safe location, for example to the /root/ directory:
~]# cp /var/cache/yum/x86_64/6Server/rhel/packages/grub-0.97-99.el6.x86_64.rpm /root/
Important
Do not copy the grub package into the /boot/ directory. This may cause the in-place upgrade from RHEL 6 to RHEL 7 to fail if /boot/ does not have enough free space. For more information, see the pre-upgrade and upgrade documentation.
Upgrading from GRUB Legacy to GRUB 2 after the in-place upgrade of the operating system
Procedure 25.3. Upgrading from GRUB Legacy to GRUB 2
- Install the grub package from its backup:
~]# rpm --install --force --nodeps grub-0.97-99.el6.x86_64.rpm
This step ensures that you have a recovery option in case the upgrade from GRUB Legacy to GRUB 2 fails at some point. Note that there can be various versions of the package, so you need to use the precise name of your backed-up package.
- Make sure that the grub2 package is installed. If grub2 is not on the system after the upgrade to RHEL 7, you can install it manually by running:
~]# yum install grub2
Determining bootable device file
- Find out the GRUB Legacy designation for the bootable device. For that, view the GRUB Legacy configuration file /boot/grub/grub.conf and search for the root line:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_rhel68-lv_root
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-642.4.2.el6.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.32-642.4.2.el6.x86_64 ro root=/dev/mapper/vg_rhel68-lv_root rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_rhel68/lv_root rd_LVM_LV=vg_rhel68/lv_swap rd_NO_DM rhgb quiet
	initrd /initramfs-2.6.32-642.4.2.el6.x86_64.img
title Red Hat Enterprise Linux 6 (2.6.32-642.el6.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/vg_rhel68-lv_root rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_rhel68/lv_root rd_LVM_LV=vg_rhel68/lv_swap rd_NO_DM rhgb quiet
	initrd /initramfs-2.6.32-642.el6.x86_64.img

For each menu entry, the root line specifies the bootable device. In this example, hd0,0 is the bootable device.
- Only perform this step if your /boot/grub/device.map is not correct. This might happen, for example, after changing hardware configuration.
- Recreate /boot/grub/device.map:
~]# grub-install --recheck /dev/sda
The old configuration is backed up automatically in /boot/grub/device.map.backup.
- If the previous step broke your device mapping configuration, restore the backup:
~]# rm /boot/grub/device.map
~]# cp /boot/grub/device.map.backup /boot/grub/device.map
- Determine the mapping of the GRUB Legacy device designation to the device file. For that, take the device found in step 1 and find the corresponding entry in the /boot/grub/device.map file:

# this device map was generated by anaconda
(hd0)      /dev/sda
(hd1)      /dev/sdb

In this example, the listing shows that for device hd0 the device file is /dev/sda. Make a note of the device file; it will be used in the next procedure.
Generating the GRUB 2 configuration files
The following procedure generates the GRUB 2 configuration without removing the original GRUB Legacy configuration. We will keep the GRUB Legacy configuration in case GRUB 2 does not work correctly.
- Install the GRUB 2 files to the /boot/grub/ directory of the /dev/sdX disk:
~]# grub2-install --grub-setup=/bin/true /dev/sdX
Substitute /dev/sdX with the bootable device file determined in the section called “Determining bootable device file”. The --grub-setup=/bin/true option ensures that the old GRUB Legacy configuration is not deleted.
Warning
Note the difference in the configuration file extensions: .conf is for GRUB, .cfg is for GRUB 2. Do not overwrite the old GRUB configuration file by mistake in the next step.
- Generate the /boot/grub2/grub.cfg file:
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Note
For customizing the generated GRUB 2 configuration file, see Section 25.5, “Customizing the GRUB 2 Configuration File”. You should make changes in /etc/default/grub, not directly in /boot/grub2/grub.cfg. Otherwise, changes in /boot/grub2/grub.cfg are lost every time the file is re-generated.
Testing GRUB 2 with GRUB Legacy still installed
The following procedure tests GRUB 2 without removing the GRUB Legacy configuration. The GRUB Legacy configuration needs to stay until the GRUB 2 configuration is verified; otherwise the system might become unbootable. To safely test the GRUB 2 configuration, we will start GRUB 2 from GRUB Legacy.
- Add a new section into /boot/grub/grub.conf:

title GRUB 2 Test
	root (hd0,0)
	kernel /grub2/i386-pc/core.img
	boot

Substitute (hd0,0) with the GRUB Legacy bootable device designation.
- Reboot the system.
- When presented with the GRUB Legacy menu, select the GRUB 2 Test entry.
- When presented with the GRUB 2 menu, select a kernel to boot.
- If the above did not work, restart, and do not choose the GRUB 2 Test entry on next boot.
Replacing and removing GRUB Legacy
After verifying that GRUB 2 works successfully, replace GRUB Legacy and remove it from the system:
- Overwrite the GRUB Legacy boot sector with the GRUB 2 bootloader:
~]# grub2-install /dev/sda
- Uninstall the grub packages:
~]# yum remove grub
The upgrade to GRUB 2 is now finished.
25.9. GRUB 2 over a Serial Console
25.9.1. Configuring the GRUB 2 Menu
To temporarily set up a serial console for a single boot, remove the rhgb and quiet parameters and add console parameters at the end of the linux16 line as follows:
linux16 /vmlinuz-3.10.0-0.rc4.59.el7.x86_64 root=/dev/mapper/rhel-root ro rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us rd.lvm.lv=rhel/root console=ttyS0,115200

These settings apply only for a single boot. To make persistent changes to a menu entry, use the grubby tool. For example, to update the entry for the default kernel, enter a command as follows:
~]# grubby --remove-args="rhgb quiet" --args=console=ttyS0,115200 --update-kernel=DEFAULT
The --update-kernel parameter also accepts the keyword ALL or a comma separated list of kernel index numbers. See the section called “Adding and Removing Arguments from a GRUB 2 Menu Entry” for more information on using grubby.
To set up persistent serial console settings for all boot entries, add the following two lines to the /etc/default/grub file:
GRUB_TERMINAL="serial" GRUB_SERIAL_COMMAND="serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1"
The GRUB_TERMINAL key overrides the values of GRUB_TERMINAL_INPUT and GRUB_TERMINAL_OUTPUT; setting it to serial disables the graphical terminal. On the second line, adjust the baud rate, parity, and other values to fit your environment and hardware. A much higher baud rate, for example 115200, is preferable for tasks such as following log files. Once you have completed the changes in the /etc/default/grub file, it is necessary to update the GRUB 2 configuration file.
Rebuild the grub.cfg file by running the grub2-mkconfig -o command as follows:
- On BIOS-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/grub2/grub.cfg
- On UEFI-based machines, issue the following command as root:
~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Note
To also direct the kernel boot messages to the serial console, add a console option to the kernel command line, for example:

console=ttyS0,9600n8

Where console=ttyS0 is the serial terminal to be used, 9600 is the baud rate, n is for no parity, and 8 is the word length in bits. A much higher baud rate, for example 115200, is preferable for tasks such as following log files.
25.9.2. Using screen to Connect to the Serial Console
To install the screen tool, run the following command as root:

~]# yum install screen

Then connect to the serial console with a command in the following format:

screen /dev/console_port baud_rate

For example:

~]$ screen /dev/console_port 115200

Where console_port is ttyS0, ttyUSB0, and so on.
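As a concrete sketch, to connect to the first serial port at 115200 baud (the device name is an assumption; yours may differ):
~]$ screen /dev/ttyS0 115200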
To end the session in screen, press Ctrl+a, type :quit and press Enter.
See the screen(1) manual page for additional options and detailed information.
25.10. Terminal Menu Editing During Boot
25.10.1. Booting to Rescue Mode
Rescue mode provides a convenient single-user environment and allows you to repair your system in situations when it is unable to complete a regular booting process. In Red Hat Enterprise Linux 7, rescue mode is equivalent to single-user mode and requires the root password.
- To enter rescue mode during boot, on the GRUB 2 boot screen, press the e key for edit.
- Add the following parameter at the end of the linux line on 64-bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems:

systemd.unit=rescue.target

Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Note that equivalent parameters, 1, s, and single, can be passed to the kernel as well.
- Press Ctrl+x to boot the system with the parameter.
25.10.2. Booting to Emergency Mode
In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and starts only a few essential services. In Red Hat Enterprise Linux 7, emergency mode requires the root password.
- To enter emergency mode, on the GRUB 2 boot screen, press the e key for edit.
- Add the following parameter at the end of the linux line on 64-bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems:

systemd.unit=emergency.target

Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some systems, Home and End might also work. Note that equivalent parameters, emergency and -b, can be passed to the kernel as well.
- Press Ctrl+x to boot the system with the parameter.
25.10.3. Booting to the Debug Shell
The systemd debug shell provides a shell very early in the startup process that can be used to diagnose systemd-related boot-up problems. Once in the debug shell, systemctl commands such as systemctl list-jobs and systemctl list-units can be used to look for the cause of boot problems. In addition, the debug option can be added to the kernel command line to increase the number of log messages. For systemd, the kernel command-line option debug is now a shortcut for systemd.log_level=debug.
Procedure 25.4. Adding the Debug Shell Command
- On the GRUB 2 boot screen, move the cursor to the menu entry you want to edit and press the e key for edit.
- Add the following parameter at the end of the linux line on 64-bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems:

systemd.debug-shell

Optionally add the debug option. Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some systems, Home and End might also work.
- Press Ctrl+x to boot the system with the parameter.
The debug shell can also be enabled persistently using the systemctl enable debug-shell command. Alternatively, the grubby tool can be used to make persistent changes to the kernel command line in the GRUB 2 menu. See Section 25.4, “Making Persistent Changes to a GRUB 2 Menu Using the grubby Tool” for more information on using grubby.
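As a sketch, either of the following makes the debug shell available on subsequent boots; the grubby form simply adds the kernel parameter persistently to the default entry:
~]# systemctl enable debug-shell.service
~]# grubby --args="systemd.debug-shell" --update-kernel=DEFAULT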
Warning
Permanently enabling the debug shell is a security risk because no authentication is required to use it. Disable it when the debugging session has ended.
Procedure 25.5. Connecting to the Debug Shell
During the boot process, the systemd-debug-generator will configure the debug shell on TTY9.
- Press Ctrl+Alt+F9 to connect to the debug shell. If working with a virtual machine, sending this key combination requires support from the virtualization application. For example, if using Virtual Machine Manager, select the menu option for sending the key combination.
- The debug shell does not require authentication, therefore a prompt similar to the following should be seen on TTY9:
[root@localhost /]# - If required, to verify you are in the debug shell, enter a command as follows:
/]# systemctl status $$
● debug-shell.service - Early root shell on /dev/tty9 FOR DEBUGGING ONLY
   Loaded: loaded (/usr/lib/systemd/system/debug-shell.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-08-05 11:01:48 EDT; 2min ago
     Docs: man:sushell(8)
 Main PID: 450 (bash)
   CGroup: /system.slice/debug-shell.service
           ├─ 450 /bin/bash
           └─1791 systemctl status 450
- To return to the default shell, if the boot succeeded, press Ctrl+Alt+F1.
Additionally, systemd units can be masked by adding systemd.mask=unit_name one or more times on the kernel command line. To start additional processes during the boot process, add systemd.wants=unit_name to the kernel command line. The systemd-debug-generator(8) manual page describes these options.
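For example, a kernel command line that skips one unit and pulls in another might contain the following; the unit names are hypothetical placeholders:
systemd.mask=example-blocker.service systemd.wants=example-logger.service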
25.10.4. Changing and Resetting the Root Password
Setting up the root password is a mandatory part of the Red Hat Enterprise Linux 7 installation. If you forget or lose the root password, it is possible to reset it; however, users who are members of the wheel group can change the root password as follows:
~]$ sudo passwd root
Note that in Red Hat Enterprise Linux 7, the root password is now required to operate in single-user mode as well as in emergency mode.
Two procedures for resetting the root password are shown here:
- Procedure 25.6, “Resetting the Root Password Using an Installation Disk” takes you to a shell prompt, without having to edit the GRUB 2 menu. It is the shorter of the two procedures and it is also the recommended method. You can use a boot disk or a normal Red Hat Enterprise Linux 7 installation disk.
- Procedure 25.7, “Resetting the Root Password Using rd.break” makes use of rd.break to interrupt the boot process before control is passed from initramfs to systemd. The disadvantage of this method is that it requires more steps, includes having to edit the GRUB 2 menu, and involves choosing between a possibly time-consuming SELinux file relabel or changing the SELinux enforcing mode and then restoring the SELinux security context for /etc/shadow when the boot completes.
Procedure 25.6. Resetting the Root Password Using an Installation Disk
- Start the system and when BIOS information is displayed, select the option for a boot menu and select to boot from the installation disk.
- Choose .
- Choose .
- Choose which is the default option. At this point you will be prompted for a passphrase if an encrypted file system is found.
- Press OK to acknowledge the information displayed until the shell prompt appears.
- Change the file system root as follows:
sh-4.2# chroot /mnt/sysimage
- Enter the passwd command and follow the instructions displayed on the command line to change the root password.
- Remove the /.autorelabel file to prevent a time-consuming SELinux relabel of the disk:
sh-4.2# rm -f /.autorelabel
- Enter the exit command to exit the chroot environment.
- Enter the exit command again to resume the initialization and finish the system boot.
Procedure 25.7. Resetting the Root Password Using rd.break
- Start the system and, on the GRUB 2 boot screen, press the e key for edit.
- Remove the rhgb and quiet parameters from the end, or near the end, of the linux16 line, or linuxefi on UEFI systems. Press Ctrl+a and Ctrl+e to jump to the start and end of the line, respectively. On some systems, Home and End might also work.
Important
The rhgb and quiet parameters must be removed in order to enable system messages.
- Add the following parameters at the end of the linux line on 64-bit IBM Power Series, the linux16 line on x86-64 BIOS-based systems, or the linuxefi line on UEFI systems:

rd.break enforcing=0

Adding the enforcing=0 option enables omitting the time-consuming SELinux relabeling process. The initramfs will stop before passing control to the Linux kernel, enabling you to work with the root file system. Note that the initramfs prompt will appear on the last console specified on the Linux line.
initramfsswitch_rootprompt appears. - The file system is mounted read-only on
/sysroot/. You will not be allowed to change the password if the file system is not writable.Remount the file system as writable:switch_root:/#
mount -o remount,rw /sysroot - The file system is remounted with write enabled.Change the file system's
rootas follows:switch_root:/#
The prompt changes tochroot /sysrootsh-4.2#. - Enter the
passwdcommand and follow the instructions displayed on the command line to change therootpassword.Note that if the system is not writable, the passwd tool fails with the following error:Authentication token manipulation error
- Updating the password file results in a file with the incorrect SELinux security context. To relabel all files on next system boot, enter the following command:
sh-4.2#
Alternatively, to save the time it takes to relabel a large disk, you can omit this step provided you included thetouch /.autorelabelenforcing=0option in step 3. - Remount the file system as read only:
sh-4.2#
mount -o remount,ro / - Enter the
exitcommand to exit thechrootenvironment. - Enter the
exitcommand again to resume the initialization and finish the system boot.With an encrypted file system, a pass word or phrase is required at this point. However the password prompt might not appear as it is obscured by logging messages. You can press and hold the Backspace key to see the prompt. Release the key and enter the password for the encrypted file system, while ignoring the logging messages.Note
Note that the SELinux relabeling process can take a long time. A system reboot will occur automatically when the process is complete. - If you added the
enforcing=0option in step 3 and omitted thetouch /.autorelabelcommand in step 8, enter the following command to restore the/etc/shadowfile's SELinux security context:~]#
Enter the following commands to turn SELinux policy enforcement back on and verify that it is on:restorecon /etc/shadow~]#
setenforce 1~]#getenforceEnforcing
25.11. Unified Extensible Firmware Interface (UEFI) Secure Boot
The first-stage boot loader, shim.efi, is signed by a UEFI private key and authenticated by a public key, signed by a certificate authority (CA), stored in the firmware database. The shim.efi contains the Red Hat public key, “Red Hat Secure Boot (CA key 1)”, which is used to authenticate both the GRUB 2 boot loader, grubx64.efi, and the Red Hat kernel. The kernel in turn contains public keys to authenticate drivers and modules. The UEFI Secure Boot specification defines:
- a programming interface for cryptographically protected UEFI variables in non-volatile storage,
- how the trusted X.509 root certificates are stored in UEFI variables,
- validation of UEFI applications like boot loaders and drivers,
- procedures to revoke known-bad certificates and application hashes.
25.11.1. UEFI Secure Boot Support in Red Hat Enterprise Linux 7
Restrictions Imposed by UEFI Secure Boot
25.12. Additional Resources
Installed Documentation
- /usr/share/doc/grub2-tools-version-number/ — This directory contains information about using and configuring GRUB 2. version-number corresponds to the version of the GRUB 2 package installed.
- info grub2 — The GRUB 2 info page contains a tutorial, a user reference manual, a programmer reference manual, and a FAQ document about GRUB 2 and its usage.
- grubby(8) — The manual page for the command-line tool for configuring GRUB and GRUB 2.
- new-kernel-pkg(8) — The manual page for the tool to script kernel installation.
Installable and External Documentation
- /usr/share/doc/kernel-doc-kernel_version/Documentation/serial-console.txt — This file, which is provided by the kernel-doc package, contains information on the serial console. Before accessing the kernel documentation, you must run the following command as root:
~]# yum install kernel-doc
- Red Hat Installation Guide — The Installation Guide provides basic information on GRUB 2, for example, installation, terminology, interfaces, and commands.
Part VIII. System Backup and Recovery
Chapter 26. Relax-and-Recover (ReaR)
When a software or hardware failure breaks the system, restoring it to a fully functioning state on new hardware involves three tasks:
- booting a rescue system on the new hardware
- replicating the original storage layout
- restoring user and system files
Recovery is performed with the rear recover command, which starts the recovery process. During this process, ReaR replicates the partition layout and filesystems, prompts for restoring user and system files from the backup created by backup software, and finally installs the boot loader. By default, the rescue system created by ReaR only restores the storage layout and the boot loader, but not the actual user and system files.
26.1. Basic ReaR Usage
26.1.1. Installing ReaR
~]# yum install rear genisoimage syslinux

26.1.2. Configuring ReaR
ReaR is configured in the /etc/rear/local.conf file. Specify the rescue system configuration by adding these lines:
OUTPUT=output format
OUTPUT_URL=output location
Substitute output format with the rescue system format, for example, ISO for an ISO disk image or USB for a bootable USB.
Substitute output location with where the rescue system will be put, for example, file:///mnt/rescue_system/ for a local filesystem directory or sftp://backup:password@192.168.0.0/ for an SFTP directory.
Example 26.1. Configuring Rescue System Format and Location
To configure ReaR to output the rescue system in the ISO format into the /mnt/rescue_system/ directory, add these lines to the /etc/rear/local.conf file:
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
ISO-specific Configuration
With the settings above, the ISO image is created in two locations:
- /var/lib/rear/output/ — ReaR's default output location
- /mnt/rescue_system/HOSTNAME/rear-localhost.iso — the output location specified in OUTPUT_URL
To create the ISO image only in a single, custom location, add these lines to /etc/rear/local.conf, substituting output location with the desired directory:
OUTPUT=ISO
BACKUP=NETFS
OUTPUT_URL=null
BACKUP_URL="iso:///backup"
ISO_DIR="output location"
26.1.3. Creating a Rescue System
~]# rear -v mkrescue
Relax-and-Recover 1.17.2 / Git
Using log file: /var/log/rear/rear-rhel7.log
mkdir: created directory '/var/lib/rear/output'
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-rhel7.iso (124M)
Copying resulting files to file location

With the configuration above, the rescue system is written to /mnt/rescue_system/. Because the system's host name is rhel7, the backup location now contains the directory rhel7/ with the rescue system and auxiliary files:
~]# ls -lh /mnt/rescue_system/rhel7/
total 124M
-rw-------. 1 root root 202 Jun 10 15:27 README
-rw-------. 1 root root 166K Jun 10 15:27 rear.log
-rw-------. 1 root root 124M Jun 10 15:27 rear-rhel7.iso
-rw-------. 1 root root  274 Jun 10 15:27 VERSION

26.1.4. Scheduling ReaR
To schedule ReaR to regularly create a rescue system, add a line in the following format to the /etc/crontab file:
minute hour day_of_month month day_of_week root /usr/sbin/rear mkrescue

Example 26.2. Scheduling ReaR
To make ReaR create a rescue system at 22:00 every weekday, add this line to the /etc/crontab file:
0 22 * * 1-5 root /usr/sbin/rear mkrescue
26.1.5. Performing a System Rescue
- Boot the rescue system on the new hardware. For example, burn the ISO image to a DVD and boot from the DVD.
- In the console interface, select the "Recover" option:
- You are taken to the prompt:

Figure 26.2. Rescue system: prompt
Warning
Once you have started recovery in the next step, it probably cannot be undone and you may lose anything stored on the physical disks of the system.
- Run the rear recover command to perform the restore or migration. The rescue system then recreates the partition layout and filesystems:
Figure 26.3. Rescue system: running "rear recover"
- Restore user and system files from the backup into the /mnt/local/ directory.

Example 26.3. Restoring User and System Files

In this example, the backup file is a tar archive created per instructions in Section 26.2.1.1, “Configuring the Internal Backup Method”. First, copy the archive from its storage, then unpack the files into /mnt/local/, then delete the archive:
~]# scp root@192.168.122.7:/srv/backup/rhel7/backup.tar.gz /mnt/local/
~]# tar xf /mnt/local/backup.tar.gz -C /mnt/local/
~]# rm -f /mnt/local/backup.tar.gz
The new storage has to have enough space both for the archive and the extracted files.
~]#
ls /mnt/local/
Figure 26.4. Rescue system: restoring user and system files from the backup
- Ensure that SELinux relabels the files on the next boot:
~]# touch /mnt/local/.autorelabel
Otherwise you may be unable to log in to the system, because the /etc/passwd file may have the incorrect SELinux context.
- Finish the recovery by entering exit. ReaR will then reinstall the boot loader. After that, reboot the system:
Figure 26.5. Rescue system: finishing recovery
Upon reboot, SELinux will relabel the whole filesystem. Then you will be able to log in to the recovered system.
26.2. Integrating ReaR with Backup Software
26.2.1. The Built-in Backup Method
With the built-in backup method:
- a rescue system and a full-system backup can be created using a single rear mkbackup command
- the rescue system restores files from the backup automatically
26.2.1.1. Configuring the Internal Backup Method
To use the internal backup method, add these lines to /etc/rear/local.conf:
BACKUP=NETFS
BACKUP_URL=backup location
With these settings, ReaR creates backups using the tar command. Substitute backup location with one of the options from the “Backup Software Integration” section of the rear(8) man page. Make sure that the backup location has enough space.
Example 26.4. Adding tar Backups
The following configuration extends Example 26.1 to also create tar backups in the /srv/backup/ directory:
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=file:///srv/backup/
- To keep old backup archives when new ones are created, add this line:
NETFS_KEEP_OLD_BACKUP_COPY=y
- By default, ReaR creates a full backup on each run. To make the backups incremental, meaning that only the changed files are backed up on each run, add this line:
BACKUP_TYPE=incremental
This automatically sets NETFS_KEEP_OLD_BACKUP_COPY to y.
FULLBACKUPDAY="Day"
Substitute "Day" with one of "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun". A combined sketch of these incremental settings is shown after this list.
BACKUP_URLdirective toiso:///backup/:BACKUP_URL=iso:///backup/
This is the simplest method of full-system backup, because the rescue system does not need the user to fetch the backup during recovery. However, it needs more storage. Also, single-ISO backups cannot be incremental.Example 26.5. Configuring Single-ISO Rescue System and Backups
This configuration creates a rescue system and a backup file as a single ISO image and puts it into the/srv/backup/directory:OUTPUT=ISO OUTPUT_URL=file:///srv/backup/ BACKUP=NETFS BACKUP_URL=iso:///backup/
Note
The ISO image might be large in this scenario. Therefore, Red Hat recommends creating only one ISO image, not two. For details, see the section called “ISO-specific Configuration”.
- To use rsync instead of tar, add this line:

BACKUP_PROG=rsync

Note that incremental backups are only supported when using tar.
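As a combined sketch of the incremental settings described in the list above, a configuration that takes incremental backups during the week and a full backup every Monday might contain:
BACKUP_TYPE=incremental
FULLBACKUPDAY="Mon"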
26.2.1.2. Creating a Backup Using the Internal Backup Method
With BACKUP=NETFS set, ReaR can create either a rescue system, a backup file, or both.
- To create a rescue system only, run:
rear mkrescue
- To create a backup only, run:
rear mkbackuponly
- To create a rescue system and a backup, run:
rear mkbackup
Note
The BACKUP=NETFS setting expects the backup to be present before executing rear recover. Hence, once the rescue system boots, copy the backup file into the directory specified in BACKUP_URL, unless using a single ISO image. Only then run rear recover.
To check whether the storage layout changed since the last rescue system was created, use the rear checklayout command; it returns 0 if the layout is unchanged:

~]# rear checklayout
~]# echo $?
Important
The rear checklayout command does not check whether a rescue system is currently present in the output location, and can return 0 even if it is not there. So it does not guarantee that a rescue system is available, only that the layout has not changed since the last rescue system was created.
Example 26.6. Using rear checklayout
~]# rear checklayout || rear mkrescue

26.2.2. Supported Backup Methods
26.2.3. Unsupported Backup Methods
If using an unsupported backup method, there are two options:
- The rescue system prompts the user to manually restore the files. This scenario is the one described in “Basic ReaR Usage”, except for the backup file format, which may take a different form than a tar archive.
- ReaR executes the custom commands provided by the user. To configure this, set the BACKUP directive to EXTERNAL. Then specify the commands to be run during backing up and restoration using the EXTERNAL_BACKUP and EXTERNAL_RESTORE directives. Optionally, also specify the EXTERNAL_IGNORE_ERRORS and EXTERNAL_CHECK directives. See /usr/share/rear/conf/default.conf for an example configuration.
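A minimal sketch of such an EXTERNAL configuration in /etc/rear/local.conf, assuming hypothetical backup and restore scripts that you provide yourself:
BACKUP=EXTERNAL
EXTERNAL_BACKUP="/usr/local/bin/my-backup.sh"
EXTERNAL_RESTORE="/usr/local/bin/my-restore.sh"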
26.2.4. Creating Multiple Backups
Multiple backups are supported with the following backup methods:
- BACKUP=NETFS (internal method)
- BACKUP=BORG (external method)
To create multiple backups, use the -C option of the rear command. The argument is the basename of an additional backup configuration file in the /etc/rear/ directory. The method, destination, and the options for each specific backup are defined in its specific configuration file, not in the main configuration file.
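As an illustration, the home_backup configuration used in the procedures below would live in /etc/rear/home_backup.conf; the backup location and the included path are assumptions:
BACKUP=NETFS
BACKUP_URL=file:///srv/backup/home/
BACKUP_PROG_INCLUDE=( '/home/*' )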
Procedure 26.1. Basic recovery of the system
- Create the ReaR recovery system ISO image together with a backup of the files of the basic system:
~]# rear -C basic_system mkbackup
- Back up the files in the /home directories:
~]# rear -C home_backup mkbackuponly
In this example, the basic system backup includes the /boot, /root, and /usr directories.
Procedure 26.2. Recovery of the system in the rear recovery shell
~]# rear -C basic_system recover
~]# rear -C home_backup restoreonly
Appendix A. Choosing Suitable Red Hat Product
Appendix B. Red Hat Customer Portal Labs Relevant to System Administration
iSCSI Helper
NTP Configuration
- servers running the NTP service
- clients synchronized with NTP servers
Samba Configuration Helper
- Click to specify basic server settings.
- Click to add the directories that you want to share.
- Click to add attached printers individually.
VNC Configurator
Bridge Configuration
Network Bonding Helper
LVM RAID Calculator
NFS Helper
Load Balancer Configuration Tool
Yum Repository Configuration Helper
- a local Yum repository
- a HTTP/FTP-based Yum repository
File System Layout Calculator
RHEL Backup and Restore Assistant
- dump and restore: for backing up the ext2, ext3, and ext4 file systems.
- tar and cpio: for archiving or restoring files and folders, especially when backing up the tape drives.
- rsync: for performing back-up operations and synchronizing files and directories between locations.
- dd: for copying files from a source to a destination block by block independently of the file systems or operating systems involved.
- Disaster recovery
- Hardware migration
- Partition table backup
- Important folder backup
- Incremental backup
- Differential backup
DNS Helper
AD Integration Helper (Samba FS - winbind)
Red Hat Enterprise Linux Upgrade Helper
Registration Assistant
Rescue Mode Assistant
- Reset root password
- Generate a SOS report
- Perform a Filesystem Check (fsck)
- Reinstall GRUB
- Rebuild the Initial Ramdisk Image
- Reduce the size of the root file system
Kernel Oops Analyzer
Kdump Helper
SCSI decoder
The SCSI decoder is designed to decode SCSI error messages in the /log/* files or log file snippets, as these error messages can be hard to understand for the user.
Red Hat Memory Analyzer
Multipath Helper
The Multipath Helper generates a multipath.conf file for review. When you achieve the required configuration, download the installation script to run on your server.
Multipath Configuration Visualizer
- Hosts components including Host Bus Adapters (HBAs), local devices, and iSCSI devices on the server side
- Storage components on the storage side
- Fabric or Ethernet components between the server and the storage
- Paths to all mentioned components
Red Hat I/O Usage Visualizer
Storage / LVM configuration viewer
Appendix C. Revision History
| Revision | Date |
|---|---|
| 0.14-19 | Tue Mar 20 2018 |
| 0.14-17 | Tue Dec 5 2017 |
| 0.14-16 | Mon Aug 8 2017 |
| 0.14-14 | Thu Jul 27 2017 |
| 0.14-8 | Mon Nov 3 2016 |
| 0.14-7 | Mon Jun 20 2016 |
| 0.14-6 | Thu Mar 10 2016 |
| 0.14-5 | Thu Jan 21 2016 |
| 0.14-3 | Wed Nov 11 2015 |
| 0.14-1 | Mon Nov 9 2015 |
| 0.14-0.3 | Fri Apr 3 2015 |
| 0.13-2 | Tue Feb 24 2015 |
| 0.12-0.6 | Tue Nov 18 2014 |
| 0.12-0.4 | Mon Nov 10 2014 |
| 0.12-0 | Tue 19 Aug 2014 |
C.1. Acknowledgments
Index
Symbols
- .fetchmailrc, Fetchmail Configuration Options
- server options, Server Options
- user options, User Options
- .procmailrc, Procmail Configuration
A
- ABRT, Introduction to ABRT
- (see also abrtd)
- (see also Bugzilla)
- (see also Red Hat Technical Support)
- additional resources, Additional Resources
- autoreporting, Setting Up Automatic Reporting
- CLI, Using the Command Line Tool
- configuring, Configuring ABRT
- configuring events, Configuring Events
- crash detection, Introduction to ABRT
- creating events, Creating Custom Events
- GUI, Using the GUI
- installing, Installing ABRT and Starting its Services
- introducing, Introduction to ABRT
- problems
- detecting, Detecting Software Problems
- handling of, Handling Detected Problems
- supported, Detecting Software Problems
- standard events, Configuring Events
- starting, Installing ABRT and Starting its Services, Starting the ABRT Services
- testing, Testing ABRT Crash Detection
- ABRT CLI
- installing, Installing ABRT for the Command Line
- ABRT GUI
- installing, Installing the ABRT GUI
- ABRT Tools
- installing, Installing Supplementary ABRT Tools
- abrtd
- additional resources, Additional Resources
- restarting, Starting the ABRT Services
- starting, Installing ABRT and Starting its Services, Starting the ABRT Services
- status, Starting the ABRT Services
- testing, Testing ABRT Crash Detection
- Access Control Lists (see ACLs)
- ACLs
- access ACLs, Setting Access ACLs
- additional resources, ACL References
- archiving with, Archiving File Systems With ACLs
- default ACLs, Setting Default ACLs
- getfacl, Retrieving ACLs
- mounting file systems with, Mounting File Systems
- mounting NFS shares with, NFS
- on ext3 file systems, Access Control Lists
- retrieving, Retrieving ACLs
- setfacl, Setting Access ACLs
- setting
- access ACLs, Setting Access ACLs
- with Samba, Access Control Lists
- adding
- group, Adding a New Group
- user, Adding a New User
- Apache HTTP Server
- additional resources
- installable documentation, Additional Resources
- installed documentation, Additional Resources
- useful websites, Additional Resources
- checking configuration, Editing the Configuration Files
- checking status, Verifying the Service Status
- directories
- /etc/httpd/conf.d/, Editing the Configuration Files
- /usr/lib64/httpd/modules/, Working with Modules
- files
- /etc/httpd/conf.d/nss.conf, Enabling the mod_nss Module
- /etc/httpd/conf.d/ssl.conf, Enabling the mod_ssl Module
- /etc/httpd/conf/httpd.conf, Editing the Configuration Files
- modules
- developing, Writing a Module
- loading, Loading a Module
- mod_ssl , Setting Up an SSL Server
- mod_userdir, Updating the Configuration
- restarting, Restarting the Service
- SSL server
- certificate, An Overview of Certificates and Security, Using an Existing Key and Certificate, Generating a New Key and Certificate
- certificate authority, An Overview of Certificates and Security
- private key, An Overview of Certificates and Security, Using an Existing Key and Certificate, Generating a New Key and Certificate
- public key, An Overview of Certificates and Security
- starting, Starting the Service
- stopping, Stopping the Service
- version 2.4
- changes, Notable Changes
- updating from version 2.2, Updating the Configuration
- virtual host, Setting Up Virtual Hosts
- Automated Tasks, Automating System Tasks
B
- blkid, Using the blkid Command
- boot loader
- GRUB 2 boot loader, Working with GRUB 2
C
- Configuration
- basic configuration, Basic Configuration of the Environment
- Configuring a System for Accessibility, Configuring a System for Accessibility
- CPU usage, Viewing CPU Usage
- createrepo, Creating a Yum Repository
- cron, Scheduling a Recurring Job Using Cron
- CUPS (see Print Settings)
E
- ECDSA keys
- generating, Generating Key Pairs
- email
- additional resources, Additional Resources
- installed documentation, Installed Documentation
- online documentation, Online Documentation
- related books, Related Books
- Fetchmail, Fetchmail
- mail server
- Dovecot, Dovecot
- Postfix, Postfix
- Procmail, Mail Delivery Agents
- program classifications, Email Program Classifications
- protocols, Email Protocols
- security, Securing Communication
- clients, Secure Email Clients
- servers, Securing Email Client Communications
- Sendmail, Sendmail
- spam
- filtering out, Spam Filters
- types
- Mail Delivery Agent, Mail Delivery Agent
- Mail Transport Agent, Mail Transport Agent
- Mail User Agent, Mail User Agent
F
- Fetchmail, Fetchmail
- additional resources, Additional Resources
- command options, Fetchmail Command Options
- informational, Informational or Debugging Options
- special, Special Options
- configuration options, Fetchmail Configuration Options
- global options, Global Options
- server options, Server Options
- user options, User Options
- file systems, Viewing Block Devices and File Systems
- findmnt, Using the findmnt Command
- free, Using the free Command
- FTP, FTP
- (see also vsftpd)
- active mode, The File Transfer Protocol
- command port, The File Transfer Protocol
- data port, The File Transfer Protocol
- definition of, FTP
- introducing, The File Transfer Protocol
- passive mode, The File Transfer Protocol
G
- getfacl, Retrieving ACLs
- gnome-system-log (see System Log)
- gnome-system-monitor, Using the System Monitor Tool, Using the System Monitor Tool, Using the System Monitor Tool, Using the System Monitor Tool
- group configuration
- groupadd, Adding a New Group
- viewing list of groups, Managing Users in a Graphical Environment
- groups (see group configuration)
- additional resources, Additional Resources
- installed documentation, Additional Resources
- GID, Managing Users and Groups
- introducing, Managing Users and Groups
- shared directories, Creating Group Directories
- tools for management of
- groupadd, User Private Groups, Using Command-Line Tools
- user private, User Private Groups
- GRUB 2
- configuring GRUB 2, Working with GRUB 2
- customizing GRUB 2, Working with GRUB 2
- reinstalling GRUB 2, Working with GRUB 2
H
- hardware
- viewing, Viewing Hardware Information
- HTTP server (see Apache HTTP Server)
- httpd (see Apache HTTP Server)
I
- information
- about your system, System Monitoring Tools
K
- keyboard configuration, System Locale and Keyboard Configuration
- layout, Changing the Keyboard Layout
L
- localectl (see keyboard configuration)
- log files, Viewing and Managing Log Files
- (see also System Log)
- description, Viewing and Managing Log Files
- locating, Locating Log Files
- monitoring, Monitoring Log Files
- rotating, Locating Log Files
- rsyslogd daemon, Viewing and Managing Log Files
- viewing, Viewing Log Files
- logrotate, Locating Log Files
- lsblk, Using the lsblk Command
- lscpu, Using the lscpu Command
- lspci, Using the lspci Command
- lsusb, Using the lsusb Command
M
- Mail Delivery Agent (see email)
- Mail Transport Agent (see email) (see MTA)
- Mail Transport Agent Switcher, Mail Transport Agent (MTA) Configuration
- Mail User Agent, Mail Transport Agent (MTA) Configuration (see email)
- MDA (see Mail Delivery Agent)
- memory usage, Viewing Memory Usage
- MTA (see Mail Transport Agent)
- setting default, Mail Transport Agent (MTA) Configuration
- switching with Mail Transport Agent Switcher, Mail Transport Agent (MTA) Configuration
- MUA, Mail Transport Agent (MTA) Configuration (see Mail User Agent)
O
- OpenSSH, OpenSSH, Main Features
- (see also SSH)
- additional resources, Additional Resources
- client, OpenSSH Clients
- scp, Using the scp Utility
- sftp, Using the sftp Utility
- ssh, Using the ssh Utility
- ECDSA keys
- generating, Generating Key Pairs
- RSA keys
- generating, Generating Key Pairs
- server, Starting an OpenSSH Server
- starting, Starting an OpenSSH Server
- stopping, Starting an OpenSSH Server
- ssh-add, Configuring ssh-agent
- ssh-agent, Configuring ssh-agent
- ssh-keygen
- ECDSA, Generating Key Pairs
- RSA, Generating Key Pairs
- using key-based authentication, Using Key-based Authentication
- OpenSSL
- additional resources, Additional Resources
- SSL (see SSL)
- TLS (see TLS)
P
- package groups
- listing package groups with yum
- yum groups, Listing Package Groups
- packages, Working with Packages
- displaying packages
- yum info, Displaying Package Information
- displaying packages with yum
- yum info, Displaying Package Information
- downloading packages with yum, Downloading Packages
- installing a package group with yum, Installing a Package Group
- installing with yum, Installing Packages
- listing packages with yum
- Glob expressions, Searching Packages
- yum list available, Listing Packages
- yum list installed, Listing Packages
- yum repolist, Listing Packages
- yum search, Listing Packages
- searching packages with yum
- yum search, Searching Packages
- uninstalling packages with yum, Removing Packages
- passwords
- shadow, Shadow Passwords
- Postfix, Postfix
- default installation, The Default Postfix Installation
- postfix, Mail Transport Agent (MTA) Configuration
- Print Settings
- CUPS, Print Settings
- IPP Printers, Adding an IPP Printer
- LPD/LPR Printers, Adding an LPD/LPR Host or Printer
- Local Printers, Adding a Local Printer
- New Printer, Starting Printer Setup
- Print Jobs, Managing Print Jobs
- Samba Printers, Adding a Samba (SMB) printer
- Settings, The Settings Page
- Sharing Printers, Sharing Printers
- printers (see Print Settings)
- processes, Viewing System Processes
- Procmail, Mail Delivery Agents
- additional resources, Additional Resources
- configuration, Procmail Configuration
- recipes, Procmail Recipes
- delivering, Delivering vs. Non-Delivering Recipes
- examples, Recipe Examples
- flags, Flags
- local lockfiles, Specifying a Local Lockfile
- non-delivering, Delivering vs. Non-Delivering Recipes
- SpamAssassin, Spam Filters
- special actions, Special Conditions and Actions
- special conditions, Special Conditions and Actions
- ps, Using the ps Command
R
- RAM, Viewing Memory Usage
- rcp, Using the scp Utility
- ReaR
- basic usage, Basic ReaR Usage
- Red Hat Support Tool
- getting support on the command line, Accessing Support Using the Red Hat Support Tool
- Red Hat Subscription Management
- subscription, Registering the System and Attaching Subscriptions
- RSA keys
- generating, Generating Key Pairs
- rsyslog, Viewing and Managing Log Files
- actions, Actions
- configuration, Basic Configuration of Rsyslog
- debugging, Debugging Rsyslog
- filters, Filters
- global directives, Global Directives
- log rotation, Log Rotation
- modules, Using Rsyslog Modules
- new configuration format, Using the New Configuration Format
- queues, Working with Queues in Rsyslog
- rulesets, Rulesets
- templates, Templates
S
- Samba
- Samba Printers, Adding a Samba (SMB) printer
- scp (see OpenSSH)
- security plug-in (see Security)
- Security-Related Packages
- updating security-related packages, Updating Packages
- Sendmail, Sendmail
- additional resources, Additional Resources
- aliases, Masquerading
- common configuration changes, Common Sendmail Configuration Changes
- default installation, The Default Sendmail Installation
- LDAP and, Using Sendmail with LDAP
- limitations, Purpose and Limitations
- masquerading, Masquerading
- purpose, Purpose and Limitations
- spam, Stopping Spam
- with UUCP, Common Sendmail Configuration Changes
- sendmail, Mail Transport Agent (MTA) Configuration
- setfacl, Setting Access ACLs
- sftp (see OpenSSH)
- shadow passwords
- overview of, Shadow Passwords
- SpamAssassin
- using with Procmail, Spam Filters
- ssh (see OpenSSH)
- SSH protocol
- authentication, Authentication
- configuration files, Configuration Files
- system-wide configuration files, Configuration Files
- user-specific configuration files, Configuration Files
- connection sequence, Event Sequence of an SSH Connection
- features, Main Features
- insecure protocols, Requiring SSH for Remote Connections
- layers
- channels, Channels
- transport layer, Transport Layer
- port forwarding, Port Forwarding
- requiring for remote login, Requiring SSH for Remote Connections
- security risks, Why Use SSH?
- version 1, Protocol Versions
- version 2, Protocol Versions
- X11 forwarding, X11 Forwarding
- ssh-add, Configuring ssh-agent
- ssh-agent, Configuring ssh-agent
- SSL, Setting Up an SSL Server
- (see also Apache HTTP Server)
- SSL server (see Apache HTTP Server)
- star, Archiving File Systems With ACLs
- stunnel, Securing Email Client Communications
- subscriptions, Registering the System and Managing Subscriptions
- system information
- cpu usage, Viewing CPU Usage
- file systems, Viewing Block Devices and File Systems
- gathering, System Monitoring Tools
- hardware, Viewing Hardware Information
- memory usage, Viewing Memory Usage
- processes, Viewing System Processes
- currently running, Using the top Command
- System Log
- filtering, Viewing Log Files
- monitoring, Monitoring Log Files
- refresh rate, Viewing Log Files
- searching, Viewing Log Files
- System Monitor, Using the System Monitor Tool, Using the System Monitor Tool, Using the System Monitor Tool, Using the System Monitor Tool
- systems
- registration, Registering the System and Managing Subscriptions
- subscription management, Registering the System and Managing Subscriptions
T
- the Users settings tool (see user configuration)
- TLS, Setting Up an SSL Server
- (see also Apache HTTP Server)
- top, Using the top Command
U
- user configuration
- command line configuration
- passwd, Adding a New User
- useradd, Adding a New User
- viewing list of users, Managing Users in a Graphical Environment
- user private groups (see groups)
- and shared directories, Creating Group Directories
- useradd command
- user account creation using, Adding a New User
- users (see user configuration)
- additional resources, Additional Resources
- installed documentation, Additional Resources
- introducing, Managing Users and Groups
- tools for management of
- the Users settings tool, Using Command-Line Tools
- useradd, Using Command-Line Tools
- UID, Managing Users and Groups
V
- virtual host (see Apache HTTP Server)
- vsftpd
- additional resources, Additional Resources
- installed documentation, Installed Documentation
- online documentation, Online Documentation
- encrypting, Encrypting vsftpd Connections Using TLS
- multihome configuration, Starting Multiple Copies of vsftpd
- restarting, Starting and Stopping vsftpd
- securing, Encrypting vsftpd Connections Using TLS, SELinux Policy for vsftpd
- SELinux, SELinux Policy for vsftpd
- starting, Starting and Stopping vsftpd
- starting multiple copies of, Starting Multiple Copies of vsftpd
- status, Starting and Stopping vsftpd
- stopping, Starting and Stopping vsftpd
- TLS, Encrypting vsftpd Connections Using TLS
W
- web server (see Apache HTTP Server)
Y
- Yum
- configuring plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- configuring yum and yum repositories, Configuring Yum and Yum Repositories
- disabling plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- displaying packages
- yum info, Displaying Package Information
- displaying packages with yum
- yum info, Displaying Package Information
- downloading packages with yum, Downloading Packages
- enabling plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- installing a package group with yum, Installing a Package Group
- installing with yum, Installing Packages
- listing package groups with yum
- yum groups list, Listing Package Groups
- listing packages with yum
- Glob expressions, Searching Packages
- yum list, Listing Packages
- yum list available, Listing Packages
- yum list installed, Listing Packages
- yum repolist, Listing Packages
- packages, Working with Packages
- plug-ins
- aliases, Working with Yum Plug-ins
- kabi, Working with Yum Plug-ins
- langpacks, Working with Yum Plug-ins
- product-id, Working with Yum Plug-ins
- search-disabled-repos, Working with Yum Plug-ins
- yum-changelog, Working with Yum Plug-ins
- yum-tmprepo, Working with Yum Plug-ins
- yum-verify, Working with Yum Plug-ins
- yum-versionlock, Working with Yum Plug-ins
- repository, Adding, Enabling, and Disabling a Yum Repository, Creating a Yum Repository
- searching packages with yum
- yum search, Searching Packages
- setting [main] options, Setting [main] Options
- setting [repository] options, Setting [repository] Options
- uninstalling packages with yum, Removing Packages
- variables, Using Yum Variables
- Yum plug-ins, Yum Plug-ins
- Yum repositories
- configuring yum and yum repositories, Configuring Yum and Yum Repositories
- yum update, Upgrading the System Off-line with ISO and Yum
- Yum Updates
- checking for updates, Checking For Updates
- updating a single package, Updating Packages
- updating all packages and dependencies, Updating Packages
- updating packages, Updating Packages
- updating security-related packages, Updating Packages

