Chapter 11. Known issues

This part describes known issues in Red Hat Enterprise Linux 8.2.

11.1. Installer and image creation

The auth and authconfig Kickstart commands require the AppStream repository

The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository.

To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation.
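As an illustrative sketch only (the repository URLs are placeholders, and the authselect profile and features shown are examples), a Kickstart file can cover either workaround like this:

```shell
# Make both repositories available to the installer (placeholder URLs):
url --url=http://example.com/rhel8/BaseOS
repo --name=AppStream --baseurl=http://example.com/rhel8/AppStream

# Alternatively, use authselect instead of the deprecated
# auth/authconfig commands:
authselect select sssd with-mkhomedir --force
```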


The reboot --kexec and inst.kexec commands do not provide a predictable system state

Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameter does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.

Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.


Anaconda does not warn about insufficient resources before installation

Anaconda starts the installation on systems that have only the minimal required resources available and does not warn in advance about the resources required to complete the installation successfully. As a result, the installation can fail, and the resulting error messages do not provide clear information for debugging and recovery. To work around this problem, make sure that the system has the minimal resources required for installation: 2 GB of memory on PPC64(LE) and 1 GB on x86_64. As a result, it should be possible to perform a successful installation.


Installation fails when using the reboot --kexec command

The RHEL 8 installation fails when using a Kickstart file that contains the reboot --kexec command. To avoid the problem, use the reboot command instead of reboot --kexec in your Kickstart file.


RHEL 8 initial setup cannot be performed via SSH

Currently, the RHEL 8 initial setup interface does not display when logged in to the system using SSH. As a consequence, it is impossible to perform the initial setup on a RHEL 8 machine managed via SSH. To work around this problem, perform the initial setup in the main system console (ttyS0) and, afterwards, log in using SSH.


Network access is not enabled by default in the installation program

Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled.

To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used.


Registration fails for user accounts that belong to multiple organizations

Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units.

To work around this problem, you can either:

  • Use a different user account that does not belong to multiple organizations.
  • Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations.
  • Skip the registration step in Connect to Red Hat and use Subscription Manager to register your system post-installation.


A GUI installation using the Binary DVD ISO image can sometimes not proceed without CDN registration

When performing a GUI installation using the Binary DVD ISO image file, a race condition in the installer can sometimes prevent the installation from proceeding until you register the system using the Connect to Red Hat feature. To work around this problem, complete the following steps:

  1. Select Installation Source from the Installation Summary window of the GUI installation.
  2. Verify that Auto-detected installation media is selected.
  3. Click Done to confirm the selection and return to the Installation Summary window.
  4. Verify that Local Media is displayed as the Installation Source status in the Installation Summary window.

As a result, you can proceed with the installation without registering the system using the Connect to Red Hat feature.


Copying the content of the Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files

During a local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the * glob in the cp <path>/* <mounted partition>/dir command does not match hidden files, so the .treeinfo and .discinfo files are not copied. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the anaconda.log file is the only record of the problem.

To work around the problem, copy the missing .treeinfo and .discinfo files to the partition.
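The behavior is a property of shell globbing, not of the ISO itself. A minimal sketch using throwaway paths demonstrates it and shows one way to copy the hidden files as well:

```shell
# '*' does not match hidden files, so .treeinfo and .discinfo are skipped:
mkdir -p /tmp/iso-demo/src /tmp/iso-demo/glob /tmp/iso-demo/full
touch /tmp/iso-demo/src/media.repo /tmp/iso-demo/src/.treeinfo /tmp/iso-demo/src/.discinfo

cp /tmp/iso-demo/src/* /tmp/iso-demo/glob/     # copies media.repo only
cp -a /tmp/iso-demo/src/. /tmp/iso-demo/full/  # 'dir/.' also copies hidden files

ls -A /tmp/iso-demo/glob    # → media.repo
ls -A /tmp/iso-demo/full    # → .discinfo  .treeinfo  media.repo
```

Copying the two missing files explicitly, as the workaround describes, achieves the same result.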


Self-signed HTTPS server cannot be used in Kickstart installation

Currently, the installer fails to install from a self-signed https server when the installation source is specified in the kickstart file and the --noverifyssl option is used:

url --url=https://SERVER/PATH --noverifyssl

To work around this problem, append the inst.noverifyssl parameter to the kernel command line when starting the kickstart installation.

For example:

inst.ks=<URL> inst.noverifyssl


GUI installation might fail if an attempt to unregister using the CDN is made before the repository refresh is completed

In RHEL 8.2, when registering your system and attaching subscriptions using the Content Delivery Network (CDN), a refresh of the repository metadata is started by the GUI installation program. The refresh process is not part of the registration and subscription process, and as a consequence, the Unregister button is enabled in the Connect to Red Hat window. Depending on the network connection, the refresh process might take more than a minute to complete. If you click the Unregister button before the refresh process is completed, the GUI installation might fail as the unregister process removes the CDN repository files and the certificates required by the installation program to communicate with the CDN.

To work around this problem, complete the following steps in the GUI installation after you have clicked the Register button in the Connect to Red Hat window:

  1. From the Connect to Red Hat window, click Done to return to the Installation Summary window.
  2. From the Installation Summary window, verify that the Installation Source and Software Selection status messages in italics are not displaying any processing information.
  3. When the Installation Source and Software Selection categories are ready, click Connect to Red Hat.
  4. Click the Unregister button.

After performing these steps, you can safely unregister the system during the GUI installation.


11.2. Subscription management

syspurpose addons have no effect on the subscription-manager attach --auto output

In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role, usage, service_level_agreement, and addons. Currently, only role, usage, and service_level_agreement affect the output of the subscription-manager attach --auto command. Users who attempt to set values for the addons argument will not observe any effect on the subscriptions that are auto-attached.


11.3. Shells and command-line tools

Applications using Wayland protocol cannot be forwarded to remote display servers

In Red Hat Enterprise Linux 8, most applications use the Wayland protocol by default instead of the X11 protocol. As a consequence, the ssh server cannot forward the applications that use the Wayland protocol but is able to forward the applications that use the X11 protocol to a remote display server.

To work around this problem, set the environment variable GDK_BACKEND=x11 before starting the applications. As a result, the application can be forwarded to remote display servers.
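For example (gedit stands in for any GTK application):

```shell
# On the remote system, after connecting with X11 forwarding (ssh -X):
export GDK_BACKEND=x11
printenv GDK_BACKEND    # → x11
# gedit &               # GTK applications started now use X11 and forward
```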


systemd-resolved.service fails to start on boot

The systemd-resolved service occasionally fails to start on boot. If this happens, restart the service manually after the boot finishes by using the following command:

# systemctl start systemd-resolved

However, the failure of systemd-resolved on boot does not impact any other services.


11.4. Security

Audit executable watches on symlinks do not work

File monitoring provided by the -w option cannot directly track a path. It has to resolve the path to a device and an inode to make a comparison with the executed program. A watch monitoring an executable symlink monitors the device and an inode of the symlink itself instead of the program executed in memory, which is found from the resolution of the symlink. Even if the watch resolves the symlink to get the resulting executable program, the rule triggers on any multi-call binary called from a different symlink. This results in flooding logs with false positives. Consequently, Audit executable watches on symlinks do not work.

To work around the problem, set up a watch for the resolved path of the program executable, and filter the resulting log messages using the last component listed in the comm= or proctitle= fields.
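A sketch of the workaround, using the sendmail alternatives symlink as a hypothetical example (requires root and a running auditd):

```shell
# Find the executable the symlink resolves to:
readlink -f /usr/sbin/sendmail        # for example, /usr/sbin/sendmail.postfix

# Watch the resolved path for execution instead of the symlink:
auditctl -w /usr/sbin/sendmail.postfix -p x -k mail-exec

# Filter the resulting events by the command that was actually run:
ausearch -k mail-exec --interpret | grep 'comm=.sendmail'
```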


libselinux-python is available only through its module

The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command.

To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands:

# dnf module enable libselinux-python
# dnf install libselinux-python

Alternatively, install libselinux-python using its install profile with a single command:

# dnf module install libselinux-python:2.8/common

As a result, you can install libselinux-python using the respective module.


udica processes UBI 8 containers only when started with --env container=podman

The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file.

To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround.
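A sketch of the workaround (the container name is illustrative):

```shell
# Start a UBI 8 container with the container variable overridden:
podman run --env container=podman --name ubi8-test -dt registry.access.redhat.com/ubi8/ubi

# Generate an SELinux policy for it with udica:
podman inspect ubi8-test > ubi8-test.json
udica -j ubi8-test.json my_ubi8_policy    # writes my_ubi8_policy.cil
```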


Removing the rpm-plugin-selinux package leads to removing all selinux-policy packages from the system

Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all selinux-policy packages from the system. Repeated installation of the rpm-plugin-selinux package then installs the selinux-policy-minimum SELinux policy, even if the selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the rpm-plugin-selinux package.

To work around this problem:

  1. Enter the umount /sys/fs/selinux/ command.
  2. Manually install the missing selinux-policy-targeted package.
  3. Edit the /etc/selinux/config file so that the policy is equal to SELINUX=enforcing.
  4. Enter the command load_policy -i.

As a result, SELinux is enabled and running the same policy as before.
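The numbered steps above correspond to the following root shell session (a sketch; run on the affected system only):

```shell
umount /sys/fs/selinux/
dnf install -y selinux-policy-targeted
# Set the policy in /etc/selinux/config back to enforcing:
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
load_policy -i
getenforce    # should report Enforcing
```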


SELinux prevents systemd-journal-gatewayd from calling newfstatat() on shared memory files created by corosync

The SELinux policy does not contain a rule that allows the systemd-journal-gatewayd daemon to access files created by the corosync service. As a consequence, SELinux prevents systemd-journal-gatewayd from calling the newfstatat() function on shared memory files created by corosync.

To work around this problem, create a local policy module with an allow rule which enables the described scenario. See the audit2allow(1) man page for more information on generating SELinux policy allow and dontaudit rules. As a result of the previous workaround, systemd-journal-gatewayd can call the function on shared memory files created by corosync with SELinux in enforcing mode.
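A sketch of generating such a local module with audit2allow (the module name is illustrative; requires root):

```shell
# Build an allow rule from the logged denials and load it:
ausearch -m AVC -c systemd-journal-gatewayd --raw | \
    audit2allow -M journal_gw_corosync
semodule -i journal_gw_corosync.pp
```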


SELinux prevents auditd from halting or powering off the system

The SELinux policy does not contain a rule that allows the Audit daemon to start a power_unit_file_t systemd unit. Consequently, auditd cannot halt or power off the system even when configured to do so in cases such as no space left on a logging disk partition.

To work around this problem, create a custom SELinux policy module. As a result, auditd can properly halt or power off the system only if you apply the workaround.


Users can run sudo commands as locked users

In systems where sudoers permissions are defined with the ALL keyword, sudo users with permissions can run sudo commands as users whose accounts are locked. Consequently, locked and expired accounts can still be used to execute commands.

To work around this problem, enable the newly implemented runas_check_shell option together with proper settings of valid shells in /etc/shells. This prevents attackers from running commands under system accounts such as bin.


Negative effects of the default logging setup on performance

The default logging environment setup might consume 4 GB of memory or more, and adjusting the rate-limit values is complex when systemd-journald is running with rsyslog.

See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information.


Parameter not known errors in the rsyslog output with config.enabled

Processing of the config.enabled directive in the rsyslog configuration contains a bug. As a consequence, rsyslog displays parameter not known errors for the config.enabled directive everywhere except in include() statements.

To work around this problem, set config.enabled=on or use include() statements.


Certain rsyslog priority strings do not work correctly

Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog:


To work around this problem, use only correctly working priority strings:


As a result, current configurations must be limited to the strings that work correctly.


Connections to servers with SHA-1 signatures do not work with GnuTLS

SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy.
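Switching the system-wide cryptographic policy is a single command; note that it relaxes settings for the whole system, not only for GnuTLS:

```shell
# Allow SHA-1 signatures and other legacy algorithms system-wide:
update-crypto-policies --set LEGACY

# Revert once the server uses stronger certificates:
update-crypto-policies --set DEFAULT
```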


TLS 1.3 does not work in NSS in FIPS mode

TLS 1.3 is not supported on systems working in FIPS mode. As a result, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode.

To enable the connections, disable the system’s FIPS mode or enable support for TLS 1.2 in the peer.
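A sketch of checking and disabling FIPS mode with the fips-mode-setup tool:

```shell
fips-mode-setup --check      # reports whether FIPS mode is enabled
fips-mode-setup --disable    # takes effect after a reboot
```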


OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures

The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures.

To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file:

SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384
MaxProtocol = TLSv1.2

As a result, a TLS connection can be established in the described scenario.


OpenSSL generates a malformed status_request extension in the CertificateRequest message in TLS 1.3

OpenSSL servers send a malformed status_request extension in the CertificateRequest message if support for the status_request extension and client certificate-based authentication are enabled. In such case, OpenSSL does not interoperate with implementations compliant with the RFC 8446 protocol. As a result, clients that properly verify extensions in the CertificateRequest message abort connections with the OpenSSL server. To work around this problem, disable support for the TLS 1.3 protocol on either side of the connection or disable support for status_request on the OpenSSL server. This will prevent the server from sending malformed messages.


ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode

The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the ssh-keyscan utility from retrieving RSA keys of servers operating in that mode.

To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the /etc/ssh/ directory on the server.
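For example (the host name is illustrative):

```shell
# Scan for ECDSA instead of RSA host keys:
ssh-keyscan -t ecdsa server.example.com

# Or, on the server itself, read the public host keys directly:
cat /etc/ssh/ssh_host_ecdsa_key.pub
```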


Libreswan does not work properly with seccomp=enabled on all configurations

The set of allowed syscalls in the Libreswan SECCOMP support implementation is currently not complete. Consequently, when SECCOMP is enabled in the ipsec.conf file, the syscall filtering rejects even syscalls needed for the proper functioning of the pluto daemon; the daemon is killed, and the ipsec service is restarted.

To work around this problem, set the seccomp= option back to the disabled state. SECCOMP support must remain disabled to run ipsec properly.


Certain sets of interdependent rules in SSG can fail

Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules.


SCAP Workbench fails to generate results-based remediations from tailored profiles

The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool:

Error generating remediation role .../ Exit code of oscap was 1: [output truncated]

To work around this problem, use the oscap command with the --tailoring-file option.


Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8

Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap, which might cause confusion. This naming is kept to preserve backward compatibility with Red Hat Enterprise Linux 7.


OSCAP Anaconda Addon does not install all packages in text mode

The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation.

To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the %packages section in your Kickstart file.

As a result, packages that are required by the security policy profile are not installed during RHEL installation without one of the described workarounds, and the installed system is not compliant with the given security policy profile.
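A sketch of a Kickstart fragment for the text-mode case (the profile ID and the extra package names are illustrative; list whatever packages your chosen profile actually requires):

```shell
%addon org_fedora_oscap
    content-type = scap-security-guide
    profile = ospp
%end

%packages
@^minimal-environment
openscap-scanner
scap-security-guide
# packages required by the selected profile, for example:
aide
%end
```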


OSCAP Anaconda Addon does not correctly handle customized profiles

The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section.

To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation.


GnuTLS fails to resume current session with the NSS server

When resuming a TLS (Transport Layer Security) 1.3 session, the GnuTLS client waits 60 milliseconds plus an estimated round trip time for the server to send session resumption data. If the server does not send the resumption data within this time, the client creates a new session instead of resuming the current session. This incurs no serious adverse effects except for a minor performance impact on a regular session negotiation.


The oscap-ssh utility fails when scanning a remote system with --sudo

When performing a Security Content Automation Protocol (SCAP) scan of a remote system using the oscap-ssh tool with the --sudo option, the oscap tool on the remote system saves the scan result files and report files into a temporary directory as the root user. If the umask settings on the remote machine have been changed, oscap-ssh might not have access to these files. To work around this problem, modify the oscap-ssh tool as described in the Knowledgebase solution "oscap-ssh --sudo" fails to retrieve the result files with "scp: …: Permission denied" error. As a result, oscap saves the files as the target user, and oscap-ssh accesses the files normally.
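The underlying permission effect of umask can be seen in isolation (throwaway files; the paths are arbitrary):

```shell
demo_dir=$(mktemp -d)
( umask 022; touch "$demo_dir/open.xml" )    # 666 & ~022 = 644: world-readable
( umask 077; touch "$demo_dir/locked.xml" )  # 666 & ~077 = 600: owner-only
stat -c '%a' "$demo_dir/open.xml"    # → 644
stat -c '%a' "$demo_dir/locked.xml"  # → 600
```

A result file created with a restrictive umask, as in the second case, cannot be read back by the non-root SSH user.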


OpenSCAP produces false positives caused by removing blank lines from YAML multi-line strings

When OpenSCAP generates Ansible remediations from a datastream, it removes blank lines from YAML multi-line strings. Because some Ansible remediations contain literal configuration file content, removing blank lines affects the corresponding remediations. This causes the openscap utility to fail the corresponding Open Vulnerability and Assessment Language (OVAL) checks, even though the blank lines do not have any effect. To work around this problem, check the rule descriptions and skip scan results that failed because of missing blank lines. Alternatively, use Bash remediations instead of Ansible remediations, because Bash remediations do not produce these false positive results.


OSPP-based profiles are incompatible with GUI package groups

GNOME packages installed by the Server with GUI package group require the nfs-utils package that is not compliant with the Operating System Protection Profile (OSPP). As a consequence, selecting the Server with GUI package group during the installation of a system with OSPP or OSPP-based profiles, for example, Security Technical Implementation Guide (STIG), aborts the installation. If the OSPP-based profile is applied after the installation, the system is not bootable. To work around this problem, do not install the Server with GUI package group or any other groups that install GUI when using the OSPP profile and OSPP-based profiles. When you use the Server or Minimal Install package groups instead, the system installs without issues and works correctly.


RHEL8 system with the Server with GUI package group cannot be remediated using the e8 profile

Using the OpenSCAP Anaconda Add-on to harden the system on the Server With GUI package group with profiles that select rules from the Verify Integrity with RPM group requires an extreme amount of RAM on the system. This problem is caused by the OpenSCAP scanner; for more details see Scanning large numbers of files with OpenSCAP causes systems to run out of memory. As a consequence, the hardening of the system using the RHEL8 Essential Eight (e8) profile is not successful. To work around this problem, choose a smaller package group, for example, Server, and install additional packages that you require after the installation. As a result, the system will have a smaller number of packages, the scanning will require less memory, and therefore the system can be hardened automatically.


Scanning large numbers of files with OpenSCAP causes systems to run out of memory

The OpenSCAP scanner stores all the collected results in the memory until the scan finishes. As a consequence, the system might run out of memory on systems with low RAM when scanning large numbers of files, for example from the large package groups Server with GUI and Workstation. To work around this problem, use smaller package groups, for example, Server and Minimal Install on systems with limited RAM. If you need to use large package groups, you can test whether your system has sufficient memory in a virtual or staging environment. Alternatively, you can tailor the scanning profile to deselect rules that involve recursion over the entire / filesystem:

  • rpm_verify_hashes
  • rpm_verify_permissions
  • rpm_verify_ownership
  • file_permissions_unauthorized_world_writable
  • no_files_unowned_by_user
  • dir_perms_world_writable_system_owned
  • file_permissions_unauthorized_suid
  • file_permissions_unauthorized_sgid
  • file_permissions_ungroupowned
  • dir_perms_world_writable_sticky_bits

This will prevent OpenSCAP scan from causing the system to run out of memory.
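Once a tailoring file that deselects those rules exists (it can be created in SCAP Workbench), the scan is run with oscap; the profile ID and file paths below are illustrative:

```shell
oscap xccdf eval \
    --tailoring-file /root/ssg-rhel8-ds-tailoring.xml \
    --profile xccdf_org.ssgproject.content_profile_ospp_customized \
    --results /root/results.xml \
    /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```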


11.5. Networking

IPsec network traffic fails during IPsec offloading when GRO is disabled

IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails.

To work around this problem, keep GRO enabled on the device.


iptables does not request module loading for commands that update a chain if the specified chain type is not known

Note: This problem causes spurious error messages with no functional impact when stopping the iptables systemd service if you are using the service's default configuration.

When setting a chain’s policy with iptables-nft, the resulting update chain command sent to the kernel will fail if the associated kernel module is not loaded already. To work around the problem, use the following commands to cause the modules to load:


# iptables -t nat -n -L
# iptables -t mangle -n -L


Automatic loading of address family-specific LOG back end modules by the nft_compat module can hang

When the nft_compat module loads address family-specific LOG target back ends while an operation on network namespaces (netns) happens in parallel, a lock collision can occur. As a consequence, loading the address family-specific LOG target back ends can hang. To work around the problem, manually load the relevant LOG target back ends, such as nf_log_ipv4.ko and nf_log_ipv6.ko, before executing the iptables-restore utility. As a result, loading the LOG target back ends does not hang. However, if the problem appears during the system boots, no workaround is available.
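For example (the ruleset path shown is the typical location and may differ on your system):

```shell
# Load the LOG back ends before restoring the ruleset:
modprobe nf_log_ipv4
modprobe nf_log_ipv6
iptables-restore < /etc/sysconfig/iptables
```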

Note that other services, such as libvirtd, also execute iptables commands, which can cause the problem to occur.


11.6. Kernel

Accidental patch removal causes the script to show an error

A patch that updates the script was accidentally removed. Consequently, executing the script produces the following error message:

SyntaxError: Missing parentheses in call to 'print'

To work around this problem, copy the script from RHEL 8.1 and install it to the /usr/bin/ directory:

  1. Download the libhugetlbfs-utils-2.21-3.el8.x86_64.rpm package from the RHEL-8.1.0 Installation Media or from the Red Hat Customer Portal.
  2. Execute the rpm2cpio command:

    # rpm2cpio libhugetlbfs-utils-2.21-3.el8.x86_64.rpm | cpio -D / -iduv '*/'

    The command extracts the script from the RHEL 8.1 RPM and saves it to the /usr/bin/ directory.

As a result, the script works correctly.


Systems with a large amount of persistent memory experience delays during the boot process

Systems with a large amount of persistent memory take a long time to boot because the initialization of the memory is serialized. Consequently, if there are persistent memory file systems listed in the /etc/fstab file, the system might timeout while waiting for devices to become available. To work around this problem, configure the DefaultTimeoutStartSec option in the /etc/systemd/system.conf file to a sufficiently large value.
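For example, a 10-minute timeout (the value is illustrative; size it to how long your memory initialization takes):

```shell
# In /etc/systemd/system.conf, under the [Manager] section:
#   DefaultTimeoutStartSec=600s
# Then make the manager re-read its configuration:
systemctl daemon-reexec
```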


KSM sometimes ignores NUMA memory policies

When the kernel shared memory (KSM) feature is enabled with the merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies.

To work around this problem, disable KSM or set the merge_across_nodes parameter to 0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected.
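A sketch of disabling cross-node merging at runtime; per the kernel documentation, merge_across_nodes can only be changed while there are no merged pages, so KSM is stopped and its pages unmerged first (requires root):

```shell
echo 2 > /sys/kernel/mm/ksm/run                 # stop KSM and unmerge pages
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes  # merge only within NUMA nodes
echo 1 > /sys/kernel/mm/ksm/run                 # start KSM again
```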


Debug kernel fails to boot in crash capture environment in RHEL 8

Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment.


zlib may slow down a vmcore capture in some compression functions

The kdump configuration file uses the lzo compression format (makedumpfile -l) by default. When you modify the configuration file to use the zlib compression format (makedumpfile -c), it is likely to achieve a better compression ratio at the expense of slowing down the vmcore capture process. As a consequence, it takes kdump up to four times longer to capture a vmcore with zlib than with lzo.

As a result, Red Hat recommends using the default lzo for cases where capture speed is the main concern. However, if the target machine is low on available space, zlib is a better option.
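The choice between the two formats is the core_collector line in /etc/kdump.conf; the -d 31 dump level shown matches the stock file but may differ on your system:

```shell
# Default, lzo (faster capture):
#   core_collector makedumpfile -l --message-level 1 -d 31
# Smaller vmcore at the cost of capture time (zlib):
#   core_collector makedumpfile -c --message-level 1 -d 31
# After editing, rebuild the kdump initramfs:
systemctl restart kdump.service
```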


A vmcore capture fails after memory hot-plug or unplug operation

After a memory hot-plug or hot-unplug operation, the device tree that holds the memory layout information is updated, but the crash capture environment does not pick up the change. Thereby, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met:

  • A little-endian variant of IBM Power System runs RHEL 8.
  • The kdump or fadump service is enabled on the system.

Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation.

To work around this problem, restart the kdump service after hot-plug or hot-unplug:

# systemctl restart kdump.service

As a result, vmcore is successfully saved in the described scenario.


The fadump dumping mechanism renames the network interface to kdump-<interface-name>

When using firmware-assisted dump (fadump) to capture a vmcore and store it on a remote machine using the SSH or NFS protocol, the network interface is renamed to kdump-<interface-name> if the <interface-name> is generic, for example, eth# or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk (initrd) add the kdump- prefix to the network interface name to secure persistent naming. Because the same initrd is also used for a regular boot, the interface name is changed for the production kernel too.


The system enters the emergency mode at boot-time when fadump is enabled

The system enters emergency mode when fadump (kdump) or the dracut squash module is enabled in the initramfs scheme, because the systemd manager fails to fetch the mount information and configure the LV partition to mount. To work around this problem, add the rd.lvm.lv=<VG>/<LV> kernel command line parameter to discover and mount the failed LV partition appropriately. As a result, the system boots successfully in the described scenario.


Using irqpoll causes vmcore generation failure

Due to an existing problem with the nvme driver on the 64-bit ARM architectures that run on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory after a kernel crash. To work around this problem:

  1. Add irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file.
  2. Restart the kdump service by running the systemctl restart kdump command.

As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash.

Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service.
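Step 1 can be scripted. The sketch below applies the change to a scratch copy of the file, seeded with an example KDUMP_COMMANDLINE_REMOVE value, so you can verify the edit before touching /etc/sysconfig/kdump:

```shell
# Work on a scratch copy; on a real system point cfg at /etc/sysconfig/kdump.
cfg=$(mktemp)
printf 'KDUMP_COMMANDLINE_REMOVE="hugepages slub_debug"\n' > "$cfg"   # example value

# Append irqpoll to the list unless it is already present.
grep -q 'irqpoll' "$cfg" || \
  sed -i 's/^KDUMP_COMMANDLINE_REMOVE="\(.*\)"/KDUMP_COMMANDLINE_REMOVE="\1 irqpoll"/' "$cfg"

cat "$cfg"
```

The result should read KDUMP_COMMANDLINE_REMOVE="hugepages slub_debug irqpoll"; follow with the systemctl restart kdump command on the real file.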


Using vPMEM memory as dump target delays the kernel crash capture process

When you use Virtual Persistent Memory (vPMEM) namespaces as a kdump or fadump target, the papr_scm module is forced to unmap and remap the memory backed by vPMEM and re-add the memory to its linear map. Consequently, this behavior triggers Hypervisor Calls (HCalls) to the POWER Hypervisor, and the total time taken slows the capture kernel boot considerably. Therefore, Red Hat recommends not using vPMEM namespaces as a dump target for kdump or fadump.

If you must use vPMEM, to work around this problem execute the following commands:

  1. Create the /etc/dracut.conf.d/99-pmem-workaround.conf file and add:

    add_drivers+="nd_pmem nd_btt libnvdimm papr_scm"
  2. Rebuild the initial RAM disk (initrd) file system:

    # touch /etc/kdump.conf
    # systemctl restart kdump.service


The HP NMI watchdog does not always generate a crash dump

In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver.

The missing NMI is initiated by one of two conditions:

  1. The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user.
  2. The hpwdt watchdog. The expiration by default sends an NMI to the server.

Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and if configured, the kdump service generates a vmcore file.

Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected.

In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server.

In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR).

The HPE Gen9 server line experiences this problem in single-digit percentages of cases, and the Gen10 line at an even lower frequency.


The tuned-adm profile powersave command causes the system to become unresponsive

Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, you must reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications.


The cxgb4 driver causes crash in the kdump kernel

The kdump kernel crashes while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevents the kdump kernel from saving a core for later analysis. To work around this problem, add the novmcoredd parameter to the kdump kernel command line to allow saving core files.


Attempting to add an ICE driver NIC port to a mode 5 (balance-tlb) bonding master interface might lead to failure

Attempting to add an ICE driver NIC port to a mode 5 (balance-tlb) bonding master interface might fail with the error Master 'bond0', Slave 'ens1f0': Error: Enslave failed. Consequently, you experience an intermittent failure to add the NIC port to the bonding master interface. To work around this problem, retry adding the interface.


Attaching the Virtual Function to a virtual machine with interface type='hostdev' might fail at times

Attaching a Virtual Function (VF) to a virtual machine using an XML file, following the Assignment with <interface type='hostdev'> method, might fail at times. This occurs because using the Assignment with <interface type='hostdev'> method prevents the VM from attaching to the VF NIC presented to this virtual machine. To work around this problem, attach the VF to the VM using the XML file with the Assignment with <hostdev> method. As a result, the virsh attach-device command succeeds without error. For more details about the difference between Assignment with <hostdev> and Assignment with <interface type='hostdev'> (SRIOV devices only), see PCI Passthrough of host network devices.


11.7. File systems and storage

The /boot file system cannot be placed on LVM

You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons:

  • On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition.
  • RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard.
  • The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS.

Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume.


LVM no longer allows creating volume groups with mixed block sizes

LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size.

To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file.
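The setting lives in the devices section of /etc/lvm/lvm.conf. This is a sketch; merge it into your existing file, and check each PV's logical block size first (device names below are examples):

```shell
# /etc/lvm/lvm.conf -- devices section (merge into your existing file):
# devices {
#     allow_mixed_block_sizes = 1
# }

# Check the logical block size of each candidate PV first; mixing 512 and
# 4096 byte devices is what this option permits (device names are examples):
# blockdev --getss /dev/sda
# blockdev --getss /dev/nvme0n1
```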


DM Multipath might fail to start when too many LUNs are connected

The multipathd service might time out and fail to start if too many logical units (LUNs) are connected to the system. The exact number of LUNs that causes the problem depends on several factors, including the number of devices, the response time of the storage array, the memory and CPU configuration, and system load.

To work around the problem, increase the timeout value in the multipathd unit file:

  1. Open the multipathd unit in the unit editor:

    # systemctl edit multipathd
  2. Enter the following configuration to override the timeout value:

    [Service]
    TimeoutSec=300

    Red Hat recommends increasing the value to 300 from the default 90, but you can also test other values above 90.

  3. Save the file in the editor.
  4. Reload systemd units to apply the change:

    # systemctl daemon-reload

As a result, multipathd can now successfully start with a larger number of LUNs.
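The override can also be created non-interactively as a systemd drop-in file. The sketch below writes it to a scratch directory so it can be inspected safely; on a real system the file belongs in /etc/systemd/system/multipathd.service.d/, and TimeoutSec is the systemd directive that covers both start and stop timeouts:

```shell
# Write the drop-in to a scratch directory for inspection; use
# /etc/systemd/system/multipathd.service.d/ on a real system.
dropin_dir="$(mktemp -d)/multipathd.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/timeout.conf" <<'EOF'
[Service]
TimeoutSec=300
EOF
cat "$dropin_dir/timeout.conf"
```

Follow with systemctl daemon-reload to apply the change.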


Limitations of LVM writecache

The writecache LVM caching method has the following limitations, which are not present in the cache method:

  • You cannot take a snapshot of a logical volume while the logical volume is using writecache.
  • You cannot attach or detach writecache while a logical volume is active.
  • When attaching writecache to an inactive logical volume, you must use a writecache block size that matches the existing file system block size.

    For details, see the lvmcache(7) man page.

  • You cannot resize a logical volume while writecache is attached to it.
  • You cannot use pvmove commands on devices that are used with writecache.
  • You cannot use logical volumes with writecache in combination with thin pools or VDO.

(JIRA:RHELPLAN-27987, BZ#1798631, BZ#1808012)

LVM mirror devices that store a LUKS volume sometimes become unresponsive

Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations.

To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage.

The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution.

To convert mirror devices to raid, see Converting a mirrored LVM device to a RAID1 device.


An NFS 4.0 patch can result in reduced performance under an open-heavy workload

Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. To work around this problem, use NFS version 4.1 or higher, which has been improved to grant delegations to clients in more cases. This allows clients to perform open operations locally, quickly, and safely.
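For example, the mount can be pinned to NFSv4.1 in /etc/fstab, or remounted manually; the server name and paths below are placeholders:

```shell
# /etc/fstab entry pinning the mount to NFS version 4.1:
# nfs-server.example.com:/export  /mnt/export  nfs  vers=4.1  0 0

# Or remount an existing mount manually:
# mount -t nfs -o vers=4.1 nfs-server.example.com:/export /mnt/export
```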


11.8. Dynamic programming languages, web and database servers

getpwnam() might fail when called by a 32-bit application

When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command.


nginx cannot load server certificates from hardware security tokens

The nginx web server supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. However, it is currently impossible to load server certificates from hardware security tokens through the PKCS#11 URI. To work around this problem, store server certificates on the file system.


php-fpm causes SELinux AVC denials when php-opcache is installed with PHP 7.2

When the php-opcache package is installed, the FastCGI Process Manager (php-fpm) causes SELinux AVC denials. To work around this problem, change the default configuration in the /etc/php.d/10-opcache.ini file to the following:

opcache.huge_code_pages=0

Note that this problem affects only the php:7.2 stream, not the php:7.3 one.


The mod_wsgi package name is missing when being installed as a dependency

With a change in mod_wsgi installation, described in BZ#1779705, the python3-mod_wsgi package no longer provides the name mod_wsgi. When installing the mod_wsgi module, you must specify the full package name. This change causes problems with dependencies of third-party packages.

If you try to install a third-party package that requires a dependency named mod_wsgi, an error similar to the following is returned:

 Problem: conflicting requests
  - nothing provides mod_wsgi needed by package-requires-mod_wsgi.el8.noarch

To work around this problem, choose one of the following:

  1. Rebuild the package (or ask the third-party vendor for a new build) to require the full package name python3-mod_wsgi.
  2. Create a meta package with the missing package name:

    1. Build your own empty meta package that provides the name mod_wsgi.
    2. Add the module_hotfixes=True line to the .repo configuration file of the repository that includes the meta package.
    3. Manually install python3-mod_wsgi.
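Option 2 can be implemented with a minimal RPM spec file; the package name and version below are arbitrary examples:

```shell
# mod_wsgi-compat.spec -- empty meta package (name and version are examples;
# build with: rpmbuild -bb mod_wsgi-compat.spec)
Name:           mod_wsgi-compat
Version:        1.0
Release:        1
Summary:        Compatibility meta package that provides mod_wsgi
License:        MIT
BuildArch:      noarch
Provides:       mod_wsgi
Requires:       python3-mod_wsgi

%description
Empty meta package that satisfies dependencies on the mod_wsgi name.

%files
```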


11.9. Compilers and development tools

Synthetic functions generated by GCC confuse SystemTap

GCC optimization can generate synthetic functions for partially inlined copies of other functions. Tools such as SystemTap and GDB cannot distinguish these synthetic functions from real functions. As a consequence, SystemTap places probes on both synthetic and real function entry points and, thus, registers multiple probe hits for a single real function call.

To work around this problem, modify SystemTap scripts to detect recursion and prevent placing of probes related to inlined partial functions.

This example script

probe kernel.function("can_nice").call { }

can be modified this way:

global in_can_nice%

probe kernel.function("can_nice").call {
  in_can_nice[tid()] ++;
  if (in_can_nice[tid()] > 1) { next }
  /* code for real probe handler */
}

probe kernel.function("can_nice").return {
  in_can_nice[tid()] --;
}
Note that this example script does not consider all possible scenarios, such as missed kprobes or kretprobes, or genuine intended recursion.


11.10. Identity Management

Changing /etc/nsswitch.conf requires a manual system reboot

Any change to the /etc/nsswitch.conf file, for example running the authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the /etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the System Security Services Daemon (SSSD) or winbind.


SSSD returns incorrect LDAP group membership for local users when the files domain is enabled

If the System Security Services Daemon (SSSD) serves users from the local files and the ldap_rfc2307_fallback_to_local_users attribute in the [domain/LDAP] section of the sssd.conf file is set to True, then the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the id local_user command does not return the user’s LDAP group membership. To work around this problem, disable the implicit files domain by adding

enable_files_domain = False

to the [sssd] section in the /etc/sssd/sssd.conf file.

As a result, id local_user returns correct LDAP group membership for local users.


SSSD does not correctly handle multiple certificate matching rules with the same priority

If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certmap(5) man page.


Private groups fail to be created with auto_private_groups = hybrid when multiple domains are defined

Private groups fail to be created with the option auto_private_groups = hybrid when multiple domains are defined and the hybrid option is used by any domain other than the first one. If an implicit files domain is defined along with an AD or LDAP domain in the sssd.conf file and is not marked as MPG_HYBRID, then SSSD fails to create a private group for a user who has uid=gid and the group with this gid does not exist in AD or LDAP.

The sssd_nss responder checks the value of the auto_private_groups option in the first domain only. As a consequence, in setups where multiple domains are configured, which includes the default setup on RHEL 8, the auto_private_groups option has no effect.

To work around this problem, set enable_files_domain = false in the [sssd] section of sssd.conf. As a result, sssd does not add a domain with id_provider=files at the start of the list of active domains, and this bug does not occur.


python-ply is not FIPS compatible

The YACC module of the python-ply package uses the MD5 hashing algorithm to generate the fingerprint of a YACC signature. However, FIPS mode blocks the use of MD5, which is only allowed in non-security contexts. As a consequence, python-ply is not FIPS compatible. On a system in FIPS mode, all calls to ply.yacc.yacc() fail with the error message:

UnboundLocalError: local variable 'sig' referenced before assignment

The problem affects python-pycparser and some use cases of python-cffi. To work around this problem, modify line 2966 of the file /usr/lib/python3.6/site-packages/ply/, replacing sig = md5() with sig = md5(usedforsecurity=False). As a result, python-ply can be used in FIPS mode.
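The edit can be applied with sed rather than by hand. The sketch below demonstrates the substitution on a scratch file; on a real system, point target at the ply yacc module file referenced above:

```shell
# Demonstrate the substitution on a scratch file; on a real system set
# target to the ply yacc module file mentioned in the text.
target=$(mktemp)
printf 'sig = md5()\n' > "$target"
sed -i 's/sig = md5()/sig = md5(usedforsecurity=False)/' "$target"
cat "$target"
```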


FreeRADIUS silently truncates Tunnel-Passwords longer than 249 characters

If a Tunnel-Password is longer than 249 characters, the FreeRADIUS service silently truncates it. This may lead to unexpected password incompatibilities with other systems.

To work around the problem, choose a password that is 249 characters or fewer.


Installing KRA fails if all KRA members are hidden replicas

The ipa-kra-install utility fails on a cluster where the Key Recovery Authority (KRA) is already present if the first KRA instance is installed on a hidden replica. Consequently, you cannot add further KRA instances to the cluster.

To work around this problem, unhide the hidden replica that has the KRA role before you add new KRA instances. You can hide it again when ipa-kra-install completes successfully.


Directory Server warns about missing attributes in the schema if those attributes are used in a search filter

If you set the nsslapd-verify-filter-schema parameter to warn-invalid, Directory Server processes search operations with attributes that are not defined in the schema and logs a warning. With this setting, Directory Server returns requested attributes in search results, regardless of whether the attributes are defined in the schema.

A future version of Directory Server will change the default setting of nsslapd-verify-filter-schema to enforce stricter checks. The new default will warn about attributes that are missing in the schema, and reject requests or return only partial results.


ipa-healthcheck-0.4 does not obsolete older versions of ipa-healthcheck

The Healthcheck tool has been split into two sub-packages: ipa-healthcheck and ipa-healthcheck-core. However, only the ipa-healthcheck-core sub-package is correctly set to obsolete older versions of ipa-healthcheck. As a result, updating Healthcheck only installs ipa-healthcheck-core and the ipa-healthcheck command does not work after the update.

To work around this problem, manually install the updated ipa-healthcheck sub-package by using the yum install ipa-healthcheck-0.4 command.


11.11. Desktop

Limitations of the Wayland session

With Red Hat Enterprise Linux 8, the GNOME environment and the GNOME Display Manager (GDM) use Wayland as the default session type instead of the X11 session, which was used with the previous major version of RHEL.

The following features are currently unavailable or do not work as expected under Wayland:

  • X11 configuration utilities, such as xrandr, do not work under Wayland due to its different approach to handling resolutions, rotations, and layout. You can configure the display features using GNOME settings.
  • Screen recording and remote desktop require applications to support the portal API on Wayland. Certain legacy applications do not support the portal API.
  • Pointer accessibility is not available on Wayland.
  • No clipboard manager is available.
  • GNOME Shell on Wayland ignores keyboard grabs issued by most legacy X11 applications. You can enable an X11 application to issue keyboard grabs using the /org/gnome/mutter/wayland/xwayland-grab-access-rules GSettings key. By default, GNOME Shell on Wayland enables the following applications to issue keyboard grabs:

    • GNOME Boxes
    • Vinagre
    • Xephyr
    • virt-manager, virt-viewer, and remote-viewer
    • vncviewer
  • Wayland inside guest virtual machines (VMs) has stability and performance problems. RHEL automatically falls back to the X11 session when running in a VM.

If you upgrade to RHEL 8 from a RHEL 7 system where you used the X11 GNOME session, your system continues to use X11. The system also automatically falls back to X11 when the following graphics drivers are in use:

  • The proprietary NVIDIA driver
  • The cirrus driver
  • The mga driver
  • The aspeed driver

You can disable the use of Wayland manually:

  • To disable Wayland in GDM, set the WaylandEnable=false option in the /etc/gdm/custom.conf file.
  • To disable Wayland in the GNOME session, select the legacy X11 option by using the cogwheel menu on the login screen after entering your login name.
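For the GDM option, the setting goes in the [daemon] section of /etc/gdm/custom.conf:

```shell
# /etc/gdm/custom.conf -- force the X11 session instead of Wayland
[daemon]
WaylandEnable=false
```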

For more details on Wayland, see


Drag-and-drop does not work between desktop and applications

Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release.


Disabling flatpak repositories from Software Repositories is not possible

Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility.


Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts

When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log:

The guest operating system reported that it failed with the following error code: 0x1E

This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host.


System crash may result in fadump configuration loss

This issue is observed on systems where firmware-assisted dump (fadump) is enabled, and the boot partition is located on a journaling file system such as XFS. A system crash might cause the boot loader to load an older initrd that does not have the dump capturing support enabled. Consequently, after recovery, the system does not capture the vmcore file, which results in fadump configuration loss.

To work around this problem:

  • If /boot is a separate partition, perform the following:

    1. Restart the kdump service
    2. Run the following commands as the root user, or using a user account with CAP_SYS_ADMIN rights:

      # fsfreeze -f
      # fsfreeze -u
  • If /boot is not a separate partition, reboot the system.


11.12. Graphics infrastructures

Unable to run graphical applications using sudo command

When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication.

To work around this problem, use the sudo -E command to run graphical applications as the root user.


radeon fails to reset hardware correctly

The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail.

To work around this problem, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file:

dracut_args --omit-drivers "radeon"
force_rebuild 1

Restart the machine and the kdump service. After kdump starts, you can remove the force_rebuild 1 line from the configuration file.

Note that in this scenario, no graphics will be available during kdump, but kdump will work successfully.


Multiple HDR displays on a single MST topology may not power on

On systems using NVIDIA Turing GPUs with the nouveau driver, plugging multiple HDR-capable monitors into a DisplayPort hub (such as a laptop dock) may result in a failure to turn on all of the displays, even though they turned on in previous RHEL releases. This happens because the system erroneously determines that there is not enough bandwidth on the hub to support all of the displays.


11.13. The web console

Unprivileged users can access the Subscriptions page

If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message Cockpit had an unexpected internal error.

To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox.


11.14. Virtualization

Low GUI display performance in RHEL 8 virtual machines on a Windows Server 2019 host

When using RHEL 8 as a guest operating system in graphical mode on a Windows Server 2019 host, the GUI display performance is low, and connecting to a console output of the guest currently takes significantly longer than expected.

This is a known issue on Windows 2019 hosts and is pending a fix by Microsoft. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host.


Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL

Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely.

To work around this problem, use virtio-gpu instead of qxl as the GPU device for VMs that use Wayland.
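In the VM's libvirt XML, edited with virsh edit, this means replacing the qxl video model with virtio. A sketch of the relevant element:

```xml
<!-- Guest XML video device: use the virtio model instead of qxl. -->
<video>
  <model type='virtio'/>
</video>
```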


virsh iface-* commands do not work consistently

Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy, frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications.


RHEL 8 virtual machines sometimes cannot boot on Witherspoon hosts

RHEL 8 virtual machines (VMs) that use the pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU.

Attempting to boot such a VM instead generates the following error message:

qemu-kvm: Requested safe indirect branch capability level not supported by kvm

To work around this problem, configure the virtual machine’s XML configuration as follows:

<domain type='qemu' xmlns:qemu=''>
  ...
  <qemu:commandline>
    <qemu:arg value='-machine'/>
    <qemu:arg value='cap-ibs=workaround'/>
  </qemu:commandline>
</domain>

(BZ#1732726, BZ#1751054)

IBM POWER virtual machines do not work correctly with empty NUMA nodes

Currently, when an IBM POWER virtual machine (VM) running on a RHEL 8 host is configured with a NUMA node that uses zero memory (memory='0') and zero CPUs, the VM cannot start. Therefore, Red Hat strongly recommends not using IBM POWER VMs with such empty NUMA nodes on RHEL 8.


SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC

When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough.


Disk identifiers in RHEL 8.2 VMs may change on VM reboot

When using a virtual machine (VM) with RHEL 8.2 as the guest operating system on a Hyper-V hypervisor, the device identifiers for the VM’s virtual disks in some cases change when the VM reboots. For example, a disk originally identified as /dev/sda may become /dev/sdb. As a consequence, the VM might fail to boot, and scripts that reference disks of the VM might stop working.

To avoid this issue, Red Hat strongly recommends setting persistent names for the disks in the VM. For detailed information, see the Microsoft Azure documentation:


Virtual machines sometimes fail to start when using many virtio-blk disks

Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM’s guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error.


Attaching LUN devices to virtual machines using virtio-blk does not work

The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller.

Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun'.
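A sketch of the corresponding disk element in the guest XML; the source device path is an example:

```xml
<!-- Pass the physical disk through with device='disk' instead of
     device='lun' (the source device path is an example). -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```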


11.15. Supportability

redhat-support-tool does not work with the FUTURE crypto policy

Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, the redhat-support-tool utility currently does not work with this policy level.

To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API.


11.16. Containers

UDICA is not expected to work with 1.0 stable stream

UDICA, the tool to generate SELinux policies for containers, is not expected to work with containers that are run via podman 1.0.x in the container-tools:1.0 module stream.


Notes on FIPS support with Podman

The Federal Information Processing Standard (FIPS) requires certified modules to be used. Previously, Podman correctly installed certified modules in containers by enabling the proper flags at startup. However, in this release, Podman does not properly set up the additional application helpers normally provided by the system in the form of the FIPS system-wide crypto-policy. Although setting the system-wide crypto-policy is not required by the certified modules, it does improve the ability of applications to use crypto modules in compliant ways. To work around this problem, change your container to run the update-crypto-policies --set FIPS command before any other application code is executed.
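One way to apply the workaround is an entrypoint wrapper script that sets the policy before handing off to the application. The sketch below only writes and syntax-checks such a script; in a container image it would be installed and set as the entrypoint (the file location is an example):

```shell
# Write the wrapper to a scratch file; in a container image it would be
# installed as the entrypoint (location is an example).
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
# Set up the FIPS system-wide crypto-policy before any application code runs.
update-crypto-policies --set FIPS
exec "$@"
EOF
sh -n "$wrapper" && echo "wrapper syntax OK"
```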