Chapter 8. Known issues
This part describes known issues in Red Hat Enterprise Linux 8.
8.1. Installer and image creation
authconfig Kickstart commands require the AppStream repository
The authselect-compat package is required by the authconfig Kickstart command during installation. Without this package, the installation fails if authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository.
To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the
authselect Kickstart command during installation.
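For illustration, a Kickstart excerpt along these lines makes the AppStream repository available and uses authselect instead of authconfig. The repository URL and the authselect profile below are placeholder values, not taken from this document:

```
# Hypothetical Kickstart excerpt; the baseurl and the profile name are placeholders.
repo --name=AppStream --baseurl=http://example.com/rhel8/AppStream
# Use the supported authselect command instead of authconfig:
authselect select sssd
```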
reboot --kexec and
inst.kexec commands do not provide a predictable system state
Performing a RHEL installation with the
reboot --kexec Kickstart command or the
inst.kexec kernel boot parameter does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.
Note that the
kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.
Anaconda enforces only low minimal resource requirements during installation
Anaconda initiates the installation on systems that have only the minimal required resources available and does not display a warning message beforehand about the resources required to perform the installation successfully. As a result, the installation can fail and the output errors do not provide clear messages for possible debugging and recovery. To work around this problem, make sure that the system has the minimal resource settings required for installation: 2 GB of memory on PPC64(LE) and 1 GB on x86_64. As a result, it should be possible to perform a successful installation.
Installation fails when using the
reboot --kexec command
The RHEL 8 installation fails when using a Kickstart file that contains the
reboot --kexec command. To avoid the problem, use the
reboot command instead of
reboot --kexec in your Kickstart file.
Support secure boot for s390x in the installer
RHEL 8.1 provides support for preparing boot disks for use in IBM Z environments that enforce the use of secure boot. The capabilities of the server and Hypervisor used during installation determine if the resulting on-disk format contains secure boot support or not. There is no way to influence the on-disk format during installation.
Consequently, if you install RHEL 8.1 in an environment that supports secure boot, the system is unable to boot when moved to an environment lacking secure boot support, as it is done in some fail-over scenarios.
To work around this problem, you need to configure the
zipl tool that controls the on-disk boot format.
zipl can be configured to write the previous on-disk format even if the environment in which it is run supports secure boot. Perform the following manual steps as root user once the installation of RHEL 8.1 is completed:
Edit the /etc/zipl.conf configuration file.
Add a line containing "secure=0" to the section labelled "defaultboot".
Example contents of the zipl.conf file after the change:
[defaultboot]
defaultauto
prompt=1
timeout=5
target=/boot
secure=0
Run the zipl tool without parameters.
After performing these steps, the on-disk format of the RHEL 8.1 boot disk will no longer contain secure boot support. As a result, the installation can be booted in environments that lack secure boot support.
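The edit above can be scripted. The following is an illustrative sketch that builds a sample zipl.conf in the current directory, inserts secure=0 directly after the [defaultboot] section header, and verifies the result; on a real system you would edit /etc/zipl.conf as root and then run the zipl tool with no parameters:

```shell
# Illustrative only: operate on a local sample file, not /etc/zipl.conf.
printf '[defaultboot]\ndefaultauto\nprompt=1\ntimeout=5\ntarget=/boot\n' > zipl.conf
# Append secure=0 immediately after the [defaultboot] header line.
sed -i '/^\[defaultboot\]/a secure=0' zipl.conf
# Confirm the setting is present.
grep '^secure=0' zipl.conf && echo "secure=0 is set"
```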
RHEL 8 initial setup cannot be performed via SSH
Currently, the RHEL 8 initial setup interface does not display when logged in to the system using SSH. As a consequence, it is impossible to perform the initial setup on a RHEL 8 machine managed via SSH. To work around this problem, perform the initial setup in the main system console (ttyS0) and, afterwards, log in using SSH.
The default value for the
secure= boot option is not set to auto
Currently, the default value for the
secure= boot option is not set to auto. As a consequence, the secure boot feature is not available because the option defaults to disabled. To work around this problem, manually set
secure=auto in the
[defaultboot] section of the
/etc/zipl.conf file. As a result, the secure boot feature is made available. For more information, see the
zipl.conf man page.
Copying the content of the
Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files
During local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the
* in the
cp <path>/\* <mounted partition>/dir command fails to copy the
.treeinfo and .discinfo files. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the
anaconda.log file is the only record of the problem.
To work around the problem, copy the missing
.treeinfo and .discinfo files to the partition.
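The underlying cause is ordinary shell globbing: * does not match hidden files. A scratch demonstration, using throwaway directory names invented for this example:

```shell
# Create a stand-in source tree with one visible and one hidden file.
mkdir -p iso mounted
touch iso/vmlinuz iso/.discinfo
# The glob copies vmlinuz only; the hidden .discinfo is not matched.
cp iso/* mounted/
# Copy the missing hidden file explicitly.
cp iso/.discinfo mounted/
ls -A mounted
```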
Self-signed HTTPS server cannot be used in Kickstart installation
Currently, the installer fails to install from a self-signed HTTPS server when the installation source is specified in the Kickstart file and the
--noverifyssl option is used:
url --url=https://SERVER/PATH --noverifyssl
To work around this problem, append the
inst.noverifyssl parameter to the kernel command line when starting the Kickstart installation.
8.2. Software management
yum repolist ends on first unavailable repository with skip_if_unavailable=false
The repository configuration option
skip_if_unavailable is by default set as follows:
skip_if_unavailable=false
This setting forces the
yum repolist command to end on the first unavailable repository with an error and exit status 1. Consequently,
yum repolist does not continue listing available repositories.
Note that it is possible to override this setting in each repository's *.repo file.
However, if you want to keep the default settings, you can work around the problem by using
yum repolist with the following option:
--setopt=*.skip_if_unavailable=True
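A per-repository override can also be placed in the repository definition itself. The following is a sketch with a hypothetical repository ID and URL:

```
[custom-repo]
name=Custom repository (hypothetical example)
baseurl=http://example.com/rhel8/custom
enabled=1
skip_if_unavailable=1
```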
8.3. Shells and command-line tools
Wayland protocol cannot be forwarded to remote display servers
In Red Hat Enterprise Linux 8.1, most applications use the Wayland protocol by default instead of the X11 protocol. As a consequence, the ssh server cannot forward the applications that use the Wayland protocol but is able to forward the applications that use the X11 protocol to a remote display server.
To work around this problem, set the environment variable
GDK_BACKEND=x11 before starting the applications. As a result, the application can be forwarded to remote display servers.
systemd-resolved.service fails to start on boot
The systemd-resolved service occasionally fails to start on boot. If this happens, restart the service manually after the boot finishes by using the following command:
# systemctl start systemd-resolved
However, the failure of
systemd-resolved on boot does not impact any other services.
8.4. Infrastructure services
Support for DNSSEC in dnsmasq
The dnsmasq package introduces Domain Name System Security Extensions (DNSSEC) support for verifying hostname information received from root servers.
Note that DNSSEC validation in dnsmasq is not compliant with FIPS 140-2. Do not enable DNSSEC in dnsmasq on Federal Information Processing Standard (FIPS) systems, and use the compliant validating resolver as a forwarder on the localhost.
8.5. Security
libselinux-python is available only through its module
The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason,
libselinux-python is no longer available in the default RHEL 8 repositories through the
dnf install libselinux-python command.
To work around this problem, enable both the
libselinux-python and python27 modules, and install the
libselinux-python package and its dependencies with the following commands:
# dnf module enable libselinux-python
# dnf install libselinux-python
Alternatively, you can install
libselinux-python using its install profile with a single command:
# dnf module install libselinux-python:2.8/common
As a result, you can install
libselinux-python using the respective module.
udica processes UBI 8 containers only when started with --env container=podman
The Red Hat Universal Base Image 8 (UBI 8) containers set the
container environment variable to the
oci value instead of the
podman value. This prevents udica from processing the container.
To work around this problem, start a UBI 8 container using a
podman command with the
--env container=podman parameter. As a result,
udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround.
Removing the rpm-plugin-selinux package leads to removing all
selinux-policy packages from the system
Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all
selinux-policy packages from the system. Repeated installation of the
rpm-plugin-selinux package then installs the
selinux-policy-minimum SELinux policy, even if the
selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the
rpm-plugin-selinux package.
To work around this problem:
Manually install the missing selinux-policy-targeted package.
Fix the /etc/selinux/config file so that the policy is equal to SELINUXTYPE=targeted.
Enter the load_policy -i command.
As a result, SELinux is enabled and running the same policy as before.
SELinux prevents systemd-journal-gatewayd from calling
newfstatat() on shared memory files created by corosync
The SELinux policy does not contain a rule that allows the
systemd-journal-gatewayd daemon to access files created by the
corosync service. As a consequence, SELinux denies the
systemd-journal-gatewayd daemon the ability to call the
newfstatat() function on shared memory files created by corosync.
To work around this problem, create a local policy module with an allow rule which enables the described scenario. See the
audit2allow(1) man page for more information on generating SELinux policy allow and dontaudit rules. As a result of the previous workaround,
systemd-journal-gatewayd can call the function on shared memory files created by
corosync with SELinux in enforcing mode.
Negative effects of the default logging setup on performance
The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when
systemd-journald is running with rsyslog.
See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information.
Parameter not known errors in the
rsyslog output with
In the rsyslog output, an unexpected bug occurs in configuration processing errors using the
config.enabled directive. As a consequence,
parameter not known errors are displayed while using the
config.enabled directive except for the include() statements.
To work around this problem, set
config.enabled=on or use include() statements.
rsyslog priority strings do not work correctly
Support for the GnuTLS priority string for
imtcp, which allows fine-grained control over encryption, is not complete. Consequently, certain priority strings do not work properly in rsyslog.
To work around this problem, use only correctly working priority strings. As a result, current configurations must be limited to the strings that work correctly.
Connections to servers with SHA-1 signatures do not work with GnuTLS
SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy.
TLS 1.3 does not work in NSS in FIPS mode
TLS 1.3 is not supported on systems working in FIPS mode. As a result, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode.
To enable the connections, disable the system’s FIPS mode or enable support for TLS 1.2 in the peer.
OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures
The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures.
To work around the problem, add the following lines after the
.include line at the end of the
crypto_policy section in the /etc/pki/tls/openssl.cnf file:
SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384
MaxProtocol = TLSv1.2
As a result, a TLS connection can be established in the described scenario.
OpenSSL TLS library does not detect if the
PKCS#11 token supports creation of
raw RSA or RSA-PSS signatures
The TLS 1.3 protocol requires support for
RSA-PSS signatures. If the
PKCS#11 token does not support
raw RSA or
RSA-PSS signatures, server applications that use the OpenSSL
TLS library fail to work with the
RSA key if it is held by the
PKCS#11 token. As a result,
TLS communication will fail.
To work around this problem, configure the server or client to use the
TLS 1.2 version as the highest
TLS protocol version available.
OpenSSL generates a malformed
status_request extension in the
CertificateRequest message in TLS 1.3
OpenSSL servers send a malformed
status_request extension in the
CertificateRequest message if support for the
status_request extension and client certificate-based authentication are enabled. In such a case, OpenSSL does not interoperate with implementations compliant with the
RFC 8446 protocol. As a result, clients that properly verify extensions in the CertificateRequest message abort connections with the OpenSSL server. To work around this problem, disable support for the TLS 1.3 protocol on either side of the connection or disable support for
status_request on the OpenSSL server. This will prevent the server from sending malformed messages.
ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode
The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the
ssh-keyscan utility from retrieving RSA keys of servers operating in that mode.
To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the
/etc/ssh/ssh_host_rsa_key.pub file on the server.
scap-security-guide PCI-DSS remediation of Audit rules does not work properly
The scap-security-guide package contains a combination of remediation and a check that can result in one of the following scenarios:
- incorrect remediation of Audit rules
- scan evaluation containing false positives where passed rules are marked as failed
Consequently, during the RHEL 8.1 installation process, scanning of the installed system reports some Audit rules as either failed or errored.
To work around this problem, follow the instructions in the RHEL-8.1 workaround for remediating and scanning with the scap-security-guide PCI-DSS profile Knowledgebase article.
Certain sets of interdependent rules in SSG can fail
SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules.
A utility for security and compliance scanning of containers is not available
In Red Hat Enterprise Linux 7, the
oscap-docker utility can be used for scanning of Docker containers based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related OpenSCAP commands are not available.
To work around this problem, see the Using OpenSCAP for scanning containers in RHEL 8 article on the Customer Portal. As a result, you can use only an unsupported and limited way for security and compliance scanning of containers in RHEL 8 at the moment.
OpenSCAP does not provide offline scanning of virtual machines and containers
The OpenSCAP codebase caused certain RPM probes to fail to scan VM and container file systems in offline mode. For that reason, the following tools were removed from the
openscap-utils package:
oscap-vm and
oscap-chroot. Also, the
openscap-containers package was completely removed.
rpmverifypackage does not work correctly
The chroot system call is called twice by the
rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content.
To work around this problem, do not use the
rpmverifypackage_test OVAL test in your content or use only the content from the
scap-security-guide package where
rpmverifypackage_test is not used.
SCAP Workbench fails to generate results-based remediations from tailored profiles
The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool:
Error generating remediation role .../remediation.sh: Exit code of oscap was 1: [output truncated]
To work around this problem, use the
oscap command with the
--tailoring-file option.
OSCAP Anaconda Addon does not install all packages in text mode
The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation.
To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the
%packages section in your Kickstart file.
As a result, without one of the described workarounds, packages that are required by the security policy profile are not installed during RHEL installation, and the installed system is not compliant with the given security policy profile.
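The Kickstart part of the workaround can be sketched as follows. The package names below are hypothetical examples of what a profile might require, not a list taken from any specific profile:

```
%packages
@^minimal-environment
# Hypothetical examples of packages a security policy profile might require:
aide
openscap-scanner
scap-security-guide
%end
```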
OSCAP Anaconda Addon does not correctly handle customized profiles
The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section.
To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation.
8.6. Networking
The formatting of the verbose output of
arptables now matches the format of the utility on RHEL 7
In RHEL 8, the
iptables-arptables package provides an
nftables-based replacement of the
arptables utility. Previously, the verbose output of
arptables separated counter values only with a comma, while
arptables on RHEL 7 separated the described output with both a space and a comma. As a consequence, if you used scripts created on RHEL 7 that parsed the output of the
arptables -v -L command, you had to adjust these scripts. This incompatibility has been fixed. As a result,
arptables on RHEL 8.1 now also separates counter values with both a space and a comma.
nftables does not support multi-dimensional IP set types
The nftables packet-filtering framework does not support set types with concatenations and intervals. Consequently, you cannot use multi-dimensional IP set types, such as hash:net,port, with nftables.
To work around this problem, use the
iptables framework with the
ipset tool if you require multi-dimensional IP set types.
IPsec network traffic fails during IPsec offloading when GRO is disabled
IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails.
To work around this problem, keep GRO enabled on the device.
8.7. Kernel
The i40iw module does not load automatically on boot
Due to many i40e NICs not supporting iWarp and the i40iw module not fully supporting suspend/resume, this module is not automatically loaded by default to ensure suspend/resume works properly. To work around this problem, manually edit the
/lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automatic loading of the i40iw module.
Also note that if there is another RDMA device installed with an i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module.
Network interface is renamed to kdump-<interface-name> when
fadump is used
When firmware-assisted dump (
fadump) is utilized to capture a vmcore and store it to a remote machine using SSH or NFS protocol, the network interface is renamed to
kdump-<interface-name> if the <interface-name> is generic, for example, eth# or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk (
initrd) add the kdump- prefix to the network interface name to secure persistent naming. The same
initrd is used also for a regular boot, so the interface name is changed for the production kernel too.
Systems with a large amount of persistent memory experience delays during the boot process
Systems with a large amount of persistent memory take a long time to boot because the initialization of the memory is serialized. Consequently, if there are persistent memory file systems listed in the
/etc/fstab file, the system might timeout while waiting for devices to become available. To work around this problem, configure the
DefaultTimeoutStartSec option in the
/etc/systemd/system.conf file to a sufficiently large value.
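A minimal sketch of the setting in /etc/systemd/system.conf; the 600-second value below is an assumed example, and the right value depends on how long your persistent memory takes to initialize:

```
[Manager]
# Example value only; size the timeout to your system's initialization time.
DefaultTimeoutStartSec=600s
```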
KSM sometimes ignores NUMA memory policies
When the kernel shared memory (KSM) feature is enabled with the
merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies.
To work around this problem, disable KSM or set the
merge_across_nodes parameter to
0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected.
The system enters the emergency mode at boot-time when
fadump is enabled
The system enters the emergency mode when
fadump is enabled together with the
dracut squash module in the
initramfs scheme, because the
systemd manager fails to fetch the mount information and configure the LV partition to mount. To work around this problem, add the following kernel command line parameter
rd.lvm.lv=<VG>/<LV> to discover and mount the failed LV partition appropriately. As a result, the system will boot successfully in the described scenario.
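As a sketch, the parameter can be made persistent in the GRUB defaults file; the rhel/root volume group and logical volume names below are placeholders:

```
# /etc/default/grub (excerpt); "rhel/root" is a placeholder VG/LV pair.
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root"
```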
irqpoll in the kdump kernel command line causes a vmcore generation failure
Due to an existing underlying problem with the
nvme driver on the 64-bit ARM architectures running on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails if the
irqpoll kdump command line argument is provided to the first kernel. Consequently, no vmcore is dumped in the /var/crash/ directory after a kernel crash. To work around this problem:
Append irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file.
Restart the kdump service by running the
systemctl restart kdump command.
As a result, the first kernel correctly boots and the vmcore is expected to be captured upon the kernel crash.
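A sketch of the edited key; the other values shown are illustrative, and you should append irqpoll to whatever your file already lists:

```
# /etc/sysconfig/kdump (excerpt); append irqpoll to the existing values.
KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug=FZPU irqpoll"
```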
Debug kernel fails to boot in crash capture environment in RHEL 8
Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment.
8.8. Hardware enablement
The HP NMI watchdog in some cases does not generate a crash dump
The hpwdt driver for the HP NMI watchdog is sometimes not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the
perfmon driver. As a consequence,
hpwdt in some cases cannot call a panic to generate a crash dump.
Installing RHEL 8.1 on a test system configured with a QL41000 card results in a kernel panic
While installing RHEL 8.1 on a test system configured with a
QL41000 card, the system is unable to handle a kernel NULL pointer dereference at address
000000000000003c. As a consequence, a kernel panic error occurs. There is no workaround available for this issue.
cxgb4 driver causes crash in the kdump kernel
The kdump kernel crashes while trying to save information in the
vmcore file. Consequently, the
cxgb4 driver prevents the
kdump kernel from saving a core for later analysis. To work around this problem, add the "novmcoredd" parameter to the kdump kernel command line to allow saving core files.
8.9. File systems and storage
Certain SCSI drivers might sometimes use an excessive amount of memory
Certain SCSI drivers use a larger amount of memory than in RHEL 7. In certain cases, such as vPort creation on a Fibre Channel host bus adapter (HBA), the memory usage might be excessive, depending upon the system configuration.
The increased memory usage is caused by memory preallocation in the block layer. Both the multiqueue block device scheduling (BLK-MQ) and the multiqueue SCSI stack (SCSI-MQ) preallocate memory for each I/O request in RHEL 8, leading to the increased memory usage.
VDO cannot suspend until UDS has finished rebuilding
When a Virtual Data Optimizer (VDO) volume starts after an unclean system shutdown, it rebuilds the Universal Deduplication Service (UDS) index. If you try to suspend the VDO volume using the
dmsetup suspend command while the UDS index is rebuilding, the suspend command might become unresponsive. The command finishes only after the rebuild is done.
The unresponsiveness is noticeable only with VDO volumes that have a large UDS index, which causes the rebuild to take a longer time.
An NFS 4.0 patch can result in reduced performance under an open-heavy workload
Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. To work around this problem, it might help to use NFS version 4.1 or higher, which has been improved to grant delegations to clients in more cases, allowing clients to perform open operations locally, quickly, and safely.
8.10. Dynamic programming languages, web and database servers
nginx cannot load server certificates from hardware security tokens
The nginx web server supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. However, it is currently impossible to load server certificates from hardware security tokens through the PKCS#11 URI. To work around this problem, store server certificates on the file system.
php-fpm causes SELinux AVC denials when
php-opcache is installed with PHP 7.2
When the php-opcache package is installed, the FastCGI Process Manager (
php-fpm) causes SELinux AVC denials. To work around this problem, change the default configuration in the
/etc/php.d/10-opcache.ini file to the following:
Note that this problem affects only the
php:7.2 stream, not the php:7.3 stream.
8.11. Compilers and development tools
ltrace tool does not report function calls
Because of improvements to binary hardening applied to all RHEL components, the
ltrace tool can no longer detect function calls in binary files coming from RHEL components. As a consequence,
ltrace output is empty because it does not report any detected calls when used on such binary files. There is no workaround currently available.
As a note,
ltrace can correctly report calls in custom binary files built without the respective hardening flags.
8.12. Identity Management
AD users with expired accounts can be allowed to log in when using GSSAPI authentication
The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the global catalog by default. As a result, users with expired accounts can log in when using GSSAPI authentication. To work around this problem, the global catalog support can be disabled by specifying
ad_enable_gc=False in the
sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication.
Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count.
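A minimal sketch of the setting in sssd.conf; the example.com domain name is a placeholder:

```
[domain/example.com]
# example.com is a placeholder for your AD domain's SSSD domain section.
ad_enable_gc = False
```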
cert-fix utility with the
--agent-uid pkidbuser option breaks Certificate System
Using the cert-fix utility with the
--agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system.
Changing /etc/nsswitch.conf requires a manual system reboot
Any change to the
/etc/nsswitch.conf file, for example running the
authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the
/etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the
System Security Services Daemon (SSSD) or winbind.
No information about required DNS records displayed when enabling support for AD trust in IdM
When enabling support for Active Directory (AD) trust in a Red Hat Enterprise Linux Identity Management (IdM) installation with external DNS management, no information about required DNS records is displayed. Forest trust to AD is not successful until the required DNS records are added. To work around this problem, run the ipa dns-update-system-records --dry-run command to obtain a list of all DNS records required by IdM. When external DNS for the IdM domain defines the required DNS records, establishing forest trust to AD is possible.
SSSD returns incorrect LDAP group membership for local users
If the System Security Services Daemon (SSSD) serves users from the local files, the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the
id local_user command does not return the user’s LDAP group membership. To work around the problem, either revert the order of the databases where the system is looking up the group membership of users in the
/etc/nsswitch.conf file, replacing
sss files with
files sss, or disable the implicit
files domain by adding
enable_files_domain=False to the
[sssd] section in the
/etc/sssd/sssd.conf file.
As a result,
id local_user returns correct LDAP group membership for local users.
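The first variant of the workaround, reversing the lookup order, can be sketched as a one-line change:

```
# /etc/nsswitch.conf (excerpt): query local files before SSSD.
group: files sss
```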
Default PAM settings for
systemd-user have changed in RHEL 8 which may influence SSSD behavior
The Pluggable authentication modules (PAM) stack has changed in Red Hat Enterprise Linux 8. For example, the
systemd user session now starts a PAM conversation using the
systemd-user PAM service. This service now recursively includes the
system-auth PAM service, which may include the
pam_sss.so interface. This means that the SSSD access control is always called.
Be aware of the change when designing access control rules for RHEL 8 systems. For example, you can add the
systemd-user service to the allowed services list.
Please note that for some access control mechanisms, such as IPA HBAC or AD GPOs, the
systemd-user service has been added to the allowed services list by default and you do not need to take any action.
SSSD does not correctly handle multiple certificate matching rules with the same priority
If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the
| (or) operator. For examples of certificate matching rules, see the sss-certmap(5) man page.
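In LDAP filter syntax, the | operator wraps the individual filters in an outer expression. A generic sketch with placeholder attribute filters invented for this example:

```
# Two hypothetical per-rule filters:
#   (department=engineering)
#   (department=operations)
# Combined into a single rule's filter with the LDAP OR operator:
(|(department=engineering)(department=operations))
```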
Private groups fail to be created with auto_private_group = hybrid when multiple domains are defined
Private groups fail to be created with the option auto_private_group = hybrid when multiple domains are defined and the hybrid option is used by any domain other than the first one. If an implicit files domain is defined along with an AD or LDAP domain in the
sssd.conf file and is not marked as MPG_HYBRID, SSSD fails to create a private group for a user who has uid=gid when the group with this GID does not exist in AD or LDAP.
The sssd_nss responder checks for the value of the
auto_private_groups option in the first domain only. As a consequence, in setups where multiple domains are configured, which includes the default setup on RHEL 8, the
auto_private_groups option has no effect.
To work around this problem, set
enable_files_domain = false in the [sssd] section of
sssd.conf. As a result, SSSD does not add a domain with
id_provider=files at the start of the list of active domains, and this bug does not occur.
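The workaround corresponds to the following fragment of sssd.conf:

```
# /etc/sssd/sssd.conf (fragment)
[sssd]
enable_files_domain = false
```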
python-ply is not FIPS compatible
The YACC module of the
python-ply package uses the MD5 hashing algorithm to generate the fingerprint of a YACC signature. However, FIPS mode blocks the use of MD5, which is only allowed in non-security contexts. As a consequence, python-ply is not FIPS compatible. On a system in FIPS mode, all calls to
ply.yacc.yacc() fail with the error message:
"UnboundLocalError: local variable 'sig' referenced before assignment"
The problem affects
python-pycparser and some use cases of
python-cffi. To work around this problem, modify line 2966 of the YACC module file, replacing
sig = md5() with
sig = md5(usedforsecurity=False). As a result,
python-ply can be used in FIPS mode.
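The effect of the change can be sketched outside python-ply as well. The usedforsecurity keyword is available in upstream Python 3.9+ and in RHEL's patched interpreters; it declares that the digest is a fingerprint rather than a security function, so FIPS mode permits it:

```python
import hashlib

# Plain hashlib.md5() is blocked in FIPS mode because MD5 is not an
# approved algorithm; usedforsecurity=False marks this digest as a
# non-security fingerprint, which FIPS policy allows.
sig = hashlib.md5(b"grammar signature", usedforsecurity=False)
print(sig.hexdigest())  # 32 hexadecimal characters
```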
Drag-and-drop does not work between desktop and applications
Due to a bug in the
gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release.
Disabling flatpak repositories from Software Repositories is not possible
Currently, it is not possible to disable or remove
flatpak repositories in the Software Repositories tool in the GNOME Software utility.
Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts
When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log:
The guest operating system reported that it failed with the following error code: 0x1E
This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host.
GNOME Shell on Wayland performs slowly when using a software renderer
When using a software renderer, GNOME Shell as a Wayland compositor (GNOME Shell on Wayland) does not use a cacheable framebuffer for rendering the screen. Consequently, GNOME Shell on Wayland is slow. To work around the problem, go to the GNOME Display Manager (GDM) login screen and switch to a session that uses the X11 protocol instead. As a result, the Xorg display server, which uses cacheable memory, is used, and GNOME Shell on Xorg in the described situation performs faster than GNOME Shell on Wayland.
System crash may result in fadump configuration loss
This issue is observed on systems where firmware-assisted dump (fadump) is enabled, and the boot partition is located on a journaling file system such as XFS. A system crash might cause the boot loader to load an older
initrd that does not have the dump capturing support enabled. Consequently, after recovery, the system does not capture the
vmcore file, which results in fadump configuration loss.
To work around this problem:
If /boot is a separate partition, perform the following:
- Restart the kdump service.
- Run the following commands as the root user, or using a user account with CAP_SYS_ADMIN rights:
# fsfreeze -f /boot
# fsfreeze -u /boot
If /boot is not a separate partition, reboot the system.
8.14. Graphics infrastructures
radeon fails to reset hardware correctly
The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead,
radeon falls over, which causes the rest of the kdump service to fail.
To work around this problem, blacklist
radeon in kdump by adding the following lines to the /etc/kdump.conf file:
dracut_args --omit-drivers "radeon"
force_rebuild 1
Restart the machine and the kdump service. After starting kdump, the
force_rebuild 1 line may be removed from the configuration file.
Note that in this scenario, no graphics will be available during kdump, but kdump will work successfully.
8.15. The web console
Unprivileged users can access the Subscriptions page
If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message “Cockpit had an unexpected internal error”.
To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox.
8.16. Virtualization
Using cloud-init to provision virtual machines on Microsoft Azure fails
Currently, it is not possible to use the
cloud-init utility to provision a RHEL 8 virtual machine (VM) on the Microsoft Azure platform. To work around this problem, use one of the following methods:
- Use the WALinuxAgent package instead of
cloud-init to provision VMs on Microsoft Azure.
- Add the required setting to the
[main] section of the relevant configuration file.
RHEL 8 virtual machines on RHEL 7 hosts in some cases cannot be viewed in higher resolution than 1920x1200
Currently, when using a RHEL 8 virtual machine (VM) running on a RHEL 7 host system, certain methods of displaying the graphical output of the VM, such as running the application in kiosk mode, cannot use a resolution greater than 1920x1200. As a consequence, displaying VMs using those methods only works in resolutions up to 1920x1200, even if the host hardware supports higher resolutions.
Low GUI display performance in RHEL 8 virtual machines on a Windows Server 2019 host
When using RHEL 8 as a guest operating system in graphical mode on a Windows Server 2019 host, the GUI display performance is low, and connecting to a console output of the guest currently takes significantly longer than expected.
This is a known issue on Windows 2019 hosts and is pending a fix by Microsoft. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host.
Installing RHEL virtual machines sometimes fails
Under certain circumstances, RHEL 7 and RHEL 8 virtual machines created using the
virt-install utility fail to boot if the
--location option is used.
To work around this problem, use the
--extra-args option instead and specify an installation tree reachable over the network. This ensures that the RHEL installer finds the installation files correctly.
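A sketch of such an invocation (the tree URL, VM name, and sizes are placeholders, not values from this document):

```
# Hypothetical installation tree; substitute a mirror reachable from the host.
virt-install --name rhel8-test --memory 2048 --vcpus 2 --disk size=20 \
  --location http://mirror.example.com/rhel8/BaseOS/x86_64/os/ \
  --extra-args "inst.repo=http://mirror.example.com/rhel8/BaseOS/x86_64/os/"
```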
Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL
Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely.
To work around this problem, use
virtio-gpu instead of
qxl as the GPU device for VMs that use Wayland.
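In the VM's libvirt XML, this corresponds to replacing the qxl video model with virtio (fragment sketch only):

```
<!-- Replace <model type='qxl' .../> with the virtio model: -->
<video>
  <model type='virtio'/>
</video>
```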
virsh iface-* commands do not work consistently
The virsh iface-* commands, such as
virsh iface-start and
virsh iface-destroy, frequently fail due to configuration dependencies. Therefore, it is recommended not to use
virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications.
Customizing an ESXi VM using
cloud-init and rebooting the VM causes IP setting loss and makes booting the VM very slow
Currently, if the
cloud-init service is used to modify a virtual machine (VM) that runs on the VMware ESXi hypervisor to use a static IP and the VM is then cloned, the new cloned VM in some cases takes a very long time to reboot. This is caused by
cloud-init rewriting the VM’s static IP to DHCP and then searching for an available datasource.
To work around this problem, you can uninstall
cloud-init after the VM is booted for the first time. As a result, the subsequent reboots will not be slowed down.
RHEL 8 virtual machines sometimes cannot boot on Witherspoon hosts
RHEL 8 virtual machines (VMs) that use the
pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU.
Attempting to boot such a VM instead generates the following error message:
qemu-kvm: Requested safe indirect branch capability level not supported by kvm
To work around this problem, configure the virtual machine’s XML configuration as follows:
<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <qemu:commandline>
    <qemu:arg value='-machine'/>
    <qemu:arg value='cap-ibs=workaround'/>
  </qemu:commandline>
IBM POWER virtual machines do not work correctly with zero memory NUMA nodes
Currently, when an IBM POWER virtual machine (VM) running on a RHEL 8 host is configured with a NUMA node that uses zero memory (
memory='0'), the VM cannot boot. Therefore, Red Hat strongly recommends not using IBM POWER VMs with zero-memory NUMA nodes on RHEL 8.
SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC
When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the
TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough.
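In libvirt XML terms, the workaround means selecting the named EPYC CPU model instead of host passthrough (fragment sketch):

```
<!-- Instead of <cpu mode='host-passthrough'/>: -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>EPYC</model>
</cpu>
```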
Virtual machines sometimes fail to start when using many virtio-blk disks
Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM’s guest OS fails to boot, and displays a
dracut-initqueue: Warning: Could not boot error.