Chapter 8. Technology Previews
This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.5.
For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope.
8.1. Shells and command-line tools
IBM Z architecture in ReaR available as a Technology Preview
Basic support for the IBM Z architecture has been added to Relax and Recover (ReaR) and is now available as a Technology Preview. ReaR on IBM Z is currently supported only in the z/VM environment. Support for backing up and recovering logical partitions (LPARs) has not been tested.
The only output method currently supported is Initial Program Load (IPL). IPL produces a kernel and an initial ramdisk (initrd) that can be used with the zIPL boot loader.
For further information, see Using ReaR rescue image on IBM System Z architecture.
8.2. Networking
KTLS available as a Technology Preview
RHEL provides Kernel Transport Layer Security (KTLS) as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality.
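For a quick check that the kernel-side prerequisite is in place, you can load the in-kernel TLS module manually. This is a minimal sketch, not a full KTLS setup, and assumes the module is shipped with your kernel:
# modprobe tls
# lsmod | grep tls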
AF_XDP available as a Technology Preview
The Address Family eXpress Data Path (AF_XDP) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing.
XDP features that are available as Technology Preview
Red Hat provides the usage of the following eXpress Data Path (XDP) features as an unsupported Technology Preview:
- Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit.
- XDP hardware offloading. Before using this feature, see Unloading XDP programs fails on Netronome network cards that use the nfp driver.
Multi-protocol Label Switching for TC available as a Technology Preview
Multi-protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism to route traffic flow across enterprise networks. In an MPLS network, the router that receives packets decides the further route of the packets based on the labels attached to the packet. With the usage of labels, the MPLS network has the ability to handle packets with particular characteristics. For example, you can add tc filters for managing packets received from specific ports or carrying specific types of traffic, in a consistent way.
After packets enter the enterprise network, MPLS routers perform multiple operations on the packets, such as push to add a label, swap to update a label, and pop to remove a label. MPLS allows defining actions locally based on one or multiple labels in RHEL. You can configure routers and set traffic control (tc) filters to take appropriate actions on the packets based on the MPLS label stack entry (lse) elements, such as the label, traffic class, bottom of stack, and time to live.
For example, the following command adds a filter to the enp0s1 network interface to match incoming packets having the first label 12323 and the second label 45832. On matching packets, the following actions are taken:
- the first MPLS TTL is decremented (packet is dropped if TTL reaches 0)
- the first MPLS label is changed to 549386
- the resulting packet is transmitted over enp0s2, with destination MAC address 00:00:5E:00:53:01 and source MAC address 00:00:5E:00:53:02
# tc filter add dev enp0s1 ingress protocol mpls_uc flower mpls lse depth 1 label 12323 lse depth 2 label 45832 \
    action mpls dec_ttl pipe \
    action mpls modify label 549386 pipe \
    action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe \
    action pedit ex munge eth src set 00:00:5E:00:53:02 pipe \
    action mirred egress redirect dev enp0s2
act_mpls module available as a Technology Preview
The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, pushing and popping MPLS label stack entries with TC filters. The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently.
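As a hedged illustration of the push action (the interface name, label value, and TTL are placeholders), a tc filter can append a label stack entry to all egress IP traffic; note that a clsact qdisc must be attached first:
# tc qdisc add dev enp0s1 clsact
# tc filter add dev enp0s1 egress protocol ip matchall action mpls push protocol mpls_uc label 123 ttl 64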
Improved Multipath TCP support is available as a Technology Preview
Multipath TCP (MPTCP) improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server.
RHEL 8.4 introduced additional features, such as:
- Multiple concurrent active substreams
- Active-backup support
- Improved stream performances
- Better memory usage, with receive and send buffer auto-tuning
- SYN cookie support
Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP.
For further details, see Getting started with Multipath TCP.
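Before experimenting, MPTCP must be enabled on the host. A minimal sketch, assuming the upstream kernel sysctl interface:
# sysctl -w net.mptcp.enabled=1
# sysctl net.mptcp.enabled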
systemd-resolved service is now available as a Technology Preview
The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR) resolver and responder, and a Multicast DNS resolver and responder.
Note that, even though the systemd package provides systemd-resolved, this service is an unsupported Technology Preview.
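For a quick trial, the service can be started and inspected with its standard tooling; a minimal sketch:
# systemctl enable --now systemd-resolved
# resolvectl status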
nispor package is now available as a Technology Preview
The nispor package is now available as a Technology Preview. It is a unified interface for Linux network state querying, and provides a way to query all running network status through its Python and C APIs and its Rust crate.
nispor works as a dependency of the nmstate tool.
You can install the nispor package as a dependency of nmstate or as an individual package.
To install nispor as an individual package, enter:
# yum install nispor
To install nispor as a dependency of nmstate, enter:
# yum install nmstate
nispor is listed as the dependency.
For more information on using nispor, refer to the README.md file distributed with the package.
8.3. Kernel
The kexec fast reboot feature is available as Technology Preview
The kexec fast reboot feature continues to be available as a Technology Preview. kexec fast reboot significantly speeds up the boot process by allowing the kernel to boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) first. To use this feature (see the sketch after this list):
- Load the kexec kernel manually.
- Reboot the operating system.
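A minimal sketch of these two steps, assuming the currently running kernel and its initramfs are reused; exact paths can differ per system:
# kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
# systemctl kexec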
accel-config package available as a Technology Preview
The accel-config package is now available on the Intel and AMD64 architectures as a Technology Preview. This package helps in controlling and configuring the data-streaming accelerator (DSA) subsystem in the Linux kernel. It configures devices via sysfs (pseudo-filesystem), and saves and loads the configuration in the json format.
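A hedged sketch of typical usage; the file path is a placeholder, and the exact options should be verified against the accel-config man pages:
# accel-config list
# accel-config save-config -s /tmp/dsa-config.json
# accel-config load-config -c /tmp/dsa-config.json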
SGX available as a Technology Preview
Software Guard Extensions (SGX) is an Intel® technology for protecting software code and data from disclosure and modification. The RHEL kernel partially supports SGX v1 and v1.5. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology.
eBPF available as a Technology Preview
Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions.
The virtual machine includes a new system call, bpf(), which supports creating various types of maps and also allows loading programs in a special assembly-like code. The code is then loaded to the kernel and translated to native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) manual page for more information.
The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data.
There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. All components are available as a Technology Preview, unless a specific component is indicated as supported.
The following notable eBPF components are currently available as a Technology Preview:
- bpftrace, a high-level tracing language that utilizes the eBPF virtual machine.
- AF_XDP, a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance.
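As an illustration of the tracing side, a classic bpftrace one-liner counts system calls per process; this is a minimal sketch and requires root:
# bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'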
The Intel data streaming accelerator driver for kernel is available as a Technology Preview
The Intel data streaming accelerator driver (IDXD) for the kernel is currently available as a Technology Preview. IDXD is an Intel CPU-integrated accelerator and includes a shared work queue with process address space ID (PASID) submission and shared virtual memory (SVM).
Soft-RoCE available as a Technology Preview
Remote direct memory access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe, is available as an unsupported Technology Preview in RHEL 8.
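A minimal sketch of creating a Soft-RoCE device on an existing Ethernet interface; the interface name is a placeholder:
# rdma link add rxe0 type rxe netdev enp0s1
# rdma link show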
stmmac driver is available as a Technology Preview
Red Hat provides the usage of stmmac for Intel® Elkhart Lake systems on a chip (SoCs) as an unsupported Technology Preview.
8.4. File systems and storage
File system DAX is now available for ext4 and XFS as a Technology Preview
In Red Hat Enterprise Linux 8, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the DAX-mounted file system results in a direct mapping of storage into the application's address space.
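For illustration, a minimal sketch using an assumed NVDIMM namespace device; note that XFS must be created with reflink disabled to support DAX:
# mkfs.xfs -m reflink=0 /dev/pmem0
# mount -o dax /dev/pmem0 /mnt/pmem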
OverlayFS available as a Technology Preview
OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media.
OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated.
Full support is available for OverlayFS when used with supported container engines (podman, cri-o, or buildah) under the following restrictions:
- OverlayFS is supported for use only as a container engine graph driver. Its use is supported only for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system.
- Only XFS is currently supported for use as a lower layer file system.
Additionally, the following rules and limitations apply to using OverlayFS:
- The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates.
OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant:
- Lower files opened with O_RDONLY do not receive st_atime updates when the files are read.
- Lower files opened with O_RDONLY, then mapped with MAP_SHARED are inconsistent with subsequent modification.
- Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option.
To get consistent inode numbering, use the xino=on mount option.
You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on, unmount the overlay, then mount the overlay without these options.
To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled:
# xfs_info /mount-point | grep ftype
- SELinux security labels are enabled by default in all supported container engines with OverlayFS.
- Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt.
For more information about OverlayFS, see the Linux kernel documentation: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt.
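For illustration, a minimal overlay mount using the options discussed above; all directory paths are placeholders:
# mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work,xino=on /merged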
Stratis is now available as a Technology Preview
Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user.
Stratis enables you to more easily perform storage tasks such as:
- Manage snapshots and thin provisioning
- Automatically grow file system sizes as needed
- Maintain file systems
To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service.
Stratis is provided as a Technology Preview.
For more information, see the Stratis documentation: Managing layered local storage with Stratis.
RHEL 8.3 updated Stratis to version 2.1.0. For more information, see Stratis 2.1.0 Release Notes.
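As a short sketch of the workflow, with the device path, pool name, and file-system name as placeholders:
# stratis pool create mypool /dev/sdb
# stratis filesystem create mypool myfs
# mount /dev/stratis/mypool/myfs /mnt/stratis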
Setting up a Samba server on an IdM domain member is provided as a Technology Preview
With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf file with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member.
Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients.
For details, see Setting up Samba on an IdM domain member.
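Sketched briefly, the documented workflow on an enrolled IdM client is to run the utility and then start the services; details can vary per deployment:
# ipa-client-samba
# systemctl enable --now smb winbind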
NVMe/TCP is available as a Technology Preview
Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme_tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview.
The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages.
The NVMe/TCP target Technology Preview is included only for testing purposes and is not currently planned for full support.
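As a client-side sketch; the IP address, port, and NQN are placeholders:
# modprobe nvme_tcp
# nvme discover -t tcp -a 192.0.2.1 -s 4420
# nvme connect -t tcp -a 192.0.2.1 -s 4420 -n nqn.2014-08.org.example:target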
8.5. High availability and clusters
podman bundles available as a Technology Preview
Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview. There is one exception: Red Hat fully supports the use of Pacemaker bundles for Red Hat OpenStack.
Heuristics in corosync-qdevice available as a Technology Preview
Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd, and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd, where it is used in calculations to determine which partition should be quorate.
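A hedged configuration sketch that adds a quorum device with a ping heuristic; the host name, algorithm, and target address are placeholders, and the exact syntax should be checked against pcs(8):
# pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit heuristics mode=on exec_ping="/usr/bin/ping -q -c 1 192.0.2.1"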
fence-agents-heuristics-ping fence agent
As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way.
If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing, but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions.
A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it knows beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients; a ping to a router might detect that situation.
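A hedged sketch of placing the heuristics agent before the real fence agent on one fencing level; the device names, node name, target address, and the pre-existing ipmi-node1 fence device are placeholders:
# pcs stonith create ping-check fence_heuristics_ping ping_targets=192.0.2.1
# pcs stonith level add 1 node1.example.com ping-check,ipmi-node1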
Automatic removal of location constraint following resource move available as a Technology Preview
When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. A new --autodelete option for the pcs resource move command is now available as a Technology Preview. When you specify this option, the location constraint that the command creates is automatically removed once the resource has been moved.
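For example, a hedged invocation with a placeholder resource name:
# pcs resource move myresource --autodelete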
8.6. Identity Management
Identity Management JSON-RPC API available as Technology Preview
An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview.
Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables:
- Administrators to use previous or later versions of IdM on the server than on the managing client.
- Developers to use a specific version of an IdM call, even if the IdM version changes on the server.
In all cases, communication with the server is possible regardless of whether one side uses, for example, a newer version that introduces new options for a feature.
For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW).
DNSSEC available as Technology Preview in IdM
Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated.
Users who decide to secure their DNS zones with DNSSEC are advised to read and follow the DNSSEC-related documentation.
Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices.
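For illustration, signing an existing IdM DNS zone is a single zone modification; the zone name is a placeholder:
# ipa dnszone-mod example.com. --dnssec=true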
ACME available as a Technology Preview
The Automated Certificate Management Environment (ACME) service is now available in Identity Management (IdM) as a Technology Preview. ACME is a protocol for automated identifier validation and certificate issuance. Its goal is to improve security by reducing certificate lifetimes and avoiding manual processes from certificate lifecycle management.
In RHEL, the ACME service uses the Red Hat Certificate System (RHCS) PKI ACME responder. The RHCS ACME subsystem is automatically deployed on every certificate authority (CA) server in the IdM deployment, but it does not service requests until the administrator enables it. RHCS uses the acmeIPAServerCert profile when issuing ACME certificates. The validity period of issued certificates is 90 days. Enabling or disabling the ACME service affects the entire IdM deployment.
It is recommended to enable ACME only in an IdM deployment where all servers are running RHEL 8.4 or later. Earlier RHEL versions do not include the ACME service, which can cause problems in mixed-version deployments. For example, a CA server without ACME can cause client connections to fail, because it uses a different DNS Subject Alternative Name (SAN).
Currently, RHCS does not remove expired certificates. Because ACME certificates expire after 90 days, the expired certificates can accumulate and this can affect performance.
To enable ACME across the whole IdM deployment, use the ipa-acme-manage enable command:
# ipa-acme-manage enable
The ipa-acme-manage command was successful
To disable ACME across the whole IdM deployment, use the ipa-acme-manage disable command:
# ipa-acme-manage disable
The ipa-acme-manage command was successful
To check whether the ACME service is installed and whether it is enabled or disabled, use the ipa-acme-manage status command:
# ipa-acme-manage status
ACME is enabled
The ipa-acme-manage command was successful
8.7. Desktop
GNOME for the 64-bit ARM architecture available as a Technology Preview
The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. This enables administrators to configure and manage servers remotely from a graphical user interface (GUI), using a VNC session.
As a consequence, new administration applications are available on the 64-bit ARM architecture, for example: Disk Usage Analyzer (baobab), Firewall Configuration (firewall-config), Red Hat Subscription Manager (subscription-manager), or the Firefox web browser. Using Firefox, administrators can connect to the local Cockpit daemon remotely.
(JIRA:RHELPLAN-27394, BZ#1667225, BZ#1667516, BZ#1724302)
GNOME desktop on IBM Z is available as a Technology Preview
The GNOME desktop, including the Firefox web browser, is now available as a Technology Preview on the IBM Z architecture. You can now connect to a remote graphical session running GNOME using VNC to configure and manage your IBM Z servers.
8.8. Graphics infrastructures
VNC remote console available as a Technology Preview for the 64-bit ARM architecture
On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture.
8.9. Red Hat Enterprise Linux System Roles
HA Cluster RHEL System Role available as a Technology Preview
The High Availability Cluster (HA Cluster) role is now available as a Technology Preview. Currently, the following notable configurations are available:
- Configuring nodes, fence device, resources, resource groups, and resource clones including meta attributes and resource operations
- Configuring cluster properties
- Configuring multi-link clusters
- Configuring custom cluster names and node names
- Configuring whether clusters start automatically on boot
- Configuring a basic corosync cluster and pacemaker cluster properties, stonith and resources.
The ha_cluster system role does not currently support constraints. Running the role after constraints have been configured manually removes the constraints, as well as any configuration not supported by the role.
The ha_cluster system role does not currently support SBD.
8.10. Virtualization
AMD SEV and SEV-ES for KVM virtual machines
As a Technology Preview, RHEL 8 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM’s memory to protect the VM from access by the host. This increases the security of the VM.
In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM’s CPU registers or reading any information from them.
Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 8 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation.
(BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677)
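As a hedged sketch, SEV can be requested at VM installation time with virt-install, assuming a host with SEV enabled in firmware and the kernel; the option name follows current virt-install versions and the remaining parameters are placeholders:
# virt-install --name sev-guest --memory 4096 --vcpus 2 --disk size=20 --cdrom /path/to/rhel8.iso --boot uefi --launchSecurity sev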
Intel vGPU available as a Technology Preview
As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices. These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU.
Note that only selected Intel GPUs are compatible with the vGPU feature.
In addition, it is possible to enable a VNC console operated by Intel vGPU. By enabling it, users can connect to a VNC console of the VM and see the VM’s desktop hosted by Intel vGPU. However, this currently only works for RHEL guest operating systems.
Creating nested virtual machines
Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, and IBM Z hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor and host its own VMs.
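To verify and enable nested virtualization on an Intel host, a minimal sketch; reloading the module requires that no VMs are running:
# cat /sys/module/kvm_intel/parameters/nested
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1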
Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V
As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met:
- SR-IOV support is enabled for the network interface controller (NIC)
- SR-IOV support is enabled for the virtual NIC
- SR-IOV support is enabled for the virtual switch
- The virtual function (VF) from the NIC is attached to the virtual machine
The feature is currently supported with Microsoft Windows Server 2019 and 2016.
ESXi hypervisor and SEV-ES available as a Technology Preview for RHEL VMs
As a Technology Preview, in RHEL 8.4 and later, you can enable the AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) to secure RHEL virtual machines (VMs) on VMware’s ESXi hypervisor, versions 7.0.2 and later.
Sharing files between hosts and VMs using virtiofs
As a Technology Preview, RHEL 8 now provides the virtio file system (virtiofs). Using virtiofs, you can efficiently share files between your host system and its virtual machines (VMs).
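Inside the guest, a shared directory is mounted by its tag; the tag hostshare is an assumed value set in the VM configuration:
# mount -t virtiofs hostshare /mnt/shared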
KVM virtualization is usable in RHEL 8 Hyper-V virtual machines
As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host.
Note that currently, this feature only works on Intel and AMD systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation:
8.11. Containers
Toolbox is available as a Technology Preview
Previously, the Toolbox utility was based on the RHEL CoreOS github.com/coreos/toolbox repository. With this release, Toolbox has been replaced with the implementation from github.com/containers/toolbox.
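For a quick trial, a hedged sketch of the basic workflow with the containers/toolbox implementation:
# toolbox create
# toolbox enter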
crun is available as a Technology Preview
The crun OCI runtime is now available for the container-tools:rhel8 module as a Technology Preview. The crun container runtime supports an annotation that allows the container to access the rootless user's additional groups. This is useful for volume mounting in a directory where setgid is set, or where the user only has group access. Currently, neither the crun nor runc runtimes fully support this annotation.
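As a hedged illustration of the annotation, with the image as a placeholder; the annotation name follows crun's documented run.oci.keep_original_groups option:
# podman run --runtime /usr/bin/crun --annotation run.oci.keep_original_groups=1 --rm -it registry.access.redhat.com/ubi8 id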
podman container image is available as a Technology Preview
The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool is used for managing containers and images, volumes mounted into those containers, and pods made of groups of containers.
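For a quick trial of the image, a hedged sketch; running nested podman typically requires elevated privileges:
# podman run --privileged --rm registry.redhat.io/rhel8/podman podman version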