Chapter 9. Technology Previews
This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.2.
For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope.
nmstate available as a Technology Preview
Nmstate is a network API for hosts. The nmstate packages, available as a Technology Preview, provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of the current state and changes to the desired state both conform to the schema.
For further details, see the /usr/share/doc/nmstate/README.md file and the accompanying examples.
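As an illustrative sketch of the declarative approach (the interface name eth1 and the addressing below are assumptions, not taken from the release note), a desired-state file consumed by nmstatectl might look like this:

```yaml
# Hypothetical desired state: bring eth1 up with a static IPv4 address.
# The state conforms to the pre-defined nmstate schema.
interfaces:
  - name: eth1
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.0.2.10
          prefix-length: 24
```

Applying the file makes nmstatectl converge the host network configuration to the described state, rather than issuing imperative per-step commands.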
AF_XDP available as a Technology Preview
The Address Family eXpress Data Path (AF_XDP) socket is designed for high-performance packet processing. It accompanies XDP and enables efficient redirection of programmatically selected packets to user-space applications for further processing.
XDP available as a Technology Preview
The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation.
KTLS available as a Technology Preview
In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality.
dracut utility now supports creating initrd images with NetworkManager support as a Technology Preview
By default, the dracut utility uses a shell script to manage networking in the initial RAM disk (initrd). In certain cases, this could cause problems when the system switches from the RAM disk to the operating system that uses NetworkManager to configure the network. For example, NetworkManager could send another DHCP request, even if the script in the RAM disk already requested an IP address. This request from the RAM disk could result in a timeout.
To solve these kinds of problems, dracut in RHEL 8.2 can now use NetworkManager in the RAM disk. Use the following commands to enable the feature and recreate the RAM disk images:
echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/enable-nm.conf
dracut -vf --regenerate-all
Note that Red Hat does not support Technology Preview features. However, to provide feedback about this feature, contact Red Hat support.
mlx5_core driver supports Mellanox ConnectX-6 Dx network adapter as a Technology Preview
This enhancement adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver. On hosts that use this adapter, RHEL loads the mlx5_core driver automatically. Note that Red Hat provides this feature as an unsupported Technology Preview.
kexec fast reboot as a Technology Preview
The kexec fast reboot feature continues to be available as a Technology Preview. Rebooting is now significantly faster thanks to kexec fast reboot. To use this feature, load the kexec kernel manually, and then reboot the operating system.
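For illustration, the manual load-and-reboot sequence could look like the following transcript. This is a sketch, not a supported procedure; the kernel and initramfs paths are the conventional ones and may differ on your system:

```
# kexec -l /boot/vmlinuz-$(uname -r) \
        --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
# systemctl kexec
```

The second command reboots directly into the loaded kernel, skipping the firmware and boot loader stages, which is where the time savings come from.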
eBPF available as a Technology Preview
Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions.
The virtual machine includes a new system call, bpf(), which supports creating various types of maps, and also allows loading programs written in a special assembly-like code. The code is then loaded to the kernel and translated to native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) man page for more information.
The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data.
There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. All components are available as a Technology Preview, unless a specific component is indicated as supported.
The following notable eBPF components are currently available as a Technology Preview:
- bpftrace, a high-level tracing language that utilizes the eBPF virtual machine.
- The eXpress Data Path (XDP) feature, a networking technology that enables fast packet processing in the kernel using the eBPF virtual machine.
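As a brief illustration of one such component, a bpftrace one-liner can trace file opens system-wide. The probe below is a common introductory example chosen here for illustration, not taken from the release note, and it must be run as root:

```
# bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```

Each line of output names the process and the file it attempted to open, demonstrating how an eBPF program attached to a tracepoint receives and processes kernel data.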
libbpf is available as a Technology Preview
The libbpf package is currently available as a Technology Preview. The package is crucial for bpf-related applications. It is a mirror of the bpf-next Linux tree's bpf-next/tools/lib/bpf directory plus its supporting header files. The version of the package reflects the version of the Application Binary Interface (ABI).
igc driver available as a Technology Preview for RHEL 8
The igc Intel 2.5G Ethernet Linux wired LAN driver is now available on all architectures for RHEL 8 as a Technology Preview. The ethtool utility also supports igc wired LANs.
9.3. File systems and storage
NVMe/TCP is available as a Technology Preview
Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP), and its corresponding nvmet-tcp.ko kernel module, have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with the provided tools.
The NVMe/TCP target Technology Preview is included only for testing purposes and is not currently planned for full support.
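For illustration, connecting a client to an NVMe/TCP target with the nvme-cli tooling could look like the following transcript. The address and subsystem NQN are placeholders, and the exact procedure depends on how the target is configured:

```
# modprobe nvme-tcp
# nvme connect -t tcp -a 192.0.2.20 -s 4420 -n nqn.2014-08.com.example:testsubsystem
# nvme list
```

After a successful connect, the remote namespace appears as a local block device in the nvme list output.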
File system DAX is now available for ext4 and XFS as a Technology Preview
In Red Hat Enterprise Linux 8.2, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application’s address space.
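As a sketch of the setup, creating and mounting a DAX-capable file system on an NVDIMM namespace could look like this transcript. The device /dev/pmem0 and the mount point are placeholders:

```
# mkfs.xfs /dev/pmem0
# mount -o dax /dev/pmem0 /mnt/pmem
# mount | grep /mnt/pmem
```

The final command lets you confirm that the dax option is present among the active mount flags before the application maps files from the file system.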
OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. See the Linux kernel documentation for additional information: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt.
OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated.
Full support is available for OverlayFS when used with supported container engines (buildah) under the following restrictions:
- OverlayFS is supported for use only as a container engine graph driver. Its use is supported only for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. Only the default container engine configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system.
- Only XFS is currently supported for use as a lower layer file system.
Additionally, the following rules and limitations apply to using OverlayFS:
- The OverlayFS kernel ABI and userspace behavior are not considered stable, and might see changes in future updates.
- OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant:
- Lower files opened with O_RDONLY do not receive st_atime updates when the files are read.
- Lower files opened with O_RDONLY, then mapped with MAP_SHARED, are inconsistent with subsequent modification.
- d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. You can also use the index=on option to improve POSIX compliance and to get consistent inode numbering. Note that such options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with index=on, unmount the overlay, then mount the overlay without that option.
- Commands used with XFS:
- XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. With the rootfs and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameters in the Anaconda kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command.
- To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command to see if the ftype=1 option is enabled.
- SELinux security labels are enabled by default in all supported container engines with OverlayFS.
- There are several known issues associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt.
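Outside of the container-engine use case, the basic mechanics of an overlay mount can be sketched as follows. The directory names are illustrative; changes made under /merged land in the upper directory while the lower directory stays read-only:

```
# mkdir /lower /upper /work /merged
# mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```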
Stratis is now available as a Technology Preview
Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features for the user.
Stratis enables you to more easily perform storage tasks such as:
- Manage snapshots and thin provisioning
- Automatically grow file system sizes as needed
- Maintain file systems
To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service.
Stratis is provided as a Technology Preview.
For more information, see the Stratis documentation: Managing layered local storage with Stratis.
RHEL 8.2 updates Stratis to version 2.0.0. This version improves reliability and the Stratis DBus API. For more information about version 2.0.0, see Stratis 2.0.0 Release Notes.
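As an illustrative transcript (the pool, file-system, snapshot, and device names are placeholders), basic Stratis administration looks like this:

```
# stratis pool create mypool /dev/vdb
# stratis filesystem create mypool myfs
# mount /dev/stratis/mypool/myfs /mnt
# stratis filesystem snapshot mypool myfs mysnapshot
```

The created file systems are thinly provisioned and appear under /dev/stratis/POOL/FILESYSTEM, so they can be mounted like any other block device.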
IdM now supports setting up a Samba server on an IdM domain member as a Technology Preview
With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf file with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member.
Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients.
For details, see Setting up Samba on an IdM domain member.
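The setup flow described above can be sketched as the following transcript, run as root on the IdM domain member. This is only an outline under the assumption that the client is already enrolled in IdM; see the linked documentation for the full procedure:

```
# ipa-client-samba
# systemctl enable --now smb winbind
```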
9.4. High availability and clusters
podman bundles available as a Technology Preview
Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being a Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat OpenStack.
Heuristics of corosync-qdevice available as a Technology Preview
Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd, and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd, where it is used in calculations to determine which partition should be quorate.
fence-agents-heuristics-ping fence agent
As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way.
If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing, but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions.
A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation that a ping to a router might detect.
9.5. Identity Management
Identity Management JSON-RPC API available as Technology Preview
An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview.
In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables:
- Administrators to use previous or later versions of IdM on the server than on the managing client.
- Developers to use a specific version of an IdM call, even if the IdM version changes on the server.
In all cases, communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature.
For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW).
DNSSEC available as Technology Preview in IdM
Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated.
Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents:
Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices.
Checking the overall health of your public key infrastructure is now available as a Technology Preview
With this update, the public key infrastructure (PKI) Healthcheck tool reports the health of the PKI subsystem to the Identity Management (IdM) Healthcheck tool, which was introduced in RHEL 8.1. Executing the IdM Healthcheck invokes the PKI Healthcheck, which collects and returns the health report of the PKI subsystem.
The pki-healthcheck tool is available on any deployed RHEL IdM server or replica. All the checks provided by pki-healthcheck are also integrated into ipa-healthcheck, which can be installed separately from the idm:DL1 module stream. Note that pki-healthcheck can also work in a standalone Red Hat Certificate System (RHCS) infrastructure.
GNOME Desktop on ARM is available as a Technology Preview
The GNOME Desktop is now available as a Technology Preview on the 64-bit ARM architecture. Users who require a graphical session to configure and manage their servers can now connect to a remote graphical session running GNOME Desktop using VNC.
GNOME for the 64-bit ARM architecture available as a Technology Preview
The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. This enables administrators to configure and manage servers from a graphical user interface (GUI) remotely, using the VNC session.
As a consequence, new administration applications are available on the 64-bit ARM architecture. For example: Disk Usage Analyzer (baobab), Firewall Configuration (firewall-config), Red Hat Subscription Manager (subscription-manager), or the Firefox web browser. Using Firefox, administrators can connect to the local Cockpit daemon remotely.
(JIRA:RHELPLAN-27394, BZ#1667516, BZ#1667225)
9.7. Graphics infrastructures
VNC remote console available as a Technology Preview for the 64-bit ARM architecture
On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture.
9.8. Red Hat Enterprise Linux System Roles
postfix role of RHEL System Roles available as a Technology Preview
Red Hat Enterprise Linux System Roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases.
The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview.
The following roles are fully supported:
For more information, see the Knowledgebase article about RHEL System Roles.
rhel-system-roles-sap available as a Technology Preview
The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) System Roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Contact Red Hat Customer Support if you need assistance with your subscription.
The following new roles in the rhel-system-roles-sap package are available as a Technology Preview:
For more information, see Red Hat Enterprise Linux System Roles for SAP.
Note: RHEL 8.2 for SAP Solutions is scheduled to be validated for use with SAP HANA on Intel 64 architecture and IBM POWER9. Support for other SAP applications and database products, for example, SAP NetWeaver and SAP ASE, is tied to GA releases, and customers can use RHEL 8.2 features upon GA. Consult SAP Notes 2369910 and 2235581 for the latest information about validated releases and SAP support.
Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V
As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the i40evf driver. This feature is enabled when the following conditions are met:
- SR-IOV support is enabled for the network interface controller (NIC)
- SR-IOV support is enabled for the virtual NIC
- SR-IOV support is enabled for the virtual switch
- The virtual function (VF) from the NIC is attached to the virtual machine.
The feature is currently supported with Microsoft Windows Server 2019 and 2016.
KVM virtualization is usable in RHEL 8 Hyper-V virtual machines
As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host.
Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation:
AMD SEV for KVM virtual machines
As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware.
Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 509 running VMs using SEV.
Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add the following to the VM’s XML configuration:
<memtune>
  <hard_limit unit='KiB'>N</hard_limit>
</memtune>
The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater.
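The arithmetic behind the recommended value can be checked with a short shell snippet, using the 2 GiB guest from the example above:

```shell
# Compute the recommended <hard_limit> value in KiB:
# guest RAM plus 256 MiB of headroom, converted to KiB.
guest_ram_mib=2048        # 2 GiB of guest RAM
headroom_mib=256
hard_limit_kib=$(( (guest_ram_mib + headroom_mib) * 1024 ))
echo "$hard_limit_kib"    # prints 2359296
```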
(BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677)
As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices. These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU.
Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working.
skopeo container image is available as a Technology Preview
The registry.redhat.io/rhel8/skopeo container image is a containerized implementation of the skopeo package. skopeo is a command-line utility that performs various operations on container images and image repositories. This container image allows you to inspect and copy container images from one unauthenticated container registry to another.
buildah container image is available as a Technology Preview
The registry.redhat.io/rhel8/buildah container image is a containerized implementation of the buildah package. buildah is a tool that facilitates building OCI container images. This container image allows you to build container images without the need to install the buildah package on your system. This use case does not cover running the image in rootless mode as a non-root user.