Red Hat AI Accelerator
Red Hat AI Accelerator does not include software and does not have its own life cycle. Please refer to the OpenShift and OpenShift AI life cycle pages for a complete list of supported versions and the associated policies.
AI Accelerator - Definition
Chip designers have created specialized hardware accelerators to train and serve large language models and other foundation models that power generative artificial intelligence and machine learning (AI/ML). Data scientists, data engineers, ML engineers, and developers can take advantage of these specialized hardware acceleration capabilities for data-intensive model development and serving.
- Red Hat Enterprise Linux AI Subscriptions include an entitlement for an AI Accelerator.
- Red Hat OpenShift AI and Red Hat OpenShift Container Platform require the additional purchase of Red Hat AI Accelerator Add-On Subscriptions when running on AI Accelerators.
AI Accelerator(s) means the following:
- Hardware computing devices, such as GPUs or ASICs, that are engineered to enhance and expedite artificial intelligence computations. These devices provide processing capabilities and optimized architectures for AI tasks, including machine learning, model training, and inference.
- Computing devices that function as separate processing units, even if physically integrated within the same package or module as the CPU. A device is an AI Accelerator if it:
- operates independently of the CPU cores; and
- is intended to enhance artificial intelligence computations.
- Any of the following accelerator architectures and all of their variants. Note that chip manufacturers are developing and releasing new accelerator architectures and accelerators at a rapid pace; this list is not exhaustive and is subject to change:
- NVIDIA Turing
- NVIDIA Ampere
- NVIDIA Ada Lovelace
- NVIDIA Hopper
- NVIDIA Grace Hopper
- NVIDIA Blackwell
- NVIDIA Grace Blackwell
- Intel Gaudi 2
- Intel Gaudi 3
- AMD Instinct MI200 series
- AMD Instinct MI300 series