How to configure Red Hat AI Inference Server to use multiple GPUs on the system?
Issue
- How to configure Red Hat AI Inference Server to use multiple GPUs on the system (for example, on NVIDIA L4 GPUs)?
Environment
- Red Hat AI Inference Server (RHAIIS) 3.0
- NVIDIA L4 GPU
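RHAIIS serves models with vLLM, and vLLM's standard mechanism for spreading a model across multiple GPUs on one host is tensor parallelism, selected with the `--tensor-parallel-size` flag. The sketch below is illustrative, not the article's resolution: the container image tag, model name, and port are assumptions, and the exact image reference should be taken from the Red Hat registry for your RHAIIS release.

```shell
# Minimal sketch: run the RHAIIS vLLM container across 2 GPUs with tensor
# parallelism. Image tag, model, and port below are assumptions.
#   --device nvidia.com/gpu=all   expose all NVIDIA GPUs to the container (CDI)
#   --shm-size=4g                 shared memory for inter-GPU (NCCL) communication
#   --tensor-parallel-size 2      shard the model's weights across 2 GPUs
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --shm-size=4g \
  -p 8000:8000 \
  registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.0 \
  --model RedHatAI/Llama-3.1-8B-Instruct-FP8 \
  --tensor-parallel-size 2
```

The tensor-parallel size must evenly divide the model's attention heads and should match the number of GPUs you want to use on the node; for multi-node setups vLLM additionally offers `--pipeline-parallel-size`.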