RHEL AI: Starting an LLM model fails with a Python module "bitsandbytes" error.

Solution In Progress - Updated

Issue

  • Starting a containerized LLM model fails with an error.
  • In RHEL AI, starting an LLM model fails with a Python module "bitsandbytes" error.
$ ilab model serve --gpus 4 --model-path /var/home/instruct/.cache/instructlab/models/CustomLLM/Custom-Granite-3.1
INFO 2025-02-14 17:24:58,720 instructlab.model.serve_backend:56: Using model '/var/home/instruct/.cache/instructlab/models/CustomLLM/Custom-Granite-3.1'
with -1 gpu-layers and 4096 max context size.
INFO 2025-02-14 17:24:58,720 instructlab.model.serve_backend:88: '--gpus' flag used alongside '--tensor-parallel-size' in the vllm_args section of the config file. Using value of the --gpus flag.
INFO 2025-02-14 17:24:58,721 instructlab.model.backends.vllm:313: vLLM starting up on pid 52 at http://127.0.0.1:8000/v1
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
[...]
Could not load bitsandbytes native library: /opt/app-root/lib64/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "/opt/app-root/lib64/python3.11/site-packages/bitsandbytes/cextension.py", line 104, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/bitsandbytes/cextension.py", line 91, in get_native_library
    dll = ct.cdll.LoadLibrary(str(binary_path))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/ctypes/__init__.py", line 454, in LoadLibrary
    return self._dlltype(name)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
"OSError: /opt/app-root/lib64/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: cannot open shared object file: No such file or directory
CUDA Setup failed despite CUDA being available. Please run the following command to get more information:
python -m bitsandbytes"

Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
[...]
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 192, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start
/usr/lib64/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
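
The error message itself suggests running `python -m bitsandbytes` inside the serving container to diagnose the problem. As an additional check, the sketch below lists which native `libbitsandbytes*.so` builds actually shipped in the container's site-packages and retries the same `ctypes` load that `bitsandbytes/cextension.py` performs. This is a hypothetical diagnostic aid, not a Red Hat-provided tool; the `site_packages` path is taken from the traceback above.

```python
import ctypes
from pathlib import Path


def find_bitsandbytes_libs(site_packages: str) -> list[str]:
    """List the native libbitsandbytes*.so files under site-packages.

    An empty list, or one without a CUDA build (libbitsandbytes_cuda*.so),
    means bitsandbytes can only attempt the CPU library and will fail
    exactly as in the traceback above.
    """
    bnb_dir = Path(site_packages) / "bitsandbytes"
    return sorted(p.name for p in bnb_dir.glob("libbitsandbytes*.so*"))


def try_load(lib_path: str) -> str:
    """Attempt the same ctypes load that bitsandbytes performs at import."""
    try:
        ctypes.CDLL(lib_path)
        return "loaded"
    except OSError as exc:
        return f"failed: {exc}"
```

Run this inside the container (for example, via `podman exec` into the vLLM serving container) with `site_packages` set to `/opt/app-root/lib64/python3.11/site-packages`, and compare the result against the library name the traceback says is missing.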

Environment

  • Red Hat Enterprise Linux AI 1.2
  • Red Hat Enterprise Linux AI 1.3
  • Red Hat Enterprise Linux AI 1.4
