Containerized Ceph - after a reboot the OSDs are failing to start with "ceph-osd-run.sh: mount: /dev/disk/by-partuuid is not a block device" and "ceph-osd-osds-0-vdb returned error: No such container: ceph-osd-osds-0 vdb"


Issue

  • After a storage node reboot, several of the ceph-osd daemons are not starting.
  • /var/log/messages shows the following errors related to the issue (a quick check to confirm the symptoms follows the log excerpt):

    Nov 19 06:05:12 osds-0 journal: mount:  /dev/disk/by-partuuid is not a block device
    Nov 19 06:05:12 osds-0 ceph-osd-run.sh: mount:  /dev/disk/by-partuuid is not a block device
    Nov 19 06:05:14 osds-0 dockerd-current[2076]: Error response from daemon: No such container: ceph-osd-osds-0 vdb
    
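The symptoms can be confirmed on the affected node with a few standard checks, for example (a minimal sketch, assuming a containerized deployment managed by ceph-ansible; device, unit, and container names such as vdb or ceph-osd-osds-0-vdb are examples only and will differ per host):

    # Minimal checks; unit/container names are examples and differ per host.
    ls -l /dev/disk/by-partuuid/                    # are the partuuid udev symlinks present?
    systemctl --failed --type=service               # did any ceph-osd service units fail to start?
    docker ps -a --filter name=ceph-osd             # state of the OSD containers known to docker
    grep -i 'not a block device' /var/log/messages  # mount errors logged since the reboot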

Environment

  • Red Hat Ceph Storage (RHCS) 3 container tag 3-30 and older
  • Red Hat Ceph Storage (RHCS) 2 container
  • Red Hat Ceph Storage (RHCS) in containers deployed with osd_scenario (a quick way to check the tag and scenario in use follows this list):
    • collocated
    • non-collocated
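
Whether a given node matches this environment can be checked directly on the host, for example (a minimal sketch, assuming ceph-ansible is installed in its default location under /usr/share/ceph-ansible; adjust the path if your inventory lives elsewhere):

    docker images | grep rhceph                                   # container image and tag in use (e.g. 3-30)
    grep -r '^osd_scenario' /usr/share/ceph-ansible/group_vars/   # collocated or non-collocated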
