RHCS 4 - ceph-ansible fails to deploy multiple OSDs on a single device when deploying containers - TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds]

Issue

We have encountered an issue while deploying multiple OSDs on the same device with containerized Ceph; the same configuration works fine for a non-containerized deployment.

TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] 

task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:58
Monday 08 June 2020  11:47:42 -0400 (0:00:00.152)       0:07:06.345 *********** 
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm-batch.yml for 10.74.178.45, 10.74.178.112, 10.74.178.53, 10.74.178.153

TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] 

task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm-batch.yml:3
Monday 08 June 2020  11:47:42 -0400 (0:00:00.318)       0:07:06.663 *********** 
fatal: [10.74.178.153]: FAILED! => changed=true 
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.redhat.io/rhceph/rhceph-4-rhel8:4-20
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - --osds-per-device
  - '2'
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd
  - /dev/vde
  delta: '0:00:09.488748'
  end: '2020-06-08 11:47:52.513747'
  msg: non-zero return code
  rc: 1
  start: '2020-06-08 11:47:43.024999'
  stderr: |-
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed /dev/vdc
     stdout: Physical volume "/dev/vdc" successfully created.
     stdout: Volume group "ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-block-056a37d2-20ee-40b1-a010-89bbf1c018f3 /dev/vdd
     stdout: Physical volume "/dev/vdd" successfully created.
     stdout: Volume group "ceph-block-056a37d2-20ee-40b1-a010-89bbf1c018f3" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-block-1ed0ba7b-846d-408c-96fb-86258322eef8 /dev/vde
     stdout: Physical volume "/dev/vde" successfully created.
     stdout: Volume group "ceph-block-1ed0ba7b-846d-408c-96fb-86258322eef8" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-block-dbs-b3a72f65-4437-4020-b889-d16327e1dc54 /dev/vdb
     stdout: Physical volume "/dev/vdb" successfully created.
     stdout: Volume group "ceph-block-dbs-b3a72f65-4437-4020-b889-d16327e1dc54" successfully created
    Running command: /usr/sbin/lvcreate --yes -l 19 -n osd-block-dc266152-a722-4fc3-9f74-743e706d841d ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed
     stdout: Logical volume "osd-block-dc266152-a722-4fc3-9f74-743e706d841d" created.
    Running command: /usr/sbin/lvcreate --yes -l 3 -n osd-block-db-923eb5b4-9380-4dc5-9aad-1809e3556e10 ceph-block-dbs-b3a72f65-4437-4020-b889-d16327e1dc54
     stdout: Logical volume "osd-block-db-923eb5b4-9380-4dc5-9aad-1809e3556e10" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b92f4b54-60fb-4517-b461-b723350549e8
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed/osd-block-dc266152-a722-4fc3-9f74-743e706d841d
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--ba18a9ff--8e67--4a3d--8261--9b90a95601ed-osd--block--dc266152--a722--4fc3--9f74--743e706d841d
    Running command: /bin/ln -s /dev/ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed/osd-block-dc266152-a722-4fc3-9f74-743e706d841d /var/lib/ceph/osd/ceph-0/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCiXd5efFGzHRAAN+QaJmJSyjRz1pT9G+G30w==
     stdout: creating /var/lib/ceph/osd/ceph-0/keyring
    added entity osd.0 auth(key=AQCiXd5efFGzHRAAN+QaJmJSyjRz1pT9G+G30w==)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-b3a72f65-4437-4020-b889-d16327e1dc54/osd-block-db-923eb5b4-9380-4dc5-9aad-1809e3556e10
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--b3a72f65--4437--4020--b889--d16327e1dc54-osd--block--db--923eb5b4--9380--4dc5--9aad--1809e3556e10
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-b3a72f65-4437-4020-b889-d16327e1dc54/osd-block-db-923eb5b4-9380-4dc5-9aad-1809e3556e10 --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b92f4b54-60fb-4517-b461-b723350549e8 --setuser ceph --setgroup ceph
    --> ceph-volume lvm prepare successful for: ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed/osd-block-dc266152-a722-4fc3-9f74-743e706d841d
    Running command: /usr/sbin/lvcreate --yes -l 19 -n osd-block-0cb057c7-0905-471c-aec1-1aad76bdf2af ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed
     stderr: Volume group "ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed" has insufficient free space (0 extents): 19 required.
    -->  RuntimeError: command returned non-zero exit status: 5
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
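The stderr above shows the failure mechanism: each 20 GB device is turned into a volume group with a 1 GB extent size, which yields 19 usable extents, and the first data LV is created with -l 19, i.e. the entire volume group, rather than half of it. When ceph-volume then tries to create the LV for the second OSD that osds_per_device: 2 requires, the volume group has 0 free extents left and lvcreate exits with status 5. This can be confirmed on the affected node (the VG name below is taken from the log above):

# vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count \
      ceph-block-ba18a9ff-8e67-4a3d-8261-9b90a95601ed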


# lsblk
NAME                                                                                                                  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda                                                                                                                   253:0    0  20G  0 disk 
└─vda1                                                                                                                253:1    0  20G  0 part /
vdb                                                                                                                   253:16   0  20G  0 disk 
└─ceph--block--dbs--b3a72f65--4437--4020--b889--d16327e1dc54-osd--block--db--923eb5b4--9380--4dc5--9aad--1809e3556e10 252:1    0   3G  0 lvm  
vdc                                                                                                                   253:32   0  20G  0 disk 
└─ceph--block--ba18a9ff--8e67--4a3d--8261--9b90a95601ed-osd--block--dc266152--a722--4fc3--9f74--743e706d841d          252:0    0  19G  0 lvm  
vdd                                                                                                                   253:48   0  20G  0 disk 
vde                                                                                                                   253:64   0  20G  0 disk 
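Note that the failed run leaves partially created LVM metadata behind: the two LVs visible on vdb and vdc above, plus empty volume groups on vdd and vde. Before re-running the playbook, one way to return the devices to a clean state is ceph-volume lvm zap, invoked here through the same container image as the failing task. This is a sketch only and is destructive - it wipes all listed devices:

# podman run --rm --privileged --net=host -v /dev:/dev -v /run/lvm/:/run/lvm/ \
      --entrypoint=ceph-volume registry.redhat.io/rhceph/rhceph-4-rhel8:4-20 \
      lvm zap --destroy /dev/vdb /dev/vdc /dev/vdd /dev/vde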


# cat group_vars/osds.yml
osd_objectstore: bluestore
osds_per_device: 2
osd_scenario: lvm
copy_admin_key: true

devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd
  - /dev/vde

# /dev/vdb is flagged as non-rotational so that ceph-volume treats it as an SSD
# and places the block.db volumes on it (see the layout preview below):
# ansible osds -m shell -a ' echo 0 > /sys/block/vdb/queue/rotational '
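Since the rotational override does not survive a reboot, it is worth verifying the flags and previewing the intended layout before running the playbook. ceph-volume lvm batch --report prints what would be created without touching the devices; a minimal invocation through the same container image (the exact bind mounts may need to match the failing task's) could look like:

# lsblk -d -o NAME,ROTA /dev/vdb /dev/vdc /dev/vdd /dev/vde
# podman run --rm --privileged -v /dev:/dev --entrypoint=ceph-volume \
      registry.redhat.io/rhceph/rhceph-4-rhel8:4-20 \
      lvm batch --bluestore --report --osds-per-device 2 \
      /dev/vdb /dev/vdc /dev/vdd /dev/vde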

Environment

Red Hat Ceph Storage 4.0 - container tag 4-20
