RHOSP 16.1 deployment with an external Ceph cluster using alternate Ceph user and pool names gets Glance errors

Solution In Progress

Issue

  • We've deployed RHOSP 16.1 with an external Ceph cluster. This Ceph cluster (running the latest Ceph 4) already hosts an existing OpenStack 13 environment, so we are trying to use an alternate username and alternate pools: Ceph username openstack-cc2, and pools vms_cc2, images_cc2, etc.

  • The deployment completes successfully. However, we are getting constant Glance errors similar to the following (the same failure can be reproduced outside of Glance with the Ceph CLI, as sketched after the traceback):

2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd [-] Error connecting to ceph cluster.: rados.PermissionDeniedError: [errno 13] error connecting to the cluster
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd Traceback (most recent call last):
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd   File "/usr/lib/python3.6/site-packages/glance_store/_drivers/rbd.py", line 273, in get_connection
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd     client.connect(timeout=self.connect_timeout)
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd   File "rados.pyx", line 893, in rados.Rados.connect
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd rados.PermissionDeniedError: [errno 13] error connecting to the cluster
2020-11-25 15:13:44.238 7 ERROR glance_store._drivers.rbd
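
To confirm that this is a Ceph authentication problem rather than a Glance bug, the same connection can be exercised from a controller with the Ceph CLI. This is a sketch only: the client name, keyring path, and pool name below assume the alternate openstack-cc2 user and images_cc2 pool from this deployment.

# Run on an overcloud controller. Both commands use the alternate client
# name; a missing keyring file or mismatched caps reproduces the
# PermissionDeniedError seen in the Glance log.
sudo ceph -s --id openstack-cc2 --keyring /etc/ceph/ceph.client.openstack-cc2.keyring
sudo rbd ls images_cc2 --id openstack-cc2 --keyring /etc/ceph/ceph.client.openstack-cc2.keyring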
  • We set the desired pools and client name in a Ceph environment file as follows (how this file is passed to the deploy command is sketched after the listing):
[stack@undercloud-cc2 templates]$ cat ceph-additional-settings.yaml
parameter_defaults:

  CephClusterFSID: 'fbd63a88-e907-4923-b490-7bfa3a5599f7'
  CephClientKey: 'SECRET/KEY=='
  CephExternalMonHost: '10.10.10.11,10.10.10.12,10.10.10.13'

  CinderRbdExtraPools: volumes_hdd_cc2

  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  NovaRbdPoolName: vms_cc2
  CinderRbdPoolName: volumes_ssd_cc2
  CinderBackupRbdPoolName: backups_cc2
  GlanceRbdPoolName: images_cc2
  # the following is only needed when legacy telemetry (Gnocchi) is enabled
  GnocchiRbdPoolName: metrics_cc2
  CephClientUserName: openstack-cc2

  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false
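
For the parameters above to take effect, the file has to be passed to the overcloud deploy command after the stock external-Ceph environment file, so that these parameter_defaults override the defaults. A minimal sketch, assuming the standard RHOSP 16.1 template path and that the file lives in /home/stack/templates:

# Keep all environment files from the existing deployment; only the two
# -e arguments relevant to external Ceph are shown here.
openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
    -e /home/stack/templates/ceph-additional-settings.yaml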
  • Here is the ceph auth list entry for our openstack user (a sketch of how these caps can be created or corrected follows the output):
client.openstack-cc2
        key: SECRET/KEY==
        caps: [mgr] allow *
        caps: [mon] profile rbd, allow command "osd blacklist"
        caps: [osd] profile rbd pool=backups_cc2, profile rbd pool=vms_cc2, profile rbd pool=volumes_ssd_cc2, profile rbd pool=volumes_hdd_cc2, profile rbd pool=images_cc2, profile rbd pool=metrics_cc2
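
For reference, an existing account's caps can be corrected on the external cluster with the ceph auth tooling. A sketch, assuming admin access to the external cluster and the pool names used in this case:

# Adjust the caps of the existing client in place (run with admin
# credentials on the external cluster):
ceph auth caps client.openstack-cc2 \
    mgr 'allow *' \
    mon 'profile rbd, allow command "osd blacklist"' \
    osd 'profile rbd pool=backups_cc2, profile rbd pool=vms_cc2, profile rbd pool=volumes_ssd_cc2, profile rbd pool=volumes_hdd_cc2, profile rbd pool=images_cc2, profile rbd pool=metrics_cc2'

# Print the resulting entry to verify it matches what the overcloud expects:
ceph auth get client.openstack-cc2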
  • The ceph.client.openstack.keyring installed on the controllers looks wrong: it still uses the default client.openstack name and the default pools rather than the alternate ones we configured. We tried manually editing it to client.openstack-cc2 and updating the pools, but that did not help. Here is the default keyring that gets installed (a way to cross-check what Glance is actually configured to use follows the listing):
[root@overcloud-controller-0-cc2 ceph]# cat ceph.client.openstack.keyring
[client.openstack]
        key = SECRET/KEY2==
        caps mgr = "allow *"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"
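
To cross-check which user, pool, and keyring Glance was actually configured with (as opposed to the keyrings present in /etc/ceph), the rendered glance-api.conf on a controller can be inspected. A sketch, assuming the standard RHOSP 16.1 config-data path on the controller:

# Keyrings actually installed on the controller:
ls -l /etc/ceph/

# User, pool, and ceph.conf path that the deployment wrote into Glance's
# configuration:
sudo grep -E 'rbd_store_(user|pool|ceph_conf)' \
    /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf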
