When deploying a Ceph cluster using the ceph-ansible playbook, the resulting ceph.conf file contains two conflicting 'cluster_network' option entries

Issue

  • When deploying a Ceph cluster with the ceph-ansible playbook, the resulting ceph.conf file contains two conflicting cluster_network option entries (see the verification sketch after this list), for example:
# cat /etc/ceph/ceph.conf 
[global]
cluster_network = 10.10.100.0/24 <---
max open files = 131072
fsid = 78a15451-fe2b-4627-a99d-9e060d0aecf1

...

[osd]
osd mount options xfs = noatime,largeio,inode64,swalloc
osd mkfs options xfs = -f -i size=2048
public_network = 192.168.100.0/24
cluster_network = 192.168.100.0/24 <---
osd mkfs type = xfs
osd journal size = 1024
  • A cluster containing the conflicting entries above fails to import into an existing Red Hat Storage Console deployment. The import fails while trying to detect Red Hat Ceph Storage 2.0+ on all of the monitors in the cluster, and appears unable to detect the FQDNs of some of the MONs being imported (see the FQDN checks after this list).
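
The duplicate entries can be confirmed directly on an affected node. The following is a generic verification sketch, not output taken from this article; osd.0 is a placeholder for any OSD ID running on the host.

List every occurrence of the option in the rendered configuration file:

# grep -n cluster_network /etc/ceph/ceph.conf

Ask a running OSD which value it actually applied, via its admin socket:

# ceph daemon osd.0 config get cluster_network

Because section-specific settings override [global] in ceph.conf, the 192.168.100.0/24 value under [osd] is the one the OSD daemons apply when the two entries disagree.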

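The second item describes an import failure that appears related to FQDN detection on the MONs. The checks below are general diagnostics for that symptom, not the documented resolution; run the first two on each monitor node and the last from any node with an admin keyring.

# hostname -f
# getent hosts "$(hostname -f)"
# ceph mon dump
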
Environment

  • Red Hat Ceph Storage 2.0
  • Red Hat Storage Console
  • ceph-ansible
