Chapter 5. Known Issues

This section documents known issues found in this release of Red Hat Ceph Storage.

The nfs-ganesha-rgw utility cannot write to the NFS mount if SELinux is enabled

Currently, the nfs-ganesha-rgw utility does not run in an unconfined SELinux domain. When SELinux is enabled, write operations to the NFS mount fail. To work around this issue, run SELinux in permissive mode.
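
For example, to switch SELinux to permissive mode on the node running the nfs-ganesha-rgw utility, run the following command as the root user. This change does not persist across reboots; to make it persistent, set SELINUX=permissive in the /etc/selinux/config file:

# setenforce 0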

(BZ#1535906)

The Ceph-pools Dashboard displays previously deleted pools

The Ceph-pools Dashboard continues to display pools that have been deleted from the Ceph Storage Cluster.

(BZ#1537035)

Installing the Red Hat Ceph Storage Dashboard with a non-default password fails

Currently, you must use the default password when installing the Red Hat Ceph Storage Dashboard. After the installation finishes, you can change the default password. For details, see the Changing the default Red Hat Ceph Storage Dashboard password section in the Administration Guide for Red Hat Ceph Storage 2.

(BZ#1537390)

In the Red Hat Ceph Storage Dashboard, the OSD IDs are repeated after a reboot of an OSD node

In the Ceph OSD Information dashboard, the OSD IDs are repeated after a reboot of an OSD node.

(BZ#1537505)

The Red Hat Ceph Storage Dashboard does not display certain graphs correctly

The Top 5 Pools by Capacity Used and Pool Capacity - Past 7 days graphs are not displayed on the Ceph Pools and Ceph Cluster dashboards. Also, on the Ceph OSD Information dashboard, the BlueStore IO Summary - all OSD’s @ 95%ile graph displays an Internal Error.

(BZ#1538124)

The Ceph version is not reflected in the Red Hat Ceph Storage Dashboard after the Monitor node is rebooted

In the Ceph Version Configuration section of the Ceph Cluster Dashboard, the Ceph Monitor version is displayed as "-" after a reboot of a Ceph Monitor node.

(BZ#1538319)

When adding an OSD to the Red Hat Ceph Storage Dashboard, some details are not displayed

When an OSD node is added to a Ceph storage cluster, details of the newly added OSD, such as the host name, Disk/OSD Summary, and OSD version, are not displayed in the Ceph Backend Storage, Ceph Cluster, and Ceph OSD Information dashboards.

(BZ#1538331)

Installing and upgrading containerized Ceph fails

Using fully qualified domain names (FQDN) in the /etc/hostname file causes installation and upgrade of containerized Ceph deployments to fail. When using the ceph-ansible playbook to install Ceph, the installation fails with the following error message:

"msg": "The task includes an option with an undefined variable. The error was: 'osd_pool_default_pg_num' is undefined

To work around the installation failure, change the FQDN in the /etc/hostname file to the short host name on all nodes in the storage cluster, and then rerun the ceph-ansible playbook to install Ceph. When upgrading Ceph with the rolling_update playbook, the upgrade fails with the following error message:

"FAILED - RETRYING: container | waiting for the containerized monitor to join the quorum"

To work around the upgrade failure, change the FQDN in the /etc/hostname file to the short host name on all nodes in the storage cluster, restart the corresponding Ceph daemons running on each node, and then rerun the rolling_update playbook to upgrade Ceph.
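
As an illustration of this workaround, the following sketch shows how a node whose /etc/hostname contains the FQDN mon1.example.com might be switched to its short host name, its containerized Monitor daemon restarted, and the playbook rerun. The host name, systemd unit name, playbook path, and ceph-ansible directory are examples only and can differ in your deployment:

# hostnamectl set-hostname mon1
# systemctl restart ceph-mon@mon1
# cd /usr/share/ceph-ansible
# ansible-playbook infrastructure-playbooks/rolling_update.yml

For the installation failure, rerun the installation playbook instead, for example site-docker.yml for containerized deployments.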

(BZ#1546127)

The copy_admin_key option, when set to true, does not copy the keyring to other nodes

When using ceph-ansible with the copy_admin_key option set to true, the administrator’s keyring is not copied to the other nodes in the Ceph Storage Cluster. To work around this issue, you must manually copy the administrator’s keyring file to the other nodes. For example, as the root user on the Ceph Monitor node:

# scp /etc/ceph/<cluster_name>.client.admin.keyring <destination_node_name>:/etc/ceph/
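
If the keyring must be distributed to several nodes, the copy can be wrapped in a simple shell loop. The node names osd1, osd2, and osd3 below are placeholders for the other nodes in your storage cluster:

# for node in osd1 osd2 osd3; do scp /etc/ceph/<cluster_name>.client.admin.keyring $node:/etc/ceph/; done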

(BZ#1546175)