Chapter 7. Known issues

This section documents known issues found in this release of Red Hat Ceph Storage.

7.1. The Cephadm utility

Crash daemon might not be able to send crash reports to the storage cluster

Due to an issue with the crash daemon configuration, it might not be possible to send crash reports to the cluster from the crash daemon.

(BZ#2062989)

Users are warned while upgrading to Red Hat Ceph Storage 5.2

Buckets resharded in Red Hat Ceph Storage 5 might not be understood by a Red Hat Ceph Storage 5.2 Ceph Object Gateway daemon. An upgrade warning/blocker was therefore added to ensure that all users upgrading to Red Hat Ceph Storage 5.2 are aware of the issue and can downgrade if they were previously using Red Hat Ceph Storage 5.1 with object storage.

As a workaround, users who are not using object storage, or who are upgrading from a version other than 5.1, can run ceph config set mgr mgr/cephadm/no_five_one_rgw --force to remove the warning/blocker and return all operations to normal. By setting this configuration option, users acknowledge that they are aware of the Ceph Object Gateway issue before they upgrade to Red Hat Ceph Storage 5.2.
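As a rough sketch, the workaround might be applied and verified as follows. Note that ceph config set normally expects a value argument, so the value true shown here is an assumption, not part of the documented command:

```shell
# Acknowledge the Ceph Object Gateway resharding issue before upgrading.
# Assumption: the option is boolean, so "true" is supplied as the value.
ceph config set mgr mgr/cephadm/no_five_one_rgw true --force

# Confirm the option is set before starting the upgrade.
ceph config get mgr mgr/cephadm/no_five_one_rgw
```

Both commands must be run against a reachable cluster with an active Ceph Manager.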

(BZ#2104780)

HA-backed I/O operations are not maintained across NFS daemon failover because HAProxy configurations are not updated

The HAProxy configurations are not updated when failing over NFS daemons from an offline host to an online host. As a result, HA-backed I/O operations directed to the virtual IP that the NFS daemon is on are not maintained across the failover.

(BZ#2106849)

7.2. Ceph Dashboard

Creating an ingress service with SSL from the Ceph Dashboard does not work

Ingress service creation with SSL from the Ceph Dashboard does not work because the form requires the user to populate the Private key field, which is no longer a required field.

To work around this issue, create the ingress service using the Ceph Orchestrator CLI, which completes successfully.
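As an illustration of the CLI workaround, an ingress service with SSL can be described in a service specification and applied with the orchestrator. The service names, virtual IP, ports, and certificate below are placeholders, not values from this release note:

```shell
# Hypothetical ingress specification; adjust service_id, backend_service,
# virtual_ip, the ports, and the certificate to match your cluster.
cat > ingress.yaml << 'EOF'
service_type: ingress
service_id: rgw.myrgw
placement:
  count: 1
spec:
  backend_service: rgw.myrgw
  virtual_ip: 192.168.0.100/24
  frontend_port: 443
  monitor_port: 1967
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
EOF

# Apply the specification with the Ceph Orchestrator CLI.
ceph orch apply -i ingress.yaml
```

The certificate body is elided; supply the full PEM-encoded certificate and key material appropriate to your deployment.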

(BZ#2080276)

“Throughput-optimized” option is recommended for clusters containing SSD and NVMe devices

Whenever the cluster has either only SSD devices or both SSD and NVMe devices, the “Throughput-optimized” option is recommended, even though it should not be. The recommendation has no impact on either the user or the cluster.

As a workaround, users can deploy OSDs according to their desired specifications with the “Advanced” mode. Apart from this UI issue, all the options in the “Simple” mode remain usable.

(BZ#2101680)

7.3. Ceph File System

The getpath command causes automation failure

The assumption that the directory name returned by the getpath command is the directory under which snapshots are created causes automation failure and confusion.

As a workaround, use the directory path one level higher than the path returned by the getpath command when running the snap-schedule add command. Snapshots are available one level higher than the level returned by the getpath command.
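For example, assuming a hypothetical file system named cephfs and a subvolume named subvol1 (both illustrative), the schedule is added against the parent of the path that getpath returns:

```shell
# getpath returns a path such as /volumes/_nogroup/subvol1/<uuid>
ceph fs subvolume getpath cephfs subvol1

# Workaround: pass the directory one level above the returned path
# to snap-schedule add (here, with an hourly schedule).
ceph fs snap-schedule add /volumes/_nogroup/subvol1 1h
```

Both commands require a running cluster with the snap_schedule Manager module enabled.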

(BZ#2053706)

7.4. Ceph Object Gateway

Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.1 with Ceph Object Gateway configuration is not supported

Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.1 on any Ceph Object Gateway (RGW) clusters (single-site or multi-site) is not supported due to the known issue BZ#2100602.

For more information, see Support Restrictions for upgrades for RGW.

Warning

Do not upgrade Red Hat Ceph Storage clusters running on Red Hat Ceph Storage 5.1 and Ceph Object Gateway (single-site or multi-site) to the Red Hat Ceph Storage 5.2 release.