Chapter 6. Known issues

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. The Cephadm utility

NFS-RGW issues in Red Hat Ceph Storage post-upgrade

Due to known NFS-RGW issues after an upgrade, it is recommended that customers using NFS-RGW defer their upgrade until Red Hat Ceph Storage 5.1.

(BZ#1842808)

The ceph orch host rm command does not remove the Ceph daemons in the host of a Red Hat Ceph Storage cluster

The ceph orch host rm command does not provide any output and does not remove the Ceph daemons running on the host. This is expected behavior, intended to avoid the accidental removal of Ceph daemons that could result in data loss.

To work around this issue, remove the Ceph daemons manually. Follow the steps in the Removing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide to remove the hosts from the Red Hat Ceph Storage cluster.
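
The following is a minimal sketch of that manual cleanup from the Cephadm CLI; the host and daemon names are placeholders, and the Operations Guide procedure remains the authoritative reference.

Example

# ceph orch ps host02
# ceph orch daemon rm mon.host02 --force
# ceph orch host rm host02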

(BZ#1886120)

The Ceph monitors are reported as stray daemons even after removal from the Red Hat Ceph Storage cluster

Cephadm reports the Ceph monitors as stray daemons even though they have been removed from the storage cluster.

To work around this issue, run the ceph mgr fail command, which allows the Ceph Manager to restart and clear the error. Note that if there is no standby Manager, the ceph mgr fail command makes the cluster temporarily unresponsive.
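
For reference, you can confirm that a standby Manager exists by checking the mgr line in the ceph -s output before failing the active Manager; the commands below are a sketch, and the host names in your output will differ.

Example

# ceph -s
# ceph mgr fail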

(BZ#1945272)

Access to the Cephadm shell is lost when monitors are moved to nodes without the _admin label

After bootstrap, access to the Cephadm shell is lost when the monitors are moved to other nodes that do not have the _admin label. To work around this issue, ensure that the destination hosts have the _admin label.
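
For example, the label can be applied with the Ceph Orchestrator before moving the monitors; host02 is a placeholder host name.

Example

# ceph orch host label add host02 _admin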

(BZ#1947497)

Upgrade of Red Hat Ceph Storage using Cephadm gets stuck if there are no standby MDS daemons

During an upgrade of a Red Hat Ceph Storage cluster with an existing MDS service and no active standby daemons, the upgrade process gets stuck.

To work around this issue, ensure that you have at least one standby MDS daemon before upgrading through Cephadm.

Run the ceph fs status FILE_SYSTEM_NAME command to check for standby daemons.

If there are no standby daemons, add MDS daemons and then upgrade the storage cluster. The upgrade works as expected when standby daemons are present.
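
For reference, checking for standby daemons and adding MDS daemons might look like the following; the file system name, placement count, and host names are placeholders.

Example

# ceph fs status cephfs
# ceph orch apply mds cephfs --placement="2 host01 host02"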

(BZ#1959354)

The ceph orch ls command does not list the correct number of OSDs that can be created in the Red Hat Ceph Storage cluster

The command ceph orch ls gives the following output:

Example

# ceph orch ls

osd.all-available-devices    12/16  4m ago     4h   *

According to this output, four of the sixteen OSDs have not started, which is incorrect.

To work around this issue, run the ceph -s command to verify that all the OSDs are up and running in the Red Hat Ceph Storage cluster.
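
For reference, the osd line in the services section of the ceph -s output shows how many OSDs are up and in; the counts below are illustrative and the output is abridged.

Example

# ceph -s
  services:
    osd: 16 osds: 16 up (since 3h), 16 in (since 3h)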

(BZ#1959508)

The ceph orch osd rm help command gives an incorrect parameter description

The ceph orch osd rm help command displays the usage as ceph orch osd rm SVC_ID… [--replace] [--force] instead of ceph orch osd rm OSD_ID… [--replace] [--force]. This prompts users to specify the SVC_ID when removing OSDs.

To work around this issue, use the OSD identification parameter, OSD_ID, to remove the OSDs of a Red Hat Ceph Storage cluster.
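
For example, an OSD can be removed by its ID and the removal monitored as follows; the OSD ID 0 is a placeholder.

Example

# ceph orch osd rm 0
# ceph orch osd rm status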

(BZ#1966608)

The configuration parameter osd_memory_target_autotune can be enabled

With this release, osd_memory_target_autotune is disabled by default. Users can enable OSD memory autotuning by running the following command:

ceph config set osd osd_memory_target_autotune true
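
After enabling the parameter, the setting can be verified with the ceph config get command:

ceph config get osd osd_memory_target_autotune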

(BZ#1939354)

6.2. Ceph Dashboard

Remove the services from the host before removing the hosts from the storage cluster on the Red Hat Ceph Storage Dashboard

Removing hosts on the Red Hat Ceph Storage Dashboard before removing their services causes the hosts to be in a stale, dead, or ghost state.

To work around this issue, manually remove all the services running on the host and then remove the host from the storage cluster using the Red Hat Ceph Storage Dashboard. If you remove the host without removing its services, you need to use the command-line interface to add the host again.

(BZ#1889976)

Users cannot create snapshots of subvolumes on the Red Hat Ceph Storage Dashboard

With this release, users cannot create snapshots of subvolumes on the Red Hat Ceph Storage Dashboard. If a user creates a snapshot of a subvolume on the dashboard, a 500 error is returned instead of a more descriptive error message.

(BZ#1950644)

The Red Hat Ceph Storage Dashboard displays OSDs of only the default CRUSH root children

The Red Hat Ceph Storage Dashboard considers only the children of the default CRUSH root and ignores other CRUSH bucket types, such as datacenter, zone, and rack. As a result, the CRUSH map viewer on the dashboard does not display OSDs that are not part of the default CRUSH root.

The tree view of OSDs of the storage cluster on the Ceph dashboard now resembles the ceph osd tree output.

(BZ#1953903)

Users cannot log in to the Red Hat Ceph Storage Dashboard with Chrome extensions or plugins

Users cannot log in to the Red Hat Ceph Storage Dashboard if certain Chrome extensions or plugins are used in the browser.

To work around this issue, either clear the cookies for a specific domain name in use or use the Incognito mode to access the Red Hat Ceph Storage Dashboard.

(BZ#1913580)

The graphs on the Red Hat Ceph Storage Dashboard are not displayed

The graphs on the Red Hat Ceph Storage Dashboard are not displayed because the Grafana server certificate is not trusted on the client machine.

To work around this issue, open the Grafana URL directly in the client internet browser and accept the security exception to see the graphs on the Ceph dashboard.

(BZ#1921092)

Incompatible approaches to manage NFS-Ganesha exports in a Red Hat Ceph Storage cluster

Currently, there are two different approaches to managing NFS-Ganesha exports in a Ceph cluster: using the dashboard and using the command-line interface. If exports are created with one approach, users might not be able to manage them with the other.

To work around this issue, Red Hat recommends adhering to a single method of deploying and managing NFS, thereby avoiding the potential duplication of exports or the management of non-modifiable NFS exports.

(BZ#1939480)

Dashboard related URL and Grafana API URL cannot be accessed with short hostnames

To work around this issue, on the Red Hat Ceph Storage Dashboard, in the Cluster drop-down menu, click Manager modules and change the settings from the short hostname URLs to the FQDN URLs. Then disable the dashboard module using the ceph mgr module disable dashboard command and re-enable it using the ceph mgr module enable dashboard command.
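
For reference, the module restart step can be run from the Cephadm shell as follows:

Example

# ceph mgr module disable dashboard
# ceph mgr module enable dashboard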

The dashboard should then be able to access the Grafana API URL and the other dashboard URLs.

(BZ#1964323)

HA-Proxy-RGW service management is not supported on the Red Hat Ceph Storage Dashboard

The Red Hat Ceph Storage Dashboard does not support HA proxy service for Ceph Object Gateway.

As a workaround, the HA-Proxy-RGW service can be managed using the Cephadm CLI. You can only view the service on the Red Hat Ceph Storage Dashboard.

(BZ#1968397)

Red Hat does not support NFS exports over the Ceph File System back end on the Red Hat Ceph Storage Dashboard

Red Hat does not support management of NFS exports that use the Ceph File System (CephFS) back end on the Red Hat Ceph Storage Dashboard. Currently, only NFS exports with the Ceph Object Gateway back end are supported.

(BZ#1974599)

6.3. Ceph File System

Backtrace now works as expected for CephFS scrub operations

Previously, the backtrace was not yet written to stable storage. Scrub activity reported a failure if the backtrace did not match the in-memory copy for a new and unsynced entry. A backtrace mismatch also occurred for a stray entry that was about to be purged permanently, because there was no need to save its backtrace to disk. In addition, during heavy metadata I/O, the raw stats might not match, because the raw stats accounting is not instantaneous.

To work around this issue, rerun the scrub when the system is idle and has had enough time to flush its in-memory state to disk. Once the metadata has been flushed to disk, these errors are resolved. Backtrace validation succeeds if no backtrace is found on disk and the file is new, or if the entry is stray and about to be purged.
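
For reference, a recursive scrub can be rerun from MDS rank 0 with a command similar to the following; the file system name cephfs is a placeholder.

Example

# ceph tell mds.cephfs:0 scrub start / recursive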

See the KCS Ceph status shows HEALTH_ERR with MDSs report damaged metadata for more details.

(BZ#1794781)

NFS mounts are now accessible with multiple exports

Previously, when multiple CephFS exports were created, reads and writes to the exports would hang, making the NFS mounts inaccessible. As a workaround, use a single export with Ganesha version 3.3-2 or earlier. With this release, multiple CephFS exports are supported when Ganesha version 3.3-3 or later is used.

(BZ#1909949)

The cephfs-top utility displays wrong mounts and missing metrics

The cephfs-top utility requires a newer kernel than the one currently shipped with Red Hat Enterprise Linux 8, because it depends on the complete set of performance statistics patches. Currently, there is no workaround for this known issue.

(BZ#1946516)

6.4. Ceph Object Gateway

The LC policy for a versioned bucket fails in between reshards

Currently, the lifecycle (LC) policy fails to work on a versioned bucket when versioning is suspended and re-enabled, with bucket resharding operations in between.

(BZ#1962575)

The radosgw-admin user stats command displays incorrect values for the size_utilized and size_kb_utilized fields

When a user runs the radosgw-admin user stats command after adding buckets to the Red Hat Ceph Storage cluster, the output displays incorrect values in the size_utilized and size_kb_utilized fields; they are always displayed as zero.
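
For reference, an invocation might look like the following; USER_ID is a placeholder and the output is abridged and illustrative.

Example

# radosgw-admin user stats --uid=USER_ID
{
    "stats": {
        ...
        "size_utilized": 0,
        "size_kb_utilized": 0,
        ...
    },
    ...
}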

There is no workaround for this issue; users can ignore these values.

(BZ#1986160)

6.5. Multi-site Ceph Object Gateway

Deleting a large number of objects through a lifecycle policy on the primary site does not delete the corresponding objects on the secondary site

In a multi-site configuration, deleting a large number of objects, for example 16.5 million objects, through a lifecycle (LC) policy on the primary site does not delete the corresponding number of objects on the secondary site.

(BZ#1976874)

The object size recorded in the bucket index is not reliably updated when an object is overwritten

When an object is overwritten, its size is not reliably updated in the bucket index, which leads to ambiguity in the statistics reported on the primary and secondary sites.

(BZ#1986826)

6.6. The Ceph Ansible utility

RBD mirroring does not work as expected after the upgrade from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5

The cephadm-adopt playbook does not bring up rbd-mirroring after the migration of the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5.

To work around this issue, add the peers manually:

Syntax

rbd mirror pool peer add POOL_NAME CLIENT_NAME@CLUSTER_NAME

Example

[ceph: root@host01 /]# rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b
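
The peer can then be verified with the rbd mirror pool info command, reusing the pool and cluster names from the example above.

Example

[ceph: root@host01 /]# rbd --cluster site-a mirror pool info image-pool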

For more information, see the Adding a storage cluster peer section in the Red Hat Ceph Storage Block Device Guide.

(BZ#1967440)

The cephadm-adopt.yml playbook currently fails when the dashboard is enabled on the Grafana node

Currently, the cephadm-adopt.yml playbook fails to run because it does not create the /etc/ceph directory on nodes that are deployed with only a Ceph Monitor.

To work around this issue, manually create the /etc/ceph directory on the Ceph Monitor node before running the playbook, and verify that the directory is owned by the ceph user's UID and GID.
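
A minimal sketch of this manual step follows, assuming the containerized ceph UID and GID of 167; verify the values used on your nodes before running the commands.

Example

# mkdir -p /etc/ceph
# chown 167:167 /etc/ceph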

(BZ#2029697)

6.7. Known issues with Documentation

  • Documentation for users to manage Ceph File system snapshots on the Red Hat Ceph Storage Dashboard

    Details for this feature will be included in the next version of the Red Hat Ceph Storage Dashboard Guide.

  • Documentation for users to manage hosts on the Red Hat Ceph Storage Dashboard

    Details for this feature will be included in the next version of the Red Hat Ceph Storage Dashboard Guide.

  • Documentation for users to import RBD images instantaneously

    Details for the rbd import command will be included in the next version of the Red Hat Ceph Storage Block Device Guide.