Chapter 4. Bug fixes
This section describes bugs fixed in this release of Red Hat Ceph Storage that have a significant impact on users. In addition, it includes descriptions of known issues from previous versions that have been fixed.
4.1. Ceph Ansible
Containerized OSDs start after reboot
Previously, in a containerized environment, some OSDs might not start after a reboot of the Ceph storage nodes. This was due to a race condition. The race condition has been resolved, and all OSDs now start properly after a reboot.
Ceph Ansible no longer overwrites existing OSD partitions
When an OSD node reboots, it is possible that its disk devices will be assigned different device paths. For example, prior to restarting the OSD node, /dev/sda was an OSD, but after the reboot the same OSD is now /dev/sdb. Previously, Ceph Ansible treated any disk without a "ceph" partition as a valid OSD disk, which could cause existing partitions to be overwritten. With this release, if any partition is found on a disk, the disk is not used as an OSD.
Ansible no longer creates unused systemd unit files
Previously, when installing the Ceph Object Gateway by using the ceph-ansible utility, ceph-ansible created systemd unit files on each Ceph Object Gateway host for all Object Gateway instances, including those located on other hosts. Only the unit file corresponding to the hostname of the Ceph Object Gateway was active; the others were inactive and, as such, caused no problems. With this update to Ceph, the unused unit files are no longer created.
The OpenStack keys are copied to all Ceph Monitors
Previously, when a task was configured with run_once: true and the condition inventory_hostname == groups.get(client_group_name) | first, the task could be skipped when the node being run was not the first node in the group. As a result, in a deployment with a single client node, the keyrings were not created. With this release, this situation no longer occurs, and the OpenStack keys are copied to all Ceph Monitor nodes.
The ceph-ansible utility removes the ceph-create-keys container from the same node where it was created
Previously, the ceph-ansible utility did not always remove the ceph-create-keys container from the same node where it was created. Because of this, the deployment could fail with the message "Error response from daemon: No such container: ceph-create-keys." With this update to Red Hat Ceph Storage, ceph-ansible only tries to remove the container from the node where it was actually created, avoiding the error and allowing the deployment to complete.
Upgrading Red Hat Ceph Storage 2 to version 3 sets the sortbitwise option properly
Previously, a rolling upgrade from Red Hat Ceph Storage 2 to Red Hat Ceph Storage 3 would fail because the OSDs would never initialize. This was because Ceph Ansible did not set the sortbitwise option properly. With this release, Ceph Ansible sets sortbitwise properly, so the OSDs can start.
Ceph Ansible now installs the gwcli command during the iSCSI gateway installation
Previously, when using Ansible playbooks from ceph-ansible to configure an iSCSI target, the gwcli command, which is needed to verify the installation, was not available. This was because the ceph-iscsi-cli package, which provides the gwcli command, was not installed by the Ansible playbooks. With this update to Red Hat Ceph Storage, the Ansible playbooks now install the ceph-iscsi-cli package as part of the iSCSI target configuration.
Setting the mon_use_fqdn or the mds_use_fqdn options to true fails the Ceph Ansible playbook
Starting with Red Hat Ceph Storage 3.1, Red Hat no longer supports deployments with fully qualified domain names. If either the mon_use_fqdn or mds_use_fqdn options are set to true, then the Ceph Ansible playbook will fail. If the storage cluster is already configured with fully qualified domain names, then you must set the use_fqdn_yes_i_am_sure option to true in the group_vars/all.yml file.
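For example, for a cluster that was already deployed with fully qualified domain names, the acknowledgement can be recorded in the group_vars/all.yml file along these lines (a minimal sketch using the options named above):

mon_use_fqdn: true
use_fqdn_yes_i_am_sure: true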
Containerized OSDs with the osd_auto_discovery flag set to true properly restart during a rolling update
Previously, when using the Ansible rolling update playbook in a containerized environment, OSDs for which the osd_auto_discovery flag was set to true were not restarted, and the OSD services continued to run with the old image. With this release, these OSDs are restarted as expected, as shown in the sketch below.
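For reference, automatic disk discovery is enabled through the OSD group variables along these lines (a sketch; in a standard ceph-ansible layout this setting lives in the group_vars/osds.yml file):

osd_auto_discovery: true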
Ceph installation no longer fails when trying to deploy the Object Gateway
When deploying the Ceph Object Gateway using Ansible, the rgw_hostname variable was not being set on the Object Gateway node, but was incorrectly set on the Ceph Monitor node. In this release, the rgw_hostname variable is set properly and applied to the Ceph Object Gateway node.
Installing the Object Gateway no longer fails for container deployments
Previously, when installing the Object Gateway into a container, the following error could occur:
fatal: [aio1_ceph-rgw_container-fc588f0a]: FAILED! => {"changed": false,
"cmd": "ceph --cluster ceph -s -f json", "msg": "[Errno 2] No such file
or directory"}
The task failed because the ceph-common package, which provides the ceph command, was not installed in the container. With this release, the Ansible task is delegated to a Ceph Monitor node, which allows the execution to happen in the correct order.
RADOS index object creation no longer assumes the rados command is available on bare metal
Previously, the creation of the RADOS index object in ceph-ansible assumed the rados command was available on the bare-metal node, which is not always true when deploying in containers. This could cause the task that starts NFS to fail because the rados command was missing on the host. With this update to Red Hat Ceph Storage, during containerized deployments the Ansible playbook runs rados commands from within the Ceph container instead.
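For illustration, running the command from inside a container rather than on the host looks along these lines (a hypothetical sketch; the container name, pool, object, and file names are placeholders):

# run rados inside the Ceph container instead of on the bare-metal host
docker exec ceph-mon-mon01 rados -p mypool put myobject /tmp/myobject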
4.2. Ceph Dashboard
The Ceph-pools Dashboard no longer displays previously deleted pools
Previously, in the Red Hat Ceph Storage Dashboard, the Ceph-pools Dashboard continued to display pools that had been deleted from the Ceph Storage Cluster. With this update to Ceph, deleted pools are no longer shown.
Installation of Red Hat Ceph Storage Dashboard with a non-default password no longer fails
Previously, the Red Hat Storage Dashboard (cephmetrics) could only be deployed with the default password. To use a different password it had to be changed in the Web UI afterwards.
With this update to Red Hat Ceph Storage you can now set the Red Hat Ceph Storage Dashboard admin username and password using Ansible variables grafana.admin_user and grafana.admin_password.
For an example of how to set these variables, see the group_vars/all.yml.sample file.
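For example, the credentials could be expressed in the group_vars/all.yml file along these lines (a sketch; the nesting is one plausible rendering of the dotted variable names, and the values are placeholders):

grafana:
  admin_user: admin
  admin_password: StrongPassword123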
OSD ids in 'Filestore OSD latencies' are no longer repeated
Previously, after rebooting OSDs, the OSD IDs were repeated in the Filestore OSD Latencies section of the Ceph OSD Information page of the Red Hat Ceph Storage Dashboard.
With this update to Red Hat Ceph Storage, the OSD IDs are no longer repeated in the Ceph OSD Information dashboard after a reboot of an OSD node. This was fixed as part of a redesign of the underlying data reporting.
4.3. ceph-disk Utility
The ceph-disk utility defaulted to BlueStore, and replacing an OSD required passing the --filestore option
Previously, the ceph-disk utility used BlueStore as the default object store when creating OSDs. If the --filestore option was not used, this caused problems in storage clusters using FileStore. In this release, the ceph-disk utility defaults to FileStore again, as it did originally.
(BZ#1572722)
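As an illustration, on the affected releases a FileStore OSD had to be requested explicitly, whereas the default now applies again (a sketch; /dev/sdb is a placeholder device):

# previously required to get a FileStore OSD
ceph-disk prepare --filestore /dev/sdb
# with this release, FileStore is the default again
ceph-disk prepare /dev/sdb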
4.4. CephFS
Load on MDS daemons is not always balanced fairly or evenly in multiple active MDS configurations
Previously, in certain cases, the MDS balancers offloaded too much metadata to another active daemon, or none at all.
As of this update to Red Hat Ceph Storage, this is no longer an issue: several balancer fixes and optimizations have been made that address it.
MDS no longer asserts while in starting/resolve state
Previously, when increasing "max_mds" from "1" to "2", if the Metadata Server (MDS) daemon was in the starting/resolve state for a long period of time, then restarting the MDS daemon led to an assert. This caused the Ceph File System (CephFS) to enter a degraded state. With this update to Red Hat Ceph Storage, the underlying issue has been fixed, and increasing "max_mds" no longer causes CephFS to enter a degraded state.
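For reference, the number of active MDS daemons is increased with a command along these lines (the file system name cephfs is a placeholder):

ceph fs set cephfs max_mds 2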
Client I/O sometimes fails for CephFS FUSE clients
Client I/O sometimes failed for Ceph File System (CephFS) File System in User Space (FUSE) clients with the error "transport endpoint shutdown" due to an assert in the FUSE service. With this update to Red Hat Ceph Storage, the issue is resolved.
4.5. Ceph Manager Plugins
Setting pg_num and pgp_num through the RESTful API now works
Previously, attempts to change the pgp_num or pg_num parameters through the RESTful API plugin failed. With this update to Red Hat Ceph Storage, the API can change the pgp_num and pg_num parameters successfully.
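For example, the placement group counts of a pool can be changed with a request along these lines (a sketch, assuming the restful module listens on its default port 8003 and an API key was created with the ceph restful create-key command; the host, pool ID, and values are placeholders):

curl -k -u admin:$API_KEY -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"pg_num": 128, "pgp_num": 128}' \
  https://mgr-host:8003/pool/1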
4.6. ceph-volume Utility
The SELinux context is set correctly when using ceph-volume for new filesystems
The ceph-volume utility was not labeling newly created filesystems, which caused AVC denial messages in the /var/log/audit/audit.log file. In this release, the ceph-volume utility sets the proper SELinux context (ceph_var_lib_t) on the OSD filesystem.
4.7. Containers
The containerized Object Gateway daemon now reads options from the Ceph configuration file
Previously, when launching the Object Gateway daemon in a container, the daemon would override any rgw_frontends options. This made it impossible to add extra options, such as the radosgw_civetweb_num_threads option. In this release, the Object Gateway daemon reads the options found in the Ceph configuration file, by default /etc/ceph/ceph.conf.
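For example, extra frontend options can now be carried in the gateway section of the Ceph configuration file along these lines (a sketch; the instance name, port, and thread count are placeholders):

[client.rgw.gateway-node1]
rgw_frontends = civetweb port=8080 num_threads=512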
A dmcrypt OSD comes up after upgrading a containerized Red Hat Ceph Storage cluster to 3.x
Previously, on FileStore, ceph-disk created the lockbox partition for dmcrypt as partition number 3. With the introduction of BlueStore, this partition moved to partition number 5, but ceph-disk was still trying to create it as partition number 3, causing the OSD to fail. In this release, ceph-disk detects the correct partition to use for the lockbox.
4.8. iSCSI Gateway
Resizing a LUN on the Ceph target side is now reflected on clients
Previously, when using the iSCSI gateway, resized Logical Unit Numbers (LUNs) were not immediately visible to initiators. This required a workaround of restarting the iSCSI gateway after resizing a LUN to expose it to the initiators.
With this update to Red Hat Ceph Storage, iSCSI initiators can now see a resized LUN immediately after rescan.
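For example, on a Linux initiator using open-iscsi, the rescan can be triggered along these lines (a sketch; multipath setups may additionally need the multipath maps reloaded):

iscsiadm -m session --rescan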
The iSCSI gateway supports custom cluster names
Previously, the Ceph iSCSI gateway only worked with the default storage cluster name (ceph). In this release, the rbd-target-gw now supports arbitrary Ceph configuration file locations, which allows the use of storage clusters not named ceph.
The Ceph iSCSI gateway can be deployed using Ceph Ansible or using the command-line interface with a custom cluster name.
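For illustration, the cluster name could be carried in the gateway configuration along these lines (a hypothetical sketch modeled on the [config] section of the /etc/ceph/iscsi-gateway.cfg file; the values are placeholders):

[config]
cluster_name = mycluster
gateway_keyring = ceph.client.admin.keyring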
Pools and images with hyphens ('-') are no longer rejected by the API
Previously, the iSCSI gwcli utility did not support hyphens in pool or image names. As a result, it was not possible to use the gwcli utility to create a disk with a pool or image name that included hyphens ("-").
With this update to Red Hat Ceph Storage, the iSCSI gwcli utility correctly handles hyphens, and creating a disk using a pool or image name with hyphens is now supported.
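For example, a disk using hyphenated pool and image names can now be created along these lines (a sketch, assuming gwcli accepts one-shot commands in the style of targetcli; the pool, image, and size values are placeholders, and the same create command can instead be run from the /disks node of the interactive shell):

gwcli /disks create pool=rbd-pool image=disk-1 size=10g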
4.9. Object Gateway
Quota stats cache is no longer invalid
Previously in Red Hat Ceph Storage, quota values sometimes were not properly decremented. This could cause "quota exceeded" errors when the quota was not actually exceeded.
With this update to Ceph, quota values are properly decremented and no incorrect errors are printed.
Object compression works properly
Previously, when using zlib compression with the Object Gateway, objects were not being compressed properly. The actual size and used size were listed as the same despite log messages saying compression was in use. This was due to the use of buffers that were too small. With this update to Red Hat Ceph Storage, larger buffers are used and compression works as expected.
Marker objects no longer appear twice when listing objects
Previously, due to an error in processing, "marker" objects that were used to continue multi-segment listings were included incorrectly in the listing result. Consequently, such objects appeared twice in the listing output. With this update to Red Hat Ceph Storage, objects are only listed once, as expected.
Resharding a bucket that has ACLs set no longer alters the bucket ACL
Previously, in the Ceph Object Gateway (RGW), resharding a bucket with an access control list (ACL) set altered the bucket ACL. With this update to Red Hat Ceph Storage, ACLs on a bucket are preserved even when the bucket is resharded.
Intermittent HTTP error code 409 no longer occurs with compression enabled
Previously, HTTP error codes could be encountered because EEXIST was incorrectly handled in a special case in RGWPutObj::execute(). This caused the PUT operation to be reported to the client as failed when it should have been retried. In this update to Red Hat Ceph Storage, the handling of the EEXIST condition has been corrected and this issue no longer occurs.
RGW no longer spikes to 100% CPU usage with no op traffic
Previously, in certain situations, an infinite loop could be encountered in rgw_get_system_obj(), which could cause spikes in CPU usage. With this update to Red Hat Ceph Storage, this specific issue has been resolved.
Cache entries now refresh as expected
The new time-based metadata cache entry expiration logic did not update the expiration time of already-cached entries that were updated in place. As a result, cache entries became permanently stale after expiration, leading to a performance regression because metadata objects were effectively not cached and were always read from the cluster. To resolve this issue, in Red Hat Ceph Storage 3.1, logic has been added to update the expiration time of cached entries when they are updated.
Ceph is now able to delete Swift ACLs
Previously, the Swift CLI client could be used to set, but not to delete ACLs because the Swift header parsing logic could not detect ACL delete requests. With this update to Red Hat Ceph Storage, the header parsing logic has been fixed, and users can delete ACLs with the Swift client.
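For example, a container's read ACL can now be removed by posting an empty value with the Swift client (a sketch; the container name is a placeholder and authentication options are omitted):

swift post -r '' my-container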
4.10. Object Gateway Multisite
Some versioned objects do not sync when uploaded with 's3cmd sync'
Operations like PutACL that only modify object metadata do not generate a LINK_OLH entry in the bucket index log. When processed by multisite sync, these operations were skipped with the message versioned object will be synced on link_olh. Because of sync squashing, this caused the original LINK_OLH operation to be skipped as well, preventing the object version from syncing at all. With this update to Red Hat Ceph Storage this issue no longer occurs.
4.11. RADOS
The Ceph OSD daemon segfaults with "in thread 7f02ae07d700 thread_name:safe_timer"
Previously, a subtle race condition in the ceph-osd daemon could lead to the corruption of the osd_health_metrics data structure, which resulted in corrupted data being sent to, and reported by, the Ceph Manager. This ultimately caused a segmentation fault. With this update to Red Hat Ceph Storage, a lock is now acquired before modifying the osd_health_metrics data structure.
(BZ#1580300)
Reduced OSD memory usage
Buffers from client operations were not being rebuilt, which led to unnecessary memory growth in the OSD process. Rebuilding the buffers has reduced the memory footprint of OSDs in Object Gateway workloads.
