Issued: 2015-10-01
Updated: 2015-10-01
RHBA-2015:1572 - Bug Fix Advisory
Synopsis
ceph bug fix update
Type/Severity
Bug Fix Advisory
Topic
Updated ceph packages that fix several bugs are now available for Ubuntu 12.04 and Ubuntu 14.04.
Description
Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment tools, and support services.
Fixed bugs:
- The RADOS gateway (RGW) can now properly decode slash characters ("/") in clients' upload IDs. (BZ#1183182)
- Object attribute updates in RGW could race with other object update operations, which led to inconsistent object states, such as incomplete object deletions. RGW now handles attribute updates correctly. (BZ#1206963)
- Recreating a previously existing bucket in RGW did not remove the bucket instance metadata object and created a redundant object in the RGW pool. The redundant objects are no longer generated. (BZ#1212524)
- RGW did not properly cache users' Keystone tokens and validated all Keystone tokens for every Swift operation. RGW now caches tokens correctly, so that token validation occurs only when necessary. (BZ#1213999)
- Modifying a user's Access Control List (ACL) permissions for an object in RGW inappropriately caused the user to become the owner of the object. This update fixes this bug. (BZ#1214051)
- RGW could fail to update the bucket attributes during a Swift API "POST" operation. RGW now correctly updates the bucket attributes. (BZ#1214058)
- RGW no longer terminates unexpectedly when using Keystone authentication to copy an object. (BZ#1214061)
- Downloading an object larger than 512 KB with a range header no longer fails when using the Swift API. (BZ#1214854)
- When a part of a multi-part object was resent, the object became broken due to a discrepancy between the object size reported when listing the object and when stating it. Multi-part objects no longer become broken in this case. (BZ#1222091)
- When the number of placement groups (PGs) in a pool was increased, Ceph did not send watch or notify operations correctly. Consequently, the librbd library presented inconsistent RBD snapshot data. Now, Ceph correctly re-sends operations. (BZ#1245785)
- When using OpenStack's Cinder RADOS Block Device (RBD) back-end driver with the Ceph administration socket enabled, Ceph could leak file descriptors and eventually consume the maximum allowed number of open files. This caused Cinder's RBD connections to fail. Ceph now closes the administration socket appropriately. (BZ#1220496)
- The Content-Length header is now correctly created when creating a container using the Swift API. (BZ#1213988)
- When reopening log files, Object Storage Devices (OSDs) could write data to the wrong file descriptor. Consequently, log entries were lost, or log data was written to a file descriptor that was opened by the filestore; the latter case could cause data corruption. This bug has been fixed. (BZ#1247752, BZ#1250710)
- Under certain circumstances, copying an object onto itself produced a truncated object. The truncated object had correct metadata, including the original size, but the underlying RADOS object was smaller. Consequently, when a client attempted to fetch the object, it received less data than indicated by the Content-Length header, blocked waiting for more data, and eventually timed out. This bug has been fixed, and the object can now be read successfully in this scenario. (BZ#1258617)
- Insufficient LevelDB performance in Ceph monitors could cause spurious elections, which led to slow requests during rebalancing. Ceph now caches the OSD map sent to monitor clients, thus improving cluster performance. (BZ#1262460)
- The upstart init system restarted Ceph daemons too frequently, up to five times in 30 seconds. This could lead to startup respawn loops that mask other issues, such as disk state problems. This update adjusts the upstart settings to restart daemons three times in 30 minutes. (BZ#1262974)
- There are no longer two separate ISO images for Ubuntu 12.04 and 14.04. (BZ#1253351)
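
The adjusted restart policy described in BZ#1262974 corresponds to upstart's `respawn limit COUNT INTERVAL` stanza. A minimal sketch of what the change looks like in a Ceph daemon job file (the file path and surrounding stanzas are illustrative, not taken verbatim from the shipped packages; only the limit values come from this advisory):

```
# /etc/init/ceph-osd.conf (illustrative fragment)
#
# Previous behavior: up to 5 restarts in 30 seconds,
# which could mask underlying problems such as bad disks:
#   respawn limit 5 30
#
# Updated behavior: at most 3 restarts in 30 minutes (1800 seconds).
respawn
respawn limit 3 1800
```

If the daemon dies more often than the limit allows, upstart stops respawning it, so persistent failures surface in the logs instead of looping.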
Solution
Users of ceph are advised to upgrade to these updated packages, which fix these bugs.
Before applying this update, make sure all previously released errata relevant to your system have been applied.
Affected Products
- Red Hat Ceph Storage 1.2 for RHEL 7 x86_64
- Red Hat Ceph Storage 1.2 for RHEL 6 x86_64
- Red Hat Ceph Storage Calamari 1.2 for RHEL 7 x86_64
- Red Hat Ceph Storage Calamari 1.2 for RHEL 6 x86_64
- Red Hat Ceph Storage MON 1.2 for RHEL 7 x86_64
- Red Hat Ceph Storage MON 1.2 for RHEL 6 x86_64
- Red Hat Ceph Storage OSD 1.2 for RHEL 7 x86_64
- Red Hat Ceph Storage OSD 1.2 for RHEL 6 x86_64
Fixes
- BZ - 1250710 - [1.2.3 - Ubuntu] backport of data-loss critical fix
- BZ - 1262974 - upstart: make config less generous about restarts
CVEs
(none)
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.