Issued: 2019-10-22
Updated: 2019-10-22
RHBA-2019:3173 - Bug Fix Advisory
Synopsis
Red Hat Ceph Storage 3.3 bug fix and enhancement update
Type/Severity
Bug Fix Advisory
Topic
An update is now available for Red Hat Ceph Storage 3.3.
Description
Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.
Bug Fixes and Enhancements:
For detailed information on changes in this release, see the Red Hat Ceph
Storage 3.3 Release Notes available at:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.3/html/release_notes/index
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
As an illustration only (not part of the advisory), the sketch below shows one way to drive the standard yum update commands from Python on a non-containerized RHEL 7 Ceph node. It assumes a host subscribed to the Red Hat Ceph Storage 3.3 repositories; containerized or ceph-ansible managed clusters follow the documented procedure in the article above instead.
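    # Hedged sketch, assuming a RHEL 7 host with Python 3.5+ and the
    # Red Hat Ceph Storage 3.3 repositories enabled; adjust for your
    # deployment type before use.
    import subprocess

    def run(cmd):
        # Echo the command, then run it, failing loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # List applicable advisories so previously released errata can be
    # applied first, as the Solution section requires.
    run(["yum", "updateinfo", "list"])

    # Apply all pending updates, including this advisory's packages.
    run(["yum", "-y", "update"])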
Affected Products
- Red Hat Enterprise Linux Server 7 x86_64
- Red Hat Ceph Storage MON 3 x86_64
- Red Hat Ceph Storage OSD 3 x86_64
- Red Hat Ceph Storage for Power 3 ppc64le
- Red Hat Ceph Storage MON for Power 3 ppc64le
- Red Hat Ceph Storage OSD for Power 3 ppc64le
Fixes
- BZ - 1508506 - [Ceph-ansible]: Stopping of NFS-server service required prior to installation of RGW-NFS
- BZ - 1512960 - [ceph-ansible] journal on device should be removed from osds.yml for osd scenario lvm
- BZ - 1567949 - failed copy object operation does not remove reference correctly
- BZ - 1569413 - Add support to shrink-osd.yml to shrink OSDs deployed with ceph-volume
- BZ - 1628734 - ceph-volume lvm zap not working with lvs and --destroy
- BZ - 1644623 - [RFE] 'osd_auto_discovery' feature with batch command
- BZ - 1663289 - [Ceph-Dashboard] [No Data] Overall Ceph Health alert due to Prometheus scrape timeout
- BZ - 1671585 - [RFE] Implement lazy omap usage statistics per pg/osd
- BZ - 1690591 - site-docker.yml.sample failure due to not noticing that firewalld is present
- BZ - 1704901 - [RFE] RGW renaming a user
- BZ - 1716589 - MDS may give spurious ENOSPC error when creating large number of directories
- BZ - 1717199 - ceph-mgr Segmentation fault in thread 7fa787013700 thread_name:prometheus
- BZ - 1719971 - [ceph-volume] NVME disk with wal/db does not mount after node reboot
- BZ - 1720000 - kernel client mount fails to parse hidden secret during remount
- BZ - 1720004 - CephFS vxattr ceph.dir.rctime xattr incorrectly prepends 09 to nanosecond time
- BZ - 1724366 - pybind: luminous volume client breaks against nautilus cluster
- BZ - 1725163 - large omap object warning threshold is too high
- BZ - 1728069 - [BlueStore] RHCS 3.x - backport and make default bluestore allocator as bitmap
- BZ - 1728132 - [ceph-ansible] shrink-osd.yml does not remove the container associated with 'prepare' command
- BZ - 1728357 - [RFE] BlueStore tool to check fragmentation
- BZ - 1728710 - db volume not deleted after zapping with osd-fsid
- BZ - 1731310 - [ceph-validate]: lvm scenario: abort site-docker.yml if GPT headers are found on devices
- BZ - 1731486 - Applying different expiration lifecycle rules to different objects in same bucket, results show same rule applied to all objects
- BZ - 1731919 - [cephmetrics-ansible] - containerized cluster - playbook fails at formulating container name
- BZ - 1732101 - RGW Multisite Sync Log trimming == high latency on client OPs
- BZ - 1733406 - After cluster update (from 3.2z2 to 3.3 downstream bits) RGW beast configuration changed to civetweb
- BZ - 1735355 - [RHCS 3.x][RFE] Bluestore rocksdb compaction threads defaults to 2
- BZ - 1737504 - MDS stray count may grow due to client not trimming its cache correctly
- BZ - 1737573 - ceph-iscsi: gwcli requests take minutes due to always trying ipv6
- BZ - 1740668 - crash from accept() in beast frontend
- BZ - 1742993 - After upgrade to RHCS 3.2.2, client application connects and disconnects frequently with connection reset by peer entries
- BZ - 1744549 - cephmetrics-ansible only opens port 9283 on first mgr
- BZ - 1744766 - rgw: luminous: bucket cannot be created w/non-default location constrained (even default:<custom target>)
- BZ - 1744777 - [ceph-rgw] Moving non-tenanted buckets to tenanted buckets fails
- BZ - 1745200 - OSD is down after upgrade - ** ERROR: osd init failed: (5) Input/output error
- BZ - 1746127 - [RHCS 3.3.z] sync status reports complete while bucket status does not
- BZ - 1747207 - [ceph-rgw] bucket rename does not work
- BZ - 1749754 - [GSS]When using Beast in RHCS 3.3, 'experimental-data-corruption-may-happen feature' flag is still required even though it is GA
- BZ - 1750005 - check_running_containers does not consider pacemaker managed ceph-nfs
- BZ - 1753000 - Segmentation fault in radosgw-admin bucket reshard command
- BZ - 1753178 - Stand-by MGRs are going down after enabling all mgr modules sequentially
CVEs
(none)
References
(none)
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.