Ceph: "[WRN] LARGE_OMAP_OBJECTS: xx large omap objects" for CephFS Metadata Pool
Issue
The command 'ceph health detail' reports "[WRN] LARGE_OMAP_OBJECTS: xx large omap objects" for the CephFS metadata pool.
Example:
sh-4.4$ ceph status
  cluster:
    id:     e83cxxxx-cluster-identifier-obfuscated-yyyyc3b9a414
    health: HEALTH_WARN
            2 large omap objects   <--- Here

  services:
    mon: 3 daemons, quorum a,d,e (age 2d)
    mgr: a(active, since 2d)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 2d), 3 in (since 8M)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 177 pgs
    objects: 1.50M objects, 156 GiB
    usage:   560 GiB used, 6.8 TiB / 7.4 TiB avail
    pgs:     177 active+clean

  io:
    client: 12 KiB/s rd, 236 KiB/s wr, 4 op/s rd, 26 op/s wr
sh-4.4$ ceph health detail
HEALTH_WARN 2 large omap objects
[WRN] LARGE_OMAP_OBJECTS: 2 large omap objects
    2 large objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'   <--- Here
    Search the cluster log for 'Large omap object found' for more details.
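To find which objects triggered the warning, search the cluster log as the output suggests. A minimal sketch, assuming an ODF cluster with the rook-ceph toolbox pod deployed in the default openshift-storage namespace (on bare RHCS, run the same ceph/grep commands on a node with an admin keyring):

    # Query the cluster log at debug level; the 'Large omap object found'
    # entries are written by the OSDs during deep scrub:
    sh-4.4$ ceph log last 10000 debug cluster | grep -i 'large omap object found'

    # Alternatively, grep the OSD pod logs directly (the pod label is an
    # assumption based on a standard Rook/ODF deployment):
    $ for p in $(oc -n openshift-storage get pods -l app=rook-ceph-osd -o name); do
          oc -n openshift-storage logs "$p" | grep -i 'large omap object found'
      done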
sh-4.4$ cat ceph_df_detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    7.4 TiB  7.0 TiB  424 GiB   424 GiB       5.62
TOTAL  7.4 TiB  7.0 TiB  424 GiB   424 GiB       5.62

--- POOLS ---
POOL                                                    ID  PGS   STORED   (DATA)   (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
ocs-storagecluster-cephfilesystem-metadata               1   32  648 MiB  211 MiB  437 MiB    2.37k  1.0 GiB  633 MiB  437 MiB   0.02    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephfilesystem-data0                  2   32   23 GiB   23 GiB      0 B  852.64k  109 GiB  109 GiB      0 B   1.79    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephblockpool                         3   32  107 GiB  107 GiB   73 KiB   33.14k  309 GiB  309 GiB   73 KiB   4.91    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.control           4    8      0 B      0 B      0 B        8      0 B      0 B      0 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.meta              5    8  4.1 KiB  3.7 KiB    434 B       16  2.6 MiB  2.6 MiB    434 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.log               6    8  4.8 MiB  3.6 KiB  4.8 MiB      213   11 MiB  6.6 MiB  4.8 MiB      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.index     7    8    359 B      0 B    359 B       22    359 B      0 B    359 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec    8    8      0 B      0 B      0 B        0      0 B      0 B      0 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
.rgw.root                                                9    8  4.8 KiB  4.8 KiB      0 B       16  2.8 MiB  2.8 MiB      0 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.data     10   32    1 KiB    1 KiB      0 B        1  192 KiB  192 KiB      0 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
device_health_metrics                                   11    1      0 B      0 B      0 B        0      0 B      0 B      0 B      0    1.9 TiB            N/A          N/A    N/A         0 B          0 B
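Note the (OMAP) column: nearly all of the metadata pool's footprint (437 MiB of 648 MiB stored) is omap data, which is where CephFS keeps directory entries. To rank the pool's objects by omap key count without waiting for the next deep scrub, a rough sketch (it iterates over every object in the pool, so it can be slow; the pool name is taken from the output above):

    sh-4.4$ POOL=ocs-storagecluster-cephfilesystem-metadata
    sh-4.4$ for OBJ in $(rados -p "$POOL" ls); do
                # count the omap keys (directory entries) held by each object
                printf '%8d %s\n' "$(rados -p "$POOL" listomapkeys "$OBJ" | wc -l)" "$OBJ"
            done | sort -rn | head -5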
From MON or OSD log file(s):
debug 2022-12-15T09:12:03.941+0000 7f940c105700 0 log_channel(cluster) log [DBG] : 1.c deep-scrub ok
debug 2022-12-15T09:12:07.569+0000 7f940c105700 0 log_channel(cluster) log [WRN] : Large omap object found. Object: 1:89c35100:::10000005b45.01000000:head PG: 1.8ac391 (1.11) Key count: 202070 Size (bytes): 107299180
debug 2022-12-15T09:12:07.614+0000 7f940c105700 0 log_channel(cluster) log [DBG] : 1.11 deep-scrub ok
debug 2022-12-15T09:12:08.601+0000 7f940b103700 0 log_channel(cluster) log [WRN] : Large omap object found. Object: 1:cb0ff638:::1000000631b.01000000:head PG: 1.1c6ff0d3 (1.13) Key count: 206665 Size (bytes): 109739124
debug 2022-12-15T09:12:08.611+0000 7f940b103700 0 log_channel(cluster) log [DBG] : 1.13 deep-scrub ok
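The reported key counts (202070 and 206665) are just above the default deep-scrub warning threshold of 200000 omap keys per object. In a CephFS metadata pool, each flagged object is a directory fragment whose omap keys are that directory's entries, and the hexadecimal prefix of the object name (here 10000005b45) is the directory's inode number. A minimal sketch for verifying both, using the pool and object names from the log lines above:

    # Current warning threshold (omap keys per object, checked during deep scrub):
    sh-4.4$ ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

    # Convert the object-name prefix to a decimal inode number:
    sh-4.4$ printf '%d\n' 0x10000005b45

    # Count the omap keys (directory entries) on one of the flagged objects:
    sh-4.4$ rados -p ocs-storagecluster-cephfilesystem-metadata \
                listomapkeys 10000005b45.01000000 | wc -l

On recent releases, 'ceph tell mds.<rank> dump inode 0x10000005b45' can usually resolve the inode back to a directory path, though availability of that command varies by version.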
Environment
Red Hat OpenShift Data Foundation (ODF) 4.x
Red Hat Ceph Storage (RHCS) 4.x
Red Hat Ceph Storage (RHCS) 5.0.x
Red Hat Ceph Storage (RHCS) 5.1.x
Red Hat Ceph Storage (RHCS) 5.2.x
Red Hat Ceph Storage (RHCS) 5.3.0