How to handle inconsistent placement groups in Ceph
Environment
- Red Hat Ceph Storage (RHCS)
Issue
ceph status or ceph -s reports inconsistent placement groups (PGs).
Resolution
ⓘ Ceph offers the ability to repair inconsistent PGs with the ceph pg repair command. Before doing this, it is important to know exactly why the PGs are inconsistent, because for some errors (such as digest mismatches) Ceph cannot tell which copy of the object is authoritative, and a repair may overwrite good replicas with corrupt data. Here are examples of errors that should not be repaired via the ceph pg repair utility:
<pg.id> shard <osd>: soid <object> digest <digest> != known digest <digest>
<pg.id> shard <osd>: soid <object> omap_digest <digest> != known omap_digest <digest>
and here are some examples of errors that are safe to repair:
<pg.id> shard <osd>: soid <object> missing attr _, missing attr <attr type>
<pg.id> shard <osd>: soid <object> digest 0 != known digest <digest>, size 0 != known size <size>
<pg.id> shard <osd>: soid <object> size 0 != known size <size>
<pg.id> deep-scrub stat mismatch, got <mismatch>
<pg.id> shard <osd>: soid <object> candidate had a read error, digest 0 != known digest <digest>
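To see exactly which objects and shards a scrub has flagged before deciding, releases that support it provide the rados list-inconsistent-pg and rados list-inconsistent-obj commands (the pool name and PG ID below are placeholders):
# rados list-inconsistent-pg <pool>
# rados list-inconsistent-obj <pg.id> --format=json-pretty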
In either case, SMART (Self-Monitoring, Analysis, and Reporting Technology) scans of the affected devices will normally be required to determine whether the disks are developing bad sectors.
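For example, a quick SMART check might look like the following; the device path is a placeholder, and the commands should be run on the host carrying the affected OSD:
# smartctl -H /dev/sdX
# smartctl -a /dev/sdX | grep -i -e reallocated -e pending
Non-zero Reallocated_Sector_Ct or Current_Pending_Sector counts suggest a failing disk that should be replaced rather than repaired in place.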
Steps to repair inconsistent PGs:
1. Watch the Ceph log for the result of the scrub:
# ceph -w | grep <pg.id>
2. In another terminal session, trigger a deep-scrub on the placement group:
# ceph pg deep-scrub <pg.id>
3. If the error messages produced by the deep-scrub in step 2 are among those that are safe to repair, try repairing the placement group in the same terminal session used for the deep-scrub:
# ceph pg repair <pg.id>
Sample session:
In the first terminal session, run ceph -w:
# ceph -w | grep 11.eeef
In another terminal, run the deep-scrub:
# ceph pg deep-scrub 11.eeef
instructing pg 11.eeef on osd.106 to deep-scrub
In the terminal session where you ran ceph -w, you should see error messages similar to:
2015-02-26 01:35:36.778215 osd.106 [ERR] 11.eeef deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes.
2015-02-26 01:35:36.788334 osd.106 [ERR] 11.eeef deep-scrub 1 errors
Determine if the errors are safe to repair (see the note above) and if so, repair:
# ceph pg repair 11.eeef
instructing pg 11.eeef on osd.106 to repair
Watch the terminal session with ceph -w to see something similar to:
2015-02-26 01:49:28.164677 osd.106 [ERR] 11.eeef repair stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes.
2015-02-26 01:49:28.164957 osd.106 [ERR] 11.eeef repair 1 errors, 1 fixed
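After a successful repair, it is worth confirming that the placement group has returned to a healthy state; for example, with the same PG as above:
# ceph health detail | grep 11.eeef
# ceph pg deep-scrub 11.eeef
The grep should return nothing once the PG is back to active+clean, and a fresh deep-scrub should complete without reporting errors.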
If the error messages seen are not of the kinds that are safe to repair, or if there is any doubt, please contact Red Hat Support.
Root Cause
The causes of inconsistent PGs vary widely. For a more detailed analysis of what caused a particular inconsistency, please open a support case with the Ceph team.
Diagnostic Steps
Run ceph status and/or ceph health detail and look at the PGs reporting as inconsistent:
$ ceph health detail
[...]
pg 11.eeef is active+clean+inconsistent, acting [106,427,854]
pg 5.ee92 is active+clean+inconsistent, acting [247,183,125]
[...]
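To identify the OSDs and hosts involved (for example, before running the SMART checks described above), you can map the PG and locate its primary OSD; the commands below use the first PG from the sample output:
# ceph pg map 11.eeef
# ceph osd find 106
ceph pg map prints the up and acting OSD sets for the PG, and ceph osd find reports the CRUSH location and host of a given OSD.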