Ceph - After upgrading from Red Hat Ceph Storage 1.3.0 (Hammer - 0.94.1) to Red Hat Ceph Storage 2.0 (Jewel - 10.2.2), why are the OSDs not coming up?

Solution Verified

Issue

  • After upgrading from Red Hat Ceph Storage 1.3.0 (Hammer - 0.94.1) to Red Hat Ceph Storage 2.0 (Jewel - 10.2.2), the OSDs are not coming up. Why?

  • ceph osd tree output -

# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.17990 root default
-2 0.05997     host ceph-osd1
 0 0.01999         osd.0             up  1.00000          1.00000
 1 0.01999         osd.1             up  1.00000          1.00000
 2 0.01999         osd.2             up  1.00000          1.00000
-3 0.05997     host ceph-osd2
 3 0.01999         osd.3           down  1.00000          1.00000
 4 0.01999         osd.4           down  1.00000          1.00000
 5 0.01999         osd.5           down  1.00000          1.00000
-4 0.05997     host ceph-osd3
 6 0.01999         osd.6             up  1.00000          1.00000
 7 0.01999         osd.7             up  1.00000          1.00000
 8 0.01999         osd.8             up  1.00000          1.00000
  • OSD process stuck in boot -

2016-11-16 22:16:37.489568 7f0785c19800  0 osd.4 13239 load_pgs opened 38 pgs
2016-11-16 22:16:37.489592 7f0785c19800  0 osd.4 13239 using 0 op queue with priority op cut off at 64.
2016-11-16 22:16:37.490601 7f0785c19800 -1 osd.4 13239 log_to_monitors {default=true}
2016-11-16 22:16:37.503848 7f0785c19800  0 osd.4 13239 done with init, starting boot process
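
To see at a glance which OSDs the cluster reports down, the ceph osd tree output above can be filtered with a short shell pipeline. This is a minimal sketch and not part of the article's solution; the helper name list_down_osds and the inlined sample input are illustrative.

```shell
# Minimal sketch (not from the article): filter `ceph osd tree` output down to
# the OSDs whose status column reads "down". On a live cluster you would pipe
# the real command in; here a sample matching the output above is inlined so
# the snippet is self-contained.
list_down_osds() {
  awk '$3 ~ /^osd\./ && $4 == "down" { print $3 }'
}

# Hypothetical sample rows in the same column layout as `ceph osd tree`.
sample=' 3 0.01999         osd.3           down  1.00000          1.00000
 4 0.01999         osd.4           down  1.00000          1.00000
 5 0.01999         osd.5             up  1.00000          1.00000'

printf '%s\n' "$sample" | list_down_osds
```

On a running cluster the same filter can be applied directly, e.g. ceph osd tree | list_down_osds, which for the tree above would report osd.3, osd.4, and osd.5.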

Environment

  • Red Hat Ceph Storage 1.3.0 - 0.94.1-13.el7cp
  • Red Hat Ceph Storage 2.0 - 10.2.2-38.el7cp
