Upgrade of Ceph Storage 3 to 4 fails with the "Invalid command: nautilus not in luminous" error

Solution Verified - Updated

Issue

  • When upgrading Red Hat Ceph Storage 3 to 4 by following the documentation, the upgrade command fails with the error "Invalid command: nautilus not in luminous":
$ openstack overcloud external-upgrade run --stack overcloud --tags ceph
...
"TASK [container | disallow pre-nautilus OSDs and enable all new nautilus-only functionality] ***",
        "Wednesday 25 November 2020  12:16:23 +0000 (0:00:00.939)       0:23:51.927 **** ",
        "fatal: [controller-0 -> 192.168.0.10]: FAILED! => {\"changed\": true, \"cmd\": [\"podman\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"require-osd-release\", \"nautilus\"], \"delta\": \"0:00:00.911173\", \"end\": \"2020-11-25 12:16:25.316477\", \"msg\": \"non-zero return code\", \"rc\": 22, \"start\": \"2020-11-25 12:16:24.405304\", \"stderr\": \"Invalid command: nautilus not in luminous\\nosd require-osd-release luminous {--yes-i-really-mean-it} :  set the minimum allowed OSD release to participate in the cluster\\nError EINVAL: invalid command\\nError: non zero exit code: 22: OCI runtime error\", \"stderr_lines\": [\"Invalid command: nautilus not in luminous\", \"osd require-osd-release luminous {--yes-i-really-mean-it} :  set the minimum allowed OSD release to participate in the cluster\", \"Error EINVAL: invalid command\", \"Error: non zero exit code: 22: OCI runtime error\"], \"stdout\": \"\", \"stdout_lines\": []}",
...
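The failing task runs `ceph osd require-osd-release nautilus`, and the monitor rejects it because the cluster still advertises luminous as the only accepted release, which suggests not all daemons have been upgraded to nautilus yet. Before retrying, the cluster state can be inspected from a controller node; this is a diagnostic sketch only, and the container name `ceph-mon-controller-0` is taken from the log above (substitute your own monitor container):

```shell
# Report the Ceph release each running daemon binary identifies as;
# every mon, mgr, and OSD should show nautilus before the task can succeed.
$ podman exec ceph-mon-controller-0 ceph --cluster ceph versions

# Show the current minimum required OSD release; it reads "luminous"
# while the cluster is in the state that produces the error above.
$ podman exec ceph-mon-controller-0 ceph osd dump | grep require_osd_release
```

Both `ceph versions` and the `require_osd_release` field in `ceph osd dump` are standard Ceph commands available from luminous onward, so they work before and after the upgrade completes.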

Environment

  • Red Hat OpenStack Platform 16.1
  • Red Hat Ceph Storage 3.x, 4.x
