Chapter 3. Known Issues

Graphs for monitor hosts are not displayed

In the Calamari server GUI, graphs for monitor hosts are not displayed when you select them from the Graphs drop-down menu. (BZ#1223335)

The "Update" button is not disabled when a check box is cleared

In the Calamari server GUI, on the Manage > Cluster Settings page, the Update button is not disabled when a check box is cleared. In addition, clicking the Update button in this state displays an error dialog box and leaves the button unusable. To work around this issue, reload the page as suggested in the error dialog. (BZ#1223656)

Ceph init script calls a non-present utility

The Ceph init script calls the ceph-disk utility, which is not present on monitor nodes. Consequently, running the service ceph start command with no arguments causes the init script to return an error. This error does not affect the functionality of starting and stopping daemons. (BZ#1225183)

Yum upgrade failures after a system upgrade

The Yum utility can fail with transaction errors after upgrading from Red Hat Enterprise Linux 6 with Red Hat Ceph Storage 1.2 to Red Hat Enterprise Linux 7 with Red Hat Ceph Storage 1.3. This happens because certain packages included in the previous versions of Red Hat Enterprise Linux and Red Hat Ceph Storage are not included in the newer versions of these products. To work around this issue, follow the steps below after upgrading from Red Hat Enterprise Linux 6 to 7. Note that you must enable all relevant repositories first.

  1. Update the python-flask package by running the following command as root:

    # yum update python-flask
  2. Remove the following packages:

    • python-argparse
    • libproxy-python
    • python-iwlib
    • libreport-compat
    • libreport-plugin-kerneloops
    • libreport-plugin-logger
    • subversion-perl
    • python-jinja2-26

    To do so, execute the following command as root:

    # yum remove python-argparse libproxy-python python-iwlib \
        libreport-compat libreport-plugin-kerneloops \
        libreport-plugin-logger subversion-perl python-jinja2-26
  3. Update the system:

    # yum update

(BZ#1230679)

PGs creation hangs after creating a pool using incorrect values

An attempt to create a new erasure-coded pool by using values that do not align with the OSD CRUSH map causes placement groups (PGs) to remain in the "creating" state indefinitely. As a consequence, the Ceph cluster cannot achieve the active+clean state. To fix this problem, delete the erasure-coded pool and the associated CRUSH ruleset, delete the erasure code profile that was used to create the pool, and create a new, corrected erasure code profile that aligns with the CRUSH map, as shown in the sketch below. (BZ#1231630)
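
For example, assuming a pool named ecpool that was created with a profile named myprofile (both names hypothetical, and the CRUSH rule created for an erasure-coded pool typically shares the pool's name), the cleanup and recreation could look like the following sketch. Adjust k, m, and the failure domain to values that your CRUSH map can satisfy, and run the commands as root:

    # ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
    # ceph osd crush rule rm ecpool
    # ceph osd erasure-code-profile rm myprofile
    # ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
    # ceph osd pool create ecpool 128 128 erasure myprofile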

IPv6 addresses are not supported destinations for radosgw-agent

The Ceph Object Gateway Sync Agent (radosgw-agent) does not support IPv6 addresses when specifying a destination. To work around this issue, specify a host name with an associated IPv6 address instead of the IPv6 address itself. (BZ#1232036)
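
For example, on the node running radosgw-agent, you can map a host name to the destination gateway's IPv6 address in the /etc/hosts file (the host name and address below are hypothetical):

    2001:db8::80ad    gateway2.example.com

The destination can then be specified by using gateway2.example.com rather than the literal IPv6 address.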

Ceph Block Device sometimes fails in VM

When the writeback process is blocked by I/O errors, a Ceph Block Device attached to a virtual machine (VM) terminates unexpectedly after a force shutdown of the VM. This issue is specific to Ubuntu. (BZ#1250042)

Upstart cannot stop or restart the initial "ceph-mon" process on Ubuntu

When adding a new monitor on Ubuntu, either manually or by using the ceph-deploy utility, the initial ceph-mon process cannot be stopped or restarted by using the Upstart init system. To work around this issue, use the pkill utility or reboot the system to stop the ceph-mon process, as shown below. Afterwards, the process can be restarted by using Upstart as expected. (BZ#1255497)
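
A minimal sketch of the workaround, assuming a monitor whose Upstart job uses the short host name node1 (host name hypothetical); run the commands as root:

    # pkill ceph-mon
    # start ceph-mon id=node1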

Missing Calamari graphs on Ubuntu nodes

After installing a new Ceph cluster on Ubuntu and initializing the Calamari GUI server, Calamari graphs can be missing. The graphs are missing on any node that was connected to Calamari after the sudo calamari-ctl initialize command was run. To work around this issue, run sudo calamari-ctl initialize again after connecting additional Ceph cluster nodes to Calamari. The command can be run multiple times without any issues. (BZ#1273943)
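
In other words, after connecting the new nodes, re-run the initialization:

    $ sudo calamari-ctl initialize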

Removing manually added monitors by using "ceph-deploy" fails

When attempting to remove a manually added monitor host by using the ceph-deploy mon destroy command, the command fails with the following error:

UnboundLocalError: local variable 'status_args' referenced before assignment

The monitor is removed despite the error; however, ceph-deploy fails to remove the monitor’s configuration directory located in the /var/lib/ceph/mon/ directory. To work around this issue, remove the monitor’s directory manually, as shown below. (BZ#1278524)
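
For example, assuming the default cluster name ceph and a monitor host named node3 (host name hypothetical, and the directory is typically named after the cluster and the short host name), remove the leftover directory as root:

    # rm -rf /var/lib/ceph/mon/ceph-node3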