Chapter 11. Management of Ceph OSDs on the dashboard

As a storage administrator, you can monitor and manage OSDs on the Red Hat Ceph Storage Dashboard.

Some of the capabilities of the Red Hat Ceph Storage Dashboard are:

  • List OSDs, their status, statistics, and information such as attributes, metadata, device health, performance counters, and performance details.
  • Mark OSDs down, in, out, or lost; purge, reweight, scrub, deep-scrub, destroy, or delete OSDs; and select profiles to adjust backfilling activity.
  • List all drives associated with an OSD.
  • Set and change the device class of an OSD.
  • Deploy OSDs on new drives and hosts.

11.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • cluster-manager level of access on the Red Hat Ceph Storage dashboard.

11.2. Managing the OSDs on the Ceph dashboard

You can carry out the following actions on a Ceph OSD on the Red Hat Ceph Storage Dashboard:

  • Create a new OSD.
  • Edit the device class of the OSD.
  • Mark the Flags as No Up, No Down, No In, or No Out.
  • Scrub and deep-scrub the OSDs.
  • Reweight the OSDs.
  • Mark the OSDs Out, In, Down, or Lost.
  • Purge the OSDs.
  • Destroy the OSDs.
  • Delete the OSDs.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Dashboard is installed.
  • Hosts, Monitors and Manager Daemons are added to the storage cluster.

Procedure

  1. Log in to the Dashboard.
  2. From the Cluster drop-down menu, select OSDs.

Creating an OSD

  1. To create the OSD, click Create.

    Figure 11.1. Add device for OSDs

    Note

    Ensure you have an available host and a few available devices. You can check for available devices in Physical Disks under the Cluster drop-down menu.

    1. In the Create OSDs window, from Deployment Options, select one of the following options:

      • Cost/Capacity-optimized: The cluster gets deployed with all available HDDs.
      • Throughput-optimized: Slower devices are used to store data and faster devices are used to store journals/WALs.
      • IOPS-optimized: All the available NVMe devices are used to deploy OSDs.
    2. In Advanced Mode, you can add primary, WAL, and DB devices by clicking +Add.

      • Primary devices: Primary storage devices contain all OSD data.
      • WAL devices: Write-Ahead Log devices are used for BlueStore’s internal journal and are used only if the WAL device is faster than the primary device, for example, an NVMe or SSD device.
      • DB devices: DB devices are used to store BlueStore’s internal metadata and are used only if the DB device is faster than the primary device, for example, an NVMe or SSD device.
    3. If you want to encrypt your data for security purposes, under Features, select Encryption.
    4. Click the Preview button.
    5. In the OSD Creation Preview dialog box, click Create.
  2. You get a notification that the OSD was created successfully.
  3. The OSD status changes from in and down to in and up.
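
  Alternatively, you can create OSDs from the command line with the Ceph Orchestrator. The following is a minimal sketch; the host name and device path are example values, so substitute the values from your own cluster.

  Syntax

  ceph orch daemon add osd HOST_NAME:DEVICE_PATH

  Example

  ceph orch daemon add osd host01:/dev/sdb

  To consume all available and unused devices automatically, you can run ceph orch apply osd --all-available-devices instead.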

Editing an OSD

  1. To edit an OSD, select the row.

    1. From the Edit drop-down menu, select Edit.
    2. Edit the device class.
    3. Click Edit OSD.

      Figure 11.2. Edit an OSD

    4. You get a notification that the OSD was updated successfully.
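
  The device class can also be changed with the ceph CLI. The OSD ID and device class below are example values. Note that an existing device class must be removed before a new one can be set.

  Syntax

  ceph osd crush rm-device-class OSD_ID
  ceph osd crush set-device-class CLASS OSD_ID

  Example

  ceph osd crush rm-device-class osd.0
  ceph osd crush set-device-class ssd osd.0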

Marking the Flags of OSDs

  1. To mark the flag of the OSD, select the row.

    1. From the Edit drop-down menu, select Flags.
    2. Mark the Flags with No Up, No Down, No In, or No Out.
    3. Click Update.

      Figure 11.3. Marking Flags of an OSD

    4. You get a notification that the flags of the OSD were updated successfully.
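
  The same per-OSD flags can be managed with the ceph CLI through the add-noup, add-nodown, add-noin, and add-noout subcommands and their rm-* counterparts. The OSD ID below is an example value.

  Syntax

  ceph osd add-noout OSD_ID
  ceph osd rm-noout OSD_ID

  Example

  ceph osd add-noout 0
  ceph osd rm-noout 0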

Scrubbing the OSDs

  1. To scrub the OSD, select the row.

    1. From the Edit drop-down menu, select Scrub.
    2. In the OSDs Scrub dialog box, click Update.

      Figure 11.4. Scrubbing an OSD

    3. You get a notification that the scrubbing of the OSD was initiated successfully.
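
  A scrub can also be started with the ceph CLI; the OSD ID below is an example value.

  Syntax

  ceph osd scrub OSD_ID

  Example

  ceph osd scrub 0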

Deep-scrubbing the OSDs

  1. To deep-scrub the OSD, select the row.

    1. From the Edit drop-down menu, select Deep scrub.
    2. In the OSDs Deep Scrub dialog box, click Update.

      Figure 11.5. Deep-scrubbing an OSD

    3. You get a notification that the deep scrubbing of the OSD was initiated successfully.
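
  A deep scrub can also be started with the ceph CLI; the OSD ID below is an example value.

  Syntax

  ceph osd deep-scrub OSD_ID

  Example

  ceph osd deep-scrub 0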

Reweighting the OSDs

  1. To reweight the OSD, select the row.

    1. From the Edit drop-down menu, select Reweight.
    2. In the Reweight OSD dialog box, enter a value between 0 and 1.
    3. Click Reweight.

      Figure 11.6. Reweighting an OSD

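  The reweight can also be applied with the ceph CLI. The OSD ID and weight below are example values; the weight must be a value between 0 and 1.

  Syntax

  ceph osd reweight OSD_ID WEIGHT

  Example

  ceph osd reweight 0 0.8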

Marking OSDs Out

  1. To mark the OSD out, select the row.

    1. From the Edit drop-down menu, select Mark Out.
    2. In the Mark OSD out dialog box, click Mark Out.

      Figure 11.7. Marking OSDs out

    3. The status of the OSD will change to out.
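
  The OSD can also be marked out with the ceph CLI; the OSD ID below is an example value.

  Syntax

  ceph osd out OSD_ID

  Example

  ceph osd out 0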

Marking OSDs In

  1. To mark the OSD in, select the OSD row that is in out status.

    1. From the Edit drop-down menu, select Mark In.
    2. In the Mark OSD in dialog box, click Mark In.

      Figure 11.8. Marking OSDs in

    3. The status of the OSD will change to in.
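
  The OSD can also be marked in with the ceph CLI; the OSD ID below is an example value.

  Syntax

  ceph osd in OSD_ID

  Example

  ceph osd in 0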

Marking OSDs Down

  1. To mark the OSD down, select the row.

    1. From the Edit drop-down menu, select Mark Down.
    2. In the Mark OSD down dialog box, click Mark Down.

      Figure 11.9. Marking OSDs down

    3. The status of the OSD will change to down.
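
  The OSD can also be marked down with the ceph CLI; the OSD ID below is an example value. Note that a running OSD daemon reports itself up again shortly afterwards.

  Syntax

  ceph osd down OSD_ID

  Example

  ceph osd down 0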

Marking OSDs Lost

  1. To mark the OSD lost, select the OSD that is in the out and down status.

    1. From the Edit drop-down menu, select Mark Lost.
    2. In the Mark OSD Lost dialog box, select the Yes, I am sure option, and click Mark Lost.

      Figure 11.10. Marking OSDs Lost

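  The OSD can also be marked lost with the ceph CLI; the OSD ID below is an example value. The command is irreversible, so run it only for an OSD that cannot be recovered.

  Syntax

  ceph osd lost OSD_ID --yes-i-really-mean-it

  Example

  ceph osd lost 0 --yes-i-really-mean-it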

Purging OSDs

  1. To purge the OSD, select the OSD in down status.

    1. From the Edit drop-down menu, select Purge.
    2. In the Purge OSDs dialog box, select the Yes, I am sure option, and click Purge OSD.

      Figure 11.11. Purging OSDs

    3. All the flags are reset and the OSD is back in in and up status.
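
  The OSD can also be purged with the ceph CLI; the OSD ID below is an example value.

  Syntax

  ceph osd purge OSD_ID --yes-i-really-mean-it

  Example

  ceph osd purge 0 --yes-i-really-mean-it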

Destroying OSDs

  1. To destroy the OSD, select the OSD in down status.

    1. From the Edit drop-down menu, select Destroy.
    2. In the Destroy OSDs dialog box, select the Yes, I am sure option, and click Destroy OSD.

      Figure 11.12. Destroying OSDs

    3. The status of the OSD changes to destroyed.
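
  The OSD can also be destroyed with the ceph CLI; the OSD ID below is an example value. Destroying marks the OSD as destroyed so that its ID can be reused, but its data is no longer usable.

  Syntax

  ceph osd destroy OSD_ID --yes-i-really-mean-it

  Example

  ceph osd destroy 0 --yes-i-really-mean-it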

Deleting OSDs

  1. To delete the OSD, select the OSD in down status.

    1. From the Edit drop-down menu, select Delete.
    2. In the Delete OSDs dialog box, select the Yes, I am sure option, and click Delete OSD.

      Note

      You can preserve the OSD_ID when you have to replace a failed OSD.

      Figure 11.13. Deleting OSDs

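  The OSD can also be removed with the Ceph Orchestrator; the OSD ID below is an example value. Adding the --replace option preserves the OSD ID for a replacement drive.

  Syntax

  ceph orch osd rm OSD_ID [--replace]

  Example

  ceph orch osd rm 0 --replace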

11.3. Replacing the failed OSDs on the Ceph dashboard

You can replace the failed OSDs in a Red Hat Ceph Storage cluster with the cluster-manager level of access on the dashboard. One of the highlights of this feature on the dashboard is that the OSD IDs can be preserved while replacing the failed OSDs.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • At least cluster-manager level of access to the Ceph Dashboard.
  • At least one of the OSDs is down.

Procedure

  1. On the dashboard, you can identify the failed OSDs in the following ways:

    • Dashboard AlertManager pop-up notifications.
    • Dashboard landing page showing HEALTH_WARN status.
    • Dashboard landing page showing failed OSDs.
    • Dashboard OSD page showing failed OSDs.

      Health status of OSDs

      In this example, you can see that one of the OSDs is down on the landing page of the dashboard.

      In addition, you can check the physical drive, where the LED lights blink when one of the OSDs is down.
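
    You can also identify failed OSDs from the command line, for example:

    Example

    ceph health detail
    ceph osd tree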

  2. Click OSDs.
  3. Select the out and down OSD:

    1. From the Edit drop-down menu, select Flags, select No Up, and click Update.
    2. From the Edit drop-down menu, select Delete.
    3. In the Delete OSD dialog box, select the Preserve OSD ID(s) for replacement and Yes, I am sure check boxes.
    4. Click Delete OSD.
    5. Wait until the status of the OSD changes to out and destroyed.
  4. Optional: If you want to change the No Up Flag for the entire cluster, in the Cluster-wide configuration drop-down menu, select Flags.

    1. In Cluster-wide OSDs Flags dialog box, select No Up and click Update.
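
    The cluster-wide flag can also be set and cleared with the ceph CLI:

    Example

    ceph osd set noup
    ceph osd unset noup
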
  5. Optional: If the OSDs are down due to a hard disk failure, replace the physical drive:

    • If the drive is hot-swappable, replace the failed drive with a new one.
    • If the drive is not hot-swappable and the host contains multiple OSDs, you might have to shut down the whole host and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details.
    • When the drive appears under the /dev/ directory, make a note of the drive path.
    • If you want to add the OSD manually, find the OSD drive and format the disk.
    • If the new disk has data, zap the disk:

      Syntax

      ceph orch device zap HOST_NAME PATH --force

      Example

      ceph orch device zap ceph-adm2 /dev/sdc --force

  6. From the Create drop-down menu, select Create.
  7. In the Create OSDs window, click +Add for Primary devices.

    1. In the Primary devices dialog box, from the Hostname drop-down list, select any one filter. From Any drop-down list, select the respective option.

      Note

      You have to select the Hostname first and then at least one filter to add the devices.

      For example, from the Hostname list, select Type, and from the Any list, select hdd. Select Vendor, and from the Any list, select ATA.

    2. Click Add.
    3. In the Create OSDs window, click the Preview button.
    4. In the OSD Creation Preview dialog box, click Create.
    5. You will get a notification that the OSD is created. The OSD will be in out and down status.
  8. Select the newly created OSD that has out and down status.

    1. From the Edit drop-down menu, select Mark In.
    2. In the Mark OSD in window, click Mark In.
    3. In the Edit drop-down menu, select Flags.
    4. Uncheck No Up and click Update.
  9. Optional: If you changed the No Up flag earlier for the cluster-wide configuration, in the Cluster-wide configuration drop-down menu, select Flags.

    1. In Cluster-wide OSDs Flags dialog box, uncheck No Up and click Update.

Verification

  1. Verify that the OSD that was destroyed is created on the device and the OSD ID is preserved.

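    You can also confirm from the command line that the OSD ID was reused and that the OSD is up, for example:

    Example

    ceph osd tree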

Additional Resources