Chapter 6. Managing Cluster Size

Managing cluster size involves adding or removing monitor or OSD nodes. This can be done:

  • by using the ceph-deploy utility,
  • manually.

Before adding a monitor or OSD node to the cluster by using ceph-deploy, install Ceph packages on the node. See Section 6.1, “Installing Ceph packages on Nodes by Using ceph-deploy” for details.

For details on installing a cluster for the first time, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.

6.1. Installing Ceph packages on Nodes by Using ceph-deploy

To add monitors or OSD nodes to a cluster by using the ceph-deploy utility, first install the monitor or OSD packages on the nodes.

Note

This step is not needed when adding monitors and OSDs manually.

Execute the following commands from the directory on the administration node that contains the Ceph configuration file.

Installation with Online Repositories

ceph-deploy install --<type> <ceph-node> [<ceph-node>]

Replace <type> with osd or mon, depending on the type of Ceph node you want to install. For example:

  • To install Ceph on three new OSD hosts node2, node3, and node4:

    $ ceph-deploy install --osd node2 node3 node4
  • To install Ceph on new monitor hosts node5 and node6:

    $ ceph-deploy install --mon node5 node6

Installation with ISO Images

Red Hat Enterprise Linux

ceph-deploy install --repo --release=<repo-type> <ceph-node> [<ceph-node>]
ceph-deploy install --<type> <ceph-node> [<ceph-node>]

Replace <repo-type> with ceph-osd or ceph-mon, and <type> with osd or mon, depending on the type of Ceph node you want to install. For example:

  • To install Ceph on three new OSD hosts node2, node3, and node4:

    $ ceph-deploy install --repo --release=ceph-osd node2 node3 node4
    $ ceph-deploy install --osd node2 node3 node4
  • To install Ceph on two new monitor hosts node5 and node6:

    $ ceph-deploy install --repo --release=ceph-mon node5 node6
    $ ceph-deploy install --mon node5 node6

Ubuntu

ceph-deploy repo <repo-type> <ceph-node> [<ceph-node>]
ceph-deploy install --no-adjust-repos --<type> <ceph-node> [<ceph-node>]

Replace <repo-type> with ceph-osd or ceph-mon, and <type> with osd or mon, depending on the type of Ceph node you want to install. For example:

  • To install Ceph on three new OSD hosts node2, node3, and node4:

    ceph-deploy repo ceph-osd node2 node3 node4
    ceph-deploy install --no-adjust-repos --osd node2 node3 node4
  • To install Ceph on two new monitor hosts node5 and node6:

    ceph-deploy repo ceph-mon node5 node6
    ceph-deploy install --no-adjust-repos --mon node5 node6

Once you are done installing Ceph packages, use the ceph-deploy utility to add a monitor or OSD node to the Ceph cluster. See Section 6.2.2, “Adding a Monitor by Using ceph-deploy” and Section 6.4.2, “Adding OSDs by Using ceph-deploy” for details.

6.2. Adding a Monitor

Ceph monitors are light-weight processes that maintain a master copy of the cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the cluster map, enabling them to bind to pools and read and write data.

When a cluster is up and running, you can add or remove monitors from the cluster at runtime. You can run a cluster with only one monitor; however, Red Hat recommends at least three monitors for a production cluster. Ceph monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the cluster. Due to the nature of Paxos, Ceph requires a majority of the monitors to be running in order to establish a quorum and thus reach consensus.

Red Hat recommends deploying an odd number of monitors, but it is not mandatory. An odd number of monitors is more resilient to failures than an even number. To maintain a quorum, a two-monitor deployment cannot tolerate any failures; with three monitors, Ceph can tolerate one failure; with four monitors, still only one failure; with five monitors, two failures. This is why an odd number is advisable. In summary, Ceph needs a majority of monitors to be running and able to communicate with each other: one out of one, two out of two, two out of three, three out of four, and so on.

For an initial deployment of a multi-node Ceph cluster, Red Hat recommends deploying three monitors, increasing the number two at a time if a valid need for more than three monitors exists.

Since monitors are light-weight, it is possible to run them on the same host as OSDs. However, Red Hat recommends running monitors on separate hosts, because fsync issues with the kernel can impair performance.

Note

A majority of monitors in your cluster must be able to reach each other in order to establish a quorum.

6.2.1. Host Configuration

When adding Ceph monitors to a cluster, deploy them on separate hosts. Running multiple Ceph monitors on the same host provides no additional high-availability assurance if that host fails. Ideally, the host hardware should be uniform throughout the monitor cluster.

For details on the minimum recommendations for Ceph monitor hardware, see Hardware Recommendations.

Before installation, be sure to address the requirements listed in the Prerequisites section of the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.

Add a new monitor host to a rack in the cluster, connect it to the network and ensure that the monitor has network connectivity.

Important

You must install NTP and open port 6789.
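
For example, on a Red Hat Enterprise Linux 7 node that uses ntpd and firewalld, the following commands are one way to meet these requirements; package names, the time-synchronization service, and the firewall tooling may differ on your platform:

sudo yum install ntp
sudo systemctl enable ntpd
sudo systemctl start ntpd
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --reload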

6.2.2. Adding a Monitor by Using ceph-deploy

Before you start

  • Install the Ceph monitor package on the new node. See Section 6.1, “Installing Ceph packages on Nodes by Using ceph-deploy” for details.

Procedure: Adding a Monitor by Using ceph-deploy

  1. Edit the Ceph configuration file on the administration node:

    1. Add the monitor address:

      [mon.<hostname>]
      public_addr = <ip-address>

      Replace <hostname> with the host name of the new monitor node, <ip-address> with the node IP address, for example:

      [mon.node5]
      public_addr = 127.0.0.1
    2. Optionally, add the new monitor host to the initial quorum:

      [global]
      mon_initial_members = node0, node1, <hostname>

      Replace <hostname> with the monitor host name, for example:

      [global]
      mon_initial_members = node0, node1, node5
      Important

      If you add a monitor to a cluster that has only one monitor, you must add the next two monitors to the mon_initial_members and mon_host options. Production clusters require at least three monitors set in mon_initial_members and mon_host to ensure high availability.

      If you add two more monitors to a cluster with only one initial monitor, but do not list the new monitors under mon_initial_members and mon_host, the failure of the initial monitor will cause the cluster to lock up.

      If the monitor you are adding is replacing a monitor that is part of mon_initial_members and mon_host, the new monitor must be added to mon_initial_members and mon_host too.

  2. Redistribute the updated Ceph configuration file to the nodes in the Ceph cluster:

    ceph-deploy --overwrite-conf config push <ceph-node0 ceph-node1 ...>
  3. Add the monitor to the cluster:

    ceph-deploy mon add <hostname>

    Replace <hostname> with the monitor host name, for example:

    $ ceph-deploy mon add node5
  4. Ensure the monitor joined the quorum:

    # ceph quorum_status --format json-pretty
  5. Connect the monitor to the Calamari server:

    ceph-deploy calamari connect --master '<calamari-node-FQDN>' <hostname>

    Replace <hostname> with the monitor host name and <calamari-node-FQDN> with the fully qualified domain name of the Calamari server, for example:

    $ ceph-deploy calamari connect --master 'calamari.domain' node5
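
Putting the configuration edits from step 1 together, the relevant portions of the Ceph configuration file on the administration node might look like the following sketch. The host names node0, node1, and node5 are the examples used above; the IP address and the mon_host entries are illustrative and must match your environment:

[global]
mon_initial_members = node0, node1, node5
mon_host = node0, node1, node5

[mon.node5]
public_addr = 192.168.0.5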

6.2.3. Adding a Monitor Manually

This section is intended for users who wish to use a third party deployment tool (e.g., Puppet, Chef, Juju) and manually add monitors.

This procedure creates a ceph-mon data directory, retrieves the monitor map and monitor keyring, and adds a ceph-mon daemon to your cluster. If this results in only two monitor daemons, you may add more monitors by repeating this procedure until you have a sufficient number of ceph-mon daemons to achieve a quorum.

At this point, you should define the monitor’s ID. By convention, monitors use the host name (and only one monitor runs per host), but you are free to define the ID as you see fit. For the purpose of this document, take into account that {mon-id} is the ID you chose, without the mon. prefix (that is, for mon.hostname, {mon-id} is hostname).

  1. To ensure the cluster identifies the monitor on start/restart, add the monitor IP address to your Ceph configuration file.

    To add the monitor in the [mon] or [global] section of the Ceph configuration file, you may specify it in the mon_host setting, which is a list of DNS-resolvable hostnames or IP addresses (separated by "," or ";" or " "). You may also create a specific section in the Ceph configuration file for the monitor you are adding. For example:

    [mon]
    mon_host = {mon-ip:port} {mon-ip:port} {new-mon-ip:port}
    
    [mon.{mon-id}]
    host = {mon-id}

    If you want to make the monitor part of the initial quorum, you must also add the hostname to mon_initial_members in the [global] section of your Ceph configuration file.

    Important

    If you are adding a monitor to a cluster that has only one monitor, you must add the next two monitors to mon_initial_members and mon_host. Production clusters require at least three monitors set in mon_initial_members and mon_host to ensure high availability. If a cluster with only one initial monitor adds two more monitors but does not add them to mon_initial_members and mon_host, the failure of the initial monitor will cause the cluster to lock up. If the monitor you are adding is replacing a monitor that is part of mon_initial_members and mon_host, the new monitor must be added to mon_initial_members and mon_host too.

    For example:

    mon_initial_members = mon-node1, mon-node2, new-mon-node
    mon_host = mon-node1, mon-node2, new-mon-node

    Also, ensure you have pid file = /var/run/ceph/$name.pid set in the [global] section of your Ceph configuration file.

    Finally, push a new copy of the Ceph configuration file to your Ceph nodes and Ceph clients.

    ceph-deploy --overwrite-conf config push <ceph-node0 ceph-node1 ...>
  2. Create the default directory on the machine that will host your new monitor.

    ssh {new-mon-host}
    sudo mkdir /var/lib/ceph/mon/ceph-{mon-id}
  3. Create a temporary directory {tmp} to keep the files needed during this process. This directory should be different from the monitor’s default directory created in the previous step, and can be removed after all the steps are executed.

    mkdir {tmp}
  4. Copy the admin key from your admin node to the monitor node so that you can run ceph CLI commands.

    ceph-deploy --overwrite-conf admin <ceph-node>
  5. Retrieve the keyring for your monitors, where {tmp} is the path to the retrieved keyring, and {key-filename} is the name of the file containing the retrieved monitor key.

    ceph auth get mon. -o {tmp}/{key-filename}
  6. Retrieve the monitor map, where {tmp} is the path to the retrieved monitor map, and {map-filename} is the name of the file containing the retrieved monitor map.

    ceph mon getmap -o {tmp}/{map-filename}
  7. Prepare the monitor’s data directory created in the first step. You must specify the path to the monitor map so that you can retrieve the information about a quorum of monitors and their fsid. You must also specify a path to the monitor keyring:

    sudo ceph-mon -i {mon-id} --mkfs --monmap {tmp}/{map-filename} --keyring {tmp}/{key-filename}
  8. Start the new monitor and it will automatically join the cluster. The daemon needs to know which address to bind to, either via --public-addr {ip:port} or by setting mon addr in the appropriate section of ceph.conf. For example:

    sudo ceph-mon -i {mon-id} --public-addr {ip:port} --pid-file /var/run/ceph/mon.{mon-id}.pid
  9. Finally, from the admin node, in the directory where you keep your cluster’s Ceph configuration file, connect your monitor to Calamari.

    ceph-deploy calamari connect --master '<calamari-node-FQDN>' <ceph-node>[<ceph-node> ...]
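
Putting the steps above together, a minimal end-to-end sketch might look as follows. It assumes a new monitor with the ID node5 and the address 192.168.0.5:6789, the default cluster name ceph, and /tmp/add-mon as the temporary directory; adjust the names, address, and paths to your environment.

# From the administration node: push the updated configuration and the admin key
ceph-deploy --overwrite-conf config push node0 node1 node5
ceph-deploy --overwrite-conf admin node5

# On the new monitor host
ssh node5
sudo mkdir /var/lib/ceph/mon/ceph-node5
mkdir /tmp/add-mon
ceph auth get mon. -o /tmp/add-mon/mon.keyring
ceph mon getmap -o /tmp/add-mon/monmap
sudo ceph-mon -i node5 --mkfs --monmap /tmp/add-mon/monmap --keyring /tmp/add-mon/mon.keyring
sudo ceph-mon -i node5 --public-addr 192.168.0.5:6789 --pid-file /var/run/ceph/mon.node5.pid

# Back on the administration node: connect the monitor to Calamari
ceph-deploy calamari connect --master 'calamari.domain' node5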

6.3. Removing a Monitor

When you remove monitors from a cluster, consider that Ceph monitors use Paxos to establish consensus about the master cluster map. You must have a sufficient number of monitors to establish a quorum for consensus about the cluster map.

6.3.1. Removing a Monitor by Using ceph-deploy

To remove a monitor from your cluster, use the mon destroy command.

ceph-deploy mon destroy <ceph-node> [<ceph-node>]

For example, to remove Ceph monitors on monitor hosts node5 and node6, you would execute the following:

ceph-deploy mon destroy node5 node6

Check to see that your monitors have left the quorum.

ceph quorum_status --format json-pretty
Important

Ensure that you remove any references to this monitor in your Ceph configuration file; then, push a new copy of the Ceph configuration file to your Ceph nodes.

Ideally, you should remove the monitor host from Calamari. Get the cluster ID:

http://{calamari-fqdn}/api/v2/cluster

Then, remove the monitor host from Calamari.

http://{calamari-fqdn}/api/v2/server
http://{calamari-fqdn}/api/v2/key/{host-fqdn}
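
For example, you could query the cluster endpoint with curl to retrieve the cluster ID. This is only an illustration that uses the example Calamari FQDN calamari.domain; how you authenticate against the Calamari REST API depends on your Calamari configuration, so consult the Calamari documentation for the exact request details:

curl -s http://calamari.domain/api/v2/cluster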

6.3.2. Removing a Monitor Manually

This procedure removes a ceph-mon daemon from your cluster. If this procedure results in only two monitor daemons, you may add or remove another monitor until you have a number of ceph-mon daemons that can achieve a quorum.

  1. Stop the monitor:

    service ceph -a stop mon.{mon-id}
  2. Remove the monitor from the cluster:

    ceph mon remove {mon-id}
  3. On your admin node, remove the monitor entry from your Ceph configuration file.
  4. Redistribute the Ceph configuration file.

    ceph-deploy --overwrite-conf config push <ceph-node0 ceph-node1 ..>
  5. Archive the monitor data (optional).

    mv /var/lib/ceph/mon/{cluster}-{daemon-id} /var/lib/ceph/mon/removed-{cluster}-{daemon-id}
  6. Remove the monitor data (only if previous archive step not executed).

    sudo rm -r /var/lib/ceph/mon/{cluster}-{daemon-id}
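
For example, assuming a monitor with the ID node5 and the default cluster name ceph, steps 1, 2, and 5 would look like this:

service ceph -a stop mon.node5
ceph mon remove node5
mv /var/lib/ceph/mon/ceph-node5 /var/lib/ceph/mon/removed-ceph-node5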

6.3.3. Removing Monitors from an Unhealthy Cluster

This procedure removes a ceph-mon daemon from an unhealthy cluster, that is, a cluster that has placement groups that are persistently not active + clean.

  1. Identify a surviving monitor and log in to that host:

    ceph mon dump
    ssh {mon-host}
  2. Stop the ceph-mon daemon and extract a copy of the monitor map:

    service ceph stop mon || stop ceph-mon-all
    ceph-mon -i {mon-id} --extract-monmap {map-path}
    # for example,
    ceph-mon -i a --extract-monmap /tmp/monmap
  3. Remove the non-surviving monitors. For example, if you have three monitors, mon.a, mon.b, and mon.c, where only mon.a will survive, follow the example below:

    monmaptool {map-path} --rm {mon-id}
    # for example,
    monmaptool /tmp/monmap --rm b
    monmaptool /tmp/monmap --rm c
  4. Inject the modified monitor map, which now contains only the surviving monitors, into the surviving monitor. For example, to inject the map into monitor mon.a, follow the example below:

    ceph-mon -i {mon-id} --inject-monmap {map-path}
    # for example,
    ceph-mon -i a --inject-monmap /tmp/monmap
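
After injecting the modified map, start the surviving monitor again so that it can form a quorum on its own. As with the stop command in step 2, the exact invocation depends on your init system:

service ceph start mon || start ceph-mon-all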

6.4. Adding OSDs

When a cluster is up and running, you can add OSDs or remove OSDs from the cluster at runtime.

A Ceph OSD generally consists of one ceph-osd daemon for one storage drive and its associated journal within a host machine. If a host has multiple storage drives, map one ceph-osd daemon for each drive.

Check the capacity of the cluster regularly to see whether it is approaching the upper end of its capacity. As a cluster reaches its near full ratio, add one or more OSDs to expand the cluster’s capacity.

Important

Do not let a cluster reach the full ratio before adding an OSD. OSD failures that occur after the cluster reaches the near full ratio can cause the cluster to exceed the full ratio. Ceph blocks write access to protect your data until you resolve the storage capacity issues. Do not remove OSDs without considering the impact on the full ratio first.

There are two ways to add an OSD to a cluster: by using the ceph-deploy utility (see Section 6.4.2, “Adding OSDs by Using ceph-deploy”) or manually (see Section 6.4.3, “Adding OSDs Manually”).

Before adding an OSD, see Section 6.4.1, “Host Configuration”.

General Recommendations

  • Red Hat recommends using the XFS file system, which is the default file system.
Warning

Use the default XFS file system options that the ceph-deploy utility uses to format the OSD disks. Deviating from the default values can cause stability problems with the storage cluster.

For example, setting the directory block size higher than the default value of 4096 bytes can cause memory allocation deadlock errors in the file system.

  • Red Hat recommends using SSDs for journals. It is common to partition SSDs to serve multiple OSDs. Ensure that the number of SSD partitions does not exceed the SSD’s sequential write limits. Also, ensure that SSD partitions are properly aligned, or their write performance will suffer.
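
One way to verify partition alignment is the align-check command of the parted utility. For example, assuming the journal SSD is /dev/sdb and partition 1 holds a journal:

sudo parted /dev/sdb align-check optimal 1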

6.4.1. Host Configuration

OSDs and their supporting hardware should be configured in line with the storage strategy for the pool or pools that will use the OSDs. Ceph prefers uniform hardware across pools for a consistent performance profile. For best performance, consider a CRUSH hierarchy with drives of the same type or size. See the Storage Strategies guide for details.

If you add drives of dissimilar size, adjust their weights. When you add the OSD to the CRUSH map, consider the weight for the new OSD. Hard drive capacity grows approximately 40% per year, so newer OSD hosts might have larger hard drives than older hosts in the cluster (that is, they might have greater weight).

Before adding an OSD, perform steps listed in the Prerequisites section of the Installation Guide for Red Hat Enterprise Linux or Installation Guide for Ubuntu.

6.4.2. Adding OSDs by Using ceph-deploy

Before you start

  • Install Ceph packages on the new nodes. See Section 6.1, “Installing Ceph packages on Nodes by Using ceph-deploy” for details.
  • Red Hat recommends deleting the partition table of a Ceph OSD drive by using the ceph-deploy disk zap command before executing the ceph-deploy osd prepare command:

    ceph-deploy disk zap <hostname>:<disk_device>

    For example:

    ceph-deploy disk zap node2:/dev/sdb

Procedure: Adding an OSD by Using ceph-deploy

  1. From your administration node, prepare the OSDs:

    ceph-deploy osd prepare <hostname>:<disk_device> [<hostname>:<disk_device>]

    For example:

    $ ceph-deploy osd prepare node2:/dev/sdb

    The prepare command creates two partitions on a disk device; one partition is for OSD data, and the other one is for the OSD journal.

  2. Activate the OSD:

    ceph-deploy osd activate <hostname>:<data_partition>

    For example:

    $ ceph-deploy osd activate node2:/dev/sdb1
    Note

    In the ceph-deploy osd activate command, specify a particular disk partition, for example /dev/sdb1.

    It is also possible to use a disk device that is wholly formatted without a partition table. In that case, a partition on an additional disk must be used to serve as the journal store:

    ceph-deploy osd activate <hostname>:<disk_device>:<journal_partition>

    In the following example, sdd is a spinning hard drive that Ceph uses entirely for OSD data. ssdb1 is a partition of an SSD drive, which Ceph uses to store the OSD journal:

    $ ceph-deploy osd activate node{2,3,4}:sdd:ssdb1

To achieve the active + clean state, you must add as many OSDs as the osd pool default size = <n> parameter specifies in the Ceph configuration file.
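
For example, assuming the common replicated pool size of 3, the Ceph configuration file would contain the following setting, and at least three OSDs would be required for placement groups to reach the active + clean state. Check the actual value configured for your cluster:

[global]
osd pool default size = 3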

Encrypted OSDs

To create an encrypted OSD:

  1. On the administration node, prepare the encrypted device by using the ceph-deploy utility with the --dmcrypt and --dmcrypt-key-dir options:

    ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir <directory> <hostname>:<device>

    Replace <directory> with the path to the directory to store the key for the encrypted OSD, <hostname> with the host name of the OSD node, and <device> with the device that the OSD will use:

    ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys node2:/dev/sdd

    The --dmcrypt-key-dir option is optional.

  2. On the OSD node, open the newly created partition by using the cryptsetup utility:

    # /sbin/cryptsetup --key-file <key_path> luksOpen <data_partition> <uuid/name>

    Replace <key_path> with the path to the key for the encrypted partition, <data_partition> with the OSD partition, and <uuid/name> with the UUID of the partition:

    $ /sbin/cryptsetup --key-file /etc/ceph/dmcrypt-keys/596e7f3b-b565-40c8-bf70-451a1dafddbc.luks.key luksOpen /dev/sdd1 596e7f3b-b565-40c8-bf70-451a1dafddbc
  3. From the administration node, activate the OSD:

    ceph-deploy osd activate <hostname>:/dev/dm-<id>

    Replace <hostname> with the host name of the OSD node and <id> with the ID of the device-mapper device that backs the encrypted partition:

    $ ceph-deploy osd activate node2:/dev/dm-1

    To view the ID, use the ll command on the OSD node:

    $ ll /dev/mapper/
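
The device-mapper devices appear in the listing as symbolic links that point to dm-<id> entries. The following output is only an illustrative sketch; the UUID matches the earlier example, and your names and IDs will differ:

crw-------. 1 root root 10, 236 Mar  1 12:00 control
lrwxrwxrwx. 1 root root       7 Mar  1 12:00 596e7f3b-b565-40c8-bf70-451a1dafddbc -> ../dm-1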

6.4.3. Adding OSDs Manually

This procedure sets up the ceph-osd daemon, configures it to use one drive, and configures the cluster to distribute data to the OSD node. If the host has multiple drives, you can add an OSD for each drive by repeating this procedure.

To add an OSD, create a data directory for it, mount a drive to that directory, add the OSD to the cluster, and then add it to the CRUSH map.

  1. Create an OSD. If no UUID is given, it will be set automatically when the OSD starts up. The following command outputs the OSD number, which you will need in subsequent steps:

    $ ceph osd create [{uuid}]
  2. Create the default directory on the new OSD node:

    # mkdir /var/lib/ceph/osd/ceph-<osd-number>
  3. If the OSD is for a drive other than the OS drive, prepare it for use with Ceph, and mount it to the directory you just created:

    # mkfs -t <fstype> /dev/<drive>
    # mount -o user_xattr /dev/<drive> /var/lib/ceph/osd/ceph-<osd-number>
  4. Initialize the OSD data directory:

    $ ceph-osd -i <osd-num> --mkfs --mkkey

    The directory must be empty before executing ceph-osd.

  5. Register the OSD authentication key. The ceph-<osd-num> part of the keyring path follows the $cluster-$id convention, where ceph is the cluster name. If your cluster name differs from ceph, use your cluster name instead:

    $ ceph auth add osd.<osd-num> osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-<osd-num>/keyring
  6. Add the OSD to the CRUSH map so that the OSD can start receiving data. If you specify at least one bucket, the command will place the OSD into the most specific bucket you specify, and it will move that bucket underneath any other buckets you specify.

    Important

    If you specify only the root bucket, the command will attach the OSD directly to the root, but CRUSH rules expect OSDs to be inside of the host bucket.

    $ ceph osd crush add <id-or-name> <weight> [<bucket-type>=<bucket-name> ...]
    Note

    You can also decompile the CRUSH map, add the OSD to the device list, add the host as a bucket if it is not already in the CRUSH map, add the device as an item in the host, assign a weight to the device, recompile the CRUSH map, and set the CRUSH map.
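
Putting the steps above together, a minimal sketch might look like the following. It assumes that ceph osd create returned the number 4, that the data drive is /dev/sdb on the OSD host node2, that the default cluster name ceph is used, and that the drive is formatted with XFS (which supports extended attributes by default); adjust these values to your environment.

# Allocate an OSD number; assume the command prints 4
ceph osd create

# On the OSD host: create the data directory, then format and mount the drive
sudo mkdir /var/lib/ceph/osd/ceph-4
sudo mkfs -t xfs /dev/sdb
sudo mount /dev/sdb /var/lib/ceph/osd/ceph-4

# Initialize the data directory and create the OSD key
sudo ceph-osd -i 4 --mkfs --mkkey

# Register the OSD authentication key
sudo ceph auth add osd.4 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-4/keyring

# Place the OSD under the node2 host bucket with a weight of 1.0
ceph osd crush add osd.4 1.0 host=node2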

6.4.3.1. Starting the OSD

After you add an OSD to Ceph, the OSD is in your configuration. However, it is not yet running. The OSD is down and in. You must start your new OSD before it can begin receiving data. For sysvinit, execute:

sudo /etc/init.d/ceph start osd.{osd-num}

Once you start your OSD, it is up and in.

6.4.3.2. Observing the Data Migration

When you add an OSD to the CRUSH map, Ceph begins rebalancing the cluster by migrating placement groups to your new OSD. To observe this process, use the ceph CLI utility:

ceph -w

The placement group states change from active+clean to active, some degraded objects, and finally active+clean when migration completes. To exit the utility, press Control + C.

6.5. Connecting OSD Hosts to Calamari

Once you have added the initial OSDs, you need to connect the OSD hosts to Calamari.

ceph-deploy calamari connect --master '<calamari-node-FQDN>' <ceph-node>[<ceph-node> ...]

For example, using the example OSD hosts node2, node3, and node4 from above with a Calamari FQDN of calamari.domain, you would execute:

ceph-deploy calamari connect --master 'calamari.domain' node2 node3 node4

As you expand your cluster with additional OSD hosts, you will have to connect the hosts that contain them to Calamari, too.

6.6. Removing OSDs Manually

When you want to reduce the size of a cluster or replace hardware, you may remove an OSD at runtime. With Ceph, an OSD is generally one ceph-osd daemon for one storage drive within a host machine. If your host has multiple storage drives, you may need to remove one ceph-osd daemon for each drive. Check the capacity of your cluster regularly to see whether you are reaching the upper end of its capacity, and ensure that the cluster is not at its near full ratio when you remove an OSD.

Warning

Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio.

6.6.1. Taking the OSD out of the Cluster

Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs.

ceph osd out {osd-num}

6.6.2. Observing the Data Migration

When you take an OSD out of the cluster, Ceph begins rebalancing the cluster by migrating placement groups out of the OSD you removed. To observe this process, use the ceph CLI utility:

ceph -w

The placement group states change from active+clean to active, some degraded objects, and finally active+clean when migration completes. To exit the utility, press Control + C.

6.6.3. Stopping the OSD

After you take an OSD out of the cluster, it may still be running. That is, the OSD may be up and out. You must stop your OSD before you remove it from the configuration.

ssh {osd-host}
sudo /etc/init.d/ceph stop osd.{osd-num}

Once you stop your OSD, it is down.

6.6.4. Removing the OSD

This procedure removes an OSD from a cluster map, removes its authentication key, removes the OSD from the OSD map, and removes the OSD from the ceph.conf file. If your host has multiple drives, you may need to remove an OSD for each drive by repeating this procedure.

  1. Remove the OSD from the CRUSH map so that it no longer receives data. You may also decompile the CRUSH map, remove the OSD from the device list, remove the device as an item in the host bucket or remove the host bucket (if it’s in the CRUSH map and you intend to remove the host), recompile the map and set it. See the Storage Strategies guide for details.

    ceph osd crush remove {name}
  2. Remove the OSD authentication key.

    ceph auth del osd.{osd-num}

    The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If your cluster name differs from ceph, use your cluster name instead.

  3. Remove the OSD.

    ceph osd rm {osd-num}
    # for example
    ceph osd rm 1
  4. Navigate to the host where you keep the master copy of the cluster’s ceph.conf file.

    ssh {admin-host}
    cd /etc/ceph
    vim ceph.conf
  5. Remove the OSD entry from your ceph.conf file (if it exists):

    [osd.1]
    host = {hostname}
  6. From the host where you keep the master copy of the cluster’s ceph.conf file, copy the updated ceph.conf file to the /etc/ceph directory of other hosts in your cluster.
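
As a consolidated example, assuming the OSD number is 1, its host is node2, and the cluster uses sysvinit, the removal steps from Section 6.6.1 through Section 6.6.4 would look like this:

# Take the OSD out of the cluster and wait for rebalancing to finish (monitor with ceph -w)
ceph osd out 1

# Stop the daemon on its host
ssh node2
sudo /etc/init.d/ceph stop osd.1
exit

# Remove the OSD from the CRUSH map, delete its authentication key, and remove it from the OSD map
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1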