Chapter 3. Storage Cluster Installation

Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSDs.

You can install a Red Hat Ceph Storage cluster by using either the Ansible automation application or the command-line interface.

3.1. Installing Red Hat Ceph Storage using Ansible

Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. In Red Hat Ceph Storage version 2 and later, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node. Execute the procedures in Figure 2.1, “Prerequisite Workflow” first.

To add more Monitors or OSDs to an existing storage cluster, see the Red Hat Ceph Storage Administration Guide for details.

Before you start

  • Install Python on all nodes:

    # apt install python

3.1.1. Installing ceph-ansible

  1. Install the ceph-ansible package:

    $ sudo apt-get install ceph-ansible
  2. As root, add the Ceph hosts to the /etc/ansible/hosts file. Remember to comment out example hosts.

    If the Ceph hosts have sequential naming, consider using a range:

    1. Add Monitor nodes under the [mons] section:

      [mons]
      <monitor-host-name>
      <monitor-host-name>
      <monitor-host-name>
    2. Add OSD nodes under the [osds] section:

      [osds]
      <osd-host-name[1:10]>

      Optionally, use the devices parameter to specify the devices that the OSD nodes will use. Use a comma-separated list to specify multiple devices.

      [osds]
      <ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"

      For example:

      [osds]
      ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
      ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"

      When no devices are specified, set the osd_auto_discovery option to true in the osds.yml file. See Section 3.1.4, “Configuring Ceph OSD Settings” for more details.

      Using the devices parameter is useful when OSDs use devices with different names or when one of the devices has failed on one of the OSDs. See Section A.1, “Ansible Stops Installation Because It Detects Less Devices Than It Expected” for more details.

  3. As the Ansible user, ensure that Ansible can reach the Ceph hosts:

    $ ansible all -m ping
    Note

    See Section 2.7, “Creating an Ansible User with Sudo Access” for more details on creating an Ansible user.

3.1.2. Configuring Ceph Global Settings

  1. Create a directory under the home directory so Ansible can write the keys:

    # cd ~
    # mkdir ceph-ansible-keys
  2. As root, create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:

    # ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
  3. As root, create an all.yml file from the all.yml.sample file and open it for editing:

    # cd /etc/ansible/group_vars
    # cp all.yml.sample all.yml
    # vim all.yml
  4. Uncomment the fetch_directory setting under the GENERAL section. Then, point it to the directory you created in step 1:

    fetch_directory: ~/ceph-ansible-keys
  5. Uncomment the ceph_repository_type setting and set it to either cdn or iso:

    ceph_repository_type: cdn
  6. Select the installation method. There are two approaches:

    1. If Ceph hosts have connectivity to the Red Hat Content Delivery Network (CDN), uncomment and set the following:

      ceph_origin: repository
      ceph_repository: rhcs
      ceph_repository_type: cdn
      ceph_rhcs_version: 2
    2. If Ceph nodes cannot connect to the Red Hat Content Delivery Network (CDN), uncomment the ceph_repository_type setting and set it to iso. This approach is most frequently used in high security environments.

      ceph_repository_type: iso

      Then, uncomment the ceph_rhcs_iso_path setting and specify the path to the ISO image:

      ceph_rhcs_iso_path: <path>

      Example

      ceph_rhcs_iso_path: /path/to/ISO_file.iso

  7. For Red Hat Ceph Storage 2.5 and later versions, uncomment and set ceph_rhcs_cdn_debian_repo and ceph_rhcs_cdn_debian_repo_version so that Ansible can automatically enable and access the Ubuntu online repositories:

    ceph_rhcs_cdn_debian_repo: <repo-path>
    ceph_rhcs_cdn_debian_repo_version: <repo-version>

    Example

    ceph_rhcs_cdn_debian_repo: https://<login>:<pwd>@rhcs.download.redhat.com
    ceph_rhcs_cdn_debian_repo_version: /2-release/

    Where <login> is the RHN user login and <pwd> is the RHN user’s password.

  8. Set the generate_fsid setting to false:

    generate_fsid: false
    Note

    When generate_fsid is set to false, you must specify a value for the File System Identifier (FSID) manually. For example, you can generate a Universally Unique Identifier (UUID) by using the uuidgen command-line utility. After you generate a UUID, uncomment the fsid setting and specify the generated UUID:

    fsid: <generated_uuid>
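
    For example (the UUID shown matches the one used in the examples later in this chapter; yours will differ):

    $ uuidgen
    a7f64266-0894-4f1e-a635-d0aeaca0e993

    fsid: a7f64266-0894-4f1e-a635-d0aeaca0e993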

    When generate_fsid is set to true, the UUID is generated automatically, and you do not need to specify it in the fsid setting.

  9. To enable authentication, uncomment the cephx setting under the Ceph Configuration section. Red Hat recommends running Ceph with authentication enabled:

    cephx: true
  10. Uncomment the monitor_interface setting and specify the network interface:

    monitor_interface:

    Example

    monitor_interface: eth0

    Note

    The monitor_interface setting will use the IPv4 address. To use an IPv6 address, use the monitor_address setting instead.
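
    For example, a sketch using a placeholder address from the IPv6 documentation range; substitute the IPv6 address of the Monitor node:

    monitor_address: 2001:db8::120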

  11. Skip this step if you are not using IPv6. To use IPv6 addressing, uncomment and set the ip_version option:

    ip_version: ipv6
  12. Set the journal size:

    journal_size: <size_in_MB>

    If left unset, the journal size defaults to 5 GB. See Journal Settings for additional details.
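
    For example, to set the journal size to 5120 MB, matching the 5 GB default:

    journal_size: 5120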

  13. Set the public network. Optionally, set the cluster network, too:

    public_network: <public_network>
    cluster_network: <cluster_network>

    See Section 2.4, “Configuring Network” and Network Configuration Reference for additional details.
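
    For example, a sketch with placeholder subnets; use the networks configured in Section 2.4, “Configuring Network”:

    public_network: 192.168.0.0/24
    cluster_network: 192.168.1.0/24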

  14. Skip this step if you are not using IPv6. To bind the Ceph Object Gateway to an IPv6 address, uncomment and set the radosgw_civetweb_bind_ip option:

    radosgw_civetweb_bind_ip: "[{{ ansible_default_ipv6.address }}]"

3.1.3. Configuring Monitor Settings

Ansible will create Ceph Monitors without any additional configuration steps. However, you can override default settings for authentication and for use with OpenStack. By default, the Calamari API is disabled.

To configure monitors, perform the following:

  1. Navigate to the /etc/ansible/group_vars/ directory:

    # cd /etc/ansible/group_vars/
  2. As root, create a mons.yml file from the mons.yml.sample file and open it for editing:

    # cp mons.yml.sample mons.yml
    # vim mons.yml
  3. To enable the Calamari API, uncomment the calamari setting and set it to true:

    calamari: true
  4. To configure other settings, uncomment them and set appropriate values.

3.1.4. Configuring Ceph OSD Settings

To configure OSDs:

  1. Navigate to the /etc/ansible/group_vars/ directory:

    # cd /etc/ansible/group_vars/
  2. As root, create a new osds.yml file from the osds.yml.sample file and open it for editing:

    # cp osds.yml.sample osds.yml
    # vim osds.yml
  3. Uncomment and set settings that are relevant for your use case. See Table 3.1, “What settings are needed for my use case?” for details.
  4. Once you are done editing the file, save your changes and close the file.

Table 3.1. What settings are needed for my use case?

I want | Relevant Options | Comments

to have the Ceph journal and OSD data co-located on the same device and to specify OSD disks on my own.

devices

journal_collocation: true

The devices setting accepts a list of devices. Ensure that the specified devices correspond to the storage devices on the OSD nodes.

to have the Ceph journal and OSD data co-located on the same device and ceph-ansible to detect and configure all the available devices.

osd_auto_discovery: true

journal_collocation: true

to use one or more dedicated devices to store the Ceph journal.

devices

raw_multi_journal: true

raw_journal_devices

The devices and raw_journal_devices settings accept a list of devices. Ensure that the devices specified correspond to the storage devices on the OSD nodes.

to use directories instead of disks.

osd_directory: true

osd_directories

The osd_directories setting accepts a list of directories. IMPORTANT: Red Hat currently does not support this scenario.

to have the Ceph journal and OSD data co-located on the same device and encrypt OSD data.

devices

dmcrypt_journal_collocation: true

The devices setting accepts a list of devices. Ensure that the specified devices correspond to the storage devices on the OSD nodes.

Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide.

to use one or more dedicated devices to store the Ceph journal and encrypt OSD data.

devices

dmcrypt_dedicated_journal: true

raw_journal_devices

The devices and raw_journal_devices settings accept a list of devices. Ensure that the devices specified correspond to the storage devices on the OSD nodes.

Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide.

to use the BlueStore back end instead of the FileStore back end.

devices

bluestore: true

The devices setting accepts a list of devices.

For details on BlueStore, see the OSD BlueStore (Technology Preview) chapter in the Administration Guide for Red Hat Ceph Storage.

For additional settings, see the osds.yml.sample file located in /usr/share/ceph-ansible/group_vars/.

Warning

Some OSD options will conflict with each other. Avoid enabling these sets of options together:

  • journal_collocation and raw_multi_journal
  • journal_collocation and osd_directory
  • raw_multi_journal and osd_directory

In addition, you must specify one of these options.
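
For illustration, a minimal osds.yml sketch for the first use case in Table 3.1, with the journal co-located on explicitly listed devices. The device names are examples and must match the storage devices on your OSD nodes:

devices:
  - /dev/sdb
  - /dev/sdc
journal_collocation: true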

3.1.5. Overriding Ceph Default Settings

Unless otherwise specified in the Ansible configuration files, Ceph uses its default settings.

Because Ansible manages the Ceph configuration file, edit the /etc/ansible/group_vars/all.yml file to change the Ceph configuration. Use the ceph_conf_overrides setting to override the default Ceph configuration.

Ansible supports the same sections as the Ceph configuration file: [global], [mon], [osd], [mds], [rgw], and so on. You can also override particular instances, such as a particular Ceph Object Gateway instance. For example:

###################
# CONFIG OVERRIDE #
###################

ceph_conf_overrides:
   client.rgw.rgw1:
      log_file: /var/log/ceph/ceph-rgw-rgw1.log
Note

Ansible does not include braces when referring to a particular section of the Ceph configuration file. Section and setting names are terminated with a colon.
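
For example, settings in the [global] section are overridden in the same way. The following is a sketch only; the values are illustrative:

ceph_conf_overrides:
   global:
      osd_pool_default_size: 3
      osd_pool_default_min_size: 2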

Important

Do not set the cluster network with the cluster_network parameter in the CONFIG OVERRIDE section, because this can result in two conflicting cluster networks being set in the Ceph configuration file.

To set the cluster network, use the cluster_network parameter in the CEPH CONFIGURATION section. For details, see Configuring Ceph Global Settings.

3.1.6. Deploying a Ceph Cluster

  1. Navigate to the Ansible configuration directory:

    # cd /usr/share/ceph-ansible
  2. As root, create a site.yml file from the site.yml.sample file:

    # cp site.yml.sample site.yml
  3. As the Ansible user, run the Ansible playbook from within the directory where the playbook exists:

    $ ansible-playbook site.yml [-u <user_name>]

    Once the playbook runs, it creates a running Ceph cluster.

    Note

    During the deployment of a Red Hat Ceph Storage cluster with Ansible, NTP is installed, configured, and enabled automatically on each node in the storage cluster.

  4. As root, on the Ceph Monitor nodes, create a Calamari user:

    Syntax

    # calamari-ctl add_user --password <password> --email <email_address> <user_name>

    Example

    # calamari-ctl add_user --password abc123 --email user1@example.com user1

3.2. Installing Red Hat Ceph Storage by using the Command-line Interface

All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors and a minimum of three Object Storage Devices (OSDs) for production environments.

Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:

  • The number of replicas for pools
  • The number of placement groups per OSD
  • The heartbeat intervals
  • Any authentication requirement

Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.

Installing a Ceph storage cluster by using the command-line interface involves bootstrapping the initial Monitor, bootstrapping the OSDs, and optionally installing the Calamari server.

Important

Red Hat does not support or test upgrading manually deployed clusters. Currently, the only supported way to upgrade to a minor version of Red Hat Ceph Storage 2 is to use the Ansible automation application. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 2. See Section 3.1, “Installing Red Hat Ceph Storage using Ansible” for details.

You can use command-line utilities, such as apt-get, to upgrade manually deployed clusters, but Red Hat does not support or test this.

3.2.1. Monitor Bootstrapping

Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:

Unique Identifier
The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
Cluster Name

Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you can specify a different cluster name. Overriding the default cluster name is especially useful when you work with multiple clusters.

When you run multiple clusters in a multi-site architecture, the cluster name, for example us-west or us-east, identifies the cluster for the current command-line session.

Note

To identify the cluster name on the command-line interface, specify the Ceph configuration file with the cluster name, for example, ceph.conf, us-west.conf, us-east.conf, and so on.

Example:

# ceph --cluster us-west.conf ...

Monitor Name
Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.
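
For example, on the initial Monitor node used later in this procedure (the host name node1 is an example):

$ hostname -s
node1
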
Monitor Map

Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:

  • The File System Identifier (fsid)
  • The cluster name; if none is specified, the default cluster name ceph is used
  • At least one host name and its IP address.
Monitor Keyring
Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
Administrator Keyring
To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.

The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings at a minimum.

You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.

To bootstrap the initial Monitor, perform the following steps:

  1. Enable the Red Hat Ceph Storage 2 Monitor repository. For ISO-based installations, see the ISO installation section.
  2. On your initial Monitor node, install the ceph-mon package as root:

    $ sudo apt-get install ceph-mon
  3. As root, create a Ceph configuration file in the /etc/ceph/ directory. By default, Ceph uses ceph.conf, where ceph reflects the cluster name:

    Syntax

    # touch /etc/ceph/<cluster_name>.conf

    Example

    # touch /etc/ceph/ceph.conf

  4. As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:

    Syntax

    # echo "[global]" > /etc/ceph/<cluster_name>.conf
    # echo "fsid = `uuidgen`" >> /etc/ceph/<cluster_name>.conf

    Example

    # echo "[global]" > /etc/ceph/ceph.conf
    # echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf

  5. View the current Ceph configuration file:

    $ cat /etc/ceph/ceph.conf
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  6. As root, add the initial Monitor to the Ceph configuration file:

    Syntax

    # echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/<cluster_name>.conf

    Example

    # echo "mon initial members = node1" >> /etc/ceph/ceph.conf

  7. As root, add the IP address of the initial Monitor to the Ceph configuration file:

    Syntax

    # echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/<cluster_name>.conf

    Example

    # echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf

    Note

    To use IPv6 addresses, you must set the ms bind ipv6 option to true. See the Red Hat Ceph Storage Configuration Guide for more details.
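
    For example, a sketch of the corresponding line in the [global] section of the Ceph configuration file:

    ms bind ipv6 = true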

  8. As root, create the keyring for the cluster and generate the Monitor secret key:

    Syntax

    # ceph-authtool --create-keyring /tmp/<cluster_name>.mon.keyring --gen-key -n mon. --cap mon '<capabilities>'

    Example

    # ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    creating /tmp/ceph.mon.keyring

  9. As root, create the <cluster_name>.client.admin.keyring administrator keyring, generate a client.admin user, and add the user to the keyring:

    Syntax

    # ceph-authtool --create-keyring /etc/ceph/<cluster_name>.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'

    Example

    # ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    creating /etc/ceph/ceph.client.admin.keyring

  10. As root, add the <cluster_name>.client.admin.keyring key to the <cluster_name>.mon.keyring:

    Syntax

    # ceph-authtool /tmp/<cluster_name>.mon.keyring --import-keyring /etc/ceph/<cluster_name>.client.admin.keyring

    Example

    # ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring

  11. Generate the Monitor map. Specify the node name, IP address, and fsid of the initial Monitor, and save the map as /tmp/monmap:

    Syntax

    $ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap

    Example

    $ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
    monmaptool: monmap file /tmp/monmap
    monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
    monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)

  12. As root on the initial Monitor node, create a default data directory:

    Syntax

    # mkdir /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>

    Example

    # mkdir /var/lib/ceph/mon/ceph-node1

  13. As root, populate the initial Monitor daemon with the Monitor map and keyring:

    Syntax

    # ceph-mon [--cluster <cluster_name>] --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/<cluster_name>.mon.keyring

    Example

    # ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
    ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

  14. View the current Ceph configuration file:

    # cat /etc/ceph/ceph.conf
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon_initial_members = node1
    mon_host = 192.168.0.120

    For more details on the various Ceph configuration settings, see the Red Hat Ceph Storage Configuration Guide. The following example of a Ceph configuration file lists some of the most common configuration settings:

    Example

    [global]
    fsid = <cluster-id>
    mon initial members = <monitor_host_name>[, <monitor_host_name>]
    mon host = <ip-address>[, <ip-address>]
    public network = <network>[, <network>]
    cluster network = <network>[, <network>]
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd journal size = <n>
    filestore xattr use omap = true
    osd pool default size = <n>  # Write an object n times.
    osd pool default min size = <n> # Allow writing n copy in a degraded state.
    osd pool default pg num = <n>
    osd pool default pgp num = <n>
    osd crush chooseleaf type = <n>

  15. As root, create the done file:

    Syntax

    # touch /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>/done

    Example

    # touch /var/lib/ceph/mon/ceph-node1/done

  16. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/mon
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    # chown ceph:ceph /etc/ceph/ceph.conf
    # chown ceph:ceph /etc/ceph/rbdmap

    Note

    If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

    # ls -l /etc/ceph/
    ...
    -rw-------.  1 glance glance      64 <date> ceph.client.glance.keyring
    -rw-------.  1 cinder cinder      64 <date> ceph.client.cinder.keyring
    ...
  17. For storage clusters with custom names, as root, add the following line to the /etc/default/ceph file:

    Syntax

    $ echo "CLUSTER=<custom_cluster_name>" | sudo tee -a /etc/default/ceph

    Example

    $ echo "CLUSTER=test123" | sudo tee -a /etc/default/ceph

  18. As root, start and enable the ceph-mon process on the initial Monitor node:

    Syntax

    $ sudo systemctl enable ceph-mon.target
    $ sudo systemctl enable ceph-mon@<monitor_host_name>
    $ sudo systemctl start ceph-mon@<monitor_host_name>

    Example

    $ sudo systemctl enable ceph-mon.target
    $ sudo systemctl enable ceph-mon@node1
    $ sudo systemctl start ceph-mon@node1

  19. Verify that Ceph created the default pools:

    $ ceph osd lspools
    0 rbd,
  20. Verify that the Monitor is running. The status output will look similar to the following example. The Monitor is up and running, but the cluster health will be in a HEALTH_ERR state. This error indicates that placement groups are stuck and inactive. Once OSDs are added to the cluster and become active, the placement group health errors will disappear.

    Example

    $ ceph -s
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
    monmap e1: 1 mons at {node1=192.168.0.120:6789/0}, election epoch 1, quorum 0 node1
    osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
    0 kB used, 0 kB / 0 kB avail
    192 creating

To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Red Hat Ceph Storage Administration Guide.

3.2.2. OSD Bootstrapping

Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.

The default number of copies for an object is three. You will need three OSD nodes at minimum. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.

For more details, see the OSD Configuration Reference section in the Red Hat Ceph Storage Configuration Guide.
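
For example, a sketch of the relevant lines in the [global] section of the Ceph configuration file for a two-copy cluster; the minimum size of 1 is an illustrative choice:

osd pool default size = 2
osd pool default min size = 1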

After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.

To add an OSD to the cluster and update the default CRUSH map, execute the following steps on each OSD node:

  1. Enable the Red Hat Ceph Storage 2 OSD repository. For ISO-based installations, see the ISO installation section.
  2. As root, install the ceph-osd package on the Ceph OSD node:

    $ sudo apt-get install ceph-osd
  3. Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:

    Syntax

    # scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>

    Example

    # scp root@node1:/etc/ceph/ceph.conf /etc/ceph
    # scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph

  4. Generate the Universally Unique Identifier (UUID) for the OSD:

    $ uuidgen
    b367c360-b364-4b1d-8fc6-09408a9cda7a
  5. As root, create the OSD instance:

    Syntax

    # ceph osd create <uuid> [<osd_id>]

    Example

    # ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
    0

    Note

    This command outputs the OSD number identifier needed for subsequent steps.

  6. As root, create the default directory for the new OSD:

    Syntax

    # mkdir /var/lib/ceph/osd/<cluster_name>-<osd_id>

    Example

    # mkdir /var/lib/ceph/osd/ceph-0

  7. As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:

    Syntax

    # parted <path_to_disk> mklabel gpt
    # parted <path_to_disk> mkpart primary 1 10000
    # mkfs -t <fstype> <path_to_partition>
    # mount -o noatime <path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id>
    # echo "<path_to_partition>  /var/lib/ceph/osd/<cluster_name>-<osd_id>   xfs defaults,noatime 1 2" >> /etc/fstab

    Example

    # parted /dev/sdb mklabel gpt
    # parted /dev/sdb mkpart primary 1 10000
    # parted /dev/sdb mkpart primary 10001 15000
    # mkfs -t xfs /dev/sdb1
    # mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
    # echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0  xfs defaults,noatime 1 2" >> /etc/fstab

  8. As root, initialize the OSD data directory:

    Syntax

    # ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>

    Example

    # ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
    ... auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
    ... created new key in keyring /var/lib/ceph/osd/ceph-0/keyring

    Note

    The directory must be empty before you run ceph-osd with the --mkkey option. If you have a custom cluster name, the ceph-osd utility requires the --cluster option.

  9. As root, register the OSD authentication key. If your cluster name differs from ceph, insert your cluster name instead:

    Syntax

    # ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/<cluster_name>-<osd_id>/keyring

    Example

    # ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
    added key for osd.0

  10. As root, add the OSD node to the CRUSH map:

    Syntax

    # ceph [--cluster <cluster_name>] osd crush add-bucket <host_name> host

    Example

    # ceph osd crush add-bucket node2 host

  11. As root, place the OSD node under the default CRUSH tree:

    Syntax

    # ceph [--cluster <cluster_name>] osd crush move <host_name> root=default

    Example

    # ceph osd crush move node2 root=default

  12. As root, add the OSD disk to the CRUSH map:

    Syntax

    # ceph [--cluster <cluster_name>] osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]

    Example

    # ceph osd crush add osd.0 1.0 host=node2
    add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map

    Note

    You can also decompile the CRUSH map, add the OSD to the device list, add the OSD node as a bucket, add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Red Hat Ceph Storage Storage Strategies Guide.

  13. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/osd
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph

  14. For storage clusters with custom names, as root, add the following line to the /etc/default/ceph file:

    Syntax

    $ echo "CLUSTER=<custom_cluster_name>" | sudo tee -a /etc/default/ceph

    Example

    $ echo "CLUSTER=test123" | sudo tee -a /etc/default/ceph

  15. The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:

    Syntax

    $ sudo systemctl enable ceph-osd.target
    $ sudo systemctl enable ceph-osd@<osd_id>
    $ sudo systemctl start ceph-osd@<osd_id>

    Example

    $ sudo systemctl enable ceph-osd.target
    $ sudo systemctl enable ceph-osd@0
    $ sudo systemctl start ceph-osd@0

    Once you start the OSD daemon, it is up and in.

Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:

$ ceph -w

To view the OSD tree, execute the following command:

$ ceph osd tree

Example

ID  WEIGHT    TYPE NAME        UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1       2    root default
-2       2        host node2
 0       1            osd.0         up         1                 1
-3       1        host node3
 1       1            osd.1         up         1                 1

To expand the storage capacity by adding new OSDs to the storage cluster, see the Red Hat Ceph Storage Administration Guide for more details.

3.2.3. Calamari Server Installation

The Calamari server provides a RESTful API for monitoring Ceph storage clusters.

To install calamari-server, perform the following steps on all Monitor nodes.

  1. As root, enable the Red Hat Ceph Storage 2 Monitor repository.
  2. As root, install calamari-server:

    $ sudo apt-get install calamari-server
  3. As root, initialize the calamari-server:

    Syntax

    $ sudo calamari-ctl clear --yes-i-am-sure
    $ sudo calamari-ctl initialize --admin-username <uid> --admin-password <pwd> --admin-email <email>

    Example

    $ sudo calamari-ctl clear --yes-i-am-sure
    $ sudo calamari-ctl initialize --admin-username admin --admin-password admin --admin-email cephadm@example.com

    Important

    The calamari-ctl clear --yes-i-am-sure command is only necessary for removing the database of old Calamari server installations. Running this command on a new Calamari server results in an error.

    Note

    During initialization, the calamari-server will generate a self-signed certificate and a private key and place them in the /etc/calamari/ssl/certs/ and /etc/calamari/ssl/private directories respectively. Use HTTPS when making requests. Otherwise, user names and passwords are transmitted in clear text.

The calamari-ctl initialize process generates a private key and a self-signed certificate, which means there is no need to purchase a certificate from a Certificate Authority (CA).

To verify access to the HTTPS API through a web browser, go to the following URL. Click through the untrusted certificate warnings, because the auto-generated certificate is self-signed:

https://<calamari_hostname>:8002/api/v2/cluster
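
Alternatively, you can make a quick check from the command line. This is a sketch only; the -k option skips verification of the self-signed certificate, and the API might still prompt for the Calamari credentials:

$ curl -k https://<calamari_hostname>:8002/api/v2/cluster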

To use a key and certificate from a CA:

  1. Purchase a certificate from a CA. During the process, you will generate a private key and a certificate signing request for the CA. Alternatively, you can use the self-signed certificate generated by Calamari.
  2. Save the private key associated with the certificate to a path, preferably under /etc/calamari/ssl/private/.
  3. Save the certificate to a path, preferably under /etc/calamari/ssl/certs/.
  4. Open the /etc/calamari/calamari.conf file.
  5. Under the [calamari_web] section, modify ssl_cert and ssl_key to point to the respective certificate and key path, for example:

    [calamari_web]
    ...
    ssl_cert = /etc/calamari/ssl/certs/calamari-lite-bundled.crt
    ssl_key = /etc/calamari/ssl/private/calamari-lite.key
  6. As root, re-initialize Calamari:

    $ sudo calamari-ctl initialize