Chapter 4. Client Installation

Red Hat Ceph Storage supports the following types of Ceph clients:

Ceph CLI
The Ceph command-line interface (CLI) enables administrators to execute Ceph administrative commands. See Section 4.2, “Ceph Command-line Interface Installation” for information on installing the Ceph CLI.
Block Device
The Ceph Block Device is a thin-provisioned, resizable block device. See Section 4.3, “Ceph Block Device Installation” for information on installing Ceph Block Devices.
Object Gateway
The Ceph Object Gateway provides its own user management and Swift- and S3-compliant APIs. See Section 4.4, “Ceph Object Gateway Installation” for information on installing Ceph Object Gateways.

In addition, the ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. See Section 4.1, “Installing the ceph-client role” for details.

Important

To use Ceph clients, you must have a Ceph storage cluster running, preferably in the active + clean state.

In addition, before installing the Ceph clients, ensure that you have performed the tasks listed in Figure 2.1, “Prerequisite Workflow”.

4.1. Installing the ceph-client role

The ceph-client role copies the Ceph configuration file and administration keyring to a node. In addition, you can use this role to create custom pools and clients.

Perform the following tasks on the Ansible administration node. See Installing ceph-ansible for details.

  1. Add a new section [clients] to the /etc/ansible/hosts file:

    [clients]
    <client-hostname>

    Replace <client-hostname> with the host name of the node where you want to install the ceph-client role.
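
    For example, for a client node with the hypothetical host name client1:

    [clients]
    client1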

  2. Navigate to the /etc/ansible/group_vars/ directory:

    $ cd /etc/ansible/group_vars
  3. Create a new copy of the clients.yml.sample file named clients.yml:

    # cp clients.yml.sample clients.yml
  4. Optionally, instruct ceph-client to create pools and clients.

    1. Update clients.yml.

      • Uncomment the user_config setting and set it to true.
      • Uncomment the pools and keys sections and update them as required. You can define custom pools and client names together with the cephx capabilities; see the sketch at the end of this step.
    2. Add the osd_pool_default_pg_num setting to the ceph_conf_overrides section in the all.yml file:

      ceph_conf_overrides:
         global:
            osd_pool_default_pg_num: <number>

      Replace <number> with the default number of placement groups.
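
    The following is a minimal sketch of what an updated clients.yml might contain. The exact field names come from the clients.yml.sample file shipped with your version of ceph-ansible, so verify them against that file; the pool name test, the client name client.test, the key placeholder, and the capability strings below are illustrative only:

      user_config: true
      pools:
        - { name: test, pgs: "{{ osd_pool_default_pg_num }}" }
      keys:
        - { name: client.test, key: "ADD-KEYRING-HERE==", mon_cap: "allow r", osd_cap: "allow rwx pool=test" }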

  5. Run the Ansible playbook:

    $ ansible-playbook site.yml
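
    If the rest of the cluster is already deployed and you only want to apply the client tasks, you can optionally restrict the run to the [clients] group with the standard Ansible --limit option:

    $ ansible-playbook site.yml --limit clients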

4.2. Ceph Command-line Interface Installation

The Ceph command-line interface (CLI) is provided by the ceph-common package and includes the following utilities:

  • ceph
  • ceph-authtool
  • ceph-dencoder
  • rados

To install the Ceph CLI:

  1. On the client node, enable the Tools repository.
  2. On the client node, install the ceph-common package:

    # yum install ceph-common
  3. From the initial monitor node, copy the Ceph configuration file, in this case ceph.conf, and the administration keyring to the client node:

    Syntax

    # scp /etc/ceph/<cluster_name>.conf <user_name>@<client_host_name>:/etc/ceph/
    # scp /etc/ceph/<cluster_name>.client.admin.keyring <user_name>@<client_host_name>:/etc/ceph/

    Example

    # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
    # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/

    Replace <client_host_name> with the host name of the client node.
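
    With the configuration file and the administration keyring in place, the client node can reach the cluster. A quick, optional check, assuming the default cluster name ceph and the admin keyring copied above:

    # ceph --version
    # ceph -s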

4.3. Ceph Block Device Installation

The following procedure shows how to install and mount a thin-provisioned, resizable Ceph Block Device.

Important

Ceph Block Devices must be deployed on separate nodes from the Ceph Monitor and OSD nodes. Running kernel clients and kernel server daemons on the same node can lead to kernel deadlocks.

Before you start

  • Verify that you have performed the tasks listed in Figure 2.1, “Prerequisite Workflow”.

Installing Ceph Block Devices by Using the Command Line

  1. Create a Ceph Block Device user named client.rbd with full permissions to files on OSD nodes (osd 'allow rwx') and output the result to a keyring file:

    ceph auth get-or-create client.rbd mon 'allow r' osd 'allow rwx pool=<pool_name>' \
    -o /etc/ceph/rbd.keyring

    Replace <pool_name> with the name of the pool that you want to allow client.rbd to have access to, for example rbd:

    # ceph auth get-or-create \
    client.rbd mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/rbd.keyring

    See the User Management section in the Red Hat Ceph Storage Administration Guide for more information about creating users.
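
    To confirm the key and capabilities that were created, you can optionally print the new user (client.rbd is the user created in this step):

    # ceph auth get client.rbd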

  2. Create a block device image:

    rbd create <image_name> --size <image_size> --pool <pool_name> \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

    Specify <image_name>, <image_size>, and <pool_name>, for example:

    $ rbd create image1 --size 4096 --pool rbd \
    --name client.rbd --keyring /etc/ceph/rbd.keyring
    Warning

    The default Ceph configuration includes the following Ceph Block Device features:

    • layering
    • exclusive-lock
    • object-map
    • deep-flatten
    • fast-diff

    If you use the kernel RBD (krbd) client, you will not be able to map the block device image because the current kernel version included in Red Hat Enterprise Linux 7.3 does not support object-map, deep-flatten, and fast-diff.

    To work around this problem, disable the unsupported features. Use one of the following options to do so:

    • Disable the unsupported features dynamically:

      rbd feature disable <image_name> <feature_name>

      For example:

      # rbd feature disable image1 object-map deep-flatten fast-diff
    • Use the --image-feature layering option with the rbd create command to enable only layering on newly created block device images; see the example after this warning.
    • Disable the features by default in the Ceph configuration file:

      rbd_default_features = 1

    This is a known issue; for details, see the Red Hat Ceph Storage 2.2 Release Notes.

    All these features work for users who use the user-space RBD client to access the block device images.
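
    The --image-feature layering option mentioned in the warning takes effect at image creation time. A sketch of the earlier rbd create example with only layering enabled (image1 and the rbd pool are the example names used in this procedure):

    # rbd create image1 --size 4096 --pool rbd --image-feature layering \
    --name client.rbd --keyring /etc/ceph/rbd.keyring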

  3. Map the newly created image to the block device:

    rbd map <image_name> --pool <pool_name> \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

    For example:

    # rbd map image1 --pool rbd --name client.rbd \
    --keyring /etc/ceph/rbd.keyring
    Important

    Kernel block devices currently support only the legacy straw bucket algorithm in the CRUSH map. If you have set the CRUSH tunables to optimal, you must set them to legacy or an earlier major release; otherwise, you will not be able to map the image.

    Alternatively, replace straw2 with straw in the CRUSH map. For details, see the Editing a CRUSH Map chapter in the Storage Strategies Guide for Red Hat Ceph Storage 2.

  4. Use the block device by creating a file system:

    mkfs.ext4 -m5 /dev/rbd/<pool_name>/<image_name>

    Specify the pool name and the image name, for example:

    # mkfs.ext4 -m5 /dev/rbd/rbd/image1

    This can take a few moments.

  5. Mount the newly created file system:

    mkdir <mount_directory>
    mount /dev/rbd/<pool_name>/<image_name> <mount_directory>

    For example:

    # mkdir /mnt/ceph-block-device
    # mount /dev/rbd/rbd/image1 /mnt/ceph-block-device
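
    Optionally, confirm that the image is mapped and the file system is mounted:

    # rbd showmapped
    # df -h /mnt/ceph-block-device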

For additional details, see the Red Hat Ceph Storage Block Device Guide.

4.4. Ceph Object Gateway Installation

The Ceph object gateway, also known as the RADOS gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.

For more information about the Ceph object gateway, see the Object Gateway Guide for Red Hat Enterprise Linux.

There are two ways to install the Ceph object gateway: by using Ansible or manually.

4.4.1. Installing Ceph Object Gateway by using Ansible

Perform the following tasks on the Ansible administration node. See Install Ceph Ansible for details.

  1. As root, create the rgws file from the sample file:

    # cd /etc/ansible/group_vars
    # cp rgws.yml.sample rgws.yml
  2. To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /etc/ansible/group_vars/rgws.yml file:

    copy_admin_key: true
  3. In the rgws.yml file, you can specify a port other than the default port of 7480. For example:

    ceph_rgw_civetweb_port: 80
  4. Generally, to change default settings, uncomment the settings in the rgws.yml file and make the changes accordingly. To change settings that are not in the rgws.yml file, use ceph_conf_overrides: in the all.yml file. For example, set rgw_dns_name: to the host name of the DNS server, and ensure that the cluster’s DNS server is configured for wildcard entries so that S3 subdomains work.

    ceph_conf_overrides:
       client.rgw.rgw1:
          rgw_dns_name: <host_name>
          rgw_override_bucket_index_max_shards: 16
          rgw_bucket_default_quota_max_objects: 1638400

    For advanced configuration details, see the Ceph Object Gateway for Production guide.

  5. Add the radosgw_interface setting to the group_vars/all.yml file, and specify the network interface for the Ceph Object Gateway. For example:

    radosgw_interface: eth0
  6. Add gateway hosts to the /etc/ansible/hosts file under the [rgws] section to identify their roles to Ansible. If the hosts have sequential naming, you can use a range. For example:

    [rgws]
    <rgw_host_name_1>
    <rgw_host_name_2>
    <rgw_host_name[3..10]>
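
    For example, with two hypothetical gateway nodes named rgw1 and rgw2:

    [rgws]
    rgw1
    rgw2
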
  7. Navigate to the Ansible playbook directory, /usr/share/ceph-ansible/:

    $ cd /usr/share/ceph-ansible
  8. Run the Ansible playbook:

    $ ansible-playbook site.yml
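
    Optionally, confirm from the administration node that the gateway service is active on every host in the [rgws] group. The following ad hoc command is a sketch that assumes the ceph-radosgw@rgw.<short_host_name> service naming used in the manual procedure below:

    $ ansible rgws -m shell -a 'systemctl is-active ceph-radosgw@rgw.$(hostname -s)'
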
Note

Ansible ensures that each Ceph Object Gateway is running.

For a single site configuration, add Ceph Object Gateways to the Ansible configuration.

For multi-site deployments, you should have an Ansible configuration for each zone. That is, Ansible will create a Ceph storage cluster and gateway instances for that zone.

After installation for a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Red Hat Enterprise Linux for details on configuring a cluster for multi-site.

4.4.2. Installing Ceph Object Gateway Manually

  1. Enable the Red Hat Ceph Storage 2 Tools repository. For ISO-based installations, see the ISO installation section.
  2. On the Object Gateway node, install the ceph-radosgw package:

    # yum install ceph-radosgw
  3. On the initial Monitor node, complete the following steps:

    1. Update the Ceph configuration file as follows:

      [client.rgw.<obj_gw_hostname>]
      host = <obj_gw_hostname>
      rgw frontends = "civetweb port=80"
      rgw dns name = <obj_gw_hostname>.example.com

      Where <obj_gw_hostname> is the short host name of the gateway node. To view the short host name, use the hostname -s command.
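
      For example, for a gateway node with the short host name node1 (the example host used elsewhere in this chapter):

      [client.rgw.node1]
      host = node1
      rgw frontends = "civetweb port=80"
      rgw dns name = node1.example.com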

    2. Copy the updated configuration file to the new Object Gateway node and all other nodes in the Ceph storage cluster:

      Syntax

      # scp /etc/ceph/<cluster_name>.conf <user_name>@<target_host_name>:/etc/ceph

      Example

      # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/

    3. Copy the <cluster_name>.client.admin.keyring file to the new Object Gateway node:

      Syntax

      # scp /etc/ceph/<cluster_name>.client.admin.keyring <user_name>@<target_host_name>:/etc/ceph/

      Example

      # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/

  4. On the Object Gateway node, create the data directory:

    Syntax

    # mkdir -p /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`

    Example

    # mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`

  5. On the Object Gateway node, add a user and keyring to bootstrap the object gateway:

    Syntax

    # ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`/keyring

    Example

    # ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring

    Important

    When you provide capabilities to the gateway key, you must provide the read capability. However, providing the Monitor write capability is optional; if you provide it, the Ceph Object Gateway will be able to create pools automatically.

    In such a case, ensure that you specify a reasonable number of placement groups in a pool. Otherwise, the gateway uses the default number, which might not be suitable for your needs. See Ceph Placement Groups (PGs) per Pool Calculator for details.
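
    One way to do that is to set the cluster-wide defaults in the [global] section of the Ceph configuration file that is shared across the cluster before starting the gateway; pools that the gateway creates automatically generally pick up these defaults. The values below are placeholders only; use the calculator to choose values that fit your cluster:

    [global]
    osd pool default pg num = 32
    osd pool default pgp num = 32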

  6. On the Object Gateway node, create the done file:

    Syntax

    # touch /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`/done

    Example

    # touch /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/done

  7. On the Object Gateway node, change the owner and group permissions:

    # chown -R ceph:ceph /var/lib/ceph/radosgw
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph
  8. For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:

    Syntax

    # echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph

    Example

    # echo "CLUSTER=test123" >> /etc/sysconfig/ceph

  9. On the Object Gateway node, open TCP port 80:

    # firewall-cmd --zone=public --add-port=80/tcp
    # firewall-cmd --zone=public --add-port=80/tcp --permanent
  10. On the Object Gateway node, start and enable the ceph-radosgw process:

    Syntax

    # systemctl enable ceph-radosgw.target
    # systemctl enable ceph-radosgw@rgw.<rgw_hostname>
    # systemctl start ceph-radosgw@rgw.<rgw_hostname>

    Example

    # systemctl enable ceph-radosgw.target
    # systemctl enable ceph-radosgw@rgw.node1
    # systemctl start ceph-radosgw@rgw.node1
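
    Once the service is running, you can optionally confirm that the gateway answers on the configured port; an unauthenticated request normally returns an empty S3 bucket listing (node1 and port 80 are the example values from this procedure):

    # systemctl status ceph-radosgw@rgw.node1
    # curl http://node1:80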

Once installed, the Ceph Object Gateway automatically creates pools if the write capability is set on the Monitor. See the Pools chapter in the Storage Strategies Guide for information on creating pools manually.