Appendix B. Using the command-line interface to install the Ceph software

As a storage administrator, you can choose to manually install various components of the Red Hat Ceph Storage software.

B.1. Installing the Ceph Command Line Interface

The Ceph command-line interface (CLI) enables administrators to execute Ceph administrative commands. The CLI is provided by the ceph-common package and includes the following utilities:

  • ceph
  • ceph-authtool
  • ceph-dencoder
  • rados

Prerequisites

  • A running Ceph storage cluster, preferably in the active + clean state.

Procedure

  1. On the client node, enable the Red Hat Ceph Storage 4 Tools repository:

    [root@gateway ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  2. On the client node, install the ceph-common package:

    # yum install ceph-common
  3. From the initial monitor node, copy the Ceph configuration file, in this case ceph.conf, and the administration keyring to the client node:

    Syntax

    # scp /etc/ceph/ceph.conf <user_name>@<client_host_name>:/etc/ceph/
    # scp /etc/ceph/ceph.client.admin.keyring <user_name>@<client_host_name>:/etc/ceph/

    Example

    # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
    # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/

    Replace <client_host_name> with the host name of the client node.
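
As a quick check (a minimal sketch, assuming the configuration file and administration keyring were copied successfully), you can confirm from the client node that the CLI can reach the cluster:

    # ceph health
    # ceph -s

On a healthy cluster, ceph health typically returns HEALTH_OK, and ceph -s prints a summary of the monitors, OSDs, and pools.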

B.2. Manually Installing Red Hat Ceph Storage

Important

Red Hat does not support or test upgrading manually deployed clusters. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 4. See Chapter 5, Installing Red Hat Ceph Storage using Ansible for details.

You can use command-line utilities, such as Yum, to upgrade manually deployed clusters, but Red Hat does not support or test this approach.

All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors and a minimum of three Object Storage Devices (OSDs) for production environments.

Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:

  • The number of replicas for pools
  • The number of placement groups per OSD
  • The heartbeat intervals
  • Any authentication requirement

Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.

Installing a Ceph storage cluster by using the command line interface involves these steps:

Monitor Bootstrapping

Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:

Unique Identifier
The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
Monitor Name
Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.
Monitor Map

Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:

  • The File System Identifier (fsid)
  • The cluster name; if not specified, the default cluster name of ceph is used
  • At least one host name and its IP address.
Monitor Keyring
Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
Administrator Keyring
To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.

The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings at a minimum.

You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file needs to contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the defaults. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
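
For example, a minimal sketch of querying and changing a setting at runtime (assuming a Monitor named node1; the mon_allow_pool_delete option is used only for illustration):

    # ceph daemon mon.node1 config show | grep mon_allow_pool_delete
    # ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'

To make such a change permanent, also add the corresponding line, for example mon allow pool delete = true, to the appropriate section of the Ceph configuration file.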

To bootstrap the initial Monitor, perform the following steps:

  1. Enable the Red Hat Ceph Storage 4 Monitor repository:

    [root@monitor ~]# subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
  2. On your initial Monitor node, install the ceph-mon package as root:

    # yum install ceph-mon
  3. As root, create a Ceph configuration file in the /etc/ceph/ directory.

    # touch /etc/ceph/ceph.conf
  4. As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:

    # echo "[global]" > /etc/ceph/ceph.conf
    # echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf
  5. View the current Ceph configuration file:

    $ cat /etc/ceph/ceph.conf
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  6. As root, add the initial Monitor to the Ceph configuration file:

    Syntax

    # echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/ceph.conf

    Example

    # echo "mon initial members = node1" >> /etc/ceph/ceph.conf

  7. As root, add the IP address of the initial Monitor to the Ceph configuration file:

    Syntax

    # echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/ceph.conf

    Example

    # echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf

    Note

    To use IPv6 addresses, set the ms bind ipv6 option to true. For details, see the Bind section in the Configuration Guide for Red Hat Ceph Storage 4.

  8. As root, create the keyring for the cluster and generate the Monitor secret key:

    # ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    creating /tmp/ceph.mon.keyring
  9. As root, generate an administrator keyring, create a client.admin user, and add the user to the keyring:

    Syntax

    # ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'

    Example

    # ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    creating /etc/ceph/ceph.client.admin.keyring

  10. As root, add the ceph.client.admin.keyring key to the ceph.mon.keyring:

    # ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
  11. Generate the Monitor map. Specify the node name, IP address, and fsid of the initial Monitor, and save the map as /tmp/monmap:

    Syntax

    $ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap

    Example

    $ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
    monmaptool: monmap file /tmp/monmap
    monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
    monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)

  12. As root on the initial Monitor node, create a default data directory:

    Syntax

    # mkdir /var/lib/ceph/mon/ceph-<monitor_host_name>

    Example

    # mkdir /var/lib/ceph/mon/ceph-node1

  13. As root, populate the initial Monitor daemon with the Monitor map and keyring:

    Syntax

    # ceph-mon --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

    Example

    # ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
    ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

  14. View the current Ceph configuration file:

    # cat /etc/ceph/ceph.conf
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon_initial_members = node1
    mon_host = 192.168.0.120

    For more details on the various Ceph configuration settings, see the Configuration Guide for Red Hat Ceph Storage 4. The following example of a Ceph configuration file lists some of the most common configuration settings:

    Example

    [global]
    fsid = <cluster-id>
    mon initial members = <monitor_host_name>[, <monitor_host_name>]
    mon host = <ip-address>[, <ip-address>]
    public network = <network>[, <network>]
    cluster network = <network>[, <network>]
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd journal size = <n>
    osd pool default size = <n>  # Write an object n times.
    osd pool default min size = <n> # Allow writing n copies in a degraded state.
    osd pool default pg num = <n>
    osd pool default pgp num = <n>
    osd crush chooseleaf type = <n>

  15. As root, create the done file:

    Syntax

    # touch /var/lib/ceph/mon/ceph-<monitor_host_name>/done

    Example

    # touch /var/lib/ceph/mon/ceph-node1/done

  16. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/mon
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    # chown ceph:ceph /etc/ceph/ceph.conf
    # chown ceph:ceph /etc/ceph/rbdmap

    Note

    If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

    # ls -l /etc/ceph/
    ...
    -rw-------.  1 glance glance      64 <date> ceph.client.glance.keyring
    -rw-------.  1 cinder cinder      64 <date> ceph.client.cinder.keyring
    ...
  17. As root, start and enable the ceph-mon process on the initial Monitor node:

    Syntax

    # systemctl enable ceph-mon.target
    # systemctl enable ceph-mon@<monitor_host_name>
    # systemctl start ceph-mon@<monitor_host_name>

    Example

    # systemctl enable ceph-mon.target
    # systemctl enable ceph-mon@node1
    # systemctl start ceph-mon@node1

  18. As root, verify the monitor daemon is running:

    Syntax

    # systemctl status ceph-mon@<monitor_host_name>

    Example

    # systemctl status ceph-mon@node1
    ● ceph-mon@node1.service - Ceph cluster monitor daemon
       Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2018-06-27 11:31:30 PDT; 5min ago
     Main PID: 1017 (ceph-mon)
       CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@node1.service
               └─1017 /usr/bin/ceph-mon -f --cluster ceph --id node1 --setuser ceph --setgroup ceph
    
    Jun 27 11:31:30 node1 systemd[1]: Started Ceph cluster monitor daemon.
    Jun 27 11:31:30 node1 systemd[1]: Starting Ceph cluster monitor daemon...
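
At this point, you can also query the storage cluster itself (a minimal sketch, assuming the administration keyring created earlier is in /etc/ceph/). With a single Monitor and no OSDs yet, a health warning about OSDs is expected:

    # ceph mon stat
    # ceph -s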

To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Adding a Monitor section in the Administration Guide for Red Hat Ceph Storage 4.

OSD Bootstrapping

Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.

The default number of copies for an object is three, so you need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file, as shown in the sketch that follows.
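
For example, a minimal sketch of the relevant lines in the Ceph configuration file for a two-copy setup (the values are illustrative only):

    [global]
    osd pool default size = 2      # Write each object twice.
    osd pool default min size = 1  # Allow I/O with a single healthy copy.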

For more details, see the OSD Configuration Reference section in the Configuration Guide for Red Hat Ceph Storage 4.

After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.

To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node:

  1. Enable the Red Hat Ceph Storage 4 OSD repository:

    [root@osd ~]# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
  2. As root, install the ceph-osd package on the Ceph OSD node:

    # yum install ceph-osd
  3. Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:

    Syntax

    # scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>

    Example

    # scp root@node1:/etc/ceph/ceph.conf /etc/ceph
    # scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph

  4. Generate the Universally Unique Identifier (UUID) for the OSD:

    $ uuidgen
    b367c360-b364-4b1d-8fc6-09408a9cda7a
  5. As root, create the OSD instance:

    Syntax

    # ceph osd create <uuid> [<osd_id>]

    Example

    # ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
    0

    Note

    This command outputs the OSD number identifier needed for subsequent steps.

  6. As root, create the default directory for the new OSD:

    Syntax

    # mkdir /var/lib/ceph/osd/ceph-<osd_id>

    Example

    # mkdir /var/lib/ceph/osd/ceph-0

  7. As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create partitions for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:

    Syntax

    # parted <path_to_disk> mklabel gpt
    # parted <path_to_disk> mkpart primary 1 10000
    # mkfs -t <fstype> <path_to_partition>
    # mount -o noatime <path_to_partition> /var/lib/ceph/osd/ceph-<osd_id>
    # echo "<path_to_partition>  /var/lib/ceph/osd/ceph-<osd_id>   xfs defaults,noatime 1 2" >> /etc/fstab

    Example

    # parted /dev/sdb mklabel gpt
    # parted /dev/sdb mkpart primary 1 10000
    # parted /dev/sdb mkpart primary 10001 15000
    # mkfs -t xfs /dev/sdb1
    # mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
    # echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0  xfs defaults,noatime 1 2" >> /etc/fstab

  8. As root, initialize the OSD data directory:

    Syntax

    # ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>

    Example

    # ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
    ... auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
    ... created new key in keyring /var/lib/ceph/osd/ceph-0/keyring

  9. As root, register the OSD authentication key.

    Syntax

    # ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-<osd_id>/keyring

    Example

    # ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
    added key for osd.0

  10. As root, add the OSD node to the CRUSH map:

    Syntax

    # ceph osd crush add-bucket <host_name> host

    Example

    # ceph osd crush add-bucket node2 host

  11. As root, place the OSD node under the default CRUSH tree:

    Syntax

    # ceph osd crush move <host_name> root=default

    Example

    # ceph osd crush move node2 root=default

  12. As root, add the OSD disk to the CRUSH map:

    Syntax

    # ceph osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]

    Example

    # ceph osd crush add osd.0 1.0 host=node2
    add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map

    Note

    You can also decompile the CRUSH map, and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Editing a CRUSH map section in the Storage Strategies Guide for Red Hat Ceph Storage 4.

  13. As root, update the owner and group permissions on the newly created directory and files:

    Syntax

    # chown -R <owner>:<group> <path_to_directory>

    Example

    # chown -R ceph:ceph /var/lib/ceph/osd
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph

  14. The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:

    Syntax

    # systemctl enable ceph-osd.target
    # systemctl enable ceph-osd@<osd_id>
    # systemctl start ceph-osd@<osd_id>

    Example

    # systemctl enable ceph-osd.target
    # systemctl enable ceph-osd@0
    # systemctl start ceph-osd@0

    Once you start the OSD daemon, it is up and in.

Now that you have the monitors and some OSDs up and running, you can watch the placement groups peer by executing the following command:

$ ceph -w

To view the OSD tree, execute the following command:

$ ceph osd tree

Example

ID  WEIGHT    TYPE NAME        UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1       2    root default
-2       2        host node2
 0       1            osd.0         up         1                 1
-3       1        host node3
 1       1            osd.1         up         1                 1

To expand the storage capacity by adding new OSDs to the storage cluster, see the Adding an OSD section in the Administration Guide for Red Hat Ceph Storage 4.

B.3. Manually installing Ceph Manager

Usually, the Ansible automation utility installs the Ceph Manager daemon (ceph-mgr) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node.

Prerequisites

  • A working Red Hat Ceph Storage cluster
  • root or sudo access
  • The rhceph-4-mon-for-rhel-8-x86_64-rpms repository enabled
  • Open ports 6800-7300 on the public network if a firewall is used
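
If firewalld is in use, the following sketch shows one way to open that port range, using the same public zone as the other firewall examples in this guide:

    [root@node1 ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
    [root@node1 ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent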

Procedure

Use the following commands on the node where ceph-mgr will be deployed and as the root user or with the sudo utility.

  1. Install the ceph-mgr package:

    [root@node1 ~]# yum install ceph-mgr
  2. Create the /var/lib/ceph/mgr/ceph-hostname/ directory:

    mkdir /var/lib/ceph/mgr/ceph-hostname

    Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

    [root@node1 ~]# mkdir /var/lib/ceph/mgr/ceph-node1
  3. In the newly created directory, create an authentication key for the ceph-mgr daemon:

    [root@node1 ~]# ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring
  4. Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph:

    [root@node1 ~]# chown -R ceph:ceph /var/lib/ceph/mgr
  5. Enable the ceph-mgr target:

    [root@node1 ~]# systemctl enable ceph-mgr.target
  6. Enable and start the ceph-mgr instance:

    systemctl enable ceph-mgr@hostname
    systemctl start ceph-mgr@hostname

    Replace hostname with the host name of the node where the ceph-mgr will be deployed, for example:

    [root@node1 ~]# systemctl enable ceph-mgr@node1
    [root@node1 ~]# systemctl start ceph-mgr@node1
  7. Verify that the ceph-mgr daemon started successfully:

    ceph -s

    The output will include a line similar to the following one under the services: section:

        mgr: node1(active)
  8. Install more ceph-mgr daemons to serve as standby daemons that become active if the current active daemon fails.
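
    For example, after repeating this procedure on a second node, a hypothetical node2 with the Ceph configuration file and administration keyring in place, ceph -s reports the additional daemon as a standby under the services: section, with output similar to:

        mgr: node1(active), standbys: node2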

B.4. Manually Installing Ceph Block Device

The following procedure shows how to install and mount a thin-provisioned, resizable Ceph Block Device.

Important

Ceph Block Devices must be deployed on separate nodes from the Ceph Monitor and OSD nodes. Running kernel clients and kernel server daemons on the same node can lead to kernel deadlocks.

Prerequisites

Procedure

  1. Create a Ceph Block Device user named client.rbd with full permissions to files on OSD nodes (osd 'allow rwx') and output the result to a keyring file:

    ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=<pool_name>' \
    -o /etc/ceph/rbd.keyring

    Replace <pool_name> with the name of the pool that you want to allow client.rbd to have access to, for example rbd:

    # ceph auth get-or-create \
    client.rbd mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/rbd.keyring

    See the User Management section in the Red Hat Ceph Storage 4 Administration Guide for more information about creating users.

  2. Create a block device image:

    rbd create <image_name> --size <image_size> --pool <pool_name> \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

    Specify <image_name>, <image_size>, and <pool_name>, for example:

    $ rbd create image1 --size 4G --pool rbd \
    --name client.rbd --keyring /etc/ceph/rbd.keyring
    Warning

    The default Ceph configuration includes the following Ceph Block Device features:

    • layering
    • exclusive-lock
    • object-map
    • deep-flatten
    • fast-diff

    If you use the kernel RBD (krbd) client, you may not be able to map the block device image.

    To work around this problem, disable the unsupported features. Use one of the following options to do so:

    • Disable the unsupported features dynamically:

      rbd feature disable <image_name> <feature_name>

      For example:

      # rbd feature disable image1 object-map deep-flatten fast-diff
    • Use the --image-feature layering option with the rbd create command to enable only layering on newly created block device images.
    • Disable the features by default in the Ceph configuration file:

      rbd_default_features = 1

    This is a known issue, for details see the Known Issues chapter in the Release Notes for Red Hat Ceph Storage 4.

    All these features work for users that use the user-space RBD client to access the block device images.

  3. Map the newly created image to the block device:

    rbd map <image_name> --pool <pool_name>\
    --name client.rbd --keyring /etc/ceph/rbd.keyring

    For example:

    # rbd map image1 --pool rbd --name client.rbd \
    --keyring /etc/ceph/rbd.keyring
  4. Use the block device by creating a file system:

    mkfs.ext4 /dev/rbd/<pool_name>/<image_name>

    Specify the pool name and the image name, for example:

    # mkfs.ext4 /dev/rbd/rbd/image1

    This action can take a few moments.

  5. Mount the newly created file system:

    mkdir <mount_directory>
    mount /dev/rbd/<pool_name>/<image_name> <mount_directory>

    For example:

    # mkdir /mnt/ceph-block-device
    # mount /dev/rbd/rbd/image1 /mnt/ceph-block-device
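
To confirm the result (a minimal sketch, assuming the image1 example above), you can list the image details, the mapped devices, and the mounted file system:

    # rbd info image1 --pool rbd --name client.rbd --keyring /etc/ceph/rbd.keyring
    # rbd showmapped
    # df -h /mnt/ceph-block-device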

Additional Resources

B.5. Manually Installing Ceph Object Gateway

The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API that provides applications with a RESTful gateway to Ceph storage clusters.

Prerequisites

Procedure

  1. Enable the Red Hat Ceph Storage 4 Tools repository:

    [root@gateway ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  2. On the Object Gateway node, install the ceph-radosgw package:

    # yum install ceph-radosgw
  3. On the initial Monitor node, perform the following steps:

    1. Update the Ceph configuration file as follows:

      [client.rgw.<obj_gw_hostname>]
      host = <obj_gw_hostname>
      rgw frontends = "civetweb port=80"
      rgw dns name = <obj_gw_hostname>.example.com

      Where <obj_gw_hostname> is the short host name of the gateway node. To view the short host name, use the hostname -s command.

    2. Copy the updated configuration file to the new Object Gateway node and all other nodes in the Ceph storage cluster:

      Syntax

      # scp /etc/ceph/ceph.conf <user_name>@<target_host_name>:/etc/ceph

      Example

      # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/

    3. Copy the ceph.client.admin.keyring file to the new Object Gateway node:

      Syntax

      # scp /etc/ceph/ceph.client.admin.keyring <user_name>@<target_host_name>:/etc/ceph/

      Example

      # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/

  4. On the Object Gateway node, create the data directory:

    # mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`
  5. On the Object Gateway node, add a user and keyring to bootstrap the object gateway:

    Syntax

    # ceph auth get-or-create client.rgw.<rgw_hostname> osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.<rgw_hostname>/keyring

    Example

    # ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring

    Important

    When you provide capabilities to the gateway key, you must provide the read capability. However, providing the Monitor write capability is optional; if you provide it, the Ceph Object Gateway can create pools automatically.

    In such a case, ensure that you specify a reasonable number of placement groups in a pool. Otherwise, the gateway uses the default number, which is most likely not suitable for your needs. See Ceph Placement Groups (PGs) per Pool Calculator for details.

  6. On the Object Gateway node, create the done file:

    # touch /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/done
  7. On the Object Gateway node, change the owner and group permissions:

    # chown -R ceph:ceph /var/lib/ceph/radosgw
    # chown -R ceph:ceph /var/log/ceph
    # chown -R ceph:ceph /var/run/ceph
    # chown -R ceph:ceph /etc/ceph
  8. On the Object Gateway node, open the TCP port that the configured frontend listens on, in this case port 80:

    # firewall-cmd --zone=public --add-port=80/tcp
    # firewall-cmd --zone=public --add-port=80/tcp --permanent
  9. On the Object Gateway node, start and enable the ceph-radosgw process:

    Syntax

    # systemctl enable ceph-radosgw.target
    # systemctl enable ceph-radosgw@rgw.<rgw_hostname>
    # systemctl start ceph-radosgw@rgw.<rgw_hostname>

    Example

    # systemctl enable ceph-radosgw.target
    # systemctl enable ceph-radosgw@rgw.node1
    # systemctl start ceph-radosgw@rgw.node1

Once installed, the Ceph Object Gateway automatically creates pools if the write capability is set on the Monitor. See the Pools chapter in the Storage Strategies Guide for details on creating pools manually.
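
To verify that the gateway is answering requests (a minimal sketch, assuming the port configured in the rgw frontends setting above and a hypothetical test user), query the frontend and create an S3 user with the radosgw-admin utility:

    # curl http://localhost:80
    # radosgw-admin user create --uid="testuser" --display-name="Test User"

An anonymous request to the frontend returns an empty ListAllMyBucketsResult XML document, and radosgw-admin prints the generated access and secret keys for the new user.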

Additional Resources