Chapter 3. Administering Ceph Clusters That Run in Containers

This chapter describes basic administration tasks to perform on Ceph clusters that run in containers, such as starting, stopping, and restarting Ceph daemons, viewing their log files, and purging clusters deployed by Ansible.

3.1. Starting, Stopping, and Restarting Ceph Daemons That Run in Containers

Use the systemctl command to start, stop, or restart Ceph daemons that run in containers.

Procedure

  1. To start, stop, or restart a Ceph daemon running in a container, run a systemctl command as root in the following format:

    systemctl action ceph-daemon@ID

    Where:

    • action is the action to perform: start, stop, or restart
    • daemon is the daemon type: osd, mon, mds, or rgw
    • ID is either

      • The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemons are running
      • The ID of the ceph-osd daemon if it was deployed with the osd_scenario parameter set to lvm
      • The device name that the ceph-osd daemon uses if it was deployed with the osd_scenario parameter set to collocated or non-collocated

    For example, to restart a ceph-osd daemon with the ID osd01:

    # systemctl restart ceph-osd@osd01

    To start a ceph-mon daemon that runs on the ceph-monitor01 host:

    # systemctl start ceph-mon@ceph-monitor01

    To stop a ceph-rgw daemon that runs on the ceph-rgw01 host:

    # systemctl stop ceph-radosgw@ceph-rgw01
  2. Verify that the action completed successfully:

    systemctl status ceph-daemon@ID

    For example:

    # systemctl status ceph-mon@ceph-monitor01
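The unit-name pattern above can be sketched as a small helper that only composes and prints the systemctl commands (a dry run; the daemon IDs are hypothetical examples):

```shell
#!/bin/sh
# Compose a systemd unit name from a daemon type (osd, mon, mds, or rgw)
# and an ID, following the ceph-daemon@ID format described above.
ceph_unit() {
    printf 'ceph-%s@%s\n' "$1" "$2"
}

# Dry run: print the restart commands that would be issued as root.
# The IDs osd01 and osd02 are hypothetical examples.
for id in osd01 osd02; do
    echo "systemctl restart $(ceph_unit osd "$id")"
done
```

To restart the daemons for real, replace the echo with the systemctl call itself and run the script as root.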

3.2. Viewing Log Files of Ceph Daemons That Run in Containers

Use the journald daemon from the container host to view a log file of a Ceph daemon from a container.

Procedure

  1. To view the entire Ceph log file, run a journalctl command as root in the following format:

    journalctl -u ceph-daemon@ID

    Where:

    • daemon is the Ceph daemon type: osd, mon, or rgw
    • ID is either

      • The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemons are running
      • The ID of the ceph-osd daemon if it was deployed with the osd_scenario parameter set to lvm
      • The device name that the ceph-osd daemon uses if it was deployed with the osd_scenario parameter set to collocated or non-collocated

    For example, to view the entire log for the ceph-osd daemon with the ID osd01:

    # journalctl -u ceph-osd@osd01
  2. To show the most recent journal entries and follow the journal as new entries arrive, use the -f option.

    journalctl -fu ceph-daemon@ID

    For example, to view only recent journal entries for the ceph-mon daemon that runs on the ceph-monitor01 host:

    # journalctl -fu ceph-mon@ceph-monitor01
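The journalctl invocations above can likewise be composed with a small helper (a dry run that only prints the commands; the host and daemon names are hypothetical, and --since is a standard journalctl option for restricting output to entries newer than a given time):

```shell
#!/bin/sh
# Print the journalctl command for a Ceph daemon instance, limited to
# entries newer than a given time via the standard --since option.
journal_cmd() {
    daemon=$1; id=$2; since=$3
    echo "journalctl -u ceph-$daemon@$id --since \"$since\""
}

# Hypothetical daemon instances, matching the examples above.
journal_cmd mon ceph-monitor01 "1 hour ago"
journal_cmd osd osd01 "1 hour ago"
```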
Note

You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? solution on the Red Hat Customer Portal.

Additional Resources

  • The journalctl(1) manual page

3.3. Purging Clusters Deployed by Ansible

If you no longer want to use a Ceph cluster, use the purge-docker-cluster.yml playbook to purge the cluster. Purging a cluster is also useful when the installation process failed and you want to start over.

Warning

After purging a Ceph cluster, all data on the OSDs is lost.

Prerequisites

  • Ensure that the /var/log/ansible.log file is writable.

Procedure

Use the following commands from the Ansible administration node.

  1. As the root user, navigate to the /usr/share/ceph-ansible/ directory.

    [root@admin ~]# cd /usr/share/ceph-ansible
  2. Copy the purge-docker-cluster.yml playbook from the /usr/share/ceph-ansible/infrastructure-playbooks/ directory to the current directory:

    [root@admin ceph-ansible]# cp infrastructure-playbooks/purge-docker-cluster.yml .
  3. As the Ansible user, use the purge-docker-cluster.yml playbook to purge the Ceph cluster.

    1. To remove all packages, containers, configuration files, and all the data created by the ceph-ansible playbook:

      [user@admin ceph-ansible]$ ansible-playbook purge-docker-cluster.yml
    2. To specify a different inventory file than the default one (/etc/ansible/hosts), use the -i parameter:

      ansible-playbook purge-docker-cluster.yml -i inventory-file

      Replace inventory-file with the path to the inventory file.

      For example:

      [user@admin ceph-ansible]$ ansible-playbook purge-docker-cluster.yml -i ~/ansible/hosts
    3. To skip the removal of the Ceph container image, use the --skip-tags="remove_img" option:

      [user@admin ceph-ansible]$ ansible-playbook --skip-tags="remove_img" purge-docker-cluster.yml
    4. To skip the removal of the packages that were installed during the installation, use the --skip-tags="with_pkg" option:

      [user@admin ceph-ansible]$ ansible-playbook --skip-tags="with_pkg" purge-docker-cluster.yml
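The options above can be combined in a single run, since ansible-playbook accepts a comma-separated list in --skip-tags. A minimal sketch that only composes and prints the command (a dry run; the explicit inventory path shown is the default, used here as an example):

```shell
#!/bin/sh
# Compose the purge command, optionally overriding the inventory file and
# skipping tags (dry run: the command is printed, not executed).
purge_cmd() {
    skip=$1; inventory=$2
    cmd="ansible-playbook purge-docker-cluster.yml"
    [ -n "$inventory" ] && cmd="$cmd -i $inventory"
    [ -n "$skip" ] && cmd="$cmd --skip-tags=\"$skip\""
    echo "$cmd"
}

# Skip both the container image removal and the package removal in one run.
purge_cmd "remove_img,with_pkg" /etc/ansible/hosts
```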