Chapter 2. Understanding process management for Ceph
As a storage administrator, you can manipulate the various Ceph daemons by type or instance, on bare metal or in containers. Manipulating these daemons allows you to start, stop, and restart all of the Ceph services as needed.
2.1. Prerequisites
- Installation of the Red Hat Ceph Storage software.
2.2. Ceph process management
In Red Hat Ceph Storage, all process management is done through the systemd service. Each time you want to start, restart, or stop a Ceph daemon, you must specify the daemon type or the daemon instance.
Additional Resources
- For more information about using Systemd, see the chapter Managing services with systemd in the Red Hat Enterprise Linux System Administrator’s Guide.
2.3. Starting, stopping, and restarting all Ceph daemons
Start, stop, and restart all Ceph daemons as the root user from the admin node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
Starting all Ceph daemons:
[root@admin ~]# systemctl start ceph.target
Stopping all Ceph daemons:
[root@admin ~]# systemctl stop ceph.target
Restarting all Ceph daemons:
[root@admin ~]# systemctl restart ceph.target
2.4. Starting, stopping, and restarting the Ceph daemons by type
To start, stop, or restart all Ceph daemons of a particular type, follow these procedures on the node running the Ceph daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
On Ceph Monitor nodes:
Starting:
[root@mon ~]# systemctl start ceph-mon.target
Stopping:
[root@mon ~]# systemctl stop ceph-mon.target
Restarting:
[root@mon ~]# systemctl restart ceph-mon.target
On Ceph Manager nodes:
Starting:
[root@mgr ~]# systemctl start ceph-mgr.target
Stopping:
[root@mgr ~]# systemctl stop ceph-mgr.target
Restarting:
[root@mgr ~]# systemctl restart ceph-mgr.target
On Ceph OSD nodes:
Starting:
[root@osd ~]# systemctl start ceph-osd.target
Stopping:
[root@osd ~]# systemctl stop ceph-osd.target
Restarting:
[root@osd ~]# systemctl restart ceph-osd.target
On Ceph Object Gateway nodes:
Starting:
[root@rgw ~]# systemctl start ceph-radosgw.target
Stopping:
[root@rgw ~]# systemctl stop ceph-radosgw.target
Restarting:
[root@rgw ~]# systemctl restart ceph-radosgw.target
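The per-type commands above all follow one pattern, with a single irregularity: the Ceph Object Gateway target is named ceph-radosgw.target, not ceph-rgw.target. The following is a minimal sketch that only composes the command strings shown above; it never calls systemd, so it runs anywhere:

```shell
# Compose a "systemctl ACTION ceph-TYPE.target" command string.
# Sketch only: it prints the command instead of executing it.
type_target_cmd() {
  local action="$1" daemon_type="$2"
  # The Object Gateway units use "radosgw", not "rgw".
  [ "$daemon_type" = rgw ] && daemon_type=radosgw
  echo "systemctl $action ceph-$daemon_type.target"
}

type_target_cmd restart osd   # prints: systemctl restart ceph-osd.target
type_target_cmd stop rgw      # prints: systemctl stop ceph-radosgw.target
```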
2.5. Starting, stopping, and restarting the Ceph daemons by instance
To start, stop, or restart a Ceph daemon by instance, follow these procedures on the node running the Ceph daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
On a Ceph Monitor node:
Starting:
[root@mon ~]# systemctl start ceph-mon@MONITOR_HOST_NAME
Stopping:
[root@mon ~]# systemctl stop ceph-mon@MONITOR_HOST_NAME
Restarting:
[root@mon ~]# systemctl restart ceph-mon@MONITOR_HOST_NAME
Replace MONITOR_HOST_NAME with the name of the Ceph Monitor node.
On a Ceph Manager node:
Starting:
[root@mgr ~]# systemctl start ceph-mgr@MANAGER_HOST_NAME
Stopping:
[root@mgr ~]# systemctl stop ceph-mgr@MANAGER_HOST_NAME
Restarting:
[root@mgr ~]# systemctl restart ceph-mgr@MANAGER_HOST_NAME
Replace MANAGER_HOST_NAME with the name of the Ceph Manager node.
On a Ceph OSD node:
Starting:
[root@osd ~]# systemctl start ceph-osd@OSD_NUMBER
Stopping:
[root@osd ~]# systemctl stop ceph-osd@OSD_NUMBER
Restarting:
[root@osd ~]# systemctl restart ceph-osd@OSD_NUMBER
Replace OSD_NUMBER with the ID number of the Ceph OSD. For example, when looking at the ceph osd tree command output, osd.0 has an ID of 0.
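When a node hosts many OSDs, picking the IDs out of the ceph osd tree output by eye is error prone. The following sketch filters captured output with awk; the sample text is illustrative (real output can carry extra columns such as REWEIGHT and PRI-AFF, but the osd.N name is still the fourth whitespace-separated field on OSD rows):

```shell
# Extract the numeric IDs of osd.N rows from "ceph osd tree"-style text.
osd_ids() { awk '$4 ~ /^osd\./ {print $1}'; }

# Illustrative sample of the command output, not captured from a cluster.
sample='ID CLASS WEIGHT  TYPE NAME STATUS
-1        0.09760 root default
 0   hdd  0.04880     osd.0    up
 1   hdd  0.04880     osd.1    up'

echo "$sample" | osd_ids   # prints 0 and then 1, one ID per line
```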
On a Ceph Object Gateway node:
Starting:
[root@rgw ~]# systemctl start ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Stopping:
[root@rgw ~]# systemctl stop ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Restarting:
[root@rgw ~]# systemctl restart ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Replace OBJ_GATEWAY_HOST_NAME with the name of the Ceph Object Gateway node.
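The instance unit names in this section also follow one pattern: mon and mgr instances take a host name, osd takes the numeric OSD ID, and the Object Gateway unit is ceph-radosgw@rgw.HOST. A minimal sketch of that naming, which only builds the unit name and does not talk to systemd:

```shell
# Build the systemd instance unit name for a Ceph daemon, as used in
# this section. Sketch only; pair the result with systemctl yourself.
instance_unit() {
  local daemon="$1" id="$2"
  case $daemon in
    mon|mgr|osd) echo "ceph-$daemon@$id" ;;
    rgw)         echo "ceph-radosgw@rgw.$id" ;;   # distinct unit prefix
    *)           echo "unknown daemon type: $daemon" >&2; return 1 ;;
  esac
}

instance_unit osd 0          # prints: ceph-osd@0
instance_unit rgw gateway01  # prints: ceph-radosgw@rgw.gateway01
```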
2.6. Starting, stopping, and restarting Ceph daemons that run in containers
Use the systemctl command to start, stop, or restart Ceph daemons that run in containers.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
- Root-level access to the node.
Procedure
To start, stop, or restart a Ceph daemon running in a container, run a systemctl command as root composed in the following format:
systemctl ACTION ceph-DAEMON@ID
Replace:
- ACTION with the action to perform: start, stop, or restart.
- DAEMON with the daemon: osd, mon, mds, or rgw.
- ID with either:
  - The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemon is running.
  - The ID of the ceph-osd daemon if it was deployed.
For example, to restart a ceph-osd daemon with the ID osd01:
[root@osd ~]# systemctl restart ceph-osd@osd01
To start a ceph-mon daemon that runs on the ceph-monitor01 host:
[root@mon ~]# systemctl start ceph-mon@ceph-monitor01
To stop a ceph-rgw daemon that runs on the ceph-rgw01 host:
[root@rgw ~]# systemctl stop ceph-radosgw@ceph-rgw01
Verify that the action was completed successfully.
systemctl status ceph-DAEMON@ID
For example:
[root@mon ~]# systemctl status ceph-mon@ceph-monitor01
Additional Resources
- See the Understanding process management for Ceph chapter in the Red Hat Ceph Storage Administration Guide for more information.
2.7. Viewing log files of Ceph daemons that run in containers
Use the journald daemon from the container host to view a log file of a Ceph daemon from a container.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
- Root-level access to the node.
Procedure
To view the entire Ceph log file, run a journalctl command as root composed in the following format:
journalctl -u ceph-DAEMON@ID
Replace:
- DAEMON with the Ceph daemon: osd, mon, or rgw.
- ID with either:
  - The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemon is running.
  - The ID of the ceph-osd daemon if it was deployed.
For example, to view the entire log for the ceph-osd daemon with the ID osd01:
[root@osd ~]# journalctl -u ceph-osd@osd01
To show only the recent journal entries, use the -f option.
journalctl -fu ceph-DAEMON@ID
For example, to view only recent journal entries for the ceph-mon daemon that runs on the ceph-monitor01 host:
[root@mon ~]# journalctl -fu ceph-mon@ceph-monitor01
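Besides -f, journalctl also accepts time filters such as --since, which is useful when a daemon restarted hours ago and following the live tail is not enough. A small sketch that only composes the command string for a daemon/ID pair; run the printed command as root on the container host:

```shell
# Compose a time-filtered journalctl command for a Ceph daemon.
# Sketch only: it prints the command rather than executing it.
recent_log_cmd() {
  local daemon="$1" id="$2"
  local window="${3:-1 hour ago}"   # default time window
  echo "journalctl --since '$window' -u ceph-$daemon@$id"
}

recent_log_cmd mon ceph-monitor01
# prints: journalctl --since '1 hour ago' -u ceph-mon@ceph-monitor01
```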
You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is an sosreport and how to create one in Red Hat Enterprise Linux? solution on the Red Hat Customer Portal.
Additional Resources
- The journalctl(1) manual page.
2.8. Powering down and rebooting Red Hat Ceph Storage cluster
Follow the procedure below for powering down and rebooting the Ceph cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access.
Procedure
Powering down the Red Hat Ceph Storage cluster
- Stop the clients from using the RBD images and the RADOS Gateway on this cluster, along with any other clients.
- The cluster must be in a healthy state (Health_OK and all PGs active+clean) before proceeding. Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.
- If you use the Ceph File System (CephFS), the CephFS cluster must be brought down. Taking a CephFS cluster down is done by reducing the number of ranks to 1, setting the cluster_down flag, and then failing the last rank.
Example:
[root@osd ~]# ceph fs set FS_NAME max_mds 1
[root@osd ~]# ceph mds deactivate FS_NAME:1 # rank 2 of 2
[root@osd ~]# ceph status # wait for rank 1 to finish stopping
[root@osd ~]# ceph fs set FS_NAME cluster_down true
[root@osd ~]# ceph mds fail FS_NAME:0
Setting the cluster_down flag prevents standbys from taking over the failed rank.
Set the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:
[root@mon ~]# ceph osd set noout
[root@mon ~]# ceph osd set norecover
[root@mon ~]# ceph osd set norebalance
[root@mon ~]# ceph osd set nobackfill
[root@mon ~]# ceph osd set nodown
[root@mon ~]# ceph osd set pause
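The six flags above can be handled in one loop; the same loop serves the unset step after the reboot. A sketch under one stated assumption: echo is kept in so the loop runs without a cluster; drop the echo to execute the commands for real.

```shell
# Print the "ceph osd ACTION FLAG" commands for the six cluster flags.
# ACTION is "set" before shutdown or "unset" after the nodes are back up.
osd_flags() {
  local action="$1"
  for flag in noout norecover norebalance nobackfill nodown pause; do
    echo "ceph osd $action $flag"   # remove echo to run for real
  done
}

osd_flags set   # prints the six "ceph osd set FLAG" commands in order
```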
Shut down the OSD nodes one by one:
[root@osd ~]# systemctl stop ceph-osd.target
Shut down the monitor nodes one by one:
[root@mon ~]# systemctl stop ceph-mon.target
Rebooting the Red Hat Ceph Storage cluster
- Power on the administration node.
Power on the monitor nodes:
[root@mon ~]# systemctl start ceph-mon.target
Power on the OSD nodes:
[root@osd ~]# systemctl start ceph-osd.target
- Wait for all the nodes to come up. Verify that all the services are up and that the connectivity between the nodes is fine.
Unset the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:
[root@mon ~]# ceph osd unset noout
[root@mon ~]# ceph osd unset norecover
[root@mon ~]# ceph osd unset norebalance
[root@mon ~]# ceph osd unset nobackfill
[root@mon ~]# ceph osd unset nodown
[root@mon ~]# ceph osd unset pause
If you use the Ceph File System (CephFS), the CephFS cluster must be brought back up by setting the cluster_down flag to false:
[root@admin ~]# ceph fs set FS_NAME cluster_down false
- Verify the cluster is in a healthy state (Health_OK and all PGs active+clean). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.
2.9. Additional Resources
- For more information on installing Ceph, see the Red Hat Ceph Storage Installation Guide.