Chapter 5. Infrastructure Security

The scope of this guide is Red Hat Ceph Storage. However, a proper Red Hat Ceph Storage security plan also requires consideration of the Red Hat Enterprise Linux 7 Security Guide and the Red Hat Enterprise Linux 7 SELinux Users and Administration Guide, which this guide incorporates by reference.

Warning

No security plan for Red Hat Ceph Storage is complete without consideration of the foregoing guides.

5.1. Administration

Administering a Red Hat Ceph Storage cluster involves using command-line tools. The CLI tools require an administrator key to exercise administrative privileges on the cluster. By default, Ceph stores the administrator key in the /etc/ceph directory. The default file name is ceph.client.admin.keyring. Take steps to secure the keyring so that only a user with administrative privileges to the cluster may access it.
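
For example, a minimal hardening step, assuming administrative access is limited to the root account, is to make the keyring readable and writable only by root; adjust the owner and mode to match the administrative account model at your site:

# chown root:root /etc/ceph/ceph.client.admin.keyring
# chmod 600 /etc/ceph/ceph.client.admin.keyring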

5.2. Network Communication

Red Hat Ceph Storage provides two networks:

  • A public network, and
  • A cluster network.

All Ceph daemons and Ceph clients require access to the public network, which is part of the storage access security zone. By contrast, only the OSD daemons require access to the cluster network, which is part of the Ceph cluster security zone.

(Figure: Network Architecture)

The Ceph configuration contains public_network and cluster_network settings. For hardening purposes, specify the IP address and the netmask using CIDR notation. Specify multiple comma-delimited IP/netmask entries if the cluster will have multiple subnets.

public_network = <public-network/netmask>[,<public-network/netmask>]
cluster_network = <cluster-network/netmask>[,<cluster-network/netmask>]
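
For example, assuming a hypothetical 192.168.0.0/24 public subnet and a 192.168.1.0/24 cluster subnet, the settings might read:

public_network = 192.168.0.0/24
cluster_network = 192.168.1.0/24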

See the Network Configuration Reference of the Configuration Guide for details.

5.3. Hardening the Network Service

System administrators deploy Red Hat Ceph Storage clusters on Red Hat Enterprise Linux 7 Server. SELinux is enabled by default, and the firewall blocks all inbound traffic except for the SSH service on port 22; however, verify that this is actually the case and that no other unauthorized ports are open or unnecessary services enabled.

On each server node, execute the following:

  1. Start the firewalld service, enable it to run on boot and ensure that it is running:

    # systemctl enable firewalld
    # systemctl start firewalld
    # systemctl status firewalld
  2. Take an inventory of all open ports.

    # firewall-cmd --list-all

    On a new installation, the sources: section should be blank, indicating that no traffic sources have been specifically allowed. The services: section should list ssh and dhcpv6-client, indicating that the SSH service (on port 22) and the DHCPv6 client are enabled.

    sources:
    services: ssh dhcpv6-client
  3. Ensure SELinux is running and Enforcing.

    # getenforce
    Enforcing

    If SELinux is in Permissive mode, set it to Enforcing (see the note after this procedure for making the change persistent).

    # setenforce 1

    If SELinux is not running, enable it. See the Red Hat Enterprise Linux 7 SELinux Users and Administration Guide for details.
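
The setenforce command changes the SELinux mode only for the current boot. To keep SELinux in Enforcing mode across reboots, set the SELINUX option in the /etc/selinux/config file:

SELINUX=enforcing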

Each Ceph daemon uses one or more ports to communicate with other daemons in the Red Hat Ceph Storage cluster. In some cases, you may change the default port settings. Administrators typically change the default port only for the Ceph Object Gateway (ceph-radosgw) daemon. See Changing the CivetWeb port in the Object Gateway Configuration and Administration Guide.
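
For example, a gateway instance can be moved to port 8080 by setting the rgw_frontends option in that instance's section of the Ceph configuration file. The section name client.rgw.gateway-node1 below is illustrative; use the name of your own gateway instance:

[client.rgw.gateway-node1]
rgw_frontends = "civetweb port=8080"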

Table 5.1. Ceph Ports

Port         Daemon         Configuration Option
-----------  -------------  -------------------------------------
8080         ceph-radosgw   rgw_frontends
6789, 3300   ceph-mon       N/A
6800-7300    ceph-osd       ms_bind_port_min to ms_bind_port_max
6800-7300    ceph-mgr       ms_bind_port_min to ms_bind_port_max
6800         ceph-mds       N/A

The Ceph Storage Cluster daemons include ceph-mon, ceph-mgr and ceph-osd. These daemons and their hosts comprise the Ceph cluster security zone, which should use its own subnet for hardening purposes.

The Ceph clients include ceph-radosgw, ceph-mds, ceph-fuse, libcephfs, rbd, librbd and librados. These clients and the hosts running them comprise the storage access security zone, which should use its own subnet for hardening purposes.

On the Ceph Storage Cluster zone’s hosts, consider allowing only hosts running Ceph clients to connect to the Ceph Storage Cluster daemons. For example:

# firewall-cmd --zone=<zone-name> --add-rich-rule='rule family="ipv4" \
source address="<ip-address>/<netmask>" port protocol="tcp" \
port="<port-number>" accept'

Replace <zone-name> with the zone name, <ip-address> with the IP address, and <netmask> with the subnet mask in CIDR notation. Replace <port-number> with the port number or port range. Repeat the command with the --permanent flag so that the change persists after reboot. For example:

# firewall-cmd --zone=<zone-name> --add-rich-rule='rule family="ipv4" \
source address="<ip-address>/<netmask>" port protocol="tcp" \
port="<port-number>" accept' --permanent

See the Firewalls section of the Red Hat Ceph Storage installation guide for specific steps.

5.4. Reporting

Red Hat Ceph Storage provides basic system monitoring and reporting with the ceph-mgr daemon plug-ins; namely, the RESTful API, the dashboard, and other plug-ins such as Prometheus and Zabbix. Ceph collects this information using collectd and sockets to retrieve settings, configuration details and statistical information.

In addition to default system behavior, system administrators may configure collectd to report on security matters, such as configuring the IP-Tables or ConnTrack plug-ins to track open ports and connections, respectively.
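
The following is a minimal sketch of such a collectd configuration, assuming the iptables and conntrack plug-ins are installed; the table and chain names are illustrative and should match your own firewall layout:

LoadPlugin conntrack
LoadPlugin iptables

<Plugin iptables>
    # Report packet and byte counters for rules in the INPUT chain
    # of the filter table (illustrative; adjust to your rules)
    Chain "filter" "INPUT"
</Plugin>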

System administrators may also retrieve configuration settings at runtime. See Viewing the Ceph Runtime Configuration.
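
For example, assuming a hypothetical OSD daemon osd.0 running on the local node, the daemon's runtime configuration can be dumped through its admin socket:

# ceph daemon osd.0 config show
# ceph daemon osd.0 config show | grep network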

5.5. Auditing Administrator Actions

An important aspect of system security is to periodically audit administrator actions on the cluster. Red Hat Ceph Storage stores a history of administrator actions in the /var/log/ceph/ceph.audit.log file.

Each entry will contain:

  • Timestamp: Indicates when the command was executed.
  • Monitor Address: Identifies the monitor that received the command.
  • Client Node: Identifies the client node initiating the change.
  • Entity: Identifies the user making the change.
  • Command: Identifies the command executed.

For example, a system administrator may set and unset the nodown flag. In the audit log, it will look something like this:

2018-08-13 21:50:28.723876 mon.reesi003 mon.2 172.21.2.203:6789/0 2404194 : audit [INF] from='client.? 172.21.6.108:0/4077431892' entity='client.admin' cmd=[{"prefix": "osd set", "key": "nodown"}]: dispatch
2018-08-13 21:50:28.727176 mon.reesi001 mon.0 172.21.2.201:6789/0 2097902 : audit [INF] from='client.348389421 -' entity='client.admin' cmd=[{"prefix": "osd set", "key": "nodown"}]: dispatch
2018-08-13 21:50:28.872992 mon.reesi001 mon.0 172.21.2.201:6789/0 2097904 : audit [INF] from='client.348389421 -' entity='client.admin' cmd='[{"prefix": "osd set", "key": "nodown"}]': finished
2018-08-13 21:50:31.197036 mon.mira070 mon.5 172.21.6.108:6789/0 413980 : audit [INF] from='client.? 172.21.6.108:0/675792299' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "nodown"}]: dispatch
2018-08-13 21:50:31.252225 mon.reesi001 mon.0 172.21.2.201:6789/0 2097906 : audit [INF] from='client.347227865 -' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "nodown"}]: dispatch
2018-08-13 21:50:31.887555 mon.reesi001 mon.0 172.21.2.201:6789/0 2097909 : audit [INF] from='client.347227865 -' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "nodown"}]': finished

In distributed systems such as Ceph, actions may begin on one instance and get propagated to other nodes in the cluster. When the action begins, the log indicates dispatch. When the action ends, the log indicates finished.

In the foregoing example, entity='client.admin' indicates that the user is the admin user. The command cmd=[{"prefix": "osd set", "key": "nodown"}] indicates that the admin user executed ceph osd set nodown.
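
To review the actions of a particular user, filter the audit log on the entity field. For example, to list all commands issued by client.admin:

# grep "entity='client.admin'" /var/log/ceph/ceph.audit.log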