Chapter 1. What is Red Hat Ceph Storage?
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Red Hat Ceph Storage is designed for cloud infrastructure and web-scale object storage. Red Hat Ceph Storage clusters consist of the following types of nodes:
- Red Hat Storage Ansible Administration node
This node replaces the traditional Ceph Administration node used in previous versions of Red Hat Ceph Storage. It provides the following functions:
- Centralized storage cluster management
- The Ceph configuration files and keys
- Optionally, local repositories for installing Ceph on nodes that cannot access the Internet for security reasons
In Red Hat Ceph Storage 1.3.x, the Ceph Administration node hosted the Calamari monitoring and administration server and the ceph-deploy utility, which has been deprecated in Red Hat Ceph Storage 2. Use the Ceph command-line utility or the Ansible automation application instead to install a Red Hat Ceph Storage cluster.
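An Ansible-driven installation from the administration node is typically run with the ceph-ansible playbooks. A minimal sketch, assuming the ceph-ansible package is installed and an inventory of cluster nodes already exists; the paths and inventory location are placeholders that may differ in your environment:

```shell
# Sketch of an Ansible-driven install run from the administration node.
# /usr/share/ceph-ansible and site.yml.sample come with the ceph-ansible
# package; the inventory path is a placeholder for your own hosts file.
cd /usr/share/ceph-ansible
cp site.yml.sample site.yml
ansible-playbook site.yml -i /etc/ansible/hosts
```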
- Monitor nodes
Each monitor node runs the monitor daemon (ceph-mon), which maintains a master copy of the cluster map. The cluster map includes the cluster topology. A client connecting to the Ceph cluster retrieves the current copy of the cluster map from a monitor, which enables the client to read data from and write data to the cluster.
Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat recommends deploying at least three monitor nodes.
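On a running cluster, monitor membership and quorum can be inspected with the standard Ceph command-line utility; these commands assume an admin keyring is available on the node where they run:

```shell
# Inspect monitor state on a running cluster.
ceph mon stat        # monitor membership and quorum summary
ceph quorum_status   # detailed quorum information in JSON
ceph -s              # overall cluster status, including monitor health
```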
- OSD nodes
Each Object Storage Device (OSD) node runs the Ceph OSD daemon (ceph-osd), which interacts with logical disks attached to the node. Ceph stores data on these OSD nodes.
Ceph can run with very few OSD nodes (the default minimum is three), but production clusters realize better performance beginning at modest scales, for example, 50 OSDs in a storage cluster. Ideally, a Ceph cluster has multiple OSD nodes, which allows the CRUSH map to define isolated failure domains.
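Before CRUSH maps data to OSDs, a Ceph client first hashes an object name into a placement group (PG). The sketch below illustrates that first step only; real Ceph uses the rjenkins hash and a "stable mod" rather than the CRC32 stand-in used here:

```python
# Simplified sketch of object-to-PG mapping. This is illustrative only:
# zlib.crc32 is a stand-in for Ceph's actual rjenkins hash.
import zlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Hash the object name, then take it modulo the number of PGs."""
    return zlib.crc32(object_name.encode()) % pg_num

# CRUSH then maps the resulting PG id to a set of OSDs, honoring the
# failure domains (host, rack, and so on) defined in the CRUSH map.
pg = object_to_pg("my-object", 128)
print(pg)  # a PG id in the range [0, 128)
```

Because placement is computed rather than looked up from a central table, any client holding the current cluster map can locate data independently.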
- MDS nodes
Each Metadata Server (MDS) node runs the MDS daemon (ceph-mds), which manages metadata related to files stored on the Ceph File System (CephFS). The MDS daemon also coordinates access to the shared cluster.
MDS and CephFS are Technology Preview features and as such they are not fully supported yet. For information on MDS installation and configuration, see the Ceph File System Guide (Technology Preview).
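Once an MDS is running, clients can mount CephFS with the kernel client. A hedged example; the monitor address, secret file path, and mount point are placeholders for your own environment:

```shell
# Mount CephFS using the kernel client. The monitor address, client name,
# secret file, and mount point below are placeholders.
mkdir -p /mnt/cephfs
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```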
- Object Gateway node
Each Ceph Object Gateway node runs the Ceph RADOS Gateway daemon (ceph-radosgw), an object storage interface built on top of librados that provides applications with a RESTful gateway to Ceph storage clusters. The Ceph RADOS Gateway supports two interfaces:
- S3: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.
- Swift: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.
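Because the gateway speaks the S3 protocol, standard S3 client libraries can talk to it directly. A minimal sketch using boto3, assuming a gateway reachable at a placeholder endpoint and a user whose access and secret keys have already been created (for example with radosgw-admin):

```python
# Hypothetical example: accessing the Ceph RADOS Gateway through its
# S3-compatible API with boto3. The endpoint URL and credentials are
# placeholders for your own gateway and user keys.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder key
    aws_secret_access_key="SECRET_KEY",          # placeholder key
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ceph")
body = s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read()
```

The same cluster can also be reached through the Swift API with a Swift client, using credentials for a Swift subuser.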
For details on the Ceph architecture, see the Architecture Guide.
For minimum recommended hardware, see the Hardware Guide.