Chapter 4. Minimum Recommendations

Ceph can run on inexpensive commodity hardware. Small production clusters and development clusters can run successfully with modest hardware.

Process     Criteria          Minimum Recommended

calamari    Processor         1x AMD64 and Intel 64 quad-core
            RAM               4 GB minimum per instance
            Disk Space        10 GB per instance
            Network           2x 1GB Ethernet NICs

ceph-osd    Processor         1x AMD64 and Intel 64
            RAM               2 GB of RAM per daemon
            Volume Storage    1x storage drive per daemon
            Journal           1x SSD partition per daemon (optional)
            Network           2x 1GB Ethernet NICs

ceph-mon    Processor         1x AMD64 and Intel 64
            RAM               1 GB per daemon
            Disk Space        10 GB per daemon
            Network           2x 1GB Ethernet NICs
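
When several daemons share one node, the per-daemon figures above are multiplied out per node. The following is a minimal sketch of that arithmetic; the daemon counts are hypothetical examples, and the constants are simply the minimums from the table.

    # Rough per-node sizing from the per-daemon minimums in the table above.
    # The daemon counts below are hypothetical examples, not recommendations.

    OSD_RAM_GB = 2        # ceph-osd: 2 GB of RAM per daemon
    MON_RAM_GB = 1        # ceph-mon: 1 GB per daemon
    MON_DISK_GB = 10      # ceph-mon: 10 GB of disk space per daemon

    def node_minimums(osd_daemons: int, mon_daemons: int = 0) -> dict:
        """Return the minimum RAM, monitor disk space, and drive count for one node."""
        return {
            "ram_gb": osd_daemons * OSD_RAM_GB + mon_daemons * MON_RAM_GB,
            "mon_disk_gb": mon_daemons * MON_DISK_GB,
            "storage_drives": osd_daemons,   # 1x storage drive per OSD daemon
        }

    # Example: a node running 4 OSD daemons and 1 monitor
    print(node_minimums(osd_daemons=4, mon_daemons=1))
    # {'ram_gb': 9, 'mon_disk_gb': 10, 'storage_drives': 4}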

Tip

If you are running an OSD with a single disk, create a partition for your volume storage that is separate from the partition containing the OS. Generally, Red Hat recommends separate disks for the OS and the volume storage. A sketch of carving out such a partition follows this tip.
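
The following is a minimal sketch of creating that separate data partition with parted. The device name, the starting offset, and the assumption that the disk uses a GPT label are all hypothetical; adjust them to your own layout.

    # Minimal sketch: carve a separate data partition out of the unused space
    # on the disk that also holds the OS. Assumes a GPT label, where mkpart
    # takes a partition name. /dev/sda and 100GiB are hypothetical values.
    import subprocess

    DEVICE = "/dev/sda"      # disk that also holds the OS (assumed)
    DATA_START = "100GiB"    # first point after the OS partitions (assumed)

    # Create a partition named "ceph-data" spanning the rest of the disk.
    subprocess.run(
        ["parted", "--script", DEVICE, "mkpart", "ceph-data", DATA_START, "100%"],
        check=True,
    )

The new partition can then be used as the volume storage for the OSD, keeping its I/O on a separate partition from the OS.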

4.1. Production Clusters

Production clusters for petabyte-scale data storage may also use commodity hardware, but should have considerably more memory, processing power, and data storage to account for heavy traffic loads.

4.1.1. Calamari

The administration server hardware requirements vary with the size of the cluster. A minimum recommended hardware configuration for a Calamari server includes at least 4 GB of RAM, a dual-core CPU on the x86_64 architecture, and enough network throughput to handle communication with the Ceph hosts. The hardware requirements scale linearly with the number of Ceph servers, so if you intend to run a fairly large cluster, ensure that you have enough RAM, processing power, and network throughput for the administration node.

4.1.2. Monitors

The Ceph monitor is a data store for the health of the entire cluster, and contains the cluster log. Red Hat strongly recommends using at least three monitors for a cluster quorum in production. Monitor nodes typically have fairly modest CPU and memory requirements. A 1 rack unit (1U) server with a low-cost CPU (such as a processor with 6 cores at 1.7 GHz), 16 GB of RAM, and Gigabit Ethernet (GbE) networking should suffice in most cases. Because logs are stored on the local disks of the monitor node, it is important to make sure that sufficient disk space is provisioned. The monitor store should be placed on a solid-state drive (SSD), because the leveldb store can become I/O bound.
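
The recommendation of at least three monitors follows from how quorum works: a strict majority of the configured monitors must be available for the cluster to reach quorum. The snippet below is a small illustration of that majority arithmetic.

    # Quorum requires a strict majority of the configured monitors.
    def quorum_size(monitors: int) -> int:
        return monitors // 2 + 1

    for n in (1, 2, 3, 5):
        print(f"{n} monitors: quorum of {quorum_size(n)}, "
              f"tolerates {n - quorum_size(n)} failure(s)")
    # 3 monitors: quorum of 2, tolerates 1 failure(s)
    # 5 monitors: quorum of 3, tolerates 2 failure(s)

With one or two monitors, a single failure stops the cluster from forming quorum; three is the smallest count that tolerates a monitor failure.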

4.1.3. OSDs

Ensure that the network interfaces, controllers, and drive throughput do not create bottlenecks, e.g., fast drives paired with networks too slow to accommodate them. SSDs are typically used for journals and fast pools. Where the use of SSDs is write intensive (e.g., journals), make sure you select high-performance SSDs.
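
One way to sanity-check for such bottlenecks is to compare the aggregate drive throughput of a node against its network throughput. The sketch below is a rough back-of-the-envelope check; the drive count and per-device figures are hypothetical and should be replaced with measured values for your hardware.

    # Rough check: can the network carry what the drives can deliver?
    # All figures are hypothetical examples; substitute measured values.

    drives_per_node = 12
    drive_throughput_mb_s = 150          # sustained MB/s per drive (assumed)
    nic_throughput_mb_s = 2 * 125        # 2x 1GbE, roughly 125 MB/s each

    aggregate_drive_mb_s = drives_per_node * drive_throughput_mb_s
    if aggregate_drive_mb_s > nic_throughput_mb_s:
        print(f"Network-bound: drives can deliver {aggregate_drive_mb_s} MB/s, "
              f"but the NICs carry only {nic_throughput_mb_s} MB/s.")
    else:
        print("Drive-bound or balanced: the network can keep up with the drives.")

In this hypothetical case the two 1GbE NICs are the bottleneck, which is why dense OSD nodes are usually paired with faster networking.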

When using Ceph as a backend for OpenStack volumes and images, Red Hat recommends a host bus adapter with SAS drives (10-15k RPM) and enterprise-grade SSDs on the same controller for journals.