Chapter 1. Overview

Ceph was designed to run on non-proprietary commodity hardware. Ceph supports elastic provisioning, which makes building and maintaining petabyte-to-exabyte scale data clusters economically feasible. Many mass storage systems are great at storage, but they run out of throughput or IOPS well before they run out of capacity, making them unsuitable for some cloud computing applications. Ceph scales performance and capacity independently, which enables Ceph to support deployments optimized to a particular use case.

While Ceph runs on commodity hardware, this does not mean that selecting the cheapest hardware possible is a good idea. The phrase "commodity hardware" simply means that running Ceph does not require a lock-in to a particular hardware vendor. Misunderstanding the phrase "commodity hardware" can lead to common mistakes in hardware selection, including:

  • Repurposing underpowered legacy hardware for use with Ceph.
  • Using dissimilar hardware in the same pool.
  • Using 1Gbps networks instead of 10Gbps or greater.
  • Neglecting to set up both public and cluster networks.
  • Using RAID instead of JBOD.
  • Selecting drives on a price basis without regard to performance or throughput.
  • Journaling on OSD data drives when the use case calls for an SSD journal.
  • Having a disk controller with insufficient throughput characteristics.
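
For example, separating the public and cluster networks, as the fourth item above recommends, takes only two lines in ceph.conf. The subnets below are hypothetical placeholders; substitute the networks used in your deployment:

    [global]
    public_network = 10.0.0.0/24      # client, monitor, and gateway traffic
    cluster_network = 10.0.1.0/24     # OSD replication, heartbeat, and recovery traffic

With this split, client I/O on the public network is not starved by replication and recovery traffic between OSDs.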

Red Hat has performed extensive testing to characterize Red Hat Ceph Storage deployments on a range of storage servers in optimized configurations.


Before purchasing hardware for use with Ceph, please read the following document.

Red Hat Ceph Storage Hardware Configuration Guide

Whereas the Red Hat Ceph Storage Hardware Configuration Guide provides extensive detail, this guide is intended only to provide high-level guidance to avoid common hardware selection mistakes, and to provide links to tested sizing and performance guides that describe specific hardware setups, configurations, and performance results in detail.