- Recommendations for planning the addition of nodes and disks to an existing Ceph cluster?
- We have a "small" Ceph cluster with 3 monitors and 4 nodes, with 10 daemons/disks per node and 2 SSDs for journals. We are now planning to add 8 new nodes to the cluster and are also considering adding more disks to each node.
- If I remember correctly, given the small number of nodes, there were concerns about the number of daemons/disks per node, and it had been limited to 10 per node. So, first, can you please clarify that? What is the concern with having a high number of daemons/disks per node in a small cluster?
Now that we are planning to add 8 nodes, we will end up with a 12-node cluster.
- Can we consider adding more disks per node at that cluster size? At full capacity, we can reach 20 disks per node (1.2 TB disks) and 4 SSDs for journals (1 SSD per 5 daemons/disks).
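For rough planning, the numbers above can be worked through as follows. This is only a back-of-the-envelope sketch using the figures from the question; the 3x replication factor is an assumption (Ceph's common default), not something stated here, and usable capacity in practice also depends on pool configuration and free-space headroom.

```python
# Back-of-the-envelope capacity sketch for the proposed expansion.
# Figures from the question; replication factor is an assumption.
nodes = 4 + 8           # 4 existing nodes plus 8 new ones
disks_per_node = 20     # disks per node at full capacity
disk_tb = 1.2           # TB per disk
ssds_per_node = 4       # journal SSDs per node

# Journal ratio: how many OSD daemons share one journal SSD
osds_per_ssd = disks_per_node / ssds_per_node

# Raw capacity across the whole cluster
raw_tb = nodes * disks_per_node * disk_tb

# Usable capacity under replication (assumed 3x, the usual default)
replicas = 3
usable_tb = raw_tb / replicas

print(f"OSDs per journal SSD: {osds_per_ssd:.0f}")
print(f"Raw capacity: {raw_tb:.0f} TB")
print(f"Usable at {replicas}x replication: {usable_tb:.0f} TB")
```

At full build-out this works out to 240 OSDs, 288 TB raw, and roughly 96 TB usable at 3x replication, which is useful context when weighing per-node OSD density against recovery traffic.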
- Red Hat Ceph Storage 1.2.3