3.3. Planning Storage
- Ceph Storage Nodes
- The director creates a set of scalable storage nodes using Red Hat Ceph Storage. The Overcloud uses these nodes for:
- Images - Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. You can use glance to store images in a Ceph Block Device.
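As an illustration, a Ceph-backed glance deployment is typically configured through the RBD store options in glance-api.conf. This is a sketch, not the exact configuration the director generates; the pool and user names (images, glance) are conventional examples:

```ini
[glance_store]
# Store images as objects in a Ceph RBD pool instead of on local disk
stores = rbd
default_store = rbd
# Ceph pool and cephx client user for image storage (example names)
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```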
- Volumes - Cinder volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. You can use Cinder to boot a VM using a copy-on-write clone of an image.
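For reference, a Ceph RBD backend for cinder is typically enabled with options like the following in cinder.conf. The backend name, pool, user, and secret UUID below are placeholder examples, not director-generated values:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
# Use the RBD driver so volumes are thin-provisioned Ceph block devices
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# libvirt secret holding the cinder user's cephx key (placeholder UUID)
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

Because the image and volume pools live in the same Ceph cluster, creating a volume from a RAW glance image can be a copy-on-write clone rather than a full copy, which is what makes booting from a volume fast.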
- Guest Disks - Guest disks are guest operating system disks. By default, when you boot a virtual machine with nova, its disk appears as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). You can boot every virtual machine inside Ceph directly without using cinder, which is advantageous because it allows you to perform maintenance operations easily with the live-migration process. Additionally, if your hypervisor fails, it is also convenient to trigger nova evacuate and run the virtual machine elsewhere almost seamlessly.
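A sketch of the nova side: booting instances directly in Ceph is typically done by pointing the libvirt driver at an RBD pool in nova.conf, so that instance disks exist in the cluster rather than on the hypervisor's local filesystem. The pool name, user, and secret UUID here are illustrative assumptions:

```ini
[libvirt]
# Store ephemeral instance disks in Ceph instead of /var/lib/nova/instances
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# cephx user and libvirt secret for accessing the pool (example values)
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

With disks in shared Ceph storage, live migration and nova evacuate only need to move instance state, not the disk itself, which is why both operations become nearly seamless.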
Important: If you want to boot virtual machines in Ceph (ephemeral backend or boot from volume), the glance image format must be RAW. Ceph does not support other image formats, such as QCOW2 or VMDK, for hosting a virtual machine disk. See the Red Hat Ceph Storage Architecture Guide for additional information.
- Swift Storage Nodes
- The director creates an external object storage node. This is useful in situations where you need to scale or replace controller nodes in your Overcloud environment but need to retain object storage outside of a high availability cluster.