2017 - Red Hat OpenStack Platform on Hyper-Converged Red Hat Ceph Storage: Cinder Volume Performance at Scale

This document describes large-scale I/O characterization testing performed by the Red Hat Performance and Scale Engineering group on RHEL 7.3 with RHOSP 10 and RHCS 2.1. The tests used Glance images, Nova instances, and Cinder volumes on 544 TB of Ceph storage configured as a hyper-converged RHOSP-RHCS infrastructure: eight servers acting as both Ceph OSD nodes and RHOSP compute nodes, plus three servers acting as both Ceph monitors and RHOSP controllers, exercised up to 512 RHOSP instances. FIO (Flexible I/O) benchmarks measured application latency percentiles as a function of scale, and uperf network request-response times were measured to characterize the impact of combining disk and network workloads.
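As a concrete illustration, an fio job file for this kind of latency-percentile measurement might look like the following minimal sketch. The block size, runtime, percentile list, and volume device path shown here are hypothetical placeholders, not the parameters used in the tests:

    [global]
    # asynchronous I/O, bypassing the guest page cache
    ioengine=libaio
    direct=1
    # hypothetical block size and runtime
    bs=4k
    runtime=300
    time_based=1
    # completion-latency percentiles to report
    percentile_list=50:90:95:99:99.9

    [cinder-volume-randwrite]
    rw=randwrite
    # hypothetical device path of the Cinder volume attached to the instance
    filename=/dev/vdb
    iodepth=4
    numjobs=1

Running fio with a job file like this inside each instance reports completion-latency percentiles per job, which can then be aggregated across instances as the instance count scales.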

The purpose of this testing was not to achieve world-record performance through extensive tuning, but to document and understand the performance a user experiences with RHOSP in a hyper-converged infrastructure (HCI) using Ceph storage at scale, and to understand issues customers might encounter with such a configuration in production. Several RHOSP deployment issues were encountered and documented as Bugzilla reports.
