3.6. Appendix - Performance Categorization
3.6.1. Storage Type
The iperf tool was used to determine the upper limit of throughput between the client and the Red Hat Gluster Storage node. This testing showed that a single network interface can be expected to deliver between 600 and 700 Mbit/s.
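The exact invocation is not shown in the text; a minimal sketch of this kind of point-to-point test with iperf, assuming a placeholder hostname gluster-node1, might look like:

```
# On the Red Hat Gluster Storage node: run iperf in server mode.
iperf -s

# On the client: drive traffic at the node for 60 seconds using 4
# parallel streams and report the aggregate bandwidth.
# "gluster-node1" is a placeholder hostname.
iperf -c gluster-node1 -P 4 -t 60
```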
3.6.3. Disk Latencies
The LUN was created by combining the virtual drives with the mdadm tool, and was then configured using recommended best practices based on LVM, dm-thinp, and the XFS file system. The fio tool was then used to reveal the random read profile of the underlying disks at increasing levels of concurrency.
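The text does not give the exact commands; a sketch of how such a stack might be assembled, assuming four attached virtual disks (/dev/sdc through /dev/sdf), a RAID 0 layout for aggregation, and the mount point /mnt/brick1 (all placeholders), follows. The XFS inode size of 512 bytes is a common Gluster brick recommendation and is an assumption here:

```
# Aggregate the virtual disks into a single RAID 0 LUN with mdadm.
# /dev/sdc..../dev/sdf are placeholder device names.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Layer LVM with a dm-thinp (thin provisioning) pool on top of the LUN.
pvcreate /dev/md0
vgcreate vg_bricks /dev/md0
lvcreate -L 1T -T vg_bricks/brickpool
lvcreate -V 1T -T vg_bricks/brickpool -n brick1

# Format with XFS; the 512-byte inode size is an assumed best practice,
# not stated in the text above.
mkfs.xfs -i size=512 /dev/vg_bricks/brick1
mkdir -p /mnt/brick1
mount /dev/vg_bricks/brick1 /mnt/brick1

# Probe the random read profile with fio at one level of concurrency.
fio --name=randread --directory=/mnt/brick1 --rw=randread \
    --bs=4k --size=1G --ioengine=libaio --direct=1 --numjobs=16 \
    --group_reporting
```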
Figure 3.3. Disk Latencies
- Typical latencies are in the 20 - 50 ms range.
- Attaining higher IOPS requires a multi-threaded workload; for example, one thread delivered 32 IOPS, while 32 threads delivered 961 IOPS (see the sketch after this list).
- Combining the virtual drives with mdadm allows the LUN to deliver IOPS beyond that of a single virtual disk.
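The fio job file behind these numbers is not given; one way to reproduce a thread-scaling sweep of this kind is to rerun the same random read job at increasing numjobs values. Paths and sizes are placeholders:

```
# Sweep thread counts and record IOPS at each level of concurrency.
# Output labels vary between fio versions, hence the case-insensitive grep.
for jobs in 1 2 4 8 16 32; do
    fio --name=randread-${jobs}t --directory=/mnt/brick1 \
        --rw=randread --bs=4k --size=512M --ioengine=libaio \
        --direct=1 --numjobs=${jobs} --group_reporting \
        | grep -iE 'iops|lat'
done
```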
Figure 3.4. Gluster Performance: Small File "Create" Workload
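The load generator behind this figure is not named in the text; a small-file "create" workload of this shape is often driven with the smallfile benchmark, which is assumed here purely for illustration. The mount point /mnt/glustervol and the file counts and sizes are placeholders:

```
# Hypothetical invocation: each thread creates 10,000 files of 64 KB
# on the GlusterFS mount, sweeping the client thread count.
for threads in 1 2 4 8 16 32; do
    python smallfile_cli.py --operation create --threads ${threads} \
        --file-size 64 --files 10000 --top /mnt/glustervol
done
```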
- Although the native file system starts well, a performance cross-over occurs between 8 - 12 threads, with the native file system fading and the GlusterFS volume continuing to scale.
- The throughput of the GlusterFS volume scales linearly with the increase in client workload.
- At higher concurrency, the GlusterFS volume outperforms the local file system by up to 47%.
- At high concurrency, the native file system slows down under load. Examining the disk subsystem's statistics during the test run revealed that the cause was increased I/O wait times (70 - 90%), as sketched below.
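The statistics tool is not named in the text; %iowait and per-device wait times of this kind are typically read from iostat (part of the sysstat package), for example:

```
# Report extended per-device statistics and CPU utilization every 5
# seconds during the test run; %iowait appears in the avg-cpu block,
# and the per-device await and %util columns show where time is spent.
iostat -x 5
```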