- Red Hat Satellite 6 server
- Red Hat Satellite 6 Capsule server
Satellite 6 is highly dependent on fast-performing I/O for proper operation throughout the system. Database queries, file copies, API traffic, and more are all greatly affected by the storage configured for Satellite 6.
The partition with the largest effect on performance is the one hosting the directories under /var, as outlined in the Installation Guide:
Poorly performing I/O can cause:
- High load averages
- Slow to exceedingly slow content operations, such as synchronizations and Content View publishes and promotes
- Long-running API queries: API calls that query the database may take extra time to complete, causing unexpected consequences
- Client-initiated API throughput issues: if you see a growing number of Actions::Katello::Host::* API tasks taking longer than expected and backing up in the queue, you may want to investigate your I/O
The Satellite 6 server and its Capsules require disk I/O that sustains an average read throughput of at least 60-80 megabytes per second. Anything below this value can have severe implications for the operation of the Satellite.
As outlined below, this value is not particularly hard to achieve with local spinning HDDs and is easily achievable with local SSDs.
The difficulty arises when using Satellite 6 with network-attached storage, especially on 1G or slower networks, which can quickly become over-saturated and unable to provide the performance necessary for proper Satellite 6 operations.
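To see why a 1G link saturates so easily, compare its theoretical ceiling to the 60-80 MB/s guidance. The sketch below is back-of-the-envelope arithmetic; the 90% protocol-efficiency factor is an illustrative assumption for TCP/NFS overhead, not a measured value.

```python
# Rough throughput ceiling for storage traffic over a network link.
# The protocol_efficiency fudge factor is an assumption, not a measurement.

def link_ceiling_mib_s(link_gbit: float, protocol_efficiency: float = 0.9) -> float:
    """Approximate usable storage throughput (MiB/s) over a network link."""
    bytes_per_s = link_gbit * 1e9 / 8           # raw line rate in bytes/s
    usable = bytes_per_s * protocol_efficiency  # allow for TCP/NFS overhead
    return usable / (1024 ** 2)                 # convert to MiB/s

print(f"1 GbE ceiling: ~{link_ceiling_mib_s(1.0):.0f} MiB/s")
print(f"10 GbE ceiling: ~{link_ceiling_mib_s(10.0):.0f} MiB/s")
```

With these assumptions a 1 GbE link tops out around 107 MiB/s for *all* traffic combined, which is consistent with the NFS-over-1G read result shown later and leaves little headroom once other clients share the link.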
The Satellite team has built a tool, storage-benchmark, that can be used to test disk I/O on your Satellite Server, found here:
It replaces the quicker and less intensive fio-based checks built into foreman-maintain, which can sometimes produce misleading results.
The storage-benchmark script executes a series of more intensive fio-based I/O tests against the target directory specified on the command line. The test creates a very large file, double (2x) the size of the physical RAM on the system, to ensure that we are not merely testing the OS-level cache in front of the storage. The results are meant to provide guidance and are not a hard-and-fast indicator of how your Satellite will perform.
NOTE: We recommend you stop all services before executing this script; you will be prompted to do so.
Our Satellite Performance and Scale team has executed storage-benchmark against a variety of hardware in our lab environment to provide some examples of the performance you can expect from different vendors. This is not an exhaustive list.
 Toshiba MG03ACA1 SATA Disk 931GiB (1TB)
Running READ test via fio:
READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=66.0GiB (70.9GB), run=586724-586724msec
Running WRITE test via fio:
WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=66.0GiB (70.9GB), run=523016-523016msec
READ MiB/s: 115MiB/s
WRITE MiB/s: 129MiB/s
 DELL PERC H710 SCSI Disk 2791GiB
Running READ test via fio:
READ: bw=773MiB/s (811MB/s), 773MiB/s-773MiB/s (811MB/s-811MB/s), io=132GiB (142GB), run=174866-174866msec
Running WRITE test via fio:
WRITE: bw=685MiB/s (719MB/s), 685MiB/s-685MiB/s (719MB/s-719MB/s), io=132GiB (142GB), run=197195-197195msec
READ MiB/s: 773MiB/s
WRITE MiB/s: 685MiB/s
 NFS via 1G network
Running READ test via fio:
READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=98.0GiB (105GB), run=982855-982855msec
Running WRITE test via fio:
WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=52.8GiB (56.7GB), run=968853-968853msec
READ MiB/s: 102MiB/s
WRITE MiB/s: 55MiB/s
 DELL PERC H710 SAS 931 GiB (999GB)
Running READ test via fio:
READ: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=264GiB (283GB), run=2929191-2929191msec
Running WRITE test via fio:
WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=264GiB (283GB), run=2473006-2473006msec
READ MiB/s: 92MiB/s
WRITE MiB/s: 109MiB/s
 NVMe Solid State Drive
Running READ test via fio:
READ: bw=2124MiB/s (2227MB/s), 2124MiB/s-2124MiB/s (2227MB/s-2227MB/s), io=788GiB (846GB), run=379896-379896msec
Running WRITE test via fio:
WRITE: bw=1409MiB/s (1477MB/s), 1409MiB/s-1409MiB/s (1477MB/s-1477MB/s), io=698GiB (750GB), run=507484-507484msec
READ MiB/s: 2124MiB/s
WRITE MiB/s: 1409MiB/s
 Solid State Drive - SATA
Running READ test via fio:
READ: bw=692MiB/s (725MB/s), 692MiB/s-692MiB/s (725MB/s-725MB/s), io=788GiB (846GB), run=1166398-1166398msec
Running WRITE test via fio:
WRITE: bw=361MiB/s (379MB/s), 361MiB/s-361MiB/s (379MB/s-379MB/s), io=443GiB (476GB), run=1256281-1256281msec
READ MiB/s: 692MiB/s
WRITE MiB/s: 361MiB/s
- Tested on 6 hardware combinations: SATA, SAS, SCSI, NFS, SSD, and NVMe.
- Cleaned the cache before running the tests using the command `swapoff -a; echo 3 > /proc/sys/vm/drop_caches; swapon -a`
- Overall the testing went well; average read throughput was above 80 MiB/s in every configuration tested, which is good. Satellites with this type of storage perform well.
- If you see speeds below the 60-80 MB/s range, you should consider alternative configurations or hardware.
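The summary lines above can be checked against this guidance programmatically. The sketch below pulls the `bw=` figure out of a fio READ/WRITE summary line and compares it to a threshold; the regex, helper names, and the 80 MiB/s floor are our illustrative choices, not part of the storage-benchmark script.

```python
import re

# Extract the MiB/s bandwidth from a fio summary line and compare it against
# the article's read-throughput guidance. Names and regex are illustrative.

def parse_bw_mib(fio_line: str) -> float:
    """Pull the bw=<N>MiB/s figure out of a fio READ/WRITE summary line."""
    m = re.search(r"bw=([\d.]+)MiB/s", fio_line)
    if not m:
        raise ValueError("no bw=...MiB/s field found")
    return float(m.group(1))

def meets_guidance(read_mib_s: float, floor_mib_s: float = 80.0) -> bool:
    """True when measured read throughput clears the assumed guidance floor."""
    return read_mib_s >= floor_mib_s

# Example using the NFS-over-1G READ result from the table above:
nfs_read = "READ: bw=102MiB/s (107MB/s), io=98.0GiB (105GB), run=982855-982855msec"
bw = parse_bw_mib(nfs_read)
print(f"read bw = {bw} MiB/s, meets guidance: {meets_guidance(bw)}")
```

Note that the NFS write result (55.8 MiB/s) would fall below the same floor, which is why slow networks deserve particular scrutiny.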
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.