RHEV 3.2 I/O scheduler

When it comes to VMs, it is important to fine-tune I/O performance. I know that VMware ESX uses an asynchronous, intelligent I/O scheduler. I also know that when tuned is run on the VM hosts, it changes the disk I/O scheduler from cfq to deadline, which lets the host manage the I/O scheduling instead of the VM itself. To put it better: it lets the host do the brunt of the work, since the host is more aware of its disks. My question expands on the subject to include storage systems. For better performance, when a storage system is attached to the RHEV hosts, what should the I/O scheduler setting be on RHEV, deadline or noop, given that the brunt of the work is done by the storage system (e.g. EMC, Compellent, etc.)? Also, when tuning a RHEL VM and running "tuned-adm profile virtual-guest", why is the I/O scheduler set to deadline and not noop?
thanks
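
For reference, this is how I am checking the active scheduler on a host today (assuming the disk is sda; substitute your own device):

# show the elevators available for a block device;
# the one in square brackets is currently in use
cat /sys/block/sda/queue/scheduler
# output is something like: noop deadline [cfq]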

Responses

Here is everything profile "virtual-guest" does:

kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.swappiness = 30
vm.dirty_ratio = 40
ELEVATOR="deadline"
set_cpu_governor performance
set_transparent_hugepages always
multiply_disk_readahead 4
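
If you want to experiment with the elevator yourself, a minimal sketch (sda is just an example device; adjust for your disks):

# switch the scheduler at runtime for one device (not persistent)
echo noop > /sys/block/sda/queue/scheduler

# to make it persistent across reboots, append elevator=noop to the
# kernel line in /boot/grub/grub.conf (RHEL 6) and reboot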

I have done some small testing with schedulers and I did not find any big change in my I/O performance.
But it may be that your I/O pattern is different and you will see something; the only way to know is to test.

There isn't a default I/O scheduler that is optimal for all workloads, whether it runs on bare metal or on a virtualization platform. You must thoroughly benchmark your workload with the different combinations, find out which one delivers the best performance, and use that.
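
As one way to do that, a fio run like the one below, repeated once per scheduler, gives comparable numbers (the device, block size, and queue depth here are just example choices, not a prescription):

# WARNING: this targets the raw device and will destroy data on it;
# point it at a scratch disk, or use --filename=/path/to/testfile
fio --name=randrw --filename=/dev/sdb --direct=1 --rw=randrw \
    --bs=4k --iodepth=32 --ioengine=libaio --runtime=120 \
    --time_based --group_reporting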

Generally, if you look at the literature on the NOOP vs. Deadline elevators, Deadline is still recommended over NOOP for "intelligent" devices like SANs. NOOP is really just FIFO, so if there is any need to manage I/O from within the guest, the guest has no way to do so. Deadline can prioritize reads within the guest, so systems stay read-responsive under heavy write traffic. This can be good for transactional systems.
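
You can see that read preference in deadline's per-device tunables (paths from a RHEL host; the values in the comments are the usual defaults, so check your own kernel):

# reads expire after 500 ms, writes only after 5000 ms, which is
# how deadline keeps reads responsive under write pressure
cat /sys/block/sda/queue/iosched/read_expire
cat /sys/block/sda/queue/iosched/write_expire
# how many read batches may run before a starved write is serviced
cat /sys/block/sda/queue/iosched/writes_starved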

Generally, weighing the overhead of Deadline against NOOP is a 'look and learn' exercise, as Sadique mentions: it depends on the guest I/O profile as well as the underlying path to storage.