RHEL 6.4 tuned profile virtual-guest
We've been struggling with some performance problems with our SAN lately (high write latency), and the timing matches our RHEL 6.4 upgrade fairly well. I just noticed that 6.4 automatically enabled the virtual-guest tuned profile, which among other things quadruples the read_ahead_kb setting (from 128 KB to 512 KB).
It's not clear to me when read_ahead_kb actually kicks in (anybody know?), but it looks like something that could have a major impact on a storage system when 150 VMs suddenly want to read 4x as much. What's the logic behind increasing this setting for virtual guests?
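For reference, here's a quick way to see where the new value comes from on a 6.4 guest (assuming the stock tuned layout under /etc/tune-profiles; as far as I can tell the profile multiplies each device's existing default by 4 rather than hard-coding 512):

# Show which tuned profile is currently active
tuned-adm active

# Look for the read-ahead logic in the virtual-guest profile script
grep -i readahead /etc/tune-profiles/virtual-guest/ktune.sh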
Has anybody else noticed any performance impact on their storage after upgrading to 6.4 guests?
Responses
Hey Jan - interesting find. I hadn't noticed this myself, but I did look into it. (I happen to have a 2-node cluster with one node at 6.4 and the other at 6.3.) Mine are both currently at 128 for all devices.
Are you running RHEV-H or RHEL as the hypervisor? Also - are you running third-party storage drivers (e.g. HDLM or PowerPath)? It would be strange if the HBA configuration were updating the read-ahead value.
I don't have a concrete answer, as I now have more questions myself ;-) However, do a Google search for "Linux Performance and Tuning Guidelines" and check out the IBM Redbook. There is a section titled "Tuning the disk subsystem" which covers read_ahead_kb (though it does not explain how to decide what value to set).
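For anyone who wants to experiment, read-ahead can be changed on the fly per device, either through sysfs or with blockdev (which works in 512-byte sectors rather than KB). Device names below are just examples:

# Current read-ahead for sda, in KB
cat /sys/block/sda/queue/read_ahead_kb

# The same value via blockdev, reported in 512-byte sectors (256 sectors == 128 KB)
blockdev --getra /dev/sda

# Revert a single device to the old 128 KB default for testing
echo 128 > /sys/block/sda/queue/read_ahead_kb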
[root@6.3node] # for DEV in `ls /sys/block/ | grep sd`; do cat /sys/block/$DEV/queue/read_ahead_kb; done
128
128
128
128
128
128
128
128
128
128
128
128
128
[root@6.4node] # for DEV in `ls /sys/block/ | grep sd`; do cat /sys/block/$DEV/queue/read_ahead_kb; done
128
128
128
128
128
128
128
128
128
128
128
128
128
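For what it's worth, here's a slightly more readable variant of the same loop that labels each value with its device name (assuming the usual sd* naming):

# Print each sd* device alongside its read_ahead_kb value
for DEV in /sys/block/sd*; do
    echo "$(basename $DEV): $(cat $DEV/queue/read_ahead_kb) KB"
done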
Slightly off topic, but I was interested to see that /etc/tune-profiles/virtual-guest/ktune.sh in 6.4 sets the I/O scheduler with ELEVATOR="deadline".
I need to check my RHEL 6 Performance Tuning notes, but I'm pretty sure the elevator should be "noop", as that hands off more of the I/O scheduling to the hypervisor.
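If you want to check what a guest is actually using (sda just as an example device):

# The scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler

# Switch a single device to noop at runtime for testing
echo noop > /sys/block/sda/queue/scheduler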
Sometimes it's the little things...
