Seeking further historical insight on dirty_ratio for Oracle configurations
While doing research today on recommended RHEL kernel settings for Oracle deployments, I read the vm.dirty_ratio=80 setting in the following PDF with keen interest.
https://access.redhat.com/sites/default/files/attachments/deploying-oracle-12c-on-rhel6_1.2_1.pdf
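For anyone following along, the value itself is just a standard sysctl tunable. A minimal way to try it out, assuming the usual /etc/sysctl.conf mechanism on RHEL (a sketch only, not a recommendation beyond what the PDF states):

    # Apply at runtime for testing (does not survive a reboot)
    sysctl -w vm.dirty_ratio=80

    # Persist across reboots, then reload the settings
    echo "vm.dirty_ratio = 80" >> /etc/sysctl.conf
    sysctl -p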
This setting of 80 appears to be a reversal of the earlier recommendation of 15 -- which I can only find historically linked from the URL below...
http://www.slideshare.net/ftmiranda/oracle-11gr2-onrhel60-document-from-red-hat-inc
It's not my purpose to lay blame anywhere for this reversal. At the end of the day, most of us involved in performance tuning understand full well that each workload should be tuned independently. Yet we all seek a general best practice for "out of the box" builds to meet time pressures, etc.
In fact, in my experience with dirty_ratio, I have found in a non-DB environment with heavy I/O (roughly equal read and write rates) that higher dirty_ratio settings result in less time spent on processes blocked on I/O. So, in short, the 80 setting is more consistent with my direct experience. But my direct experience is outside of Oracle.
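For context, that comparison is easy to repeat on any box: watch the number of processes stuck in uninterruptible sleep and the amount of dirty page cache while the workload runs. A rough sketch using only stock tools (nothing Oracle-specific assumed here):

    # 'b' column = processes blocked in uninterruptible sleep (typically waiting on I/O)
    vmstat 5

    # Dirty page cache currently waiting to be written back
    grep -E 'Dirty|Writeback' /proc/meminfo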
Nevertheless, I must convince others of the general need to now adopt this reversal. In short, I am looking to have someone confirm the reversal, comment on any test data supporting this change, and then advise/clarify whether this new recommendation is more, less, or equally important where Huge Pages are implemented. More, less, or equally important in VM deployments? And, more, less, or equally important in ASM implementations?
Thanks much!
Steve
Responses
You may check the official Oracle documentation or search Metalink. I discussed these values with our Oracle expert, and it seems that Red Hat follows them as well.
There were some virtual memory architecture changes between kernel releases, so it is possible that the recommended values changed quite a lot.
Hello Steve,
The reasoning for the increase from 15 to 80 is based on how dirty_ratio works. dirty_ratio defines the maximum percentage of total memory that can be filled with dirty pages before user processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes. Note that all processes are blocked for writes when this happens, not just the one that filled the write buffers. This can cause what is perceived as unfair behavior, where a single process can hog all the I/O on a system. However, an application that can handle having its writes blocked altogether might benefit from decreasing the value.
With that being said, a dirty_ratio of 80 is probably at the higher end of what you'd want to set it to; the range should be anywhere between 40 and 80. When looking at dirty_ratio, you also need to look at dirty_background_ratio. dirty_background_ratio starts a background process that begins cleaning those dirty pages (the recommended value is 3%). However, if dirty pages ever reach the dirty_ratio limit, that is when all processes are blocked for writes.
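If it helps, here is a quick way to review and adjust both values together (a sketch only; the 3% and 40-80 figures above are starting points to test against, not fixed recommendations):

    # Show the current settings
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # Example starting point within the ranges discussed above
    sysctl -w vm.dirty_ratio=80
    sysctl -w vm.dirty_background_ratio=3

    # To persist, add the same two entries to /etc/sysctl.conf and run: sysctl -p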
It is also important to note that all of the kernel parameters you mentioned (dirty_ratio, swappiness, dirty_background_ratio, etc.) are really just starting points and not a one-size-fits-all answer. Every environment is different, and internal testing in those environments is required to ensure they are running optimally.
As for your questions: with regards to the kernel parameters, they apply to all Oracle environments, including those that use Huge Pages, VMs, and ASM.
Hope that helps,
Roger
