High I/O wait - how to reduce I/O response time?

Hi,
In a VM on VMware running RHEL 7, top shows wa (I/O wait) at 41%.
How can I reduce the I/O response time?

Would modifying /proc/sys/vm/dirty_writeback_centisecs and /proc/sys/vm/dirty_background_ratio help?
Thanks

Responses

Good Day Marius Tanislav,

Welcome here. The Red Hat Performance Tuning Guide has some pertinent information on virtual memory, and specifically on the vm_dirty_background_ratio setting you mentioned:

Like the I/O scheduler, the virtual memory (VM) subsystem requires no special tuning. Given the fast nature
of I/O on SSD, try turning down the vm_dirty_background_ratio and vm_dirty_ratio settings, as
increased write-out activity does not usually have a negative impact on the latency of other operations
on the disk. However, this tuning can generate more overall I/O, and is therefore not generally
recommended without workload-specific testing.
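
If you do decide to experiment with those settings, a rough sketch is below. The hostname and the values 5 and 20 are only placeholders for illustration, not recommendations, so check your current values first and test any change against your own workload.

# Check the current values
[root@webserv404v ~ ] # sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_writeback_centisecs

# Try lower test values at runtime (example numbers only)
[root@webserv404v ~ ] # sysctl -w vm.dirty_background_ratio=5
[root@webserv404v ~ ] # sysctl -w vm.dirty_ratio=20

# Runtime changes are lost on reboot; persist them in a drop-in file once you are happy with the result
[root@webserv404v ~ ] # echo 'vm.dirty_background_ratio = 5' >> /etc/sysctl.d/99-dirty.conf
[root@webserv404v ~ ] # echo 'vm.dirty_ratio = 20' >> /etc/sysctl.d/99-dirty.conf
[root@webserv404v ~ ] # sysctl -p /etc/sysctl.d/99-dirty.conf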

That said, we honestly do not have enough information to offer a workable solution for the specific I/O issue you mention. We don't know whether your system is a fresh build, or whether a one-off task is causing the spike in I/O wait. The next link is a total wild stab aimed at Vertica, but it might still have something useful, given that we don't know your environment.

To look at this, examine the sar reports. By default on RHEL 7.x, sar reports are stored in /var/log/sa/.

So if you wanted to look at yesterday's sar report, you'd run

[root@webserv404v ~ ] # sar -f /var/log/sa/sa10
Linux 3.10.0-1062.1.2.el7.x86_64 (webserver404v.example.org)    10/10/2019  _x86_64_    (40 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all      0.18      0.00      0.06      0.00      0.00     99.75
12:20:01 AM     all      0.10      0.00      0.05      0.00      0.00     99.85
12:30:01 AM     all      0.09      0.00      0.04      0.00      0.00     99.86
12:40:01 AM     all      0.81      0.00      0.14      0.00      0.00     99.04
12:50:01 AM     all      3.43      0.00      0.47      0.00      0.00     96.10
01:00:01 AM     all      2.53      0.00      0.38      0.00      0.00     97.09
01:10:01 AM     all      0.18      0.00      0.06      0.00      0.00     99.75
01:20:01 AM     all      0.10      0.00      0.05      0.00      0.00     99.85
01:30:01 AM     all      0.08      0.00      0.04      0.00      0.00     99.87
01:40:01 AM     all      0.80      0.00      0.14      0.00      0.00     99.06
01:50:01 AM     all      3.58      0.00      0.49      0.00      0.00     95.92
02:00:01 AM     all      2.30      0.00      0.36      0.00      0.00     97.34
02:10:01 AM     all      0.18      0.00      0.06      0.00      0.00     99.75
02:20:01 AM     all      0.10      0.00      0.05      0.00      0.00     99.85
02:30:01 AM     all      0.08      0.00      0.04      0.00      0.00     99.87
02:40:01 AM     all      0.64      0.00      0.13      0.00      0.00     99.23
02:50:01 AM     all      3.58      0.00      0.49      0.00      0.00     95.92
03:00:01 AM     all      2.58      0.00      0.39      0.00      0.00     97.02
03:10:01 AM     all      0.18      0.00      0.07      0.00      0.00     99.74
03:20:01 AM     all      0.10      0.00      0.05      0.00      0.00     99.85
03:30:01 AM     all      2.06      0.46      0.15      0.00      0.00     97.33
03:40:01 AM     all      2.66      0.40      0.28      0.00      0.00     96.65
03:50:01 AM     all      4.06      0.26      0.64      0.00      0.00     95.04
04:00:01 AM     all      3.63      0.30      0.52      0.00      0.00     95.55
04:10:01 AM     all      6.07      0.23      0.38      0.11      0.00     93.21
04:20:01 AM     all      0.12      0.00      0.05      0.00      0.00     99.82
04:30:01 AM     all      0.08      0.00      0.04      0.00      0.00     99.87
04:40:01 AM     all      0.70      0.00      0.13      0.00      0.00     99.17
# output truncated

Now, this will only give you a small idea of what is happening on the system; you'll have to dig deeper. However, the excerpt from the Red Hat Performance Tuning Guide above is probably the best tip we can offer at the moment: you'll have to try different parameters and test them against your workload.
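
If you want to look specifically at the disk side rather than the CPU view above, a couple of standard sysstat invocations are sketched here; the exact columns vary slightly between versions, so treat this as a starting point.

# Per-device I/O statistics from the same sar file (await = average time in ms to serve an I/O request)
[root@webserv404v ~ ] # sar -d -f /var/log/sa/sa10

# Live extended device statistics, refreshed every 5 seconds
[root@webserv404v ~ ] # iostat -xmd 5

# Which processes are generating I/O right now
[root@webserv404v ~ ] # pidstat -d 5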

We simply don't know enough about your system: the scenario, environment, storage type, likely workload, or whether there might be an underlying hardware issue.

Things we don't know regarding what you posted (some commands to help gather these details are sketched after this list):

  • What kind of storage are you investigating for I/O issues?
  • Is your storage SAN-connected or part of the normal VM datastore pool?
  • Has anyone checked the disks your system relies on, in case one of them is bad?
  • Is there a bad disk, with a RAID array currently rebuilding onto a global hot spare?
  • Is there a bad disk in a RAID array with no global hot spare?
  • We don't know what role this server has, which could certainly be relevant to its I/O usage
  • We do not know which specific minor version of Red Hat this system runs
  • We don't know what baseline of I/O you'd normally see on that system
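
If you can share some of those details, the commands below are a rough starting point for gathering them (adjust to your environment; RAID and hot-spare status usually has to come from the RAID controller's or storage vendor's own tools).

# Exact minor version of Red Hat on this system
[root@webserv404v ~ ] # cat /etc/redhat-release

# Block devices, their sizes and mount points (helps describe the storage layout)
[root@webserv404v ~ ] # lsblk

# Recent kernel messages that hint at a failing disk or storage path
[root@webserv404v ~ ] # dmesg -T | grep -i error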

Kind Regards,

RJ