How to read vmstat output
Environment
- Red Hat Enterprise Linux
Issue
- How can I read vmstat output?
Resolution
vmstat (virtual memory statistics) is a valuable monitoring utility which, in addition to memory, also provides information about block IO and CPU activity.
vmstat Basics
vmstat reports a number of values and is typically called with two numerical parameters.
Example: vmstat 1 5
1 -> the values will be re-measured and reported every second
5 -> the values will be reported five times and then the program will stop
The first line of the report contains the average values since the last time the system was booted. All other lines in the report represent their respective current values. vmstat does not require any special user rights; it can be run as a normal user.
[user@fedora9 ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 0 0 44712 110052 623096 0 0 30 28 217 888 13 3 83 1 0
0 0 0 44408 110052 623096 0 0 0 0 88 1446 31 4 65 0 0
0 0 0 44524 110052 623096 0 0 0 0 84 872 11 2 87 0 0
0 0 0 44516 110052 623096 0 0 0 0 149 1429 18 5 77 0 0
0 0 0 44524 110052 623096 0 0 0 0 60 431 14 1 85 0 0
[user@fedora9 ~]$
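When diagnosing an intermittent problem, it can also be useful to collect vmstat output over a longer period and review it afterwards. A minimal sketch (the interval, count, and log path here are arbitrary choices, not part of the original example):
[user@fedora9 ~]$ nohup vmstat 5 720 > /tmp/vmstat.log &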
Meaning of the Individual Values
(Source: man vmstat):
Procs
r: The number of processes waiting for run time.
b: The number of processes in uninterruptible sleep.
Memory
swpd: the amount of virtual memory used.
free: the amount of idle memory.
buff: the amount of memory used as buffers.
cache: the amount of memory used as cache.
inact: the amount of inactive memory. (-a option)
active: the amount of active memory. (-a option)
Swap
si: Amount of memory swapped in from disk (/s).
so: Amount of memory swapped to disk (/s).
IO
bi: Blocks received from a block device (blocks/s).
bo: Blocks sent to a block device (blocks/s).
System
in: The number of interrupts per second, including the clock.
cs: The number of context switches per second.
CPU
These are percentages of total CPU time.
us: Time spent running non-kernel code. (user time, including nice time)
sy: Time spent running kernel code. (system time)
id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
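Because the output is column-oriented, individual values can also be extracted easily in scripts. A minimal sketch that skips the two header lines and prints only the free-memory column (column 4 in the default layout):
[user@fedora9 ~]$ vmstat 1 5 | awk 'NR > 2 {print $4}'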
Examples
CPU User Load Example
In this example, a standard audio file is encoded as an MP3 file using the lame encoder[1]. This process is quite CPU intensive; vmstat, executed in parallel (output below), shows a user CPU time of 97%.
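The encoding command itself is not part of the captured session; an invocation might look like the following (the file names are purely illustrative):
[user@RHEL ~]$ lame track01.wav track01.mp3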
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
6 1 0 302380 16356 271852 0 0 561 568 80 590 43 7 43 7 0
1 0 0 300892 16364 273256 0 0 0 52 79 274 97 3 0 0 0
2 0 0 299544 16364 274532 0 0 0 0 78 372 97 3 0 0 0
1 0 0 298292 16368 275780 0 0 0 0 53 255 97 3 0 0 0
1 0 0 296820 16368 277192 0 0 0 0 77 377 97 3 0 0 0
[user@RHEL ~]$
CPU System Load Example
In this example, a file will be filled with random content using dd.
[user@fedora9 ~]$ dd if=/dev/urandom of=500MBfile bs=1M count=500
Here, /dev/urandom[2] supplies random numbers that are generated by the kernel. This leads to an increased load on the CPU (sy, system time). At the same time, vmstat executing in parallel shows that between 93% and 98% of the CPU time is being used to execute kernel code (in this case, for the generation of random numbers).
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 402944 54000 161912 745324 5 14 54 59 221 867 13 3 82 2 0
1 0 402944 53232 161916 748396 0 0 0 0 30 213 3 97 0 0 0
1 0 402944 49752 161920 751452 0 0 0 0 28 290 4 96 0 0 0
1 0 402944 45804 161924 755564 0 0 0 0 29 188 2 98 0 0 0
1 0 402944 42568 161936 758608 0 0 0 17456 272 509 7 93 0 0 0
[user@RHEL ~]$
The time spent executing system calls[3][4][5] is counted as system time (sy).
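As a side note (not part of the original example), the system calls a command makes, and the time spent in them, can be summarized with strace; the figures will vary from system to system:
[user@RHEL ~]$ strace -c dd if=/dev/urandom of=500MBfile bs=1M count=500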
RAM Bottleneck (swapping) Example
In this example, many applications are opened (including VirtualBox with a Windows guest system, among others) until almost all of the working memory is in use. Then one more application (OpenOffice) is started. The Linux kernel then swaps out several memory pages to the swap space on the hard disk in order to free up RAM for OpenOffice. The swapped-out pages show up in the so column (swap out, memory swapped to disk) of the vmstat output running in parallel.
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 1 244208 10312 1552 62636 4 23 98 249 44 304 28 3 68 1 0
0 2 244920 6852 1844 67284 0 544 5248 544 236 1655 4 6 0 90 0
1 2 256556 7468 1892 69356 0 3404 6048 3448 290 2604 5 12 0 83 0
0 2 263832 8416 1952 71028 0 3788 2792 3788 140 2926 12 14 0 74 0
0 3 274492 7704 1964 73064 0 4444 2812 5840 295 4201 8 22 0 69 0
[user@RHEL ~]$
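The growing swpd column can be cross-checked with other tools; for example (a sketch, output omitted because it is entirely system-specific):
[user@RHEL ~]$ free -m
[user@RHEL ~]$ swapon -s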
High IO Read Load Example
A large file (such as an ISO file) will be read and written to /dev/null using dd.
[user@RHEL ~]$ dd if=bigfile.iso of=/dev/null bs=1M
Executed in parallel, vmstat will show the increased IO read load (the bi value).
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 1 465872 36132 82588 1018364 7 17 70 127 214 838 12 3 82 3 0
0 1 465872 33796 82620 1021820 0 0 34592 0 357 781 6 10 0 84 0
0 1 465872 36100 82656 1019660 0 0 34340 0 358 723 5 9 0 86 0
0 1 465872 35744 82688 1020416 0 0 33312 0 345 892 8 11 0 81 0
0 1 465872 35716 82572 1020948 0 0 34592 0 358 738 7 8 0 85 0
[user@RHEL ~]$
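Note that if bigfile.iso is already resident in the page cache, a repeated run may show little or no bi activity because the data is served from memory. One way to clear the page cache before re-running the test (root only, and advisable only on a test system) is:
[root@RHEL ~]# sync
[root@RHEL ~]# echo 3 > /proc/sys/vm/drop_caches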
High IO Write Load Example
In contrast with the previous example, dd reads from /dev/zero and writes to a file. The oflag=dsync option causes the data to be written to the disk immediately (and not merely stored in the page cache).
[user@RHEL ~]$ dd if=/dev/zero of=500MBfile bs=1M count=500 oflag=dsync
Executed in parallel, vmstat will show the increased IO write load (the bo value).
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 1 0 35628 14700 1239164 0 0 1740 652 117 601 11 4 66 20 0
0 1 0 34852 14896 1239788 0 0 0 23096 300 573 3 16 0 81 0
0 1 0 32780 15080 1241304 0 0 4 21000 344 526 1 13 0 86 0
0 1 0 36512 15244 1237256 0 0 0 19952 276 394 1 12 0 87 0
0 1 0 35688 15412 1237180 0 0 0 18904 285 465 1 13 0 86 0
[user@RHEL ~]$
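For contrast, omitting oflag=dsync lets the data land in the page cache first; bo then tends to spike only when the kernel flushes the dirty pages, rather than tracking the write in real time:
[user@RHEL ~]$ dd if=/dev/zero of=500MBfile bs=1M count=500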
CPU Waiting for IO Example
In the following example, an updatedb process is already running. The updatedb utility is part of mlocate; it scans the entire file system and builds the database for the locate command (which allows file searches to be performed very quickly). Because updatedb reads all of the file names in the file system, the CPU must wait for data from the IO subsystem (the hard disk). As a result, vmstat running in parallel displays large values for wa (waiting for IO):
[user@RHEL ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 403256 602848 17836 400356 5 15 50 50 207 861 13 3 83 1 0
1 0 403256 601568 18892 400496 0 0 1048 364 337 1903 5 7 0 88 0
0 1 403256 600816 19640 400568 0 0 748 0 259 1142 6 4 0 90 0
0 1 403256 600300 20116 400800 0 0 476 0 196 630 8 5 0 87 0
0 1 403256 599328 20792 400792 0 0 676 0 278 1401 7 5 0 88 0
[user@RHEL ~]$
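To reproduce this, updatedb can simply be started by hand; it is normally run from cron and typically requires root:
[root@RHEL ~]# updatedb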
Additional vmstat Options
vmstat --help
[user@RHEL ~]$ vmstat --help
usage: vmstat [-V] [-n] [delay [count]]
-V prints version.
-n causes the headers not to be reprinted regularly.
-a print inactive/active page stats.
-d prints disk statistics
-D prints disk table
-p prints disk partition statistics
-s prints vm table
-m prints slabinfo
-S unit size
delay is the delay between updates in seconds.
unit size k:1000 K:1024 m:1000000 M:1048576 (default is K)
count is the number of updates.
vmstat
[user@fedora9 ~]$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 14960 38016 6584 1069284 0 1 506 81 727 1373 12 4 81 3 0
[user@fedora9 ~]$
vmstat -V
[user@fedora9 ~]$ vmstat -V
procps version 3.2.7
[user@fedora9 ~]$
vmstat -a
[user@fedora9 ~]$ vmstat -a
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free inact active si so bi bo in cs us sy id wa st
3 0 14960 38024 988284 461704 0 1 506 81 726 1372 12 4 81 3 0
[user@fedora9 ~]$
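The inact and active columns correspond to the Inactive and Active lines in /proc/meminfo, so the figures can be cross-checked directly (a sketch):
[user@fedora9 ~]$ grep -iE '^(Active|Inactive)' /proc/meminfo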
vmstat -d
[user@fedora9 ~]$ vmstat -d
disk- ------------reads------------ ------------writes----------- -----IO------
total merged sectors ms total merged sectors ms cur sec
ram0 0 0 0 0 0 0 0 0 0 0
ram1 0 0 0 0 0 0 0 0 0 0
ram2 0 0 0 0 0 0 0 0 0 0
ram3 0 0 0 0 0 0 0 0 0 0
ram4 0 0 0 0 0 0 0 0 0 0
ram5 0 0 0 0 0 0 0 0 0 0
ram6 0 0 0 0 0 0 0 0 0 0
ram7 0 0 0 0 0 0 0 0 0 0
ram8 0 0 0 0 0 0 0 0 0 0
ram9 0 0 0 0 0 0 0 0 0 0
ram10 0 0 0 0 0 0 0 0 0 0
ram11 0 0 0 0 0 0 0 0 0 0
ram12 0 0 0 0 0 0 0 0 0 0
ram13 0 0 0 0 0 0 0 0 0 0
ram14 0 0 0 0 0 0 0 0 0 0
ram15 0 0 0 0 0 0 0 0 0 0
sda 136909 31536 13893867 1197609 58190 219323 2233264 7688807 0 677
sda1 35703 6048 1326394 511477 6728 16136 182984 419232 0 222
sda2 85 1489 2935 653 141 3603 29952 5254 0 1
sda3 101111 23961 12564154 685330 51321 199584 2020328 7264321 0 512
sr0 0 0 0 0 0 0 0 0 0 0
fd0 0 0 0 0 0 0 0 0 0 0
[user@fedora9 ~]$
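The per-device counters shown by vmstat -d are taken from /proc/diskstats, so the raw values can also be inspected there directly:
[user@fedora9 ~]$ cat /proc/diskstats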
vmstat -D
[user@fedora9 ~]$ vmstat -D
22 disks
0 partitions
273820 total reads
63034 merged reads
27787446 read sectors
2395193 milli reading
116450 writes
438666 merged writes
4467248 written sectors
15377932 milli writing
0 inprogress IO
1412 milli spent IO
vmstat -p
vmstat -p does not work under Fedora (see https://bugzilla.redhat.com/show_bug.cgi?id=485246); the following report comes from an Ubuntu 9.10 system.
user@RHEL:~$ vmstat -p /dev/sda9
sda9 reads read sectors writes requested writes
23420 411365 24464 530801
vmstat -s
[user@fedora9 ~]$ vmstat -s
1553972 total memory
1516180 used memory
461892 active memory
988304 inactive memory
37792 free memory
6644 buffer memory
1069388 swap cache
1052248 total swap
14960 used swap
1037288 free swap
161467 non-nice user cpu ticks
7586 nice user cpu ticks
46310 system cpu ticks
1108919 idle cpu ticks
46832 IO-wait cpu ticks
2694 IRQ cpu ticks
2452 softirq cpu ticks
0 stolen cpu ticks
6947021 pages paged in
1116896 pages paged out
183 pages swapped in
3744 pages swapped out
9985406 interrupts
18852586 CPU context switches
1239004323 boot time
15072 forks
[user@fedora9 ~]$
vmstat -m
[user@fedora9 ~]$ vmstat -m
Cache Num Total Size Pages
fuse_request 11 11 368 11
fuse_inode 9 9 448 9
rpc_inode_cache 8 8 512 8
nf_conntrack_expect 0 0 168 24
nf_conntrack 26 80 248 16
dm_uevent 0 0 2464 3
UDPv6 22 22 704 11
TCPv6 6 6 1344 6
kmalloc_dma-512 8 8 512 8
sgpool-128 12 12 2048 4
scsi_io_context 0 0 104 39
ext3_inode_cache 6822 8360 496 8
ext3_xattr 85 85 48 85
journal_handle 170 170 24 170
journal_head 76 219 56 73
revoke_record 256 256 16 256
flow_cache 0 0 80 51
bsg_cmd 0 0 288 14
mqueue_inode_cache 7 7 576 7
isofs_inode_cache 0 0 376 10
hugetlbfs_inode_cache 11 11 344 11
dquot 0 0 128 32
shmem_inode_cache 1058 1071 448 9
xfrm_dst_cache 0 0 320 12
UDP 19 21 576 7
TCP 17 24 1216 6
blkdev_queue 21 21 1080 7
biovec-256 2 2 3072 2
biovec-128 5 5 1536 5
biovec-64 7 10 768 5
sock_inode_cache 619 650 384 10
file_lock_cache 39 39 104 39
Acpi-Operand 2935 2958 40 102
Acpi-Namespace 1700 1700 24 170
Cache Num Total Size Pages
taskstats 25 26 312 13
proc_inode_cache 233 242 360 11
sigqueue 28 28 144 28
radix_tree_node 7888 8606 296 13
bdev_cache 24 24 512 8
inode_cache 370 462 344 11
dentry 6592 15390 136 30
names_cache 2 2 4096 2
avc_node 73 73 56 73
selinux_inode_security 9888 10030 48 85
idr_layer_cache 627 644 144 28
buffer_head 2308 2688 64 64
mm_struct 659 693 448 9
vm_area_struct 11110 11592 88 46
files_cache 115 130 384 10
sighand_cache 141 150 1344 6
task_struct 246 248 3696 2
anon_vma 4778 5120 16 256
kmalloc-4096 95 112 4096 8
kmalloc-2048 272 304 2048 16
kmalloc-1024 518 524 1024 4
kmalloc-512 764 888 512 8
kmalloc-256 198 208 256 16
kmalloc-128 629 832 128 32
kmalloc-64 4322 5568 64 64
kmalloc-32 1554 1664 32 128
kmalloc-16 2644 3584 16 256
kmalloc-8 3561 3584 8 512
kmalloc-192 6349 6930 192 21
kmalloc-96 885 1176 96 42
[user@fedora9 ~]$
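Two options from the help text above are not demonstrated: -n (print the header only once) and -S (change the display unit). For example, to sample once per second with the memory columns reported in megabytes (a sketch, output omitted):
[user@fedora9 ~]$ vmstat -n -S M 1 5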