Performance degradation on thin logical volumes
Issue
- Performance degradation is seen on thin logical volumes when compared to standard logical volumes. The issue is characterized by consistently lower read and write throughput on the thin LV, along with markedly higher sys time per dd command (roughly 1.7x in the example below).
[root@host ~]# lvs -a -o +devices
  LV                 VG     Attr       LSize Pool       Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  [lvol0_pmspare]    testvg ewi------- 4.00m                                                           /dev/mapper/mpathbp1(300)
  mythinpool         testvg twi-aot--- 1.17g                   0.01   1.07                              mythinpool_tdata(0)
  [mythinpool_tdata] testvg Twi-ao---- 1.17g                                                           /dev/mapper/mpathbp1(301)
  [mythinpool_tmeta] testvg ewi-ao---- 4.00m                                                           /dev/mapper/mpathbp1(601)
  testlv             testvg -wi-a----- 1.17g                                                           /dev/mapper/mpathbp1(0)
  thinvolume         testvg Vwi-a-t--- 1.07g mythinpool        0.01
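For reference, a layout like the one shown above can be recreated with lvcreate. This is a sketch only: the volume group and LV names and sizes are taken from the lvs output, and it assumes /dev/mapper/mpathbp1 has already been added to the testvg volume group.

```shell
# Standard (thick) logical volume used as the baseline
lvcreate -L 1.17g -n testlv testvg

# Thin pool; LVM creates the hidden _tdata, _tmeta, and _pmspare
# volumes automatically
lvcreate -L 1.17g -T testvg/mythinpool

# Thin volume carved out of the pool
lvcreate -V 1.07g -T testvg/mythinpool -n thinvolume
```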
[root@host~]# time dd if=/dev/zero of=/dev/mapper/testvg-testlv bs=4096 count=262206 oflag=direct
262206+0 records in
262206+0 records out
1073995776 bytes (1.1 GB) copied, 95.1148 s, 11.3 MB/s
real 1m35.117s
user 0m0.147s
sys 0m4.851s
[root@host~]# time dd if=/dev/zero of=/dev/mapper/testvg-thinvolume bs=4096 count=262206 oflag=direct
262206+0 records in
262206+0 records out
1073995776 bytes (1.1 GB) copied, 103.106 s, 10.4 MB/s
real 1m43.119s
user 0m0.191s
sys 0m8.104s
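The gap between the two runs can be quantified directly from the dd output above. The figures below are copied verbatim from that output; the script only restates them as a throughput and sys-time comparison.

```python
# Values taken from the two dd runs above (bytes copied, elapsed
# seconds, and "sys" CPU seconds reported by time).
bytes_copied = 1073995776
std  = {"elapsed": 95.1148,  "sys": 4.851}   # standard LV (testlv)
thin = {"elapsed": 103.106,  "sys": 8.104}   # thin LV (thinvolume)

std_mbps  = bytes_copied / std["elapsed"]  / 1e6
thin_mbps = bytes_copied / thin["elapsed"] / 1e6
slowdown  = (thin["elapsed"] - std["elapsed"]) / std["elapsed"] * 100
sys_ratio = thin["sys"] / std["sys"]

print(f"standard LV: {std_mbps:.1f} MB/s, thin LV: {thin_mbps:.1f} MB/s")
print(f"elapsed-time penalty: {slowdown:.0f}%, sys-time ratio: {sys_ratio:.2f}x")
```

For this workload the thin LV is about 8% slower end to end while burning roughly 1.67x the system CPU time, which matches dd's own 11.3 MB/s vs 10.4 MB/s figures.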
Environment
- Red Hat Enterprise Linux 6