Performance degraded after storage migration using "pvmove"

Solution Verified

Issue

  • After a pvmove migration, the read-ahead value of the LVM logical volume drops to 0.
[root@localhost ~]# lvs -o +devices testvg
  LV   VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices     
  lv1  testvg -wi-a----- 1020.00m                                                     /dev/sda1(0)

[root@localhost ~]# dmsetup info -C | grep testvg
testvg-lv1       253   0 L--w    0    1      0 LVM-PnvXfWMsSNlCuBBaPjAMrZj0kDozXiGQ6pzs2hetmQegGWlhrtUvOywDlg9oFCX5

[root@localhost ~]# cat /sys/block/dm-0/queue/read_ahead_kb 
128
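The same value can be cross-checked with blockdev, which reports read-ahead in 512-byte sectors rather than KB. A minimal sketch, assuming the device path matches the testvg-lv1 volume from the example above:

```shell
# Read-ahead in 512-byte sectors; 256 sectors * 512 B / 1024 = 128 KB,
# matching the read_ahead_kb value shown above.
blockdev --getra /dev/mapper/testvg-lv1
```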

Perform a pvmove and check the read_ahead_kb value again:

[root@localhost ~]# pvmove /dev/sda1 /dev/sda2
  /dev/sda1: Moved: 0.39%
  /dev/sda1: Moved: 100.00%

[root@localhost ~]# cat /sys/block/dm-0/queue/read_ahead_kb 
0
  • Performance degraded after storage migration using pvmove.
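As an interim workaround, the read-ahead can be restored by hand. A minimal sketch, assuming the testvg/lv1 names from the example above: lvchange --readahead updates the LVM metadata (applied on the next activation), while blockdev --setra changes the live kernel value immediately and takes 512-byte sectors, so 256 sectors equals the original 128 KB.

```shell
# Restore LVM-managed ("auto") read-ahead in the volume metadata
lvchange --readahead auto testvg/lv1

# Alternatively, set the live kernel value directly:
# 256 sectors * 512 B = 128 KB
blockdev --setra 256 /dev/mapper/testvg-lv1

# Confirm the change
cat /sys/block/dm-0/queue/read_ahead_kb
```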

Environment

  • Red Hat Enterprise Linux 7
    • lvm2
