Disk resize not applied at instance level

Solution In Progress

Issue

  • Two weeks ago, several multipath disks were resized successfully according to OpenStack and multipath. When checking the instances, however, we noticed that only 1 of the 4 resized disks (1.5 TB to 2 TB) shows the new size inside one VM, and none of the 4 inside the other VM.

  • OpenStack output:

| f1cbd067-0f7b-4d3b-880d-cbdf8948bf6d | oraclecl2db-datafiles2_ldg             | in-use    | 2048 | cinder_rhos_prod_ldg | false    | Attached to guestVM2.localdomain on /dev/vdp Attached to guestVM.localdomain on /dev/vdp  |            |
| 2c9cc7bc-0258-408a-a33a-44697a2c3666 | oraclecl2db-datafiles2_thr             | in-use    | 2048 | cinder_rhos_prod_thr | false    | Attached to guestVM.localdomain on /dev/vdo Attached to guestVM2.localdomain on /dev/vdo  |            |
| b5ecadbe-5ac1-4dc4-90fb-391f6a7ff642 | oraclecl2db-datafiles1_ldg             | in-use    | 2048 | cinder_rhos_prod_ldg | false    | Attached to guestVM2.localdomain on /dev/vdn Attached to guestVM.localdomain on /dev/vdn  |            |
| 6535ec16-be89-4faa-8253-fe69bf07e63a | oraclecl2db-datafiles1_thr             | in-use    | 2048 | cinder_rhos_prod_thr | false    | Attached to guestVM2.localdomain on /dev/vdm Attached to guestVM.localdomain on /dev/vdm  |            |
  • According to the OpenStack output, the disks were resized.
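The size Cinder reports for an individual volume can be confirmed directly (the volume ID below is taken from the listing above; adjust as needed):

```shell
# Show the size (in GB) and status Cinder currently reports for one
# of the affected volumes.
openstack volume show f1cbd067-0f7b-4d3b-880d-cbdf8948bf6d -c name -c size -c status
```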

  • Multipath output:

000000000000000000000000000176 dm-83 NETAPP,LUN C-Mode
size=2.0T features='2 pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:2:42  sdka 65:480  active ready running
| `- 2:0:3:42  sdkb 65:496  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:1:42  sdjz 65:464  active ready running
  `- 2:0:1:42  sdkc 66:256  active ready running


000000000000000000000000000175 dm-81 NETAPP,LUN C-Mode
size=2.0T features='2 pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:2:41  sdie 134:224 active ready running
| `- 2:0:3:41  sdif 134:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:1:41  sdid 134:208 active ready running
  `- 2:0:1:41  sdig 135:0   active ready running


000000000000000000000000001443 dm-85 NETAPP,LUN C-Mode
size=2.0T features='2 pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:3:42  sdke 66:288  active ready running
| `- 2:0:5:42  sdlq 68:384  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:4:42  sdkd 66:272  active ready running
  `- 2:0:4:42  sdlp 68:368  active ready running


000000000000000000000000001442 dm-82 NETAPP,LUN C-Mode
size=2.0T features='2 pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:3:41  sdjw 65:416  active ready running
| `- 2:0:5:41  sdjy 65:448  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:4:41  sdil 135:80  active ready running
  `- 2:0:4:41  sdjx 65:432  active ready running
  • So, according to multipath, all 4 disks were resized. The same is true on the other physical host.

  • The lsblk output inside the first VM:

[root@guestVM (prod) ~]# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
vdm                251:192  0  1,5T  0 disk 
vdn                251:208  0  1,5T  0 disk 
vdo                251:224  0  1,5T  0 disk 
vdp                251:240  0    2T  0 disk 
...
  • On the other VM attached to the same multipath LUNs, all 4 disks still show 1.5T, so no resize was picked up there at all.

  • What I did notice is the following:

()[root@overcloud-compute-0 /]# virsh domblkinfo instance-0000ad10 vdp --human
Capacity:       2.000 TiB
Allocation:     1.153 TiB
Physical:       2.000 TiB

()[root@overcloud-compute-0 /]# virsh domblkinfo instance-0000ad10 vdo --human
Capacity:       1.500 TiB
Allocation:     1.153 TiB
Physical:       2.000 TiB

()[root@overcloud-compute-0 /]# virsh domblkinfo instance-0000ad10 vdn --human
Capacity:       1.500 TiB
Allocation:     1.153 TiB
Physical:       2.000 TiB

()[root@overcloud-compute-0 /]# virsh domblkinfo instance-0000ad10 vdm --human
Capacity:       1.500 TiB
Allocation:     1.153 TiB
Physical:       2.000 TiB
  • So, according to virsh, the physical size is 2 TB, but for some reason the logical (VM-visible) disk size is still 1.5 TB for 3 of the 4 resized disks. On the other VM, all 4 disks likewise show a physical size of 2 TB but a logical size of 1.5 TB.
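A quick way to spot this mismatch across all four disks at once is a small loop over `virsh domblkinfo` (a sketch, using the domain and target names from the outputs above):

```shell
# Compare Capacity vs Physical for every affected virtio disk of the
# instance. Disks where Capacity < Physical have not picked up the resize.
for dev in vdm vdn vdo vdp; do
    echo "== ${dev} =="
    virsh domblkinfo instance-0000ad10 "${dev}" --human
done
```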

  • Is there a way to fix the Capacity online, i.e. without restarting the instances?
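One approach that may be worth testing (untested here, and not confirmed as the supported fix for this case): `virsh blockresize` asks QEMU to re-read the backing device size and announce the new capacity to the running guest, without a reboot. The domain and target names below are taken from the outputs above; the size is what the volumes were resized to.

```shell
# Hypothetical online fix: have QEMU re-announce the new size to the guest.
# Run on the compute node hosting the instance, for the disks still
# reporting a 1.5 TiB capacity.
for dev in vdm vdn vdo; do
    virsh blockresize instance-0000ad10 "${dev}" 2TiB
done
```

It would be prudent to try this on a single disk first and verify with `virsh domblkinfo` and `lsblk` inside the guest before looping over the remaining disks.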

Environment

  • Red Hat OpenStack Platform 16.1 (RHOSP)
