Help with LVM & increasing metadata size
Hello,
We have a RHEL7 box with a large RAID array (a few hundred TB) allocated as a single volume group (data-vg) containing a thin pool (data-thin). Within that are 3-4 thinly provisioned logical volumes and 15-odd snapshots.
We are using 58% of the data space but, worryingly, 73% of the metadata. The system was left to determine the metadata LV size itself on creation, and we now notice it only allocated 128MB.
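For reference, those figures come from lvs, which by default shows Data% and Meta% columns for thin pools; roughly:
lvs -a data-vg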
I am unable to increase the metadata size because the volume group and thin pool use all the available extents, and I have read that shrinking thin pools is not supported. (I do see someone has written a tool hosted on GitHub, https://github.com/nkshirsagar/thinpool_shrink, but I'm not sure I want to trust our data to it.)
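What I tried was roughly the following (our names, sketch only); it fails because data-vg has no free extents left to allocate from:
lvextend --poolmetadatasize +1G data-vg/data-thin
vgs -o vg_name,vg_size,vg_free data-vg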
The host does have separate RAID1 SSDs for the OS (it uses LVM for root/swap). I was hoping I could use some space on there to extend data-vg, but it looks like a physical volume can only belong to one volume group.
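What I was imagining, had there been unallocated space on the SSD array, was something along these lines (device name purely hypothetical):
pvcreate /dev/md1p3
vgextend data-vg /dev/md1p3
lvextend --poolmetadatasize +2G data-vg/data-thin
Since the SSDs are already physical volumes in the OS volume group, though, there seems to be nowhere to create that new PV.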
Is there any way out of this pickle?
Thanks!
Responses
Hi Nick Smith,
I'm not sure if I can get to the bottom of this, but I think some more detail will help whoever assists.
Please redact or change anything you do not want made public.
Can you give us some more detail:
- What command(s) did you run to determine the metadata usage? (Perhaps post the command and output; there is also a consolidated lvs example after this list.)
df -PhT /path/to/mountpoint
lsblk | grep name_of_mountpoint
- Just curious, what file system type is it formatted with? (XFS, or what?)
lvdisplay name_of_lv
vgdisplay name_of_vg
- What is using the thin pool? Docker images?
- What kind of snapshots are these? File system snapshots?
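For the thin pool itself, something along these lines (adjust the names to yours) would show data and metadata usage, the metadata LV size, and how much free space is left in the VG:
lvs -a -o lv_name,lv_size,data_percent,metadata_percent,lv_metadata_size data-vg
vgs -o vg_name,vg_size,vg_free data-vg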
Resolving this through the discussion forum will take a bit of a dive into the details; you might get a more prompt answer through Red Hat support, and you can include an sosreport with the case.
You could post more detail here, and I believe others will reply, but perhaps also submit a support ticket.
Regards,
RJ