PV name changes every time the VG is deactivated and activated on a mirrored disk


Hi
I have presented a single 10 GB disk from an HPE XP array to one RHEL host (host1) and three 10 GB disks to another RHEL host (host2). The host1 disk is synchronized to host2's disk1, disk2, and disk3 separately (a 1:3 mirror). This mirroring is done at the disk array level.
I enabled multipath and created a partition on the host1 disk only, then created the PV, VG, LV, and filesystem, and mounted the filesystem.
After that I ran partprobe on host2 and it listed the PV, VG, and LV.
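For clarity, the host1 setup was roughly along these lines (the device, VG, and mount names here are placeholders, not the exact ones used; the partition node may also appear as mpathap1 depending on the multipath configuration):

    # host1: partition the multipath device, then build the LVM stack on it
    parted -s /dev/mapper/mpatha mklabel msdos
    parted -s /dev/mapper/mpatha mkpart primary 1MiB 100%
    pvcreate /dev/mapper/mpatha1
    vgcreate vg_backup /dev/mapper/mpatha1
    lvcreate -n lv_data -l 100%FREE vg_backup
    mkfs.xfs /dev/vg_backup/lv_data
    mount /dev/vg_backup/lv_data /mnt/data

    # host2: rescan the partition tables so the mirrored copies are visible
    partprobe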
During backup, our backup tool activates and deactivates the VG, and the PV name keeps changing each time: the first time vgdisplay shows the PV name as, for example, disk1_part1, the second time as disk2_part1, and the third time as disk3_part1. If I run pvchange -u, it fails on host2 because of the changing PV name. For this reason our backup is failing.
We didn't face this problem on an earlier RHEL version (6.3). Please guide us on what needs to be done on our side.

Host OS: RHEL 7.2
Disk Array: HPE XP
LVM: Default

Thanks


Responses

You may try disabling "user_friendly_names" and check whether that keeps the device name unique. Otherwise, I'm not sure how your backup tool is interpreting the device names. I may need to create this setup and check, but I'm not sure I can do that.
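For example, something along these lines in /etc/multipath.conf would make multipath name devices by WWID instead of the mpathN aliases (a sketch only; verify the resulting names with multipath -ll before relying on them):

    # /etc/multipath.conf -- name devices by WWID instead of mpathN aliases
    defaults {
        user_friendly_names no
    }

Then apply it, e.g. by restarting the daemon with systemctl restart multipathd, and check the device names again with multipath -ll.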

If the 3 disks presented to host2 are all continuously synchronized with the one disk presented to host1, they will all have the same PV UUID - since they are all effectively the same disk, as far as LVM is concerned. But it seems like multipathing is not detecting the three disks as the same on host2, which means they must be presented with three separate WWIDs. As a result, LVM does not get to see just one multipathed device: it sees three individual disk devices, all containing the same PV UUID. So LVM assumes it can use whichever device happens to be first in its scan. Trying to change the PV UUID on one of the disks will just replicate the change on all the disks - since the disks are synchronized by the storage system.
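If host2 has to keep seeing the three duplicate copies, one possible workaround (a sketch only; the disk1_part1 path is taken from the symptom described above and would need adjusting to the real device path shown by multipath -ll) is to hide two of the copies from LVM's scan with a filter in /etc/lvm/lvm.conf, so the PV is always resolved from the same device:

    # All three devices should report the same PV UUID:
    pvs -o pv_name,pv_uuid

    # /etc/lvm/lvm.conf -- accept only one mirrored copy, reject everything else
    devices {
        global_filter = [ "a|^/dev/mapper/disk1_part1$|", "r|.*|" ]
    }

Note that rejecting everything else with "r|.*|" would also hide any other PVs on the host, so a real filter would need to accept those paths as well.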

I must ask: What exactly are you trying to achieve with this set-up? Why do you think you need to present the same data three times on three separate devices on host2? To me, that set-up sounds like you might be building a three-barrelled device for shooting yourself in the foot.

Trying to access the same disk simultaneously from two hosts that are unaware of each other is dangerous: disk caching will cause you grief rather quickly. To do it successfully, you'll either need a single point through which all the requests go to the disk (e.g. an NFS server) or a cluster-aware filesystem that can handle the situation where one host changes some data that the other host has already cached. (Or completely disable all caching from the entire disk access path for the shared disks - but that will absolutely ruin your disk performance.)

Think about this:

1. Host2 gets a request from an application to read blocks 1..10 from the disk. Host2 does that, and caches those blocks in case an application or a filesystem driver needs them again.

2. Host1 gets a request from an application to write new data to blocks 1..20. Host1 does that.

3. Host2 gets a request to read blocks 1..20 from the disk. Since it already has blocks 1..10 in its cache, it only needs to read blocks 11..20 from the disk. It has no clue that blocks 1..10 have recently been updated by host1, since a cluster-aware filesystem is not used. But now the application on host2 is getting an inconsistent mix of old and new data!

And if you are writing to the disk from both hosts, the same thing is likely to happen to filesystem metadata... including the part that tells the hosts which blocks are free and which are not. So quite soon one host is going to assume that some blocks the other host recently wrote to are free, and as a result, overwrite some data. It may seem like all is fine, until you notice that the contents of some files have been silently corrupted.

I'm not sure about your workflow here: if I interpret your initial query correctly, you mirror one disk to another three, but then why do you want to back up the mirrored disks as well? Just taking a backup of the disk on host1 would be enough. As per the attachment, I can see that the UUID remains the same; it is just the name of the PV that changes after re-activation of the VG. I suspect the earlier disk name is being cached, or something is holding/locking it so it cannot be re-used, and hence the system uses the next available name for the PV. So, is the sole purpose of the 1:3 mirroring only backup? You may wish to write back with more details and clarity on this.
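To double-check that only the name flips while the UUID stays the same, the re-activation can be reproduced by hand (the VG name below is a placeholder):

    vgchange -an vg_backup           # deactivate, as the backup tool does
    vgchange -ay vg_backup           # activate again
    pvs -o pv_name,pv_uuid,vg_name   # pv_name may differ between runs; pv_uuid should not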

Craig, was the issue fixed? Just curious to know how this was resolved.
