SAN - Migrate LVM volumes to new storage


I'm planning to migrate a set of servers to a new SAN. All servers use multipath, with (most) storage in LVM. As far as I can see the strategy should be to hook them up to the new SAN, present new LUNs, pvmove the data to the new disks, and then vgreduce/pvremove the old disks.

All servers are RHEL5. Volume sizes range from small to several hundred GB.

 

Are there any scenarios where I would not be able to run an online pvmove to migrate storage?

 

As the servers boot from SAN, will this strategy also work for the root vg, or is some other copy (cpio) better for the root and boot file systems?

 

Thanks for any input on planning for storage migration.

Responses

Hi Martin,

 

This is definitely an excellent question. What you have said sounds pretty much spot on. Below is a link to the section on the pvmove command in the official RHEL 5 documentation.

 

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html-single/Logical_Volume_Manager_Administration/index.html#online_relocation

 

Off the top of my head I can't think of any reason why you shouldn't be able to do the pvmove online. One concern would arise if you were in a clustered environment, but that would probably just mean making sure cmirror is running.
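
Before you start it's also worth capturing the current layout, so you can confirm afterwards that nothing is left behind on the old physical volumes. A rough sketch of the checks I'd run:

pvs -o +pv_used     # space used on each physical volume
lvs -o +devices     # which devices back each logical volume
multipath -ll       # multipath view of the old and new LUNs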

 

One concern does come up, though, and that's migrating your root filesystem. It should, emphasis on the should, be referenced in your grub.conf by its friendly name (/dev/vg_name/lv_name), but this is not always the case. If it is referenced by device path instead, the move could leave the system unable to boot.
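
A quick sanity check, assuming the usual RHEL 5 locations:

grep root= /boot/grub/grub.conf   # kernel lines should use /dev/vg_name/lv_name, not /dev/sd* or /dev/mpath* paths
grep '^/dev' /etc/fstab           # the same goes for the / and swap entries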

 

Another concern is having all your multipath information in your initial ramdisk (initrd). If the new SAN is not usable from within the initrd, the system will not boot at all. So if the new SAN is a different model or requires different drivers, those will have to be checked for inclusion in the initrd prior to rebooting, and the same goes for your multipath.conf. This can mostly be resolved by rebuilding the initrd after you have migrated root across.

 

How do I rebuild the initial ramdisk image in Red Hat Enterprise Linux?
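
On RHEL 5 that rebuild is roughly the following (back up the existing image first; the kernel version is simply taken from the running kernel):

cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak   # keep a fallback copy
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)               # rebuild, pulling in the current storage drivers and multipath.conf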

 

With all that said, I must point out that moving the location of your root device is not recommended or supported by Red Hat, simply because if something doesn't go quite right you will be left with a completely unbootable system. Depending on how confident you feel, or whether you are able to try it in a test environment first, it may be worth reinstalling the base operating system from scratch onto the new storage. You would, naturally, still be able to migrate every other non-root volume across with pvmove.

 

Some food for thought anyhow. I'll ponder on it some more, but the only thing that really comes to mind is making sure the boot/preboot environments can see the new SAN and its LUNs.

 

Cheers,

 

Rohan

Thanks! We will do testing on this next week; I will update the thread with our results, especially regarding the root VG.

 

Martin 

I've migrated several SANs this year using this method with the LVM tools. All have gone very smoothly with no issues. We're not booting from SAN so I can't speak to that aspect, though. We took outages for the migrations as a precaution.

Great, Martin! I look forward to hearing the results.

Rohan,

 

Thanks for this confirmation.  I will be doing the same tonight on a production server.  I do have one question.  The original requester mentioned doing a vgreduce to take out the old LUNs; can I therefore assume that a vgextend had to be issued once the new SAN was presented?  So it would be as follows:

 

1.  Present new LUNs from new SAN

2.  vgextend the VGs to the new LUNs

3.  pvmove from old to new

4.  vgreduce the VGs from the old LUNs

 

All should be good.  Do I have the concepts down properly?

 

Thanks,

Lee

Hi,

 

Yes, the procedure you present is correct (rough example commands below). I've tested both this strategy and an alternative where you create a mirror and then split off the copy on the old SAN, in case you want to preserve the data on the old SAN for a rollback. One such mirror-based strategy is described in this blog post:

http://storagemeat.blogspot.no/2010/07/migrating-volumes-with-linux-lvm.html
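
In command form, the four steps you list come out roughly like this (vg_data is a placeholder volume group, mpath5 the new LUN and mpath2 the old one, both used whole rather than partitioned):

pvcreate /dev/mapper/mpath5                    # label the new LUN as a physical volume
vgextend vg_data /dev/mapper/mpath5            # add it to the volume group
pvmove /dev/mapper/mpath2 /dev/mapper/mpath5   # move all extents off the old PV, online
vgreduce vg_data /dev/mapper/mpath2            # drop the old PV from the volume group
pvremove /dev/mapper/mpath2                    # clear the LVM label before the old LUN is unpresented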

 

In our migration of production data we will use the steps you present on the servers where we also need to reorganize LUNs, but use SAN-side mirroring especially where we boot from SAN. That is slightly more work for the SAN administrator, who has to set up the SAN-side mirror, but a simpler switch for us, since we will need downtime anyway to boot from the LUN on the new SAN.

 

Martin

I will use this method soon, thanks for sharing! 

 

Great, Ivan, let us know how it works out for you.

I am wondering how the migration went with the root SAN-attached volumes.  I have a need for this as we move toward RHEL and I'm concerned about the issues raised above (/boot, grub, multipath.conf, initrd, etc.).  I have done this successfully before, but I'm concerned that I may run into a unique scenario in the future that will cause me some pain.

Hi, I have finished migrating our environments to a new SAN solution (both storage and directors). For servers not booting from SAN, the original strategy worked as planned - I added new disks from the new SAN (sometimes adjusting/rearranging LUN sizes), did LVM migration with pvmove and finally removed disks from the old SAN.

For servers booting from SAN, we decided to do SAN-side mirroring of all LUNs, leaving us with just a window of downtime to boot up from the cloned LUN0 on the new SAN (even though this is not supported according to KB #32380).

I followed the advice from KB #177893, but started by preparing a new initrd with the WWID binding for my new LUN0 (I got the path and LDEV from my SAN administrator). I set up grub to boot single user with the prepared initrd, and checked there that the paths were OK. I then updated /var/lib/multipath/bindings so it listed only the new WWID for mpath0. On the next boot /var/lib/multipath/bindings was updated by multipathd with information for the other LUNs, so I kept the same mpathX device names.
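
For reference, the binding I prepared was just a one-line entry in /var/lib/multipath/bindings mapping the mpath0 alias to the WWID of the new LUN0 (the WWID below is made up), roughly:

mpath0 360060e8005xxxxxx0000xxxxxx000000

and then an initrd rebuild so the updated bindings and multipath.conf were included:

mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)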

If SAN-side mirroring is not available I would think the best option is to do an OS redeploy on the new SAN and follow the standard LVM migration strategy for data volumes.

Take care if you're running Oracle ASM: if your mpath device names change during the migration, you will need to do a rescan so the ASM devices are recreated.
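
If ASMLib is in use (an assumption on my part; raw/udev-managed ASM setups differ), that rescan is along the lines of:

/etc/init.d/oracleasm scandisks   # re-detect ASM disk labels on the new device paths
/etc/init.d/oracleasm listdisks   # confirm all expected disks are visible again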

Martin

Hello, would this work if I have striped LVs? I've read that the move can end up with all the stripes on a single PV and that I should use --alloc. Should I try to convert to linear first and then move the data? We are not using striping any more on the new SAN.

 

 

Thanks in advance.

Thanks Martin.  I have used a similar approach with the new initrd and bindings file when virtualizing our storage to perform sub-system migrations.