Migrate standard RHEL installation from one hard disk to another

*** DON'T DO ANY OF THIS, AS YOU WILL ALMOST CERTAINLY WRECK YOUR SYSTEM IF YOU DO! ***

You have one standard installation of RHEL and you need to migrate the installation from one hard disk to another; this is required due to a technology improvement.

The server is in production and runs critical services, so it is important to minimize the migration window. This procedure requires only one reboot if you want to apply all changes immediately, but you can also schedule the restart for later.

For x86_64

Scenario:

vda -> Old Disk
vdb -> New Disk
centos -> root volume group

Partitioning:

# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   18G  983M   17G   6% /
devtmpfs                 487M     0  487M   0% /dev
tmpfs                    497M     0  497M   0% /dev/shm
tmpfs                    497M  6.7M  490M   2% /run
tmpfs                    497M     0  497M   0% /sys/fs/cgroup
/dev/vda1                497M  164M  333M  33% /boot
tmpfs                    100M     0  100M   0% /run/user/0
tmpfs                    100M     0  100M   0% /run/user/1000
# fdisk -l
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    41943039    20458496   8e  Linux LVM

Steps:

Clean yum cache:

# yum clean all

Clone partitioning scheme:

# sfdisk -d /dev/vda | sfdisk --force /dev/vdb
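
If you want to verify the clone before continuing, you can dump the new disk's partition table (an optional check, not part of the original procedure); the start/end sectors and partition types should match the output of sfdisk -d /dev/vda:

# sfdisk -d /dev/vdb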

Move Logical Volume to new disk:

# pvcreate /dev/vdb2
# vgextend centos /dev/vdb2
# pvmove /dev/vda2
# vgreduce centos /dev/vda2
# pvremove /dev/vda2
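
pvmove can take a long time on a large volume. After it completes (and before the vgreduce, if you want to be careful), you can confirm that no extents remain on the old PV with standard LVM reporting (an optional check); once all data has moved, PFree for /dev/vda2 should equal its full size:

# pvs -o pv_name,vg_name,pv_size,pv_free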

Clone /boot:

# umount /boot/
# dd if=/dev/vda1 of=/dev/vdb1 bs=512 conv=noerror,sync
# mount /boot
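
Optionally, you can verify that the copy is bit-identical; since both partitions have the same size here, cmp should finish silently on success:

# cmp /dev/vda1 /dev/vdb1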

Copy boot sector:

# dd if=/dev/vda of=/dev/vdb bs=1 count=512
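
Note that this copies the entire 512-byte MBR, including the partition table that sfdisk already wrote to /dev/vdb; since both tables are identical here, that is harmless. If you prefer to copy only the 446-byte boot code area and leave the cloned partition table untouched, an alternative (not part of the original procedure) is:

# dd if=/dev/vda of=/dev/vdb bs=446 count=1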

Install GRUB in new disk:

# grub2-install /dev/vdb

Sync changes:

# sync

Reboot your physical or virtual machine. Make sure that your new disk is the default boot device, or remove the old disk, but don't delete its data; it can be useful in a rollback situation.

For POWER
Scenario:

sda -> Old Disk
sdb -> New Disk
ca -> root volume group

Partitioning:

# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ca-root   28G  1.1G   27G   4% /
devtmpfs             449M     0  449M   0% /dev
tmpfs                495M     0  495M   0% /dev/shm
tmpfs                495M   12M  484M   3% /run
tmpfs                495M     0  495M   0% /sys/fs/cgroup
/dev/sda2            497M  143M  354M  29% /boot
tmpfs                 99M     0   99M   0% /run/user/0
# fdisk -l
  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048       10239        4096   41  PPC PReP Boot
/dev/sda2           10240     1034239      512000   83  Linux
/dev/sda3         1034240    62914559    30940160   8e  Linux LVM

Steps:

Clean yum cache:

# yum clean all

Clone partitioning scheme:

# sfdisk -d /dev/sda | sfdisk --force /dev/sdb

Move Logical Volume to new disk:

# pvcreate /dev/sdb3
# vgextend ca /dev/sdb3
# pvmove /dev/sda3
# vgreduce ca /dev/sda3
# pvremove /dev/sda3

Clone PPC PReP Boot partition:

# dd if=/dev/sda1 of=/dev/sdb1 bs=512 conv=noerror,sync

Clone /boot:

# umount /boot/
# dd if=/dev/sda2 of=/dev/sdb2 bs=512 conv=noerror,sync
# mount /boot

Copy boot sector:

# dd if=/dev/sda of=/dev/sdb bs=1 count=512

Install GRUB in new disk:

# grub2-install /dev/sdb

If you receive the message "grub2-install: error: the chosen partition is not a PReP partition.", you can try installing to the PReP partition instead:

# grub2-install /dev/sdb1
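
Before retrying, it may help to confirm which partition on the new disk actually carries the PReP Boot type (0x41); an optional check:

# fdisk -l /dev/sdb | grep PReP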

Sync changes:

# sync

Reboot your physical or virtual machine. Make sure that your new disk is the default boot device, or remove the old disk, but don't delete its data; it can be useful in a rollback situation.

Responses

Why, this is pretty much identical to what we used to do to migrate SAN-bootable systems from one SAN storage to another, before we installed a storage virtualization device to our SAN that should pretty much remove the requirement to do this. The only missing part is the reconfiguration of FibreChannel or iSCSI HBAs for the new boot LUN, and that is specific to the HBA model.

I think there is only one trick you may have missed: with x86_64 hardware using the traditional MBR boot scheme, before running grub2-install, edit the GRUB device.map file to make it say that the new boot disk is associated with the GRUB identifier (hd0). Because that's what it will be when the old disk is removed and the system is booting from the new one.
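
On a RHEL 7 / GRUB2 system that file is /boot/grub2/device.map (on GRUB Legacy it is /boot/grub/device.map). Using the x86_64 scenario above, the edited file would look something like this (a sketch, assuming the new disk /dev/vdb will be the boot disk):

# cat /boot/grub2/device.map
(hd0)      /dev/vdb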

(In the MBR boot scheme, the GRUB identifiers map directly to hard disk numbers used with BIOS function calls: (hd0) is BIOS disk 0x80, (hd1) is 0x81, and so on. And in the MBR boot scheme, pretty much the only thing that can be universally relied on when it comes to selecting the hard disk to boot from is "the BIOS and its extensions will rearrange the list of hard disks in such a way that the disk selected for booting will be disk 0x80."

So, whichever hard disk you select in the BIOS settings as the disk to boot from will be (hd0) for GRUB when the system actually makes the boot attempt.)

Yes, I assume "Red Hat" wants to say "not supported or recommended". So, instead of saying "Don't try this...", it may be better to put it as "Not officially supported; you may try at your own risk".

Adding to what "Matti" wrote, there is another point that I wish to mention: correcting "/etc/fstab" so that it points to the new boot device, or better, using "UUID" instead of the device name there.
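
For example, a /boot entry keyed by UUID rather than device name would look something like this (a sketch; the UUID and filesystem type shown are placeholders, use the values that blkid reports for your own boot partition):

# blkid /dev/vdb1
/dev/vdb1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"

# grep boot /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext4  defaults  1 2

Note that dd preserves the filesystem UUID, which is why a UUID-based fstab keeps working after the clone.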

Hello... I have repeated this solution a lot of times... it always works like a charm... I never needed to make any change to /etc/fstab. I will prepare a video to have evidence... =)

Right, unless you have labels or UUIDs in place of the block device name (for the boot device); otherwise, you would need to edit fstab.

Hi guys, I have tried this procedure and it works, but now I'm facing another issue: my source VM has only one disk coming from just one path, and my new volumes will have at least two paths. How can I address this?

Hello. I have a problem. I moved an RHEL 6.8 installation with the above procedure from a SAN disk to local disk /dev/sda. After reboot, pvs shows /dev/mapper/3600508b1001c346346563636363663p2, not /dev/sda2, and the command pvdisplay /dev/sda2 reports: Failed to find device for physical volume "/dev/sda2". I rebooted the system again and now it is fine; device sda is visible and /dev/mapper/3600508b1001c346346563... is not visible. Where is the problem?

There is a pvremove command documented in the steps; I presume that you ran it, and it removed your earlier PV. Otherwise, run the 'pvscan' command and see if that can pull up the earlier physical volume.

The pvscan command does not recover the right name (/dev/sda2). After a few machine reboots it is still /dev/mapper/3600508b1001c346346563...

I think I've not understood your statement; do you mean that instead of showing the PV as '/dev/sda2' it shows it as '/dev/mapper/3600508b1001c346346563...'?

I also advise you to open a new thread for your issue so that it can be addressed properly by the community.

All,

WARNING: The original poster describes a method that only works for KVM virtual guests. Still, it is at your own risk.

This does not imply that it works for SAN connected physical servers. Multipath devices are not the same as a VIRTIO disk.

A multipath device is "a cluster" of physical paths to a LUN on a storage box. Each sd.. device is a single physical path.

You should migrate devices like '/dev/mapper/3600508b1001c346346563...'; otherwise you may end up writing LUN content from one path to another, breaking everything.
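
In other words, on a multipath SAN system the LVM steps from the article would reference the mapper devices, never the individual sd paths; a sketch under that assumption (mpatha/mpathb and the vg00 name are illustrative, check yours with multipath -ll):

# pvcreate /dev/mapper/mpathb3
# vgextend vg00 /dev/mapper/mpathb3
# pvmove /dev/mapper/mpatha3
# vgreduce vg00 /dev/mapper/mpatha3

(The exact partition suffix, e.g. mpathb3 vs. mpathbp3, depends on your multipath naming configuration.)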

Regards,

Jan Gerrit

Hello: I wrote this article because it was a requirement of one user with a real POWER system; the user confirmed that it works great in a real environment. The disks are "VIOS disks".

It would be a great exercise to update this document to cover SAN too.

Hi to all, this is my procedure; it is what I do for migrating boot from one storage to another:

Cloning boot disk with EFI:

Old Disk:

mpath0 (360060e8016004d000001004d00001201) undef HP ,OPEN-V
size=120G features='1 queue_if_no_path' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
  |- 0:0:0:16386 sda 8:0  undef ready running    <==== I'll use this path
  |- 1:0:0:16386 sde 8:64 undef ready running
  |- 0:0:1:16386 sdb 8:16 undef ready running
  `- 1:0:1:16386 sdf 8:80 undef ready running

New Disk:

mpatha (360060e8012a2d3005040a2d300001240) dm-1 HITACHI ,OPEN-V
size=120G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:2:10 sdc 8:32  active ready running    <==== I'll use this path
  |- 1:0:3:10 sdh 8:112 active ready running
  |- 0:0:3:10 sdd 8:48  active ready running
  `- 1:0:2:10 sdg 8:96  active ready running

lnxsgtw12:/root> # df -hP
Filesystem                Size  Used Avail Use% Mounted on
/dev/sde4                 9.8G  219M  9.0G   3% /
/dev/mapper/vg00-lvusr    9.8G  4.4G  4.9G  47% /usr
/dev/sde2                 976M  228M  682M  25% /boot
/dev/sde1                1022M  9.5M 1013M   1% /boot/efi
/dev/mapper/vg00-lvvar    9.8G  2.0G  7.4G  21% /var
/dev/mapper/vg00-lvhome   9.8G   37M  9.2G   1% /home
/dev/mapper/vg00-lvcrash   16G   45M   15G   1% /var/crash
/dev/mapper/vg00-lvperf   2.0G  6.0M  1.8G   1% /var/opt/perf

1.- Check partitions; the New Disk should have no partitions (it is new, remember?)

New Disk:

lnxsgtw12:/root> # fdisk -l /dev/mapper/mpatha

Disk /dev/mapper/mpatha: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Old Disk:

lnxsgtw12:/root> # fdisk -l /dev/sda
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

#         Start          End    Size  Type             Name
 1         2048      2099199      1G  EFI System       EFI System Partition
 2      2099200      4196351      1G  Microsoft basic
 3      4196352     71305215     32G  Linux swap
 4     71305216     92276735     10G  Microsoft basic
 5     92276736    192948223     48G  Linux LVM

New Disk:

lnxsgtw12:/root> # fdisk -l /dev/sdc

Disk /dev/sdc: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

2.- Move the contents of /etc/multipath/ to /root/backup

mkdir /root/backup
mv /etc/multipath/* /root/backup

3.- Clone the whole disk

time dd if=/dev/sda of=/dev/sdc bs=1M

122880+0 records in
122880+0 records out
128849018880 bytes (129 GB) copied, 362.829 s, 355 MB/s

real    6m2.843s
user    0m0.102s
sys     1m54.389s
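
If you want progress reporting from a long-running dd, you can send it SIGUSR1 from another terminal and it will print its current I/O statistics (standard GNU dd behavior):

# pkill -USR1 -x dd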

4.- Check the partitions on the New Disk

lnxsgtw12:/root> # fdisk -l /dev/sdc
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

#         Start          End    Size  Type             Name
 1         2048      2099199      1G  EFI System       EFI System Partition
 2      2099200      4196351      1G  Microsoft basic
 3      4196352     71305215     32G  Linux swap
 4     71305216     92276735     10G  Microsoft basic
 5     92276736    192948223     48G  Linux LVM

lnxsgtw12:/root> #

5.- Create the EFI boot manager entry; the label must be different from the previous one

efibootmgr -c --disk /dev/sdc --part 1 -L "Red Hat Enterprise Linux 7.3"
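
You can then list the boot entries and boot order to confirm the new entry was created (an optional check):

# efibootmgr -v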

6.- Shut down and remove the old disk (ask the storage admin to remove all paths)

7.- Power on the server and enjoy.

Well, I wrote this up on my blog: http://hpux-howto.blogspot.com/2018/07/how-to-migrate-boot-partition-to-other.html