Upgrade from RH5.10 32 Bit to RH6.8 64 Bit

I have been asked to find out if it is possible to do an upgrade on our Enterprise 5.10 going from 32 Bit to Enterprise 6.8 64 Bit.

To complicate this, the system is dual server cluster using High Availability and a Fibre SAN for the drives.

All of the drives are in the storage array and connected via Fibre. Below is the layout of the drives, which are all mirror sets; /, /boot, /dev, and /drv2 are all on the same mirror set.

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  127G   20G  102G  17% /
/dev/mapper/mpath0p1              99M   68M   26M  73% /boot
tmpfs                             12G     0   12G   0% /dev/shm
/dev/mapper/drv2_vg-drv2_lv       50G   18G   33G  35% /drv2
/dev/mapper/drv3_vg-drv3_lv      247G   18G  217G   8% /drv3
/dev/mapper/drv4_vg-drv4_lv      247G  164G   71G  70% /drv4
/dev/mapper/drv5_vg-drv5_lv      247G   72G  162G  31% /drv5
/dev/mapper/arch_vg-arch_lv      247G   16G  218G   7% /arch
/dev/mapper/arch2_vg-arch2_lv    247G  467M  234G   1% /arch2

We are trying to do this so we don't lose our existing printers and users. The printers are all installed in the default location of /etc/cups, but the users have been relocated to /drv2/usr. /drv2 mounts to whichever system in the cluster is active, so the users are always available to either server. All other drives only mount to the live server.

Is it possible to do this as an in-place upgrade without having to recreate everything, including the clustering?

If we do have to install on a new, non-clustered box, would there be a way to migrate the users to it, upgrade our clustered system, and then migrate the users back to the cluster?

Our applications and data are all stored on the other volume groups, so they should be available just by mounting them on whichever system is live.

Responses

The only in-place upgrade that Red Hat officially supports is from RHEL 6 to 7: CHAPTER 1. HOW TO UPGRADE

Check out this KB for more details on other upgrade pros and cons: Does Red Hat support upgrades between major versions of Red Hat Enterprise Linux?

Thank you for the links. I didn't think an in-place upgrade was going to work, but I had to verify it for my IT director. Now to figure out the best way of doing the upgrade while users can keep working. Because of the clustering, it is a little more complicated than just adding a box with the new version, migrating the users and printers over, and being done.

I think the best way will be to add a box, install the version we want, migrate the users and printers, and test. Users can then work off that box while the clustered systems are upgraded, after which the users and printers are migrated back to the cluster.
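If we go that route, something along these lines could collect the local accounts and the CUPS configuration for transfer to the staging box. This is only a sketch: it assumes local accounts in /etc/passwd with UIDs of 500 and up (the RHEL 5 default for regular users) and stock CUPS paths; the function names and staging directory are made up for illustration.

```shell
# Sketch: gather regular-user accounts and CUPS config into a staging
# directory for transfer to the new box. Function names are illustrative.

collect_users() {
  # $1 = source /etc directory, $2 = staging directory,
  # $3 = minimum UID/GID for "regular" users (default 500, the RHEL 5 default)
  mkdir -p "$2"
  awk -F: -v min="${3:-500}" '$3 >= min' "$1/passwd" > "$2/passwd.add"
  awk -F: -v min="${3:-500}" '$3 >= min' "$1/group"  > "$2/group.add"
  # shadow(5) has no UID field, so select entries by the usernames kept above
  cut -d: -f1 "$2/passwd.add" | while read -r u; do
    grep "^$u:" "$1/shadow"
  done > "$2/shadow.add"
}

collect_printers() {
  # $1 = source /etc directory, $2 = staging directory
  # Archive the whole CUPS tree: printers.conf, classes.conf, and PPDs.
  mkdir -p "$2"
  tar czf "$2/cups-config.tar.gz" -C "$1" cups
}

# Usage (as root on the old box):
#   collect_users    /etc /tmp/migrate
#   collect_printers /etc /tmp/migrate
```

On the new box, the *.add files would be appended to the matching files under /etc (after checking for UID/GID collisions with accounts already there), and the tarball unpacked over /etc/cups followed by a CUPS restart.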

What is the reason for this upgrade? If it is for the benefits of ext4, then another option is converting the filesystems from ext3 to ext4. The KB below has the details if that is the case:

Can I upgrade my ext3 filesystem to ext4 in Red Hat Enterprise Linux?
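For reference, the conversion that KB describes boils down to enabling the ext4 on-disk features on the unmounted ext3 filesystem and then forcing a full fsck. A minimal sketch, wrapped in a function with an illustrative name; this must only be run against an unmounted device, with a verified backup first:

```shell
# Sketch: convert an unmounted ext3 filesystem to ext4 in place.
# Run as root against an *unmounted* device, with a verified backup.

convert_to_ext4() {
  # $1 = block device (or image file) holding an unmounted ext3 filesystem
  # Enable the ext4 on-disk features on the existing filesystem.
  tune2fs -O extents,uninit_bg,dir_index "$1" || return 1
  # A full fsck is mandatory afterwards; exit status 1 from e2fsck just
  # means errors were corrected, which is expected after enabling uninit_bg.
  e2fsck -fp "$1"
  [ "$?" -le 1 ]
}

# Usage (device name illustrative, after `umount /drv3`):
#   convert_to_ext4 /dev/mapper/drv3_vg-drv3_lv
# then change the fstype for /drv3 to ext4 in /etc/fstab and remount.
```

Note that only files written after the conversion use extents; existing files keep their ext3 block maps, so the performance benefit arrives gradually as data is rewritten.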

The application that runs on this system now supports a newer Red Hat kernel along with 64-bit, while the older version being replaced only supported Red Hat 5.x with a lower kernel revision. We couldn't even upgrade to the newest 5.x kernel because the application didn't work with it. Since we are upgrading the application, we want to take full advantage of the hardware's processing power and memory.

Ok... But if you add an upgraded node to your cluster (and haven't upgraded the application, yet), doing an administrative failover to the node running the upgraded OS would fail, wouldn't it?

The upgrade would be done after upgrading the application. I feel the better way would be to install fresh on a new system, install the new application, and migrate the users to the new box. Then, once everything on the application side has been upgraded, the cluster can be upgraded to the new OS and the users, printers, and application migrated back to the cluster.

The other question I have is whether the hard drives could be mounted by both the existing system and the new system at the same time, since the application is currently stored on non-shared logical volumes that are only mounted on the live system in the cluster. There is only a single drive set that is mounted by both servers at once: that is where both operating systems are installed, and one shared drive on that media holds the shared home directories for our users.

Hello Martin,

Most filesystem types may only be mounted read-write by one system at a time. If all systems mount it read-only, it could be mounted by multiple systems.

The filesystems may be shared between systems with NFS.
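Assuming the goal is to let the new box reach the shared data while the live node keeps ownership of it, a minimal NFS sketch could look like the following; the hostnames "livenode" and "newbox" are illustrative, and the export options would need tuning for your site.

```shell
# On the live cluster node (the one that currently has /drv2 mounted),
# export it to the new box. Hostnames are illustrative.
#
# /etc/exports:
#   /drv2  newbox(rw,sync,no_root_squash)

exportfs -ra        # re-read /etc/exports
service nfs start   # RHEL 5/6 init-script style

# On the new box:
mount -t nfs livenode:/drv2 /drv2
```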

Regards,

Marc

Generally, if the file system is cluster-aware, such as GFS2 built on top of a clustered volume, then it supports mounting on multiple systems at the same time. That being said, what cluster suite is being used? If it is RHCS, you might raise a case with Red Hat to get official expert-level advice and support. If the cluster software is something else and the underlying device (your LVM volume) is not cluster-aware, then mounting it on multiple systems at the same time will fail.

There is only one drive that is shared to both systems in the cluster; it uses the GFS2 file system. All other drives use ext3. If I remember correctly, the person who set up the system did this for drive performance.
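That distinction can be checked directly: the filesystem type column of /proc/mounts shows which mounts are GFS2 (safe on multiple nodes) and which are ext3 (single-node only), and on RHCS `clustat` identifies the cluster name, members, and services. A small sketch, with the parsing in a function so it works against any file in /proc/mounts format:

```shell
# Sketch: flag mounted filesystems as cluster-aware (gfs2) or
# single-node (ext2/3/4). Function name is illustrative.

list_fs_types() {
  # $1: a file in /proc/mounts format (device mountpoint fstype options ...)
  awk '$3 == "gfs2" || $3 ~ /^ext[234]$/ {
         flag = ($3 == "gfs2") ? "cluster-aware" : "single-node"
         print $2, $3, flag
       }' "$1"
}

# On a live node:
#   list_fs_types /proc/mounts
#   clustat    # RHCS: cluster name, member nodes, service status
```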
