Supported major version upgrades

Started 2011-07-26 by Best Buy Unix Team. Latest response 2011-09-08.

Why do we have to rebuild hundreds of machines to upgrade properly?

Joseph Spriano, 27 July 2011:

I was shocked when Red Hat told me that to go from 5.x to 6.x I had to do a full reload of my system, so for now we have put off moving to 6.x. I am glad that in AIX I don't have to do this: we have many servers, and the downtime would be unacceptable. In AIX we can clone the OS to another drive and do a migration, then just reboot onto the new OS, with the old OS as a fallback if needed. Or we can do an in-place update to the next major level or to a patch level. It would be nice to see this capability in Red Hat. Our environment is small now but growing.

James Nauer, 28 July 2011:

I agree that in-place upgrades should be a supported option, but only because virtualization is changing the whole concept of hardware upgrades. We used to do major OS upgrades only when we replaced the hardware (on a nominal 5-year cycle, frequently 6-7 years in reality); then the fresh OS install on the new hardware is no big deal. Apps can be tested and modified or upgraded as needed before the new systems go into production. But virtualization changes that game: if the "hardware" never really gets replaced (even if physical hosts get upgraded every few years, the VMs never get "replaced" as such), there is no OS install needed, and thus no built-in opportunity to reinstall the OS from scratch.
That said, our organization has always done a full OS reinstall rather than an in-place upgrade, even on systems that support in-place upgrades (Windows 2xxx Server, Solaris 8+). We chose to do this to enforce configuration consistency, since a system installed as version X and upgraded to X+1 is invariably different in some ways from a system installed as X+1. We've done the reinstall thing on maybe 5 machines in the last 9 years, so I don't really sweat it that much.

The other thing we have done to take the sting out of OS replacement (rather than in-place upgrades) is to separate the applications from the OS as much as possible. For example, critical web servers do not use the RHEL version of Apache at all; we have our own builds of Apache, Perl, and scads of modules for each, which live on an NFS share and are patched, tested, and upgraded on a schedule entirely separate from the OS patch and upgrade schedules (RHEL or Solaris), driven by the application needs, not the OS. With the applications separated out, we can wipe and replace the OS (the / and /var partitions, leaving any local app/data partitions alone) in a matter of minutes via Kickstart, then just drop backups of a dozen or so key config files into /etc and we're back in business with an entirely new OS.

Finally, think about what happens on major OS upgrades. Even if it could happen, I'm not sure I'd trust an automated rewrite of all of the config files to take a RHEL 5 LDAP client to RHEL 6 with SSSD. You're going to have to rewrite, test, and replace all of those config files anyway, so it's not much different doing them in a Kickstart post-install script than in the aftermath of a hypothetical "yum upgrade-release" operation.

So I have mixed feelings. I want this as a feature (more options is usually better), but I see no problem with reinstalling the OS every 5-7 years (and skipping versions as appropriate).
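A rough sketch of the wipe-and-restore Kickstart approach described above, assuming the app/data partitions live on a separate disk; the device names, NFS paths, and backup layout are all hypothetical examples, not a tested configuration:

```shell
# ks.cfg fragment (illustrative; device and path names are made up)
clearpart --drives=sda                       # wipe only the OS disk
part /    --fstype=ext3 --size=8192 --ondisk=sda
part /var --fstype=ext3 --size=8192 --ondisk=sda
part swap --size=4096 --ondisk=sda
# disks holding /apps and /data are deliberately not listed,
# so their partitions and contents survive the reinstall

%post
# Restore the dozen-or-so key config files from a backup share,
# and the box is back in business on the new OS.
mount -o ro nfs-server:/backups /mnt/backups
cp /mnt/backups/$(hostname -s)/etc/* /etc/
umount /mnt/backups
%end
```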
Certain Other Distros(tm) that require major version upgrades every 18-36 months are another story, and are pretty much verboten in our data centers as a result.

David Lyder, 10 August 2011:

Hi! I work with large data sets (typically several hundred GB) that are stored on NTFS. I always cringe when I have to use Red Hat to open these files, because I have to make copies of them on another system in a format that Red Hat understands (e.g., ext3). I find this a serious flaw in Red Hat, and I have seriously considered migrating to another Linux distribution that supports NTFS (as many do). Cheers, David

epjhazen, 10 August 2011:

Why are you not using FUSE or another NTFS library? Why would you consider migrating rather than locating the packages that provide the support? And why was this posted under a thread about upgrading RHEL versions?

davenport.redhat, 12 August 2011:

First off, I agree with Jesse; I'm not sure why an NTFS issue was posted under this discussion. That said, it is interesting and not hard to configure at all, and it would be nice to include native NTFS support in RHEL. Secondly, I agree with enabling version upgrading; I think it should be a new yum plugin extension, something like "yum version-upgrade". I have always just rebuilt my servers to upgrade versions, but if there were a yum option to do a clean upgrade I would definitely opt for it to save time.

email@example.com, 6 September 2011:

Fedora's PreUpgrade would handle this well, so maybe this should be a request to bring it into RHEL 7? I've used it to handle Fedora upgrades a couple of times and it has been problem-free, even across the change from init to systemd. Edit: I should add that it can do remote and CLI upgrades.

Steve Alder, 7 September 2011:

Fedora has a 6-month release cycle. RHEL has a 7-10 YEAR release cycle.
Every environment has those servers that never seem to be available for the downtime, testing, and certification of a new OS. That pain should fall to whoever is blocking the correct way to do this. Why isn't it clustered? Why is it so critical that it cannot allow for appropriate maintenance? Code written for RHEL 4 would more than likely require at least tweaking to run effectively on RHEL 5/6. Physical and virtualized hardware changes significantly over a 7-10 year window. Upgrades introduce new libraries, drivers, configuration methodologies, and so on for a reason. Sorry, but this sounds like a technological fix for a behavioral problem; we should be educating our "customer".

Also, AIX has a very tight (and costly) grip on its hardware platform. That allows for some nice features. It might also be one of the reasons they cannot keep pace in the server market.

firstname.lastname@example.org, 7 September 2011:

You never mentioned an issue with PreUpgrade, so I'm not sure whether you have specific concerns about it; the frequency of a distro's releases alone doesn't invalidate a tool. From what I can tell, each version of RHEL has a 10-year life cycle but a 2 to 3.5+ year release cycle. There will always be exceptions made for servers that "must stay up 24/7" despite what makes the most sense to us, and I agree about where that pain should fall and why. I also agree that major version upgrades introduce more work and therefore reticence, but I think there is still a desire for a technological solution in this area. People love to mention how Debian/Ubuntu can roll through upgrades; I assume there's something to that. I just think it's good to have options, and I think PreUpgrade is a lot safer than a plain yum-based upgrade at this point. The cleanest option will always be a fresh install, but there may be reasons why that isn't feasible in certain cases.
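For readers unfamiliar with PreUpgrade: on Fedora, the CLI workflow looks roughly like the sketch below. The release name is just an example, and this is illustrative of the Fedora tool, not of anything currently supported on RHEL:

```shell
# Install the tool; with no arguments, preupgrade-cli prints
# the releases it can upgrade the current system to
yum install preupgrade
preupgrade-cli

# Download the new release's installer and packages, stage them
# locally, and arrange for the upgrade to run on the next boot
preupgrade-cli "Fedora 15 (Lovelock)"
reboot
```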
Phil Jensen, 7 September 2011:

I agree with every single one of Steve's comments. In a very large EL enterprise environment, we would be PXE-installing new releases and using Puppet for installation and application configuration management. We would not be rolling upgrades across thousands of boxes when we can instantiate fresh VMs in less time than it would take to click-next-to-continue through a new install.

Bryan Smith, 8 September 2011:

The problem isn't the platform. I've done EL3->EL4, EL4->EL5, etc., with the Anaconda installer. The problem is ISV software: changing the ABI in GCC/libstdc++, for example, often breaks ISV software. That is why Red Hat does not support such upgrades, at least not without services. Red Hat does offer professional services to analyze the supportability of major ABI changes.
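To make the ABI point concrete: one quick way to see whether a third-party binary will survive a major upgrade is to compare the libstdc++ symbol versions the binary requires against those the new release's runtime provides. The binary path below is a hypothetical example:

```shell
# Symbol versions a third-party binary was linked against
# (/opt/isvapp/bin/server is a made-up example path)
objdump -T /opt/isvapp/bin/server | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu

# Symbol versions the installed runtime actually provides
strings /usr/lib64/libstdc++.so.6 | grep '^GLIBCXX_' | sort -Vu

# Any version required by the binary but absent from the runtime
# means the app will refuse to start after the upgrade.
```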