Issues with updating using offline repositories for RHEL 6.4 - 6.9

Hello,

I'm managing a bunch of RHEL workstations in an offline network. We have a server running repositories for Base, Updates, and EPEL.
I'm in the process of updating a few hundred systems from 6.4 to 6.9 (don't laugh, these are the cards I've been dealt!), and a fairly high percentage of them are failing the updates; maybe 1 out of 5 or 1 out of 6 can't successfully complete a full update. The ones that fail get about 10 updates in (out of anywhere from 500 - 1000 needed updates) and then the system freezes. When I physically go look at the machine, there's usually something about a kernel panic on the screen, and it takes a hard reboot with the power button to get it back up again.
Is it likely this is just too much of an upgrade at once? Is the leap from 6.4 to 6.9 just too big of a jump?

Responses

I should add that after this freezing happens a couple of times midway through patching, the yum database quickly starts getting messed up, and it starts throwing a bunch of duplicate version errors and missing requires.
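
For reference, a yum database in that state can usually be straightened out with the yum-utils helpers before retrying; a rough sketch, assuming the yum-utils package is available in the offline repos:

yum install yum-utils            # provides yum-complete-transaction and package-cleanup
yum-complete-transaction         # finish or roll back any interrupted transactions
package-cleanup --dupes          # list duplicate package versions left behind
package-cleanup --cleandupes     # remove the older duplicates
yum check                        # verify the rpm/yum database is consistent again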

Hi Paul,

You "nailed" it : "Is the leap from 6.4 to 6.9 just too big of a jump?" ... yes it is - indeed ! :)

Regards,
Christian

That's not the answer I wanted to hear :-( but it is probably the right answer! I find it odd, though, that maybe 80% or so still succeed anyway.

Maybe it's better just to push for an upgrade to the 7.x series ....

Hi Paul,

I assumed you wanted to hear something else, but I think good support should be based on truth.
Well, skipping minor versions when upgrading leads to problems of varying severity in nearly all cases.
Also, switching to RHEL 7 is a very good idea; it includes many improvements over RHEL 6.

Cheers :)
Christian

One thing I noticed: if I try to update the tzdata package by itself and it installs correctly, that seems to be a good indicator of whether the rest of the updates are going to succeed. If tzdata updates quickly, the rest of the updates work; if tzdata is very slow to update and eventually fails, a full yum update is also going to fail. This sounds like it should be a symptom of something, but I'm not sure what.
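
In command terms, that pre-flight check amounts to something like the following (just a sketch; the -y flag and running the full update only after a quick tzdata pass are one way to do it):

time yum -y update tzdata        # if this crawls or fails, expect the full update to fail too
yum -y update                    # run the full update only once the tzdata canary is quick and clean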

If anyone is following this thread, I found the source of this issue. I had 2 ssh windows open when trying to update the tzdata package: "tail -F /var/log/messages" was running in one window and "yum update tzdata" in the other. When I started the update, /var/log/messages went berserk, generating maybe hundreds of audit events per second. So I stopped the auditd service and the updates ran just fine, even with the big jump from 6.4 to 6.9. Looks like the security team went a little too heavy on the stuff they wanted to log.
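
Roughly, the sequence is this (stopping auditd may need sign-off from your security team; use the init script rather than killing the daemon):

tail -F /var/log/messages        # in a second terminal, to watch for the flood of audit events
service auditd stop              # temporarily stop auditing
yum -y update
service auditd start             # re-enable auditing once the update finishes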

Hi Paul, thanks for sharing what you investigated, the root cause you found, and how you applied the workaround. :)

Regards,
Christian

I've brought some systems from RHEL 6.2 to 6.9 after they were "discovered" at one customer site. In that case, those systems did okay with the upgrade.

Occasionally one may have an issue with a few rpms, and I've had to either use "--skip-broken" or update what is possible one package at a time rather than all at once, then look at the remaining failures and resolve them individually.
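
Concretely, the fallback form is just the standard flag:

yum update --skip-broken         # update what resolves cleanly and skip the packages that don't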

I concur with Christian that RHEL 7 ought to be considered. That being said, in the case of a failed rpm or rpms, one method I have used (not always) is to do one update at a time; this will obviously bail on any that fail (see the one-liner below).

There are ways to deal with rpms that fail during a flat yum update. For instance, examine the repository definitions under the /etc/yum.repos.d/ directory and disable any that are orphaned or out of date.
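
Something along these lines (the repo id below is only a placeholder; yum-config-manager comes from yum-utils, or you can simply set enabled=0 in the .repo file):

yum repolist all                          # list every configured repo and whether it is enabled
yum-config-manager --disable old-repo-id  # disable a stale repo (old-repo-id is a placeholder)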

NOTE: The method below should be treated with caution and not used reflexively; examine the output of the failed yum update and see whether the rpm that fails is something significant or not.

# prints one "yum update <package>" line per pending update; remove the echo to actually run them
yum check-update | egrep -v '^Loaded|^$' | perl -lane 'system("echo yum update $F[0]") if @F == 3'

Some systems (depending on their level of configuration paranoia) may be set to "panic" if audit is turned off. You can temporarily turn off auditing on such systems, but be aware of what triggers that will hit, inform the necessary parties, and/or take proper precautions (for example, set the panic option off temporarily and reboot; some audit configs won't take effect until after a reboot, depending on the configuration paranoia). I've taken systems from 6.2 to 6.9, but there were a few that required "special handling". You can also increase the number of audit events that are acceptable, see this doc. Mercifully, we're able to patch pretty consistently.
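
The audit knobs in question look roughly like this (the numbers are only examples; put the equivalent -f/-b/-r settings in /etc/audit/audit.rules to make them persist across reboots):

auditctl -s          # show current status, including the failure (panic) mode and backlog limit
auditctl -f 1        # failure mode 1 = printk instead of 2 = panic, while you patch
auditctl -b 16384    # raise the backlog limit so bursts of audit events aren't dropped
auditctl -r 0        # 0 = no rate limit; a non-zero value caps audit events per second
auditctl -e 0        # temporarily disable auditing entirely, if policy allows it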

Regards,

RJ
