RHEL server yum update 7.5 to 7.6 often fails on 2 RPMs (ipa-client and rhn-client-tools)


Leading with the question: has anyone running a disconnected Satellite scenario ever seen a client fail to upgrade from RHEL 7.5 to RHEL 7.6 because of two RPMs, ipa-client and rhn-client-tools? Details follow.

Our satellite servers are not the issue. The SHA256 sums for the two RPMs on the satellite match what Red Hat publishes, so it is not the fault of those RPM packages either. We keep our satellites patched routinely, and we patch our client systems routinely.

Some background...

We have disconnected Satellite servers on several networks, all running 6.2.x (Red Hat, amusingly, recently released Satellite 6.2.16). The Satellite server is not the issue here.

We acquire patches from our public-facing Satellite server using a content view export, carry the export to our disconnected satellites, and verify that every RPM channel exports and imports successfully, from the point we receive content on the public-facing satellite to the point we import it on the disconnected ones. (Again, the Satellite servers are not the issue here.) We've been doing this for years, with generally no issue.

The issue we've been facing lately is that when we update RHEL servers from RHEL 7.5 (patched to November 2018) to RHEL 7.6 (the mid-December 2018 patch set), two culprit RPM packages cause all of the yum update failures.

We had two IDM servers fail on yum update, both in an identical way.

We ran a normal yum update, and in both instances it failed (hung) on the ipa-client RPM. It never moved past that point in its progress during the yum update.

So we called our TAM at Red Hat (yes, he got us through the issue). The method we used was the one our TAM suggested in this instance (yes, there are several):

[root@idmserv01] # yum history
<output of yum history, we looked for the one that failed with over 400 rpms and made note of the failed transaction number>

So let's say the transaction id that had the failed instance was #22
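As a sketch of that step: the failed transaction's ID can be picked out with a small filter. This assumes the typical pipe-separated layout of "yum history list" output, where the Altered count of a transaction that ended badly carries an "E" (error) or "**" (aborted) flag; failed_ids is a hypothetical helper name, not a yum feature.

```shell
# Extract transaction IDs whose Altered column (field 5) carries an
# "E" (completed with errors) or "**" (aborted) flag. Assumes the
# pipe-separated layout of "yum history list" output.
failed_ids() {
    awk -F'|' '$5 ~ /E|\*\*/ { gsub(/ /, "", $1); print $1 }'
}

# On a live system you would feed it the real listing:
#   yum history list all | failed_ids
```

On our systems we simply eyeballed the listing for the 400-plus-RPM transaction, which amounts to the same thing.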

We then ran:

[root@idmsrv01] # yum history redo force-reinstall force-remove 22

That ran on for a while, then finally complained about the RPM named "rhn-client-tools". And when we then ran (in our case) a "yum check", it returned 158 duplicate RPMs, which is obviously "not good".
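A quick way to see which package names are duplicated in the rpm database is to look for names that occur more than once (a sketch; list_dupes is a hypothetical helper name, not part of yum or rpm):

```shell
# Print package NAMEs that occur more than once in the rpm database.
# gpg-pubkey is excluded because multiple imported signing keys are
# normal and not real duplicates.
list_dupes() {
    grep -v '^gpg-pubkey' | sort | uniq -d
}

# On a live system you would feed it rpm's name-only query output:
#   rpm -qa --qf '%{NAME}\n' | list_dupes
```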

Our Red Hat TAM then told us to run the following against the RPM the last yum command complained about (note: this is the later, more current edition of the duplicate RPM):

[root@idmsrv01] # rpm -e --justdb --nodeps rhn-client-tools-2.0.2-24.el7.x86_64 

We did that, which, yes, only removes the entry from the RPM database (--justdb) and disregards dependencies (--nodeps); the files on disk are left alone.
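Picking out which of a duplicate pair is the newer build can be done by version-sorting the installed NEVRAs (a sketch only; newest_dupe is a hypothetical helper, sort -V's ordering of version strings is a simplification of full RPM version comparison, and on a real system you should double-check the NEVRA before feeding it to rpm -e):

```shell
# Given the installed NEVRAs of one package (one per line), print
# the newest one. sort -V orders version strings naturally, so the
# last line after sorting is the most recent build.
newest_dupe() {
    sort -V | tail -n 1
}

# On a live system:
#   rpm -q rhn-client-tools | newest_dupe
#   rpm -e --justdb --nodeps <the NEVRA printed above>
```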

Ok, so we did that, and it worked. We then repeated the yum history redo command cited above, and this time it successfully updated all 400-plus RPMs, bringing the server from RHEL 7.5 (November patch set) to RHEL 7.6 (December patch set).

So this happened to us on both IDM servers for one specific network. Then we tried this against a server that was not an IDM server, but a client of the IDM server.

It too failed in an identical way. Very odd. Sadly, the procedure our TAM gave us didn't complete (the server's fault, not our TAM's), and the system crashed hard during the yum history redo command. Mercifully, this specific server was a VMware guest and we had a snapshot, so we reverted to it.

Having been burned pretty hard by the previous flat yum update, this time I ran a yum update against only the two RPMs that were the life of the party, namely ipa-client and rhn-client-tools. Amusingly, those two RPMs updated without complaint... We appreciate humor.

After that, we ran the full yum update, and this time it went through without complaint or issue.

Long story short: has anyone other than us experienced this issue updating from RHEL 7.5 (with semi-current patches) to RHEL 7.6? I'm willing to bet the 8-ball on my desk that no one else has, but maybe someone out there has run into it.

So far this has (we think) only affected RHEL 7.5 servers, not workstations.

Regards

RJ
