Can old package releases be quickly filtered out of content views?

Solution In Progress - Updated

Issue

The client-side yum metadata cache of Satellite repositories at /var/cache/yum is 1.6 GB. What we are observing is that on our smaller systems with around 1 GB of memory, frequently-run commands such as yum check-update grow to over 500 MB RSS and perform thousands of IOPS for an extended period. The memory usage is consistent across our enterprise, but we do not see the IO problem on larger systems. The IO issue is causing us to run out of burst credits in AWS. My conjecture is that on larger systems, yum is able to load the whole cache into memory and thus avoid the massive IO penalty seen on smaller systems.
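
As a point of reference, figures like these can be checked with standard RHEL 7 utilities; the cache path below is yum's default location:

    # Size of the client-side yum metadata cache
    du -sh /var/cache/yum

    # Peak resident memory of a metadata-heavy command (GNU time)
    /usr/bin/time -v yum check-update
    # look for "Maximum resident set size (kbytes)" in the output

    # Per-device I/O while the command runs (sysstat package)
    iostat -x 5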

I have explored yum options like mdpolicy as a means of controlling the client-side yum cache size, but that only helps so much. What would really help is meaningfully reducing the amount of metadata in the first place. The most obvious approach is to trim our repositories down to only the latest available packages. Right now we track the 7Server repos, which contain upwards of 50,000 packages when all versions are accounted for. What I'd like to know is whether Red Hat has any suggestions for filtering out old packages. For example, we don't need packages from RHEL 7.0 when we're running 7.6; all they do is take up valuable metadata cache space on the client hosts.
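
As an illustration only (a sketch, not a confirmed Red Hat recommendation): on the client side, yum's mdpolicy setting in /etc/yum.conf controls which metadata files are downloaded with the repository index, which can shrink the cache somewhat:

    [main]
    # download only the primary metadata with the repo index
    mdpolicy=group:primary
    # refresh metadata less aggressively
    metadata_expire=90m

On the Satellite side, content view filters can exclude older package versions before the metadata ever reaches clients. The organization, content view, package name, and version below are placeholders, and a per-package rule like this does not scale to a whole repository; it only shows the hammer calls involved:

    # Create an exclusion filter of type rpm on the content view
    hammer content-view filter create \
      --organization "Example Org" \
      --content-view "RHEL7-CV" \
      --name "drop-old-kernels" \
      --type rpm \
      --inclusion false

    # Exclude kernel packages up to a placeholder version
    hammer content-view filter rule create \
      --organization "Example Org" \
      --content-view "RHEL7-CV" \
      --content-view-filter "drop-old-kernels" \
      --name "kernel" \
      --max-version "3.10.0-957"

    # Publish a new content view version so the filter takes effect
    hammer content-view publish \
      --organization "Example Org" \
      --name "RHEL7-CV"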

Environment

Satellite 6
