
    Spacewalk metadata not transferred to client


    Hello,

    I would appreciate your assistance with this problem. I do not want to open a Red Hat support ticket for Spacewalk.
    I hope this is the right forum?

    We are using Spacewalk to manage some of our systems.
    Currently, there are exclamation marks in front of the repository names on the client.
    After cleaning the cache on the client (yum clean all), yum cannot retrieve metadata from the Spacewalk server.
    The client system shows 0 packages in the base channel.

    # yum repolist
    Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
    This system is receiving updates from RHN Classic or Red Hat Satellite.
    repo id                                                                     repo name                                                                   status
    dev_ppr-rhel-x86_64-server-7                                                dev_ppr-rhel-x86_64-server-7                                                 0
    dev_ppr-rhel-x86_64-server-7-extras                                         dev_ppr-rhel-x86_64-server-7-extras                                          0
    dev_ppr-rhel-x86_64-server-7-optional                                       dev_ppr-rhel-x86_64-server-7-optional                                        0
    dev_ppr-rhel-x86_64-server-7-updates                                        dev_ppr-rhel-x86_64-server-7-updates                                         0
    dev_ppr-rhel-x86_64-server-7-zabbix                                         dev_ppr-rhel-x86_64-server-7-zabbix                                          0
    repolist: 0
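
    For reference, here is roughly how I check on the server side whether the yum metadata was actually generated for a channel (the cache path is the default on my install; the channel label is an example from my setup):

```shell
# On the Spacewalk server: taskomatic writes the generated yum metadata under
# /var/cache/rhn/repodata/<channel-label>/ (default path, assumed here).
# A missing or stale repomd.xml matches the "0 packages" symptom on clients.
CHANNEL=dev_ppr-rhel-x86_64-server-7   # example channel label
ls -l /var/cache/rhn/repodata/$CHANNEL/repomd.xml
```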
    

    I tried the steps in this article (https://access.redhat.com/solutions/19303), but it did not help.
    After running "yum check-update", I get the following error message:

    One of the configured repositories failed (Unknown),
     and yum doesn't have enough cached data to continue. At this point the only
     safe thing yum can do is fail. There are a few ways to work "fix" this:
    
         1. Contact the upstream for the repository and get them to fix the problem.
    
         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).
    
         3. Run the command with the repository temporarily disabled
                yum --disablerepo= ...
    
         4. Disable the repository permanently, so yum won't use it by default. Yum
            will then just ignore the repository until you permanently enable it
            again or use --enablerepo for temporary usage:
    
                yum-config-manager --disable 
            or
                subscription-manager repos --disable=
    
         5. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:
    
                yum-config-manager --save --setopt=.skip_if_unavailable=true
    
    failed to retrieve repodata/repomd.xml from dev_ppr-rhel-x86_64-server-7
    error was [Errno 14] HTTP Error 400 - Bad Request
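
    In case it is useful, this is how I captured the failing request in more detail. Enabling urlgrabber debugging (supported by yum on RHEL 7, as far as I know) logs the exact URL and headers behind the HTTP 400:

```shell
# On the client: clear cached metadata, then rerun yum with urlgrabber
# debug output written to a file; the log shows the request URLs, the
# X-RHN-* auth headers and the server's 400 response.
yum clean all
URLGRABBER_DEBUG=1,/tmp/urlgrabber.log yum repolist
grep -i 'http' /tmp/urlgrabber.log | head
```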
    

    The client system still shows 0 packages in the base channel (previously, I forced the regeneration process after restarting the taskomatic service).

    I tried the following ideas:
    - On the client system: yum clean all; rm -rf /var/cache/yum/*; rhn-profile-sync; yum update
    - On the Spacewalk server: spacewalk-service stop; rm -rf /var/cache/rhn/reposync/*; rm -rf /var/cache/rhn/repodata/*; rm -rf /var/cache/rhn/satsync/*; spacewalk-service start
    - On the Spacewalk server, regenerating the repo data for all channels: for i in $(spacecmd softwarechannel_list); do spacecmd softwarechannel_regenerateyumcache $i; done
    - Registering a new client

    For information, I noticed that the taskomatic service does not regenerate the repodata after a service restart; I have to force the repodata regeneration myself.
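
    In case it helps, this is how I inspect the repodata regeneration queue on the server. The spacewalk-sql helper and the rhnRepoRegenQueue table are assumptions based on my Spacewalk version; adjust to yours.

```shell
# On the Spacewalk server: taskomatic picks channels to rebuild from the
# rhnRepoRegenQueue table (table/tool names assumed, verify on your version).
# Stuck or missing rows here could explain why repodata is never regenerated.
echo "SELECT channel_label, next_action FROM rhnRepoRegenQueue;" \
  | spacewalk-sql --select-mode -
```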

    In /var/log/rhn/rhn_taskomatic_daemon.log, I only see the following message:

    INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@a111cc3c [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@226fcf3b [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> com.redhat.rhn.common.db.RhnConnectionCustomizer, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2uut749o7rg7up15sofpp|45a9cb94, idleConnectionTestPeriod -> 300, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@135e2207 [ description -> null, driverClass -> null, factoryClassLocation -> null, identityToken -> 2uut749o7rg7up15sofpp|6c9ab334, jdbcUrl -> jdbc:postgresql:rhnschema, properties -> {user=******, password=******, driver_proto=jdbc:postgresql} ], preferredTestQuery -> select 'c3p0 ping' from dual, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> true, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, factoryClassLocation -> null, identityToken -> 2uut749o7rg7up15sofpp|5727e9b9, numHelperThreads -> 3 ]
    

    Do you have any idea what is causing this behavior?

    Thanks a lot
    Romain
