Satellite 6 - Cannot sync Red Hat repo


Similar to another discussion which was resolved by refreshing the manifest, I'm seeing "Forbidden" errors which cause sync failure for the repo. I've reuploaded the manifest and refreshed, yet the issue persists. I'd be willing to blast away the repo and start over, but I'm not sure what I need to do to make that happen.

Looks like this is the relevant section from the cascading exceptions in Apache's error_log:

{
    "yum_importer":{
        "content":{
            "size_total":0,
            "items_left":0,
            "items_total":0,
            "state":"NOT_STARTED",
            "size_left":0,
            "details":{
                "rpm_total":0,
                "rpm_done":0,
                "drpm_total":0,
                "drpm_done":0
            },
            "error_details":[]
        },
        "comps":{
            "state":"NOT_STARTED"
        },
        "distribution":{
            "items_total":0,
            "state":"NOT_STARTED",
            "error_details":[],
            "items_left":0
        },
        "errata":{
            "state":"NOT_STARTED"
        },
        "metadata":{
            "state":"FAILED",
            "error":"Forbidden"
        }
    }
}
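In case it helps anyone reading that report: the only step that actually failed is metadata. A small Python sketch that walks the progress report above and pulls out the failing step:

```python
import json

# Progress report copied from the failing sync task above.
progress = json.loads("""
{
    "yum_importer": {
        "content": {"size_total": 0, "items_left": 0, "items_total": 0,
                    "state": "NOT_STARTED", "size_left": 0,
                    "details": {"rpm_total": 0, "rpm_done": 0,
                                "drpm_total": 0, "drpm_done": 0},
                    "error_details": []},
        "comps": {"state": "NOT_STARTED"},
        "distribution": {"items_total": 0, "state": "NOT_STARTED",
                         "error_details": [], "items_left": 0},
        "errata": {"state": "NOT_STARTED"},
        "metadata": {"state": "FAILED", "error": "Forbidden"}
    }
}
""")

def failed_steps(report):
    """Return (step, error) pairs for any importer step marked FAILED."""
    return [(step, info.get("error"))
            for step, info in report["yum_importer"].items()
            if isinstance(info, dict) and info.get("state") == "FAILED"]

print(failed_steps(progress))  # → [('metadata', 'Forbidden')]
```

So everything else shows NOT_STARTED; the "Forbidden" comes back before any content is fetched, which is consistent with an entitlement/manifest problem rather than a download problem.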

Responses

I would suggest that you run foreman-debug and use its zip output as part of a ticket request to GSS. My guess is that they will ask whether there is a proxy sitting between the Satellite and the internet, and if so, whether the username and password for the proxy were provided during the install.

Tried that, but got nowhere. (Case #01312626)

I think everything's sorted out now except for this old stuck task I can't clear out, since all my other repos are syncing happily. If there's a way to clear the task manually, I'm game -- even if it involves editing a database or two.

Tony, are you syncing a Red Hat channel, and it is failing to create the repo? I had an issue with this (which I posted in the discussion area here), and found that the latest version of spacewalk-java and its associated RPMs (3 or 4 total) were not being updated in the ISO channels I download manually. So my RHEL 6.current Satellite 5.5 and 5.6 servers would not create repos properly. When I upgraded the spacewalk-java RPMs (manually, by downloading them), the problem went away. Also see this Red Hat article.

Not sure if this is the situation for your case. All of my satellite servers (eight of them) are disconnected, and we manually sync our base and incremental channels as they are released.

This actually started due to "something" happening to the manifest that caused it to be invalidated. After uploading a new manifest, this task got stuck, and it has remained paused since November. (Also, it's Sat 6, so I'm dealing with Foreman and Dynflow.) I think my next step is just figuring out how to clear the task manually, since all the tools for working with tasks totally barf when trying to do anything with the stuck one.

Understood - I haven't yet installed Satellite 6.x

Hi, you should be able to go to the tasks page and filter for the paused ones: /foreman_tasks/tasks?search=state+=+paused . In the details, there is a button for resuming the task; that should let the task finish so you can operate on it afterwards.
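If the link is mangled by the forum, here is how that filter URL is put together -- a minimal Python sketch (the hostname is a placeholder; only the /foreman_tasks/tasks path and the scoped-search expression come from the post above):

```python
from urllib.parse import urlencode

def paused_tasks_url(base="https://satellite.example.com"):
    """Build the tasks-page URL that filters for paused tasks."""
    # "state = paused" is Foreman's scoped-search syntax.
    return f"{base}/foreman_tasks/tasks?{urlencode({'search': 'state = paused'})}"

print(paused_tasks_url())
# → https://satellite.example.com/foreman_tasks/tasks?search=state+%3D+paused
```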

Aye, if I could get to that page, I would've never posted this though -- that's the page that doesn't work! (Among a couple other places, caused by the same stuck task.)

What's the error you're seeing in the tasks details?

The actual error displayed on the webpage is an exception caused by a lower-level Python exception being thrown and not properly caught on the way up. The underlying task is actually stuck, due to a broken manifest a couple months back, which has since been fixed. I've been fishing all over for info on which tables to redact/update, but it seems like nobody's run into this particular issue before and posted about it.

387: unexpected token at '{"exception":null,"task_type":"pulp.server.tasks.repository.sync_with_auto_publish","_href":"/pulp/api/v2/tasks/617bce68-8505-4ea0-a78f-9a8cbd9894ba/","task_id":"617bce68-8505-4ea0-a78f-9a8cbd9894ba","tags":["pulp:repository:Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6Server","pulp:action:sync"],"finish_time":"2014-11-26T19:14:24Z","_ns":"task_status","start_time":"2014-11-26T19:14:23Z","traceback":"Traceback (most recent call last):\n File \"/usr/lib/python2.6/site-packages/celery/app/trace.py\", line 240, in trace_task\n R = retval = fun(*args, **kwargs)\n File \"/usr/lib/python2.6/site-packages/pulp/server/async/tasks.py\", line 306, in __call__\n return super(Task, self).__call__(*args, **kwargs)\n File \"/usr/lib/python2.6/site-packages/celery/app/trace.py\", line 437, in __protected_call__\n return self.run(*args, **kwargs)\n File \"/usr/lib/python2.6/site-packages/pulp/server/tasks/repository.py\", line 210, in sync_with_auto_publish\n sync_result = managers.repo_sync_manager().sync(repo_id, sync_config_override=overrides)\n File \"/usr/lib/python2.6/site-packages/pulp/server/managers/repo/sync.py\", line 113, in sync\n raise PulpExecutionException(_('Importer indicated a failed response'))\nPulpExecutionException: Importer indicated a failed response\n","spawned_tasks":[],"progress_report":{"yum_importer":{"content":{"size_total":0,"items_left":0,"items_total":0,"state":"NOT_STARTED","size_left":0,"details":{"rpm_total":0,"rpm_done":0,"drpm_total":0,"drpm_done":0},"error_details":[]},"comps":{"state":"NOT_STARTED"},"distribution":{"items_total":0,"state":"NOT_STARTED","error_details":[],"items_left":0},"errata":{"state":"NOT_STARTED"},"metadata":{"state":"FAILED","error":"Forbidden"}}},"queue":"reserved_resource_worker-0@hostname.goes.here.dq","state":"error","result":null,"error":{"code":"PLP0000","data":{},"description":"Importer indicated a failed 
response","sub_errors":[]},"_id":{"$oid":"5476268f3fb31fa591689985"},"id":"5476268f4cfd9207cd85d9c5"}],"poll_attempts":{"total":1,"failed":1}}}'
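For what it's worth, the Pulp task UUID buried in a blob like that can be fished out with a quick script rather than by eye -- a sketch, run here against a shortened excerpt of the error text above:

```python
import re

# Shortened excerpt of the raw error text above.
raw = ('387: unexpected token at \'{"exception":null,'
       '"task_type":"pulp.server.tasks.repository.sync_with_auto_publish",'
       '"task_id":"617bce68-8505-4ea0-a78f-9a8cbd9894ba", ...')

# Pulp task IDs are standard UUIDs; grab whatever follows "task_id".
match = re.search(
    r'"task_id"\s*:\s*"([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"', raw)
print(match.group(1))  # → 617bce68-8505-4ea0-a78f-9a8cbd9894ba
```

That UUID is what pulp-admin (and the database queries below in the thread) key off of.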

Could you please file a bug for this behaviour and attach the foreman-debug tarball, as well as a screenshot of the error you're seeing on the tasks page? That would help us investigate this particular problem further. Thanks.

https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Satellite%206

I've started working with Chris Roberts on this issue (01311446), I'll update this discussion after the issue is resolved so we make sure anyone else having this issue can quickly correct it.

Thanks, Tony. Keep us updated on that.

I think we have found the issue: it looks like the pulp task somehow got removed by /etc/cron.weekly/katello-remove-orphans. We could not find a pulp task with that ID using pulp-admin, but it still exists in the foreman database.

Working with Chris Roberts, I was able to clear out the task with the following steps:

Find the task ID for the stuck task:

su - postgres 
psql foreman
select * FROM foreman_tasks_tasks where state != 'stopped';

Make note of each of the IDs (Note: the ID we want is in the first column, not the external_id column).
Note: My query may return other states -- in my case, I was just looking for the paused task that I couldn't un-pause. Be sure to not delete anything you don't need to delete.

Exit back to root's shell, and stop all the running pulp tasks:
- pulp-admin login
- pulp-admin tasks list
- for each task: pulp-admin tasks cancel --task-id [uuid]

You may need to run the "tasks list" command a few times, as I noticed it took a while before the cancellation was complete.
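If there are many tasks, the cancellation can be scripted instead of typed by hand. A hedged Python sketch -- the sample listing below is a guess at the "pulp-admin tasks list" output layout (check what your version prints), and the cancel commands are only printed, not executed:

```python
import re

# Hypothetical excerpt of "pulp-admin tasks list" output -- the exact
# layout varies by version, so treat this parsing as a sketch.
sample = """\
Operations:  sync
Resources:   Red_Hat_Enterprise_Linux_Server (repository)
State:       Running
Task Id:     617bce68-8505-4ea0-a78f-9a8cbd9894ba
"""

def task_ids(listing):
    """Collect the UUIDs reported on 'Task Id:' lines."""
    return re.findall(r"Task Id:\s+([0-9a-f-]{36})", listing)

for tid in task_ids(sample):
    # In practice you would run this command instead of printing it.
    print(f"pulp-admin tasks cancel --task-id {tid}")
```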

Once all tasks are in a non-running state, run "foreman-rake console" as root. Go get some coffee. Read the paper. Mow the lawn. Then once it's done loading, run:

ForemanTasks::Task.find("f183ecd2-d656-48af-9846-0ffab0a44000").destroy

If you have multiple stuck tasks, you'll want to run that once for each.

You may also want to restart services -- someone more knowledgeable about Satellite could say for sure whether that's necessary or just excessive.

Thanks Tony. I had the same issue, and your steps solved it for me.

--C

I'm having this same problem. I installed "pulp-admin-client" and adjusted its configuration enough to get it to run, but now I get this:

+----------------------------------------------------------------------+
Tasks
+----------------------------------------------------------------------+

The web server reported an error trying to access the Pulp application. The
likely cause is that the pulp-manage-db script has not been run prior to
starting the server. More information can be found in Apache's error log file on
the server itself.

I'm not liking this version of Satellite at all. It has way too many unfamiliar components. I have no clue how to resolve the above issue, to even attempt to resolve the original issue (can't access the yum repositories from a client machine).

Can you post the relevant errors from Apache's error log? (It's probably /var/log/httpd/foreman_error.log, though don't quote me on this.)

I know what you mean about the unfamiliarity of all the components, and the lack of useful google search results -- so far. As more people adopt it, that'll change: I know a lot more about Satellite after fixing it than before, and I think it was a good opportunity to dig into the guts a bit. Of course, mine's not in production yet. :)

[Wed Jan 21 13:23:07.562648 2015] [:error] [pid 1638] [client 172.25.1.13:41006] Request denied to destination [/pulp/repos/NES/Library/RHEL_7/content/dist/rhel/server/7/7Server/x86_64/optional/os/repodata/repomd.xml] Client certificate failed extension check for destination: /pulp/repos/NES/Library/RHEL_7/content/dist/rhel/server/7/7Server/x86_64/optional/os/repodata/repomd.xml
[Wed Jan 21 13:23:07.562703 2015] [:error] [pid 1638] [client 172.25.1.13:41006] mod_wsgi (pid=1638): Client denied by server configuration: '/var/www/pub/yum/https/repos/NES/Library/RHEL_7/content/dist/rhel/server/7/7Server/x86_64/optional/os/repodata/repomd.xml'.

Those are the only two lines logged in foreman-ssl_error_ssl.log when I do a yum install katello-agent on a subscribed system.

Sounds like a different issue to me -- the problem I ran into was between my Satellite server and Red Hat, and later with one of the repo sync tasks on the server itself.

I haven't run into this yet, but I've only been creating new systems to test with, not setting up on existing ones. You may want to start a new discussion, to prevent your issue from getting mixed up with this one.

It seems to be missing an https mapping as is supplied for non-ssl in /etc/pulp/vhosts80/rpm.conf. But adding the requisite HTTPD configuration is still resulting in odd behaviors. This thing is a mess.
