Memory leak in goferd on clients connected to Red Hat Satellite 6.2.x when qdrouterd is unavailable
Issue
- goferd on a RHEL 6 client consumes an ever-increasing amount of memory. Restarting the goferd service works around the issue for about a day, after which memory usage starts climbing again (an illustrative restart watchdog is sketched after the Issue list below).
- Repeated /var/log/messages entries like the following:
goferd: [INFO][worker-0] gofer.messaging.adapter.proton.connection:131 - closed: proton+amqps://capsule.example.com:5647
goferd: [INFO][worker-0] gofer.messaging.adapter.connect:28 - connecting: proton+amqps://capsule.example.com:5647
goferd: [INFO][worker-0] gofer.messaging.adapter.proton.connection:87 - open: URL: amqps://capsule.example.com:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|key: None|certificate: /etc/pki/consumer/bundle.pem|host-validation: None
goferd: [INFO][worker-0] gofer.messaging.adapter.proton.connection:92 - opened: proton+amqps://capsule.example.com:5647
goferd: [INFO][worker-0] gofer.messaging.adapter.connect:30 - connected: proton+amqps://capsule.example.com:5647
goferd: [ERROR][worker-0] gofer.messaging.adapter.proton.reliability:47 - receiver ca710a81-9d3f-4a74-ac85-0105b04f8b4e from None closed due to: Condition('qd:no-route-to-dest', 'No route to the destination node')
- If memory consumption becomes severe enough to exhaust all swap space, the client can hit a kernel panic such as the following:
Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled
Pid: 2696, comm: python Not tainted 2.6.32-696.10.3.el6.x86_64 #1
Call Trace:
[<ffffffff8154a1ee>] ? panic+0xa7/0x179
[<ffffffff811316a1>] ? dump_header+0x101/0x1b0
...
Pid: 3, comm: migration/0 Not tainted 2.6.32-696.10.3.el6.x86_64 #1
Call Trace:
[<ffffffff8107c811>] ? warn_slowpath_common+0x91/0xe0
[<ffffffff8107c87a>] ? warn_slowpath_null+0x1a/0x20
...
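As an interim mitigation while goferd still leaks, the restart mentioned above can be automated. The following is a minimal watchdog sketch, not an official fix: the 512 MiB limit, the 5-minute interval, and the /proc scan are all assumptions to adapt to the host. It targets a RHEL 6 client, so it sticks to the Python 2.6 standard library and the SysV service command.
#!/usr/bin/env python
# Illustrative goferd memory watchdog (assumption: not part of the Satellite fix).
# Restarts goferd once its resident memory crosses a chosen threshold.
import os
import subprocess
import time

RSS_LIMIT_KB = 512 * 1024   # assumed ceiling (~512 MiB); tune per host
CHECK_INTERVAL = 300        # seconds between checks

def find_goferd_pid():
    # Scan /proc for a process whose command line mentions goferd
    # (goferd runs as a python process, so matching by process name alone is unreliable).
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        if entry == str(os.getpid()):
            # Skip this watchdog itself, whose command line may also mention goferd.
            continue
        try:
            with open("/proc/%s/cmdline" % entry, "rb") as f:
                if b"goferd" in f.read():
                    return entry
        except IOError:
            continue
    return None

def rss_kb(pid):
    # VmRSS in /proc/<pid>/status is reported in kB.
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

while True:
    pid = find_goferd_pid()
    if pid and rss_kb(pid) > RSS_LIMIT_KB:
        # SysV init on RHEL 6; use systemctl on RHEL 7 clients instead.
        subprocess.call(["service", "goferd", "restart"])
    time.sleep(CHECK_INTERVAL)
A cron job or the existing monitoring stack can serve the same purpose; the point is only to bound memory growth until the underlying leak is addressed.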
Environment
- Red Hat Satellite 6.2.8 and above (if using goferd instead of Remote Execution)
- Packages installed on the client:
- python-qpid-proton-0.9-16
- qpid-proton-c-0.9-16
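To confirm whether a client carries the affected proton builds, the installed versions can be queried with rpm. A small sketch (the script itself is illustrative, not from this article) that runs on the stock Python of RHEL 6:
# Illustrative check for the affected proton packages.
import subprocess

AFFECTED = ("python-qpid-proton", "qpid-proton-c")

for pkg in AFFECTED:
    # rpm -q prints the installed name-version-release, or a "not installed" message.
    proc = subprocess.Popen(["rpm", "-q", pkg], stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    print(out.decode().strip())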