Red Hat Satellite or Capsule installation fails to start foreman-proxy service with error "Address already in use - bind(2)"

Solution Verified

Issue

  • Red Hat Satellite or Capsule installation fails with the following error:

    [ INFO 2020-08-07T16:04:35 main] All hooks in group post finished
    [DEBUG 2020-08-07T16:04:35 main] Exit with status code: 6 (signal was 6)
    [ERROR 2020-08-07T16:04:35 main] Errors encountered during run:
    [ERROR 2020-08-07T16:04:35 main]  Systemd start for foreman-proxy failed!
    [ERROR 2020-08-07T16:04:35 main] journalctl log for foreman-proxy:
    [ERROR 2020-08-07T16:04:35 main] -- Logs begin at Thu 2020-06-11 09:09:49 EDT, end at Fri 2020-08-07 16:04:23 EDT. --
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:22 capsule.example.com systemd[1]: Starting Foreman Proxy...
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:23 capsule.example.com smart-proxy[61965]: Errors detected on startup, see log for details. Exiting: Address already in use - bind(2)
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:23 capsule.example.com systemd[1]: foreman-proxy.service: main process exited, code=exited, status=1/FAILURE
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:23 capsule.example.com systemd[1]: Failed to start Foreman Proxy.
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:23 capsule.example.com systemd[1]: Unit foreman-proxy.service entered failed state.
    [ERROR 2020-08-07T16:04:35 main] Aug 07 16:04:23 capsule.example.com systemd[1]: foreman-proxy.service failed.
    
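    The "Address already in use - bind(2)" message means another process is already listening on the port that foreman-proxy tries to bind (9090 by default). As a diagnostic sketch that is not part of the original output, either of the following commands can identify the process currently holding the port (lsof may not be installed on a minimal system):

        # ss -tlnp | grep ':9090'
        # lsof -i :9090
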
  • Restoring from a Satellite backup using the satellite-maintain restore /var/satellite-backup command fails with the following error:

# satellite-maintain restore /var/satellite-backup
Running Restore backup
================================================================================
Check if command is run as root user:                                 [OK]
--------------------------------------------------------------------------------
Validate backup has appropriate files:                                [OK]
--------------------------------------------------------------------------------
Validate hostname is the same as backup:                              [OK]
--------------------------------------------------------------------------------
Validate network interfaces match the backup:                         [OK]
--------------------------------------------------------------------------------
Confirm dropping databases and running restore:

WARNING: This script will drop and restore your database.
Your existing installation will be replaced with the backup database.
Once this operation is complete there is no going back.
Do you want to proceed?, [y(yes), q(quit)] y
                                                                      [OK]
--------------------------------------------------------------------------------
Setting file security:
\ Restoring SELinux context                                           [OK]
--------------------------------------------------------------------------------
Restore configs from backup:
- Restoring configs                                                   [OK]
--------------------------------------------------------------------------------
Run installer reset:
\ Installer reset                                                     [FAIL]
Failed executing yes | satellite-installer -v --reset-data --disable-system-checks , exit status 6:
 2023-06-27 14:35:09 [NOTICE] [root] Loading installer configuration. This will take some time.
2023-06-27 14:35:13 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2023-06-27 14:35:13 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
Are you sure you want to continue? This will drop the databases, reset all configurations that you have made and bring all application data back to a fresh install. [y/n]
Package versions are locked. Continuing with unlock.
2023-06-27 14:35:18 [NOTICE] [pre] Dropping foreman database!
2023-06-27 14:35:19 [NOTICE] [pre] Dropping candlepin database!
2023-06-27 14:35:19 [NOTICE] [pre] Dropping pulpcore database!
2023-06-27 14:35:19 [WARN  ] [pre] Pulpcore content directory not present at '/var/lib/pulp/docroot'
2023-06-27 14:35:19 [WARN  ] [pre] Skipping system checks.
2023-06-27 14:35:19 [WARN  ] [pre] Skipping system checks.
2023-06-27 14:35:29 [NOTICE] [configure] Starting system configuration.
2023-06-27 14:35:43 [NOTICE] [configure] 250 configuration steps out of 2171 steps complete.
2023-06-27 14:35:47 [NOTICE] [configure] 500 configuration steps out of 2175 steps complete.
2023-06-27 14:35:48 [NOTICE] [configure] 750 configuration steps out of 2177 steps complete.
2023-06-27 14:35:51 [NOTICE] [configure] 1000 configuration steps out of 2182 steps complete.
2023-06-27 14:35:51 [NOTICE] [configure] 1250 configuration steps out of 2188 steps complete.
2023-06-27 14:41:22 [NOTICE] [configure] 1500 configuration steps out of 2189 steps complete.
2023-06-27 14:41:58 [NOTICE] [configure] 1750 configuration steps out of 3040 steps complete.
2023-06-27 14:41:59 [NOTICE] [configure] 2000 configuration steps out of 3040 steps complete.
2023-06-27 14:41:59 [NOTICE] [configure] 2250 configuration steps out of 3040 steps complete.
2023-06-27 14:41:59 [NOTICE] [configure] 2500 configuration steps out of 3040 steps complete.
2023-06-27 14:42:00 [NOTICE] [configure] 2750 configuration steps out of 3040 steps complete.
2023-06-27 14:44:21 [NOTICE] [configure] 3000 configuration steps out of 3040 steps complete.
2023-06-27 14:44:27 [ERROR ] [configure] Systemd start for foreman-proxy failed!
2023-06-27 14:44:27 [ERROR ] [configure] journalctl log for foreman-proxy:
2023-06-27 14:44:27 [ERROR ] [configure] -- Logs begin at Tue 2023-06-27 13:28:58 EDT, end at Tue 2023-06-27 14:44:27 EDT. --
2023-06-27 14:44:27 [ERROR ] [configure] Jun 27 14:44:22 okchqrhsatp01.dom1.local systemd[1]: Starting Foreman Proxy...
2023-06-27 14:44:27 [ERROR ] [configure] Jun 27 14:44:26 okchqrhsatp01.dom1.local smart-proxy[36702]: /usr/share/gems/gems/sequel-5.42.0/lib/sequel/adapters/sqlite.rb:114: warning: rb_check_safe_obj will be removed in Ruby 3.0
2023-06-27 14:44:27 [ERROR ] [configure] Jun 27 14:44:27 okchqrhsatp01.dom1.local smart-proxy[36702]: #<Thread:0x000056400843ef10 /usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:471 run> terminated with exception (report_on_exception is true):
2023-06-27 14:44:27 [ERROR ] [configure] Jun 27 14:44:27 okchqrhsatp01.dom1.local smart-proxy[36702]: /usr/share/ruby/socket.rb:201:in `bind': Address already in use - bind(2) for [::]:9090 (Errno::EADDRINUSE)
.
.
.
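    The restore fails at the "Run installer reset" step because the installer restarts foreman-proxy, and the same port conflict on 9090 appears. As a sketch (the paths below are the usual defaults and may differ between versions), the configured listener port and the full startup failure can be reviewed with:

        # grep -i port /etc/foreman-proxy/settings.yml
        # journalctl -u foreman-proxy --no-pager | tail -n 50
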
  • The satellite-maintain (foreman-maintain) health check shows the pulp and pulp_auth services as FAIL:

    # foreman-maintain health check
    Running ForemanMaintain::Scenario::FilteredScenario
    ================================================================================
    Check number of fact names in database:                               [OK]
    --------------------------------------------------------------------------------
    Check whether all services are running:                               [OK]
    --------------------------------------------------------------------------------
    Check whether all services are running using the ping call:           [FAIL]
    Some components are failing: pulp, pulp_auth
    --------------------------------------------------------------------------------
    Continue with step [Restart applicable services]?, [y(yes), n(no)] no
    Check for paused tasks:                                               [OK]
    --------------------------------------------------------------------------------
    Check whether system is self-registered or not:                       [OK]
    --------------------------------------------------------------------------------
    Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.
    
    The following steps ended up in failing state:
    
      [server-ping]
    
    Resolve the failed steps and rerun the command.
    
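    The pulp and pulp_auth failures reported by the ping check accompany the foreman-proxy startup failure shown above. As a sketch of how the overall service state can be confirmed (output and service names vary by Satellite version):

        # satellite-maintain service status
        # systemctl status foreman-proxy
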
  • hammer ping shows the pulp and pulp_auth services as failed:

    # hammer ping
    database:
        Status:          ok
        Server Response: Duration: 0ms
    candlepin:
        Status:          ok
        Server Response: Duration: 343ms
    candlepin_auth:
        Status:          ok
        Server Response: Duration: 43ms
    candlepin_events:
        Status:          ok
        message:         0 Processed, 0 Failed
        Server Response: Duration: 0ms
    katello_events:
        Status:          ok
        message:         0 Processed, 0 Failed
        Server Response: Duration: 1ms
    foreman_tasks:
        Status:          ok
        Server Response: Duration: 5ms
    
    2 more service(s) failed, but not shown:
    pulp, pulp_auth
    
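    Once the conflicting process has been identified and port 9090 freed, the smart proxy listener can be checked directly. The following curl call is a sketch only: the /features endpoint is a standard smart-proxy API, but the client certificate paths shown are illustrative defaults; the actual paths are defined in /etc/foreman-proxy/settings.yml.

        ## Certificate paths below are illustrative; confirm them in /etc/foreman-proxy/settings.yml
        # curl --cacert /etc/foreman-proxy/foreman_ssl_ca.pem \
               --cert /etc/foreman-proxy/foreman_ssl_cert.pem \
               --key /etc/foreman-proxy/foreman_ssl_key.pem \
               "https://$(hostname -f):9090/features"
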

Environment

  • Red Hat Capsule 6
  • Red Hat Satellite 6
