Satellite 6.3 goes down with Internal server error

I am facing an issue where Satellite 6.3 goes down with the following error, and I have to restart it every time:

Internal server error

An internal error occurred while trying to spawn the application.
Exception type: Passenger::SystemException
Error message: Cannot create a temporary directory in the format of '/tmp/passenger-spawn-debug.XXX': No such file or directory (errno=2)
Backtrace:
in 'void Passenger::ApplicationPool2::SmartSpawner::startPreloader()' (SmartSpawner.h:206)
in 'virtual Passenger::ApplicationPool2::ProcessPtr Passenger::ApplicationPool2::SmartSpawner::spawn(const Passenger::ApplicationPool2::Options&)' (SmartSpawner.h:744)
in 'void Passenger::ApplicationPool2::Group::spawnThreadRealMain(const SpawnerPtr&, const Passenger::ApplicationPool2::Options&, unsigned int)' (Implementation.cpp:782)

Application root: /usr/share/foreman
Environment (value of RAILS_ENV, RACK_ENV, WSGI_ENV and PASSENGER_ENV): production
Ruby interpreter command: /usr/bin/tfm-ruby
User and groups: Unknown
Environment variables: Unknown
Ulimits: Unknown

Responses

We got this issue fixed in the Satellite 6.3.2 release.

Not to nitpick, but I am seeing the same behavior in 6.5.1.

I guess there can be multiple causes for that error, e.g. a permissions/SELinux issue or some specific systemd configuration. It is hard to judge from here, so I recommend filing a new support case and attaching a sosreport to it.
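As a quick sanity check before filing the case, a rough sketch along these lines (nothing Satellite-specific in it) shows whether /tmp actually exists and is writable for the user the application spawns as, and whether SELinux is enforcing:

#!/usr/bin/env python
# Rough diagnostic sketch: verifies that /tmp exists and is writable,
# and reports the SELinux mode. Run it as the user the app spawns as.
import os
import tempfile

def check_tmp(path="/tmp"):
    if not os.path.isdir(path):
        print("%s does not exist (matches the errno=2 above)" % path)
        return
    try:
        # Attempt the same kind of operation Passenger does: create and
        # remove a temporary directory under /tmp.
        d = tempfile.mkdtemp(prefix="passenger-spawn-debug-check.", dir=path)
        os.rmdir(d)
        print("%s exists and is writable by uid %d" % (path, os.getuid()))
    except OSError as err:
        print("cannot create a directory under %s: %s" % (path, err))

def selinux_mode():
    try:
        with open("/sys/fs/selinux/enforce") as f:
            return "enforcing" if f.read().strip() == "1" else "permissive"
    except IOError:
        return "disabled or not available"

if __name__ == "__main__":
    check_tmp()
    print("SELinux mode: %s" % selinux_mode())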

In our case it was not really a permission issue, as we were running as root. Passenger simply could not create the directory under '/tmp/passenger-spawn-debug.XXX'. It keeps retrying the same operation, and at some point the server goes down; you then need to restart katello-service and it works fine again. In our scenario the issue also showed up a while after the upgrade, so we set up a cron job that checks the Satellite server status every 15 minutes and restarts the service when it is down. This is only a temporary fix; see the sketch below for the general idea.
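For illustration, here is a rough sketch of that kind of watchdog in Python. The hostname is a placeholder, the probe just looks for a non-error HTTP response from the web UI, and katello-service restart is what works on our version (newer releases may use foreman-maintain instead):

#!/usr/bin/env python
# Hypothetical watchdog sketch for the workaround described above:
# cron runs it every 15 minutes; if the Satellite web UI does not
# answer with a successful HTTP status, it restarts the services.
import ssl
import subprocess
import sys

try:                                   # Python 3
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError
except ImportError:                    # Python 2 (RHEL 7 default)
    from urllib2 import urlopen, URLError, HTTPError

STATUS_URL = "https://satellite.example.com/"   # placeholder FQDN, adjust

def satellite_is_up(url=STATUS_URL, timeout=30):
    # Local probe only, so certificate validation is skipped here.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        resp = urlopen(url, timeout=timeout, context=ctx)
        return 200 <= resp.getcode() < 400
    except HTTPError:
        # e.g. the Passenger "Internal server error" page (HTTP 500)
        return False
    except URLError:
        # connection refused, timeout, name resolution failure, ...
        return False

if __name__ == "__main__":
    if satellite_is_up():
        sys.exit(0)
    # The temporary fix described above: bounce the whole service stack.
    sys.exit(subprocess.call(["katello-service", "restart"]))

Dropped into /etc/cron.d with a line like "*/15 * * * * root /usr/local/bin/satellite-watchdog" (path and name hypothetical), it does roughly what our 15-minute check does.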

This is the link to our Bugzilla case: https://bugzilla.redhat.com/show_bug.cgi?id=1598853

Hope this helps.

I opened a second support case on Friday. On the first one we reset some Foreman components and hoped for the best, but last week it crashed again.

I was thinking of doing the cron job workaround, and probably will, as the bug seems to be a bit elusive for the support team.

Cheers