[Solved] How to auto-start rootless pods using systemd


Hello Community,
I'm new to the podman container ecosystem. So far I've managed to create and run rootless pods and containers with shared volumes between them using an unprivileged user account.

When the system gets restarted I have to log in and start the pod manually in order to get my service up and running. That's not very convenient, so I would like systemd to take care of this job, and I studied section "8.5. Auto-starting pods using systemd" in the Building, running, and managing containers guide.

The solution provided in the documentation only starts the service when the user logs in and stops it when the user logs out, but I would like to have the pod running regardless of the user's login status.

Copying the auto-generated service units to /etc/systemd/system/ and trying to start the pod by running sudo systemctl start pod-examplepod.service fails because systemd isn't able to find the infra container which belongs to the pod. The following snippet shows an example of an error message:

Jan 25 13:54:40 podhost-r8-1.lan systemd[1]: Starting Podman pod-kanboardpod.service...
-- Subject: Unit pod-kanboardpod.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit pod-kanboardpod.service has begun starting up.
Jan 25 13:54:40 podhost-r8-1.lan podman[63084]: Error: no container with name or ID 62cdd29105a4-infra found: no such container
Jan 25 13:54:40 podhost-r8-1.lan systemd[1]: pod-kanboardpod.service: Control process exited, code=exited status=125
Jan 25 13:54:40 podhost-r8-1.lan podman[63106]: Error: no container with name or ID 62cdd29105a4-infra found: no such container
Jan 25 13:54:40 podhost-r8-1.lan systemd[1]: pod-kanboardpod.service: Control process exited, code=exited status=125
Jan 25 13:54:40 podhost-r8-1.lan systemd[1]: pod-kanboardpod.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- The unit pod-kanboardpod.service has entered the 'failed' state with result 'exit-code'.
Jan 25 13:54:40 podhost-r8-1.lan systemd[1]: Failed to start Podman pod-kanboardpod.service.

This does not surprise me, but I don't know what to do about it. I guess the auto-generated systemd service units don't work for my use case, do they? Do you folks have any suggestions on how rootless pods created by an unprivileged user could be auto-started by systemd on system startup? Is it possible to specify the path to the pod/containers somehow?

A related concern is that I don't want to start the pod as root when using systemd. So I guess I have to specify the user running the rootless pod in the service unit, right? Could this be done by simply using the "User=" parameter in the service definition file?

Looking forward to reading your suggestions.

Best regards,
Joerg

Responses

Many thanks to Valentin Rothberg from Red Hat who pointed me to the solution:

To allow systemd user services to be started at boot without the individual users logging in (and to continue running after logout), you need to enable "lingering". You can do that via loginctl enable-linger <username>. It's buried in the man page of podman-generate-systemd:

http://docs.podman.io/en/latest/markdown/podman-generate-systemd.1.html
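
For anyone who wants the complete sequence, here is a minimal sketch of what that looks like for my pod (the pod name kanboardpod comes from the error output above; substitute your own pod name and run everything as the unprivileged user that owns the pod):

$ mkdir -p ~/.config/systemd/user

# Generate unit files for the pod and its containers into the current directory
$ podman generate systemd --files --name kanboardpod
$ mv pod-kanboardpod.service container-*.service ~/.config/systemd/user/

# Register the units with the user instance of systemd and enable the pod unit
$ systemctl --user daemon-reload
$ systemctl --user enable --now pod-kanboardpod.service

# Keep the user's systemd instance running across reboots without a login session
$ loginctl enable-linger <username>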

Thanks!

cheers!

I followed this page - https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman

but added User= in the service to run it as a non-root user, and that fails the service (unless lingering is enabled) as described here:

Jan 23 20:18:14 obfuscatedomain1 podman[1898]: Error: error creating tmpdir: mkdir /run/user/1040: permission denied
Jan 23 20:18:14 obfuscatedomain1 systemd[1]: gogs01-container.service: Main process exited, code=exited, status=125/n/a
Jan 23 20:18:14 obfuscatedomain1 systemd[1]: gogs01-container.service: Failed with result 'exit-code'.
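
In other words, the runtime directory /run/user/1040 does not exist at boot unless the user's systemd instance is kept alive. A minimal sketch of the fix (the username is a placeholder; only the UID 1040 appears in the log above):

$ sudo loginctl enable-linger <serviceuser>   # placeholder username; makes systemd create /run/user/1040 at boot
$ ls -d /run/user/1040                        # podman needs this directory to create its tmpdir
$ sudo systemctl restart gogs01-container.service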

The documentation has been updated, so you can find the information about enabling systemd services in section 8.1:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index#porting-containers-to-systemd-using-podman_building-running-and-managing-containers

That's awesome! Thank you Gabriela for mentioning it here as well.

Not at all. You are very welcome :-). Thank you for your feedback!

Hi Gabriela, the instructions for the rootless generation of the systemd unit file that appear at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index#porting-containers-to-systemd-using-podman_building-running-and-managing-containers return the error "No such file or directory". See below. Please advise.

$ podman ps

CONTAINER ID  IMAGE                    COMMAND           CREATED       STATUS           PORTS                 NAMES
705fa9b97f1d  docker.io/library/httpd  httpd-foreground  18 hours ago  Up 18 hours ago  0.0.0.0:8080->80/tcp  apache

$ podman generate systemd --name apache > ~/.config/systemd/user/apache-container.service

-bash: /home/conadmin/.config/systemd/user/apache-container.service: No such file or directory

Hello rod lee,

You need to create the ~/.config/systemd/user directory. Then it works:

$ mkdir -p ~/.config/systemd/user
$ podman generate systemd --name myubi > ~/.config/systemd/user/container-myubi.service
$ cat  ~/.config/systemd/user/container-myubi.service  # display file content

Hope that helps. Kind regards Gabi

Hi Gabi, thanks for your response. That approach didn't work for me: the command fails but still creates the unit file, which ends up empty. I managed to get it to work by using podman generate systemd --files --name (note the addition of the --files switch) to create the file in the ~/.local folder, then creating the ~/.config/systemd/user/ directories and copying the file over. The documentation does mention the --files switch, but only after showing the command without it. A sketch of the working sequence is below.
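
Roughly what I ended up running (a sketch assuming the container is named apache, as in my earlier output; --files writes the unit into the current working directory):

$ podman generate systemd --files --name apache   # creates ./container-apache.service
$ mkdir -p ~/.config/systemd/user
$ cp container-apache.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now container-apache.service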

Hi rod lee,

Please, could you send us the complete console output?

Thank you in advance.

Hello Gabriela,

I got the same issue as Rod Lee. This is my discussion thread: https://access.redhat.com/discussions/6029491. Thank you in advance.

Hello Kyaw Zar Zar Phyu,

Thank you for your reply. I tried all commands you have in the (https://access.redhat.com/discussions/6029491) thread.

I have some comments regarding the last two steps you performed:

a) Reload systemd manager configuration:

  • This step works for me.
$ systemctl --user daemon-reload

b) Enable the service to start at boot time and start it:

  • You originally had:
$ systemctl enable --now rootless-container.service
  • I think you are missing the "--user" option for rootless

  • I used the following command and it works for me:

$ systemctl enable --user --now rootless-container.service
Created symlink /home/gabi/.config/systemd/user/multi-user.target.wants/rootless-container.service → /home/gabi/.config/systemd/user/rootless-container.service.
Created symlink /home/gabi/.config/systemd/user/default.target.wants/rootless-container.service → /home/gabi/.config/systemd/user/rootless-container.service.

Please, if you are still having issues, can you send us the journalctl logs? Thank you in advance.

Thanks for your kind and quick support. a) Yes, it works for the sudo user in my case.

[user1@server20 ~]$ systemctl --user daemon-reload
[user1@server20 ~]$ systemctl --user enable --now rootless-container.service
[user1@server20 ~]$ systemctl --user status rootless-container.service
● rootless-container.service - Podman container-rootless-container.service
   Loaded: loaded (/home/user1/.config/systemd/user/rootless-container.service; enable>
   Active: active (running) since Tue 2021-05-11 21:24:18 +07; 28s ago
     Docs: man:podman-generate-systemd(1)
  Process: 4060 ExecStart=/usr/bin/podman run --conmon-pidfile /run/user/1000/containe>
  Process: 4059 ExecStartPre=/bin/rm -f /run/user/1000/container-rootless-container.pi>
 Main PID: 4085 (conmon)
   CGroup: /user.slice/user-1000.slice/user@1000.service/rootless-container.service
           ├─2153 /usr/bin/podman
           ├─4079 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sa>
           ├─4082 /usr/bin/fuse-overlayfs -o lowerdir=/home/user1/.local/share/contain>
           ├─4085 /usr/bin/conmon --api-version 1 -c 3cdbac1f20a0b818f985c5740b636cbf7>
           └─3cdbac1f20a0b818f985c5740b636cbf70c1fd447073e341fa1847e16c79412a
             └─4093 /bin/bash

However, it doesn't work for normal users.

[conuser3@server20 ~]$ cat .config/systemd/user/rootless-containercon3.service
# container-rootless-containercon3.service
# autogenerated by Podman 2.2.1
# Tue May 11 21:30:28 +07 2021

[Unit]
Description=Podman container-rootless-containercon3.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
ExecStartPre=/bin/rm -f %t/container-rootless-containercon3.pid %t/container-rootless-containercon3.ctr-id
ExecStart=/usr/bin/podman run --conmon-pidfile %t/container-rootless-containercon3.pid --cidfile %t/container-rootless-containercon3.ctr-id --cgroups=no-conmon -d --replace -dt --name rootless-containercon3 ubi8
ExecStop=/usr/bin/podman stop --ignore --cidfile %t/container-rootless-containercon3.ctr-id -t 10
ExecStopPost=/usr/bin/podman rm --ignore -f --cidfile %t/container-rootless-containercon3.ctr-id
PIDFile=%t/container-rootless-containercon3.pid
KillMode=none
Type=forking

[Install]
WantedBy=multi-user.target default.target

[conuser3@server20 ~]$ podman stop rootless-containercon3
942cf6f14d275ab7c9fb98d13f10b4169cdba08c7fc52ef72351d783393bdc27
[conuser3@server20 ~]$ podman rm rootless-containercon3
942cf6f14d275ab7c9fb98d13f10b4169cdba08c7fc52ef72351d783393bdc27
[conuser3@server20 ~]$ podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[conuser3@server20 ~]$ systemctl --user daemon-reload
Failed to connect to bus: No such file or directory

b) Yes, I'm sorry for that. I missed the --user option. But anyway, it still has the same issue, and I can't get any journalctl logs either.

[conuser3@server20 ~]$ systemctl --user daemon-reload Failed to connect to bus: No such file or directory

[conuser3@server20 ~]$ systemctl --user enable --now rootless-containercon3.service

Failed to connect to bus: No such file or directory

[conuser3@server20 ~]$ journalctl --user -u rootless-containercon3.service
Hint: You are currently not seeing messages from the system.
      Users in the 'systemd-journal' group can see all messages. Pass -q to turn off this notice.
No journal files were opened due to insufficient permissions.
[conuser3@server20 ~]$
[conuser3@server20 ~]$ journalctl --user --user-unit=rootless-containercon3.service
Hint: You are currently not seeing messages from the system.
      Users in the 'systemd-journal' group can see all messages. Pass -q to turn off this notice.
No journal files were opened due to insufficient permissions.

Hello again :-),

Please, can you check the following commands?

$ id
uid=1000(gabi) gid=1000(gabi) groups=1000(gabi),10(wheel),190(systemd-journal) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

I realized that previously I was not able to run the journalctl command either, because I was not in the systemd-journal group. So I did:

$ sudo usermod -a -G systemd-journal gabi

Thank you in advance.

Hello :) ....

Thanks for the info about the journalctl command.

[conuser4@server20 ~]$ systemctl --user daemon-reload

Failed to connect to bus: No such file or directory

Because of that issue, I can't check the log of a specific service, I think.

[conuser4@server20 ~]$ journalctl --user -u rootless-container.service

No journal files were found.
-- No entries --

[conuser4@server20 ~]$ journalctl

-- Logs begin at Thu 2021-05-13 12:51:46 +0630, end at Thu 2021-05-13 14:45:40 +0630. >
May 13 12:51:46 server10.iat.com kernel: Linux version 4.18.0-240.15.1.el8_3.x86_64 (m>
May 13 12:51:46 server10.iat.com kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz>
......................skipping...


May 13 14:41:53 server20.iat.com stratisd[781]:         DmNameBuf {
May 13 14:41:53 server20.iat.com stratisd[781]:             inner: "rhel-swap",
May 13 14:41:53 server20.iat.com stratisd[781]:         }: 0,
May 13 14:41:53 server20.iat.com stratisd[781]:     },
May 13 14:41:53 server20.iat.com stratisd[781]: }
May 13 14:42:09 server20.iat.com dbus-daemon[1100]: [system] Activating via systemd: s>
May 13 14:42:09 server20.iat.com systemd[1]: Starting Hostname Service...
May 13 14:42:09 server20.iat.com dbus-daemon[1100]: [system] Successfully activated se>
May 13 14:42:09 server20.iat.com systemd[1]: Started Hostname Service.
May 13 14:44:22 server20.iat.com dbus-daemon[1100]: [system] Activating via systemd: s>
May 13 14:44:22 server20.iat.com systemd[1]: Starting Fingerprint Authentication Daemo>
May 13 14:44:22 server20.iat.com dbus-daemon[1100]: [system] Successfully activated se>
May 13 14:44:22 server20.iat.com systemd[1]: Started Fingerprint Authentication Daemon.
May 13 14:44:25 server20.iat.com su[5565]: (to conuser4) user1 on pts/0
May 13 14:44:25 server20.iat.com su[5565]: pam_systemd(su-l:session): Cannot create se>
May 13 14:44:25 server20.iat.com su[5565]: pam_unix(su-l:session): session opened for >
May 13 14:44:30 server20.iat.com dbus-daemon[5621]: Cannot setup inotify for '/root/.l>
May 13 14:44:33 server20.iat.com dbus-daemon[5643]: Cannot setup inotify for '/root/.l>
May 13 14:45:12 server20.iat.com dbus-daemon[5682]: Cannot setup inotify for '/root/.l>
May 13 14:45:13 server20.iat.com dbus-daemon[5710]: Cannot setup inotify for '/root/.l>
May 13 14:45:14 server20.iat.com dbus-daemon[5739]: Cannot setup inotify for '/root/.l>
May 13 14:45:21 server20.iat.com dbus-daemon[5760]: Cannot setup inotify for '/root/.l>
May 13 14:45:36 server20.iat.com dbus-daemon[5789]: Cannot setup inotify for '/root/.l>
May 13 14:45:40 server20.iat.com dbus-daemon[5810]: Cannot setup inotify for '/root/.l>
lines 8149-8172/8172 (END)

Hello Zar Zar :),

The issue might be connected with the XDG_RUNTIME_DIR environment variable not being set properly, which means you cannot access the user D-Bus. Also, pam_systemd has to be configured to register user sessions in the systemd login manager.

Here is what I found on our Customer Portal:

  • 1) https://access.redhat.com/solutions/4661741

    • I think you can try Solution 2 -> setting the XDG_RUNTIME_DIR environment variable
  • 2) https://access.redhat.com/solutions/4720161

    • check that pam_systemd is properly configured in /etc/pam.d/system-auth
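
A quick way to check both points is sketched below (the pam_systemd line shown is only an example of what a correct configuration typically contains):

$ echo $XDG_RUNTIME_DIR                       # expected: /run/user/<your UID>; empty means it is not set
$ ls -d /run/user/$(id -u)                    # the runtime directory should exist
$ export XDG_RUNTIME_DIR=/run/user/$(id -u)   # temporary fix for the current shell (Solution 2 above)

$ grep pam_systemd /etc/pam.d/system-auth
session     optional      pam_systemd.so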

Please let us know if it helps. Thank you and have a nice day!

Hello Gabriela,

Thanks for your kind support.

Yes, I'm ok now. :-) Thanks a lot.

Following your reference link, https://access.redhat.com/solutions/4661741, I tried both solutions, but it still didn't fix my issue. Then I tried one thing from Igor Sarychev's comment: "In most cases it's enough to do loginctl enable-linger USER and then export XDG_RUNTIME_DIR=/run/user/$(id -u)".

[conuser4@server20 ~]$ loginctl enable-linger conuser4
[conuser4@server20 ~]$ export XDG_RUNTIME_DIR=/run/user/$(id -u)
[conuser4@server20 ~]$ systemctl --user daemon-reload
[conuser4@server20 ~]$ systemctl --user enable --now root-less-container-may19.service
Created symlink /home/conuser4/.config/systemd/user/multi-user.target.wants/root-less-container-may19.service → /home/conuser4/.config/systemd/user/root-less-container-may19.service.
Created symlink /home/conuser4/.config/systemd/user/default.target.wants/root-less-container-may19.service → /home/conuser4/.config/systemd/user/root-less-container-may19.service.

[conuser4@server20 ~]$ systemctl --user status root-less-container-may19.service
● root-less-container-may19.service - Podman container-root-less-container-may19.servi>
   Loaded: loaded (/home/conuser4/.config/systemd/user/root-less-container-may19.servi>
   Active: active (running) since Wed 2021-05-19 11:25:55 +0630; 20s ago
     Docs: man:podman-generate-systemd(1)
  Process: 11888 ExecStart=/usr/bin/podman run --conmon-pidfile /run/user/3308/contain>
  Process: 11886 ExecStartPre=/bin/rm -f /run/user/3308/container-root-less-container->
 Main PID: 11914 (conmon)
   CGroup: /user.slice/user-3308.slice/user@3308.service/root-less-container-may19.ser>
           ├─11909 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-s>
           ├─11911 /usr/bin/fuse-overlayfs -o lowerdir=/home/conuser4/.local/share/con>
           ├─11914 /usr/bin/conmon --api-version 1 -c babeb1bc892302983f6936a66a469c25>
           └─babeb1bc892302983f6936a66a469c25d1edfa418e56c20a16e231fc0df8988c
             └─11922 /bin/bash

May 19 11:25:54 server20.iat.com systemd[11766]: Starting Podman container-root-less-c>
May 19 11:25:55 server20.iat.com systemd[11766]: Started Podman container-root-less-co>
[conuser4@server20 ~]$
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED         STATUS             PORTS   NAMES
babeb1bc8923  registry.access.redhat.com/ubi8:latest  /bin/bash  29 seconds ago  Up 29 seconds ago          root-less-container-may19

[conuser4@server20 ~]$ loginctl enable-linger
[conuser4@server20 ~]$ loginctl show-user conuser4 | grep -i linger
State=lingering
Linger=yes

[conuser4@server20 ~]$ systemctl --user restart root-less-container-may19.service
[conuser4@server20 ~]$ systemctl --user status root-less-container-may19.service
● root-less-container-may19.service - Podman container-root-less-container-may19.servi>
   Loaded: loaded (/home/conuser4/.config/systemd/user/root-less-container-may19.servi>
   Active: active (running) since Wed 2021-05-19 11:29:43 +0630; 7s ago
     Docs: man:podman-generate-systemd(1)

Hello Z Z, Not at all, you are welcome :-). I am happy that you finally resolved this issue!

Hello again, @Gabriela,

After reboot, when I check the podman processes, it's OK: one container is running. However, I can't check the status of the service or restart it. :)

[user1@server20 ~]$ su - conuser4
Password:
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED             STATUS                 PORTS   NAMES
a7677e498b27  registry.access.redhat.com/ubi8:latest  /bin/bash  About a minute ago  Up About a minute ago          root-less-container-may19
[conuser4@server20 ~]$ systemctl --user status root-less-container-may19
Failed to connect to bus: No such file or directory
[conuser4@server20 ~]$

Hello again Z Z :-),

Please, try to login using this command:

$ su --login

This should set up the correct login sessions. Let us know if that works. Thanks!

Hello Gabriela,

Thanks. :D

The result is the same. A new container still runs automatically when the system boots. However, the service can't be started or checked; it fails with the error "Failed to connect to bus: No such file or directory".

[user1@server20 ~]$ su - conuser4
Password:
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED             STATUS                 PORTS   NAMES
c681a6fa3d30  registry.access.redhat.com/ubi8:latest  /bin/bash  About a minute ago  Up About a minute ago          root-less-container-may19
[conuser4@server20 ~]$ systemctl --user status root-less-container-may19
Failed to connect to bus: No such file or directory


[conuser4@server20 ~]$ su --login conuser4
Password:
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED        STATUS            PORTS   NAMES
c681a6fa3d30  registry.access.redhat.com/ubi8:latest  /bin/bash  3 minutes ago  Up 3 minutes ago          root-less-container-may19
[conuser4@server20 ~]$ systemctl --user status root-less-container-may19
Failed to connect to bus: No such file or directory

journalctl result==>
May 19 22:27:16 server20.iat.com su[3961]: (to conuser4) user1 on pts/0
May 19 22:27:16 server20.iat.com su[3961]: pam_systemd(su-l:session): Cannot create se>
May 19 22:27:16 server20.iat.com su[3961]: pam_unix(su-l:session): session opened for >
May 19 22:27:20 server20.iat.com systemd[1788]: Started podman-4007.scope.
May 19 22:27:20 server20.iat.com systemd[1788]: podman-4007.scope: Succeeded.
May 19 22:27:42 server20.iat.com systemd[1]: fprintd.service: Succeeded.
May 19 22:27:51 server20.iat.com packagekitd[2730]: Skipping refresh of media: Cannot >
May 19 22:27:52 server20.iat.com PackageKit[2730]: search-file transaction /1407_ccbad>
lines 1872-1898/1898 (END)

Hello Z Z,

Well, not at all :-D. Please, can you send me the output of the env command?
I would like to check especially the XDG_RUNTIME_DIR and DBUS_SESSION_BUS_ADDRESS environment variables.

Thank you in advance.

Hello Gabriela,

Thanks for all your help so far. :D

Before adding and exporting XDG_RUNTIME_DIR=/run/user/$(id -u) in the ~/.bashrc file, it was empty.

[user1@server20 ~]$ su - conuser4
Password:
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED             STATUS                 PORTS   NAMES
ac78dbfe5e80  registry.access.redhat.com/ubi8:latest  /bin/bash  About a minute ago  Up About a minute ago          root-less-container-may19
[conuser4@server20 ~]$ echo $XDG_RUNTIME_DIR

[conuser4@server20 ~]$ echo $DBUS_SESSION_BUS_ADDRESS

[conuser4@server20 ~]$ systemctl --user status root-less-container-may19
Failed to connect to bus: No such file or directory
[conuser4@server20 ~]$

After that, I added and exported it there and rebooted. Now it's OK!!! :D

[root@server20 ~]# su - conuser4
[conuser4@server20 ~]$ podman ps
CONTAINER ID  IMAGE                                   COMMAND    CREATED             STATUS                 PORTS   NAMES
72776fa4d6f1  registry.access.redhat.com/ubi8:latest  /bin/bash  About a minute ago  Up About a minute ago          root-less-container-may19
[conuser4@server20 ~]$ echo $XDG_RUNTIME_DIR
/run/user/3308
[conuser4@server20 ~]$ echo $DBUS_SESSION_BUS_ADDRESS

[conuser4@server20 ~]$ systemctl --user status root-less-container-may19
● root-less-container-may19.service - Podman container-root-less-container-may19.servi>
   Loaded: loaded (/home/conuser4/.config/systemd/user/root-less-container-may19.servi>
   Active: active (running) since Thu 2021-05-20 09:59:43 +0630; 4min 47s ago
     Docs: man:podman-generate-systemd(1)
  Process: 2067 ExecStart=/usr/bin/podman run --conmon-pidfile /run/user/3308/containe>
  Process: 2057 ExecStartPre=/bin/rm -f /run/user/3308/container-root-less-container-m>
 Main PID: 2252 (conmon)
   CGroup: /user.slice/user-3308.slice/user@3308.service/root-less-container-may19.ser>
           ├─2229 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sa>
           ├─2232 /usr/bin/fuse-overlayfs -o lowerdir=/home/conuser4/.local/share/cont>
           ├─2252 /usr/bin/conmon --api-version 1 -c 72776fa4d6f1eba766732c0fe83762363>
           └─72776fa4d6f1eba766732c0fe8376236314d5dc129521ca7e6f4acebae8bb8c1
             └─2298 /bin/bash

May 20 09:59:38 server20.iat.com systemd[1747]: Starting Podman container-root-less-co>
May 20 09:59:43 server20.iat.com podman[2067]: 72776fa4d6f1eba766732c0fe8376236314d5dc>
May 20 09:59:43 server20.iat.com systemd[1747]: Started Podman container-root-less-con>
[conuser4@server20 ~]$ tail -n 2 .bashrc
XDG_RUNTIME_DIR=/run/user/$(id -u)
export XDG_RUNTIME_DIR
[conuser4@server20 ~]$ tail -n 1 /etc/bashrc
# XDG_RUNTIME_DIR=/run/user/$(id -u)
[conuser4@server20 ~]$

Thanks a lot for your kind reminder and continuous support.

Hello ZZ, Awesome! I am glad that it is finally solved! :-)

To conclude this topic:

  • Problem: If you are getting this error when starting your service:
$ systemctl --user enable --now file.service
Failed to connect to bus: No such file or directory 
  • Solution:
    • You can set and export the XDG_RUNTIME_DIR and DBUS_SESSION_BUS_ADDRESS environment variables in your .bashrc or .profile file.
    • Then reboot or load the changes using the source command (e.g. source .profile or source .bashrc).
$ vim $HOME/.profile
XDG_RUNTIME_DIR=/run/user/$(id -u)
DBUS_SESSION_BUS_ADDRESS=unix:path=${XDG_RUNTIME_DIR}/bus
export DBUS_SESSION_BUS_ADDRESS 
export XDG_RUNTIME_DIR

More information about environment variables can be found in https://www.redhat.com/sysadmin/linux-environment-variables.

After the RHEL 8.5 upgrade, with podman 3.4.2, the problem is back and this time lingering is not solving it. Does anyone have the same problem?

Hello Ismail,

Please, can you send us the commands you used?