Chapter 5. Working with containers

Containers represent a running or stopped process created from the files located in a decompressed container image. You can use the Podman tool to work with containers.

5.1. Podman run command

The podman run command runs a process in a new container based on the container image. If the container image is not already loaded, podman run pulls the image and all image dependencies from the repository, in the same way as running podman pull image, before it starts the container from that image. The container process has its own file system, its own networking, and its own isolated process tree.

The podman run command has the form:

podman run [options] image [command [arg ...]]

Basic options are:

  • --detach (-d): Runs the container in the background and prints the new container ID.
  • --attach (-a): Runs the container in foreground mode.
  • --name (-n): Assigns a name to the container. If a name is not assigned with --name, Podman generates a random string name. This works for both background and foreground containers.
  • --rm: Automatically removes the container when it exits. Note that the container is not removed if it could not be created or started successfully.
  • --tty (-t): Allocates and attaches the pseudo-terminal to the standard input of the container.
  • --interactive (-i): For interactive processes, use -i and -t together to allocate a terminal for the container process. The -i -t is often written as -it.
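
For example, a minimal sketch that combines several of these options (the container name myshell is illustrative):

$ podman run --rm -it --name=myshell registry.access.redhat.com/ubi8/ubi /bin/bash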

5.2. Running commands in a container from the host

Use the podman run command to display the type of operating system of the container.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Display the type of operating system of the container based on the registry.access.redhat.com/ubi8/ubi container image using the cat /etc/os-release command:

    $ podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release
    NAME="Red Hat Enterprise Linux"
    ...
    ID="rhel"
    ...
    HOME_URL="https://www.redhat.com/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    
    REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
    ...
  2. Optional: List all containers.

    $ podman ps
    CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES

    Because of the --rm option, the container was removed when it exited, so it is not listed.

Additional resources

  • podman-run man page

5.3. Running commands inside the container

Use the podman run command to run a container interactively.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Run the container named myubi based on the registry.access.redhat.com/ubi8/ubi image:

    $ podman run --name=myubi -it registry.access.redhat.com/ubi8/ubi /bin/bash
    [root@6ccffd0f6421 /]#
    • The -i option creates an interactive session. Without the -t option, the shell stays open, but you cannot type anything to the shell.
    • The -t option opens a terminal session. Without the -i option, the shell opens and then exits.
  2. Install the procps-ng package containing a set of system utilities (for example ps, top, uptime, and so on):

    [root@6ccffd0f6421 /]# yum install procps-ng
  3. Use the ps -ef command to list current processes:

    # ps -ef
    UID          PID    PPID  C STIME TTY          TIME CMD
    root           1       0  0 12:55 pts/0    00:00:00 /bin/bash
    root          31       1  0 13:07 pts/0    00:00:00 ps -ef
  4. Enter exit to exit the container and return to the host:

    # exit
  5. Optional: List all containers:

    $ podman ps
    CONTAINER ID  IMAGE                               COMMAND    CREATED         STATUS                     PORTS   NAMES
    1984555a2c27  registry.access.redhat.com/ubi8/ubi:latest  /bin/bash  21 minutes ago  Exited (0) 21 minutes ago          myubi

    You can see that the container is in Exited status.

Additional resources

  • podman-run man page

5.4. Listing containers

Use the podman ps command to list the running containers on the system.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Run the container based on the registry.redhat.io/rhel8/rsyslog image:

    $ podman run -d registry.redhat.io/rhel8/rsyslog
  2. List all containers:

    • To list all running containers:

      $ podman ps
      CONTAINER ID IMAGE              COMMAND         CREATED       STATUS            PORTS NAMES
      74b1da000a11 rhel8/rsyslog /bin/rsyslog.sh 2 minutes ago Up About a minute       musing_brown
    • To list all containers, running or stopped:

      $ podman ps -a
      CONTAINER ID IMAGE         COMMAND    CREATED    STATUS                PORTS NAMES     IS INFRA
      d65aecc325a4 ubi8/ubi      /bin/bash  3 secs ago Exited (0) 5 secs ago peaceful_hopper false
      74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute     musing_brown    false

Containers that are not running, but were not removed (for example, with the --rm option), are still present on the system and can be restarted.
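
For example, you can start the exited peaceful_hopper container from the listing above again by name (a minimal sketch):

$ podman start peaceful_hopper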

Additional resources

  • podman-ps man page

5.5. Starting containers

If you run a container and then stop it without removing it, the container is stored on your local system, ready to run again. You can use the podman start command to run the container again. You can specify containers by their container ID or name.

Prerequisites

  • The container-tools module is installed.
  • At least one container has been stopped.

Procedure

  1. Start the myubi container:

    • In non-interactive mode:

      $ podman start myubi

      Alternatively, you can use podman start 1984555a2c27.

    • In interactive mode, use the -a (--attach) and -i (--interactive) options to work with the container bash shell:

      $ podman start -a -i myubi

      Alternatively, you can use podman start -a -i 1984555a2c27.

  2. Enter exit to exit the container and return to the host:

    [root@6ccffd0f6421 /]# exit

Additional resources

  • podman-start man page

5.6. Inspecting containers from the host

Use the podman inspect command to inspect the metadata of an existing container in a JSON format. You can specify the containers by their container ID or name.

Prerequisites

  • The container-tools module is installed.

Procedure

  • Inspect the container defined by ID 64ad95327c74:

    • To get all metadata:

      $ podman inspect 64ad95327c74
      [
          {
              "Id": "64ad95327c740ad9de468d551c50b6d906344027a0e645927256cd061049f681",
              "Created": "2021-03-02T11:23:54.591685515+01:00",
              "Path": "/bin/rsyslog.sh",
              "Args": [
                  "/bin/rsyslog.sh"
              ],
              "State": {
                  "OciVersion": "1.0.2-dev",
                  "Status": "running",
                  ...
    • To get particular items from the JSON file, for example, the StartedAt timestamp:

      $ podman inspect --format='{{.State.StartedAt}}' 64ad95327c74
      2021-03-02 11:23:54.945071961 +0100 CET

      The information is stored in a hierarchy. To see the container StartedAt timestamp (StartedAt is under State), use the --format option and the container ID or name.

Examples of other items you might want to inspect include:

  • .Path to see the command run with the container
  • .Args arguments to the command
  • .Config.ExposedPorts TCP or UDP ports exposed from the container
  • .State.Pid to see the process id of the container
  • .HostConfig.PortBindings port mapping from container to host
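
For example, a minimal sketch that queries the container process ID, using the same container ID as above (the PID value shown is illustrative):

$ podman inspect --format='{{.State.Pid}}' 64ad95327c74
80542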

Additional resources

  • podman-inspect man page

5.7. Mounting directory on localhost to the container

You can make log messages from inside a container available to the host system by mounting the host /dev/log device inside the container.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Run the container named log_test and mount the host /dev/log device inside the container:

    # podman run --name="log_test" -v /dev/log:/dev/log --rm \
      registry.redhat.io/ubi8/ubi logger "Testing logging to the host"
  2. Use the journalctl utility to display logs:

    # journalctl -b | grep Testing
    Dec 09 16:55:00 localhost.localdomain root[14634]: Testing logging to the host

    The --rm option removes the container when it exits.

Additional resources

  • podman-run man page

5.8. Mounting a container filesystem

Use the podman mount command to mount a working container root filesystem in a location accessible from the host.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Run the container named mysyslog:

    # podman run -d --name=mysyslog registry.redhat.io/rhel8/rsyslog
  2. Optional: List all containers:

    # podman ps -a
    CONTAINER ID  IMAGE                                    COMMAND          CREATED         STATUS                     PORTS   NAMES
    c56ef6a256f8  registry.redhat.io/rhel8/rsyslog:latest  /bin/rsyslog.sh  20 minutes ago  Up 20 minutes ago                  mysyslog
  3. Mount the mysyslog container:

    # podman mount mysyslog
    /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged
  4. Display the content of the mount point using the ls command:

    # ls /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged
    bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
  5. Display the OS version:

    # cat /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged/etc/os-release
    NAME="Red Hat Enterprise Linux"
    VERSION="8 (Ootpa)"
    ID="rhel"
    ID_LIKE="fedora"
    ...

Additional resources

  • podman-mount man page

5.9. Running a service as a daemon with a static IP

The following example runs the rsyslog service as a daemon process in the background. The --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). After that, you can run the podman inspect command to check that you set the IP address properly.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Set the container network interface to the IP address 10.88.0.44:

    # podman run -d --ip=10.88.0.44 registry.access.redhat.com/rhel8/rsyslog
    efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85
  2. Check that the IP address is set properly:

    # podman inspect efde5f0a8c723 | grep 10.88.0.44
    "IPAddress": "10.88.0.44",

Additional resources

  • podman-inspect man page
  • podman-run man page

5.10. Executing commands inside a running container

Use the podman exec command to execute a command in a running container and investigate that container. The advantage of podman exec over podman run is that you can investigate the running container without interrupting the container activity.

Prerequisites

  • The container-tools module is installed.
  • The container is running.

Procedure

  1. Execute the rpm -qa command inside the myrsyslog container to list all installed packages:

    $ podman exec -it myrsyslog rpm -qa
    tzdata-2020d-1.el8.noarch
    python3-pip-wheel-9.0.3-18.el8.noarch
    redhat-release-8.3-1.0.el8.x86_64
    filesystem-3.8-3.el8.x86_64
    ...
  2. Execute a /bin/bash command in the myrsyslog container:

    $ podman exec -it myrsyslog /bin/bash
  3. Install the procps-ng package containing a set of system utilities (for example ps, top, uptime, and so on):

    # yum install procps-ng
  4. Inspect the container:

    • To list every process on the system:

      # ps -ef
      UID          PID    PPID  C STIME TTY          TIME CMD
      root           1       0  0 10:23 ?        00:00:01 /usr/sbin/rsyslogd -n
      root           8       0  0 11:07 pts/0    00:00:00 /bin/bash
      root          47       8  0 11:13 pts/0    00:00:00 ps -ef
    • To display file system disk space usage:

      # df -h
      Filesystem      Size  Used Avail Use% Mounted on
      fuse-overlayfs   27G  7.1G   20G  27% /
      tmpfs            64M     0   64M   0% /dev
      tmpfs           269M  936K  268M   1% /etc/hosts
      shm              63M     0   63M   0% /dev/shm
      ...
    • To display system information:

      # uname -r
      4.18.0-240.10.1.el8_3.x86_64
    • To display amount of free and used memory in megabytes:

      # free --mega
                    total        used        free      shared  buff/cache   available
      Mem:           2818         615        1183          12        1020        1957
      Swap:          3124           0        3124

Additional resources

  • podman-exec man page

5.11. Sharing files between two containers

You can use volumes to persist data in containers even when a container is deleted. Volumes can be used to share data among multiple containers. A volume is a directory stored on the host machine. The volume can be shared between a container and the host.

Main advantages are:

  • Volumes can be shared among the containers.
  • Volumes are easier to back up or migrate.
  • Volumes do not increase the size of the containers.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Create a volume:

    $ podman volume create hostvolume
  2. Display information about the volume:

    $ podman volume inspect hostvolume
    [
        {
            "name": "hostvolume",
            "labels": {},
            "mountpoint": "/home/username/.local/share/containers/storage/volumes/hostvolume/_data",
            "driver": "local",
            "options": {},
            "scope": "local"
        }
    ]

    Notice that it creates a volume in the volumes directory. You can save the mount point path to the variable for easier manipulation: $ mntPoint=$(podman volume inspect hostvolume --format {{.Mountpoint}}).

    Notice that if you run sudo podman volume create hostvolume, then the mount point changes to /var/lib/containers/storage/volumes/hostvolume/_data.

  3. Create a text file inside the directory using the path that is stored in the mntPoint variable:

    $ echo "Hello from host" >> $mntPoint/host.txt
  4. List all files in the directory defined by the mntPoint variable:

    $ ls $mntPoint/
    host.txt
  5. Run the container named myubi1 and map the directory defined by the hostvolume volume name on the host to the /containervolume1 directory on the container:

    $ podman run -it --name myubi1 -v hostvolume:/containervolume1 registry.access.redhat.com/ubi8/ubi /bin/bash

    Note that if you use the volume path defined by the mntPoint variable (-v $mntPoint:/containervolume1), data can be lost when running the podman volume prune command, which removes unused volumes. Always use -v hostvolume_name:/containervolume_name.

  6. List the files in the shared volume on the container:

    # ls /containervolume1
    host.txt

    You can see the host.txt file which you created on the host.

  7. Create a text file inside the /containervolume1 directory:

    # echo "Hello from container 1" >> /containervolume1/container1.txt
  8. Detach from the container with CTRL+p and CTRL+q.
  9. List the files in the shared volume on the host. You should see two files:

    $ ls $mntPoint
    container1.txt  host.txt

    At this point, you are sharing files between the container and host. To share files between two containers, run another container named myubi2.

  10. Run the container named myubi2 and map the directory defined by the hostvolume volume name on the host to the /containervolume2 directory on the container:

    $ podman run -it --name myubi2 -v hostvolume:/containervolume2 registry.access.redhat.com/ubi8/ubi /bin/bash
  11. List the files in the shared volume on the container:

    # ls /containervolume2
    container1.txt host.txt

    You can see the host.txt file which you created on the host and container1.txt which you created inside the myubi1 container.

  12. Create a text file inside the /containervolume2 directory:

    # echo "Hello from container 2" >> /containervolume2/container2.txt
  13. Detach from the container with CTRL+p and CTRL+q.
  14. List the files in the shared volume on the host. You should see three files:

    $ ls $mntPoint
    container1.txt  container2.txt  host.txt

Additional resources

  • podman-volume man page

5.12. Exporting and importing containers

You can use the podman export command to export the file system of a running container to a tarball on your local machine. For example, if you have a large container that you use infrequently, or one that you want to save a snapshot of so that you can revert to it later, you can use the podman export command to export a current snapshot of your running container into a tarball.

You can use the podman import command to import a tarball and save it as a filesystem image. Then you can run this filesystem image or you can use it as a layer for other images.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Run the myubi container based on the registry.access.redhat.com/ubi8/ubi image:

    $ podman run -dt --name=myubi registry.access.redhat.com/ubi8/ubi
  2. Optional: List all containers:

    $ podman ps -a
    CONTAINER ID  IMAGE                                    COMMAND          CREATED     STATUS         PORTS   NAMES
    a6a6d4896142  registry.access.redhat.com/ubi8/ubi:latest  /bin/bash  7 seconds ago  Up 7 seconds ago          myubi
  3. Attach to the myubi container:

    $ podman attach myubi
  4. Create a file named testfile:

    [root@a6a6d4896142 /]# echo "hello" > testfile
  5. Detach from the container with CTRL+p and CTRL+q.
  6. Export the file system of the myubi container as myubi-container.tar on the local machine:

    $ podman export -o myubi-container.tar a6a6d4896142
  7. Optional: List the current directory content:

    $ ls -l
    -rw-r--r--. 1 user user 210885120 Apr  6 10:50 myubi-container.tar
    ...
  8. Optional: Create a myubi-container directory and extract all files from the myubi-container.tar archive. List the content of the myubi-container directory in a tree-like format:

    $ mkdir myubi-container
    $ tar -xf myubi-container.tar -C myubi-container
    $ tree -L 1 myubi-container
    ├── bin -> usr/bin
    ├── boot
    ├── dev
    ├── etc
    ├── home
    ├── lib -> usr/lib
    ├── lib64 -> usr/lib64
    ├── lost+found
    ├── media
    ├── mnt
    ├── opt
    ├── proc
    ├── root
    ├── run
    ├── sbin -> usr/sbin
    ├── srv
    ├── sys
    ├── testfile
    ├── tmp
    ├── usr
    └── var
    
    20 directories, 1 file

    You can see that the myubi-container.tar contains the container file system.

  9. Import the myubi-container.tar archive and save it as a filesystem image:

    $ podman import myubi-container.tar myubi-imported
    Getting image source signatures
    Copying blob 277cab30fe96 done
    Copying config c296689a17 done
    Writing manifest to image destination
    Storing signatures
    c296689a17da2f33bf9d16071911636d7ce4d63f329741db679c3f41537e7cbf
  10. List all images:

    $ podman images
    REPOSITORY                              TAG     IMAGE ID      CREATED         SIZE
    docker.io/library/myubi-imported       latest  c296689a17da  51 seconds ago  211 MB
  11. Display the content of the testfile file:

    $ podman run -it --name=myubi-imported docker.io/library/myubi-imported cat testfile
    hello

Additional resources

  • podman-export man page
  • podman-import man page

5.13. Stopping containers

Use the podman stop command to stop a running container. You can specify the containers by their container ID or name.

Prerequisites

  • The container-tools module is installed.
  • At least one container is running.

Procedure

  • Stop the myubi container:

    • Using the container name:

      $ podman stop myubi
    • Using the container ID:

      $ podman stop 1984555a2c27

To stop a running container that is attached to a terminal session, you can enter the exit command inside the container.

The podman stop command sends a SIGTERM signal to terminate a running container. If the container does not stop after a defined period (10 seconds by default), Podman sends a SIGKILL signal.
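
For example, a minimal sketch that lengthens that grace period with the --time (-t) option of podman stop (the 30-second value is illustrative):

$ podman stop --time=30 myubi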

You can also use the podman kill command to kill a container (SIGKILL) or send a different signal to a container. Here is an example of sending a SIGHUP signal to a container (if supported by the application, a SIGHUP causes the application to re-read its configuration files):

# podman kill --signal="SIGHUP" 74b1da000a11
74b1da000a114015886c557deec8bed9dfb80c888097aa83f30ca4074ff55fb2

Additional resources

  • podman-stop man page
  • podman-kill man page

5.14. Removing containers

Use the podman rm command to remove containers. You can specify containers with the container ID or name.

Prerequisites

  • The container-tools module is installed.
  • At least one container has been stopped.

Procedure

  1. List all containers, running or stopped:

    $ podman ps -a
    CONTAINER ID IMAGE         COMMAND    CREATED    STATUS                PORTS NAMES     IS INFRA
    d65aecc325a4 ubi8/ubi      /bin/bash  3 secs ago Exited (0) 5 secs ago peaceful_hopper false
    74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute     musing_brown    false
  2. Remove the containers:

    • To remove the peaceful_hopper container:

      $ podman rm peaceful_hopper

      Notice that the peaceful_hopper container was in Exited status, which means that it was stopped and can be removed immediately.

    • To remove the musing_brown container, first stop the container and then remove it:

      $ podman stop musing_brown
      $ podman rm musing_brown

Note

  • To remove multiple containers:

    $ podman rm clever_yonath furious_shockley
  • To remove all containers from your local system:

    $ podman rm -a

Additional resources

  • podman-rm man page

5.15. Creating SELinux policies for containers

To generate SELinux policies for containers, use the udica tool. For more information, see Introduction to the udica SELinux policy generator.

5.16. Configuring pre-execution hooks in Podman

You can create plugin scripts to define fine-grained control over container operations, especially to block unauthorized actions, for example pulling, running, or listing container images.

Note

The /etc/containers/podman_preexec_hooks.txt file must be created by an administrator and can be empty. If /etc/containers/podman_preexec_hooks.txt does not exist, the plugin scripts are not executed.

The following rules apply to the plugin scripts:

  • They have to be root-owned and not writable.
  • They have to be located in the /usr/libexec/podman/pre-exec-hooks and /etc/containers/pre-exec-hooks directories.
  • They execute sequentially, in alphanumeric order.
  • If all plugin scripts return a zero value, the podman command is executed.
  • If any of the plugin scripts return a non-zero value, it indicates a failure. The podman command exits and returns the non-zero value of the first failed script.
  • Red Hat recommends using the following naming convention to execute the scripts in the correct order: DDD_name.lang, where:

    • The DDD is the decimal number indicating the order of script execution. Use one or two leading zeros if necessary.
    • The name is the name of the plugin script.
    • The lang (optional) is the file extension for the given programming language. For example, the name of the plugin script can be: 001-check-groups.sh.

Note

The plugin scripts take effect from the time of their creation. Containers created before the plugin scripts were created are not affected.

Prerequisites

  • The container-tools module is installed.

Procedure

  • Create the script plugin named 001-check-groups.sh. For example:

    #!/bin/bash
    if id -nG "$USER" 2> /dev/null | grep -qw "$GROUP" 2> /dev/null ; then
        exit 0
    else
        exit 1
    fi
    • The script checks whether a user is in a specified group.
    • The USER and GROUP environment variables are set by Podman.
    • The exit code of the 001-check-groups.sh script is passed to the podman binary.
    • The podman command exits and returns the non-zero value of the first failed script. An example of installing the script follows this list.
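
    A minimal sketch of installing the script, using the /etc/containers/pre-exec-hooks directory and the /etc/containers/podman_preexec_hooks.txt file described above (the 0555 mode is one illustrative way to keep the script root-owned and not writable):

    # mkdir -p /etc/containers/pre-exec-hooks
    # cp 001-check-groups.sh /etc/containers/pre-exec-hooks/
    # chown root:root /etc/containers/pre-exec-hooks/001-check-groups.sh
    # chmod 0555 /etc/containers/pre-exec-hooks/001-check-groups.sh
    # touch /etc/containers/podman_preexec_hooks.txt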

Verification

  • Check if the 001-check-groups.sh script works correctly:

    $ podman run image
    ...

    If the user is not in the correct group, the following error appears:

    external preexec hook /etc/containers/pre-exec-hooks/001-check-groups.sh failed