Chapter 11. Using the container-tools CLI

11.1. podman

The podman command (which stands for Pod Manager) lets you run containers as standalone entities, without requiring that Kubernetes, the Docker runtime, or any other container runtime be involved. It is a tool that can act as a replacement for the docker command, implementing the same command-line syntax while adding even more container management features. The podman features include:

  • Based on the Docker interface: Because podman syntax mirrors the docker command, transitioning to podman should be easy for those familiar with docker.
  • Managing containers and images: Both Docker- and OCI-compatible container images can be used with podman to:

    • Run, stop and restart containers
    • Create and manage container images (push, commit, configure, build, and so on)
  • Managing pods: Besides running individual containers, podman can run a set of containers grouped in a pod. A pod is the smallest container unit that Kubernetes manages (see the example after this list).
  • Working with no runtime: podman does not need a separate runtime daemon running in the background to work with containers.
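
For example, a minimal sketch of grouping containers in a pod (the pod name, image, and command shown are illustrative):

$ podman pod create --name mypod
$ podman run -d --pod mypod registry.access.redhat.com/ubi8/ubi sleep 1000
$ podman pod ps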

Here are a few implementation features of Podman you should know about:

  • By default, Podman, Buildah, and the CRI-O container engine all use the same back-end store directory, /var/lib/containers, instead of the Docker storage location (/var/lib/docker).
  • Although Podman, Buildah, and CRI-O share the same storage directory, they cannot interact with each other’s containers. Those tools can share images, however. Eventually those tools will be able to share containers.
  • The podman command, like the docker command, can build container images from a Dockerfile.
  • The podman command can be a useful troubleshooting tool when the CRI-O service is unavailable.
  • Options to the docker command that are not supported by podman include network, node, plugin (podman does not support plugins), rename (use rm and create to rename containers with podman), secret, service, stack, and swarm (podman does not support Docker Swarm). The docker container and image commands group subcommands that are available directly in podman.
  • To interact programmatically with podman, you can use the Podman v2.0 RESTful API, which works in both rootful and rootless environments. For more information, see the chapter Using the container tools API. A minimal socket-activation sketch follows this list.
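
The following sketch enables the API socket through systemd socket activation, assuming the podman.socket unit shipped with the podman package (see the linked chapter for the full setup):

# systemctl enable --now podman.socket            (rootful)
$ systemctl --user enable --now podman.socket     (rootless)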

11.1.1. Using podman commands

If you are used to using the docker command to work with containers, you will find most of the features and options match those of podman. Table 11.1 shows a list of commands you can use with podman (type podman -h to see this list):

Table 11.1. Commands supported by podman

podman command     Description
attach             Attach to a running container
commit             Create a new image from a changed container
build              Build an image using Dockerfile instructions
create             Create, but do not start, a container
diff               Inspect changes on a container's filesystem
exec               Run a process in a running container
export             Export a container's filesystem contents as a tar archive
help, h            Show a list of commands or help for one command
history            Show the history of a specified image
images             List images in local storage
import             Import a tarball to create a filesystem image
info               Display system information
inspect            Display the configuration of a container or image
kill               Send a specific signal to one or more running containers
load               Load an image from an archive
login              Log in to a container registry
logout             Log out of a container registry
logs               Fetch the logs of a container
mount              Mount a working container's root filesystem
pause              Pause all the processes in one or more containers
ps                 List containers
port               List port mappings or a specific mapping for the container
pull               Pull an image from a registry
push               Push an image to a specified destination
restart            Restart one or more containers
rm                 Remove one or more containers from the host (add -f if running)
rmi                Remove one or more images from local storage
run                Run a command in a new container
save               Save an image to an archive
search             Search a registry for an image
start              Start one or more containers
stats              Display the percentage of CPU, memory, network I/O, block I/O, and PIDs for one or more containers
stop               Stop one or more containers
tag                Add an additional name to a local image
top                Display the running processes of a container
umount, unmount    Unmount a working container's root filesystem
unpause            Unpause the processes in one or more containers
version            Display podman version information
wait               Block on one or more containers
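
For example, a minimal sketch of a typical image and container lifecycle using a few of these commands (the image and container names are illustrative):

$ podman pull registry.access.redhat.com/ubi8/ubi
$ podman run -d --name myubi registry.access.redhat.com/ubi8/ubi sleep 1000
$ podman ps
$ podman stop myubi
$ podman rm myubi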

11.1.2. Creating SELinux policies for containers

To generate SELinux policies for containers, use the udica tool. For more information, see Introduction to the udica SELinux policy generator.
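
For example, a brief sketch of the typical udica workflow, assuming a policy named my_container (the exact list of template modules that udica asks you to install may differ):

# podman inspect -l | udica my_container
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

The container can then be run with the generated type, for example with --security-opt label=type:my_container.process. See the linked documentation for the full procedure.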

11.1.3. Using podman with MPI

You can use Podman with Open MPI (Message Passing Interface) to run containers in a High Performance Computing (HPC) environment.

The example is based on the ring.c program taken from Open MPI. In this example, a value is passed around by all processes in a ring-like fashion. Each time the message passes rank 0, the value is decremented. When each process receives the 0 message, it passes it on to the next process and then quits. By passing the 0 on before quitting, every process gets the 0 message and can quit normally.

Procedure

  1. Install Open MPI:

    $ sudo yum install openmpi
  2. To activate the environment modules, type:

    $ . /etc/profile.d/modules.sh
  3. Load the mpi/openmpi-x86_64 module:

    $ module load mpi/openmpi-x86_64

    Optionally, to automatically load mpi/openmpi-x86_64 module, add this line to the .bashrc file:

    $ echo "module load mpi/openmpi-x86_64" >> .bashrc
  4. To combine mpirun and podman, create a container with the following definition:

    $ cat Containerfile
    FROM registry.access.redhat.com/ubi8/ubi
    
    RUN yum -y install openmpi-devel wget && \
        yum clean all
    
    RUN wget https://raw.githubusercontent.com/open-mpi/ompi/master/test/simple/ring.c && \
        /usr/lib64/openmpi/bin/mpicc ring.c -o /home/ring && \
        rm -f ring.c
  5. Build the container:

    $ podman build --tag=mpi-ring .
  6. Start the container. On a system with 4 CPUs this command starts 4 containers:

    $ mpirun \
       --mca orte_tmpdir_base /tmp/podman-mpirun \
       podman run --env-host \
         -v /tmp/podman-mpirun:/tmp/podman-mpirun \
         --userns=keep-id \
         --net=host --pid=host --ipc=host \
         mpi-ring /home/ring
    Rank 2 has cleared MPI_Init
    Rank 2 has completed ring
    Rank 2 has completed MPI_Barrier
    Rank 3 has cleared MPI_Init
    Rank 3 has completed ring
    Rank 3 has completed MPI_Barrier
    Rank 1 has cleared MPI_Init
    Rank 1 has completed ring
    Rank 1 has completed MPI_Barrier
    Rank 0 has cleared MPI_Init
    Rank 0 has completed ring
    Rank 0 has completed MPI_Barrier

    As a result, mpirun starts up 4 Podman containers, and each container runs one instance of the ring binary. All 4 processes communicate with each other over MPI.

    The following mpirun options are used to start the container:

    • The --mca orte_tmpdir_base /tmp/podman-mpirun option tells Open MPI to create all of its temporary files in /tmp/podman-mpirun and not in /tmp. Without this option, when more than one node is used, the temporary directory is named differently on each node, which would require mounting the complete /tmp directory into the container and is more complicated. A multi-node invocation is sketched after the following option list.

    The mpirun command specifies the command to start: the podman command. The following podman options are used to start the container:

    • The run command runs a container.
    • The --env-host option copies all environment variables from the host into the container.
    • The -v /tmp/podman-mpirun:/tmp/podman-mpirun option tells Podman to mount the directory where Open MPI creates its temporary directories and files, so that it is available in the container.
    • The --userns=keep-id option keeps the user ID mapping the same inside and outside the container.
    • The --net=host --pid=host --ipc=host options use the host's network, PID, and IPC namespaces.
    • mpi-ring is the name of the container image.
    • /home/ring is the MPI program in the container.
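
    For runs that span more than one node, a minimal sketch, assuming a hostfile named my_hosts that lists the nodes and assuming the mpi-ring image and the /tmp/podman-mpirun directory exist on every node:

    $ mpirun --hostfile my_hosts \
       --mca orte_tmpdir_base /tmp/podman-mpirun \
       podman run --env-host \
         -v /tmp/podman-mpirun:/tmp/podman-mpirun \
         --userns=keep-id \
         --net=host --pid=host --ipc=host \
         mpi-ring /home/ring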

For more information, see the article Podman in HPC environments by Adrian Reber.

11.1.4. Creating and restoring container checkpoints

Checkpoint/Restore In Userspace (CRIU) is software that enables you to set a checkpoint on a running container or an individual application and store its state to disk. You can use the saved data to restore the container after a reboot at the same point in time it was checkpointed.
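
At its core, the workflow uses two podman subcommands, sketched here with a placeholder container name:

# podman container checkpoint <container>
# podman container restore <container>

The following sections walk through these commands in detail.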

11.1.4.1. Creating and restoring a container checkpoint locally

This example is based on a Python-based web server that returns a single integer, which is incremented after each request.

Procedure

  1. Create a Python-based server:

    # cat counter.py
    #!/usr/bin/python3
    
    import http.server
    
    counter = 0
    
    class handler(http.server.BaseHTTPRequestHandler):
        def do_GET(s):
            global counter
            s.send_response(200)
            s.send_header('Content-type', 'text/html')
            s.end_headers()
            s.wfile.write(b'%d\n' % counter)
            counter += 1
    
    
    server = http.server.HTTPServer(('', 8080), handler)
    server.serve_forever()
  2. Create a container with the following definition:

    # cat Containerfile
    FROM registry.access.redhat.com/ubi8/ubi
    
    COPY counter.py /home/counter.py
    
    RUN useradd -ms /bin/bash counter
    
    RUN yum -y install python3 && chmod 755 /home/counter.py
    
    USER counter
    ENTRYPOINT /home/counter.py

    The container is based on the Universal Base Image (UBI 8) and uses a Python-based server.

  3. Build the container:

    # podman build . --tag counter

    Files counter.py and Containerfile are the input for the container build process (podman build). The built image is stored locally and tagged counter.

  4. Start the container as root:

    # podman run --name criu-test --detach counter
  5. To list all running containers, enter:

    # podman ps
    CONTAINER ID  IMAGE  COMMAND  CREATED   STATUS  PORTS NAMES
    e4f82fd84d48  localhost/counter:latest  5 seconds ago  Up 4 seconds ago  criu-test
  6. Display IP address of the container:

    # podman inspect criu-test --format "{{.NetworkSettings.IPAddress}}"
    10.88.0.247
  7. Send requests to the container:

    # curl 10.88.0.247:8080
    0
    # curl 10.88.0.247:8080
    1
  8. Create a checkpoint for the container:

    # podman container checkpoint criu-test
  9. Reboot the system.
  10. Restore the container:

    # podman container restore --keep criu-test
  11. Send requests to the container:

    # curl 10.88.0.247:8080
    2
    # curl 10.88.0.247:8080
    3
    # curl 10.88.0.247:8080
    4

    The result now does not start at 0 again, but continues at the previous value.

This way you can easily save the complete container state through a reboot.

For more information, see the article Adding checkpoint/restore support to Podman by Adrian Reber.

11.1.4.2. Reducing startup time using container restore

You can use container migration to reduce startup time of containers which require a certain time to initialize. Using a checkpoint, you can restore the container multiple times on the same host or on different hosts. This example is based on the container from the Creating and restoring a container checkpoint locally section.

Procedure

  1. Create a checkpoint of the container, and export the checkpoint image to a tar.gz file:

    # podman container checkpoint criu-test --export /tmp/chkpt.tar.gz
  2. Restore the container from the tar.gz file:

    # podman container restore --import /tmp/chkpt.tar.gz --name counter1
    # podman container restore --import /tmp/chkpt.tar.gz --name counter2
    # podman container restore --import /tmp/chkpt.tar.gz --name counter3

    The --name (-n) option specifies a new name for containers restored from the exported checkpoint.

  3. Display ID and name of each container:

    # podman ps -a --format "{{.ID}} {{.Names}}"
    a8b2e50d463c counter3
    faabc5c27362 counter2
    2ce648af11e5 counter1
  4. Display IP address of each container:

    # podman inspect counter1 --format "{{.NetworkSettings.IPAddress}}"
    10.88.0.248

    # podman inspect counter2 --format "{{.NetworkSettings.IPAddress}}"
    10.88.0.249

    # podman inspect counter3 --format "{{.NetworkSettings.IPAddress}}"
    10.88.0.250
  5. Send requests to each container:

    # curl 10.88.0.248:8080
    4
    # curl 10.88.0.249:8080
    4
    # curl 10.88.0.250:8080
    4

    Note that the result is 4 in all cases, because you are working with different containers restored from the same checkpoint.

Using this approach, you can quickly start up stateful replicas of the initially checkpointed container.

For more information, see the article Container migration with Podman on RHEL by Adrian Reber.

11.1.4.3. Migrating containers among systems

This procedure shows the migration of running containers from one system to another, without losing the state of the applications running in the container. This example is based on the container from the Creating and restoring a container checkpoint locally section tagged with counter.

Prerequisites

The following steps are not necessary if the container is pushed to a registry, because Podman automatically downloads a container image from a registry if it is not available locally. Because this example does not use a registry, you have to export the previously built and tagged container image (see the Creating and restoring a container checkpoint locally section) and import it on the destination system of this migration.

  • Export previously built container:

    # podman save --output counter.tar counter
  • Copy exported container image to the destination system (other_host):

    # scp counter.tar other_host:
  • Import exported container on the destination system:

    # ssh other_host podman load --input counter.tar

Now the destination system of this container migration has the same container image stored in its local container storage.

Procedure

  1. Start the container as root:

    # podman run --name criu-test --detach counter
  2. Display IP address of the container:

    # podman inspect criu-test --format "{{.NetworkSettings.IPAddress}}"
    10.88.0.247
  3. Send requests to the container:

    # curl 10.88.0.247:8080
    0
    # curl 10.88.0.247:8080
    1
  4. Create a checkpoint of the container, and export the checkpoint image to a tar.gz file:

    # podman container checkpoint criu-test --export /tmp/chkpt.tar.gz
  5. Copy the checkpoint archive to the destination host:

    # scp /tmp/chkpt.tar.gz other_host:/tmp/
  6. Restore the checkpoint on the destination host (other_host):

    # podman container restore --import /tmp/chkpt.tar.gz
  7. Send a request to the container on the destination host (other_host):

    # curl 10.88.0.247:8080
    2

As a result, the stateful container has been migrated from one system to another without losing its state.

For more information, see the article Container migration with Podman on RHEL by Adrian Reber.

11.2. runc

The "runC" container runtime is a lightweight, portable implementation of the Open Container Initiative (OCI) container runtime specification. runC unites a lot of the low-level features that make running containers possible. It shares a lot of low-level code with Docker but is not dependent on any of the components of the Docker platform.

runc supports Linux namespaces and live migration, and has portable performance profiles. It also provides full support for Linux security features such as SELinux, control groups (cgroups), seccomp, and others. You can run OCI-compatible container images with runc.

11.2.1. Running containers with runc

With runc, containers are configured using bundles. A bundle for a container is a directory that includes a specification file named config.json and a root filesystem. The root filesystem contains the contents of the container.

To create a bundle, run:

$ runc spec

This command creates a config.json file that only contains a bare-bones structure that you will need to edit. Most importantly, you will need to change the args parameter to identify the executable to run. By default, args is set to sh.

    "args": [
      "sh"
    ],
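For instance, a minimal sketch of what the edited parameter might look like if the container should run a long-lived command instead of a shell (the command shown is illustrative):

    "args": [
      "/usr/bin/sleep",
      "600"
    ],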

As an example, you can download the Red Hat Enterprise Linux base image (ubi8/ubi) using podman, export it, create a new bundle for it with runc, and edit the config.json file to point to that image. You can then create the container and run an instance of that image with runc. Use the following commands:

# podman pull registry.redhat.io/ubi8/ubi
# podman export $(podman create registry.redhat.io/ubi8/ubi) > rhel.tar
# mkdir -p rhel-runc/rootfs
# tar -C rhel-runc/rootfs -xf rhel.tar
# runc spec -b rhel-runc
# vi rhel-runc/config.json   Change any setting you like
# runc create -b rhel-runc/ rhel-container
# runc start rhel-container
sh-4.2#

In this example, the name of the container instance is rhel-container. Running that container, by default, starts a shell, so you can begin looking around and running commands from inside that container. Type exit when you are done.

The name of a container instance must be unique on the host. To start a new instance of a container:

# runc start <container_name>

You can provide the bundle directory using the -b option. By default, the value for the bundle is the current directory.

You will need root privileges to start containers with runc. To see all commands available to runc and their usage, run runc --help.
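
For example, a minimal sketch of checking on and cleaning up the container created above (the kill step is only needed while the container is still running):

# runc list
# runc kill rhel-container KILL
# runc delete rhel-container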

11.3. skopeo

With the skopeo command, you can work with container images from registries without using the docker daemon or the docker command. Registries can include the Docker Registry, your own local registries, Red Hat Quay or OpenShift registries. Activities you can do with skopeo include:

  • inspect: The output of a skopeo inspect command is similar to what you see from a docker inspect command: low-level information about the container image. That output can be in json format (default) or raw format (using the --raw option).
  • copy: With skopeo copy you can copy a container image from a registry to another registry or to a local directory.
  • layers: The skopeo layers command lets you download the layers associated with images so that they are stored as tarballs and associated manifest files in a local directory.

Like the buildah command and other tools that rely on the containers/image library, the skopeo command can work with images from container storage areas other than those associated with Docker. Available transports to other types of container storage include: containers-storage (for images stored by buildah and CRI-O), ostree (for atomic and system containers), oci (for content stored in an OCI-compliant directory), and others. See the skopeo man page for details.
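
For example, a minimal sketch of using the oci transport to copy an image from a registry into an OCI-layout directory (the destination path is illustrative):

# skopeo copy docker://registry.access.redhat.com/ubi8/ubi oci:/tmp/ubi8-oci:latest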

To try out skopeo, you could set up a local registry, then run the commands that follow to inspect, copy, and download image layers. If you want to follow along with the examples, start by doing the following:

  • Install a local registry (such as Red Hat Quay). The container registry software available in the docker-distribution package for RHEL 7 is not available for RHEL 8.
  • Pull the latest RHEL image to your local system (podman pull ubi8/ubi).
  • Retag the RHEL image and push it to your local registry as follows:

    # podman tag ubi8/ubi localhost/myubi8
    # podman push localhost/myubi8

The rest of this section describes how to inspect, copy and get layers from the RHEL image.

Note

The skopeo tool by default requires a TLS connection. It fails when trying to use an unencrypted connection. To override the default and use an http registry, prepend http: to the <registry>/<image> string.

11.3.1. Inspecting container images with skopeo

When you inspect a container image from a registry, you need to identify the container format (such as docker), the location of the registry (such as docker.io or localhost), and the repository/image (such as ubi8/ubi).

The following example inspects the mariadb container image from the Docker Registry:

# skopeo inspect docker://docker.io/library/mariadb
{
    "Name": "docker.io/library/mariadb",
    "Tag": "latest",
    "Digest": "sha256:d3f56b143b62690b400ef42e876e628eb5e488d2d0d2a35d6438a4aa841d89c4",
    "RepoTags": [
        "10.0.15",
        "10.0.16",
        "10.0.17",
        "10.0.19",
...
    "Created": "2018-06-10T01:53:48.812217692Z",
    "DockerVersion": "1.10.3",
    "Labels": {},
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
...

Assuming you pushed a container image tagged localhost/myubi8 to a container registry running on your local system, the following command inspects that image:

# skopeo inspect docker://localhost/myubi8
{
    "Name": "localhost/myubi8",
    "Tag": "latest",
    "Digest": "sha256:4e09c308a9ddf56c0ff6e321d135136eb04152456f73786a16166ce7cba7c904",
    "RepoTags": [
        "latest"
    ],
    "Created": "2018-06-16T17:27:13Z",
    "DockerVersion": "1.7.0",
    "Labels": {
        "Architecture": "x86_64",
        "Authoritative_Registry": "registry.access.redhat.com",
        "BZComponent": "rhel-server-docker",
        "Build_Host": "rcm-img01.build.eng.bos.redhat.com",
        "Name": "myubi8",
        "Release": "75",
        "Vendor": "Red Hat, Inc.",
        "Version": "8.0"
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf"
    ]
}

11.3.2. Copying container images with skopeo

This command copies the myubi8 container image from a local registry into a directory on the local system:

# skopeo copy docker://localhost/myubi8 dir:/root/test/
INFO[0000] Downloading myubi8/blobs/sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf
# ls /root/test
16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf.tar manifest.json

The result of the skopeo copy command is a tarball (16d*.tar) and a manifest.json file representing the image being copied to the directory you identified. If there were multiple layers, there would be multiple tarballs. The skopeo copy command can also copy images to another registry. If you need to provide a signature to write to the destination registry, you can do that by adding a --sign-by= option to the command line, followed by the required key-id.
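
For example, a minimal sketch of copying the same image from the local registry to another registry (the destination registry name is illustrative):

# skopeo copy docker://localhost/myubi8 docker://registry.example.com/myubi8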

11.3.3. Getting image layers with skopeo

The skopeo layers command is similar to skopeo copy, with the difference being that the copy option can copy an image to another registry or to a local directory, while the layers option just drops the layers (tarballs and a manifest.json file) in the current directory. For example:

# skopeo layers docker://localhost/myubi8
INFO[0000] Downloading myubi8/blobs/sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf
# find .
./layers-myubi8-latest-698503105
./layers-myubi8-latest-698503105/manifest.json
./layers-myubi8-latest-698503105/16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf.tar

As you can see from this example, a new directory is created (layers-myubi8-latest-698503105) and, in this case, a single layer tarball and a manifest.json file are copied to that directory.