Chapter 12. Setting container network modes
This chapter describes how to set different container network modes.
12.1. Running containers with a static IP
The podman run command with the --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). To verify that you set the IP address correctly, run the podman inspect command.
Prerequisites
- The container-tools module is installed.
Procedure
Set the container network interface to the IP address 10.88.0.44:

# podman run -d --name=myubi --ip=10.88.0.44 registry.access.redhat.com/ubi8/ubi
efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85
Verification
Check that the IP address is set properly:
# podman inspect --format='{{.NetworkSettings.IPAddress}}' myubi
10.88.0.44
12.2. Running the DHCP plugin without systemd
Use the podman run --network command to connect a container to a user-defined network. Most container images do not have a DHCP client; the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server.
This procedure only applies to rootfull containers. Rootless containers do not use the dhcp plugin.
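The procedure below connects the container to a user-defined network named example. If such a network does not exist yet, you can create one first. The following is a minimal sketch, assuming the CNI backend and a host parent interface named enp1s0 (both the interface and the network name are placeholders for your environment); under CNI, a macvlan network uses the dhcp plugin for address assignment:

# podman network create --macvlan enp1s0 example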
Prerequisites
- The container-tools module is installed.
Procedure
Manually run the dhcp plugin:

# /usr/libexec/cni/dhcp daemon &
[1] 4966

Check that the dhcp plugin is running:

# ps -a | grep dhcp
4966 pts/1 00:00:00 dhcp

Run the alpine container:

# podman run -it --rm --network=example alpine ip addr show eth0
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
...
Storing signatures
2: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether f6:dd:1b:a7:9b:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
...

In this example:
- The --network=example option specifies the network named example to connect to.
- The ip addr show eth0 command inside the alpine container checks the IP address of the container network interface eth0.
- The host network is 192.168.1.0/24.
- The eth0 interface leases the IP address 192.168.1.22 for the alpine container.
This configuration may exhaust the available DHCP addresses if you have a large number of short-lived containers and a DHCP server with long leases.
12.3. Running the DHCP plugin using systemd
You can use systemd socket and service unit files to run the dhcp plugin.
Prerequisites
- The container-tools module is installed.
Procedure
Create the socket unit file:

# cat /usr/lib/systemd/system/io.podman.dhcp.socket
[Unit]
Description=DHCP Client for CNI

[Socket]
ListenStream=%t/cni/dhcp.sock
SocketMode=0600

[Install]
WantedBy=sockets.target

Create the service unit file:

# cat /usr/lib/systemd/system/io.podman.dhcp.service
[Unit]
Description=DHCP Client CNI Service
Requires=io.podman.dhcp.socket
After=io.podman.dhcp.socket

[Service]
Type=simple
ExecStart=/usr/libexec/cni/dhcp daemon
TimeoutStopSec=30
KillMode=process

[Install]
WantedBy=multi-user.target
Also=io.podman.dhcp.socket

Start the service immediately:
# systemctl --now enable io.podman.dhcp.socket
Verification
Check the status of the socket:
# systemctl status io.podman.dhcp.socket
io.podman.dhcp.socket - DHCP Client for CNI
   Loaded: loaded (/usr/lib/systemd/system/io.podman.dhcp.socket; enabled; vendor preset: disabled)
   Active: active (listening) since Mon 2022-01-03 18:08:10 CET; 39s ago
   Listen: /run/cni/dhcp.sock (Stream)
   CGroup: /system.slice/io.podman.dhcp.socket
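With the socket unit enabled, systemd starts the dhcp service on demand the first time a container connects to a DHCP-backed network. A quick way to observe this, assuming the example network from the previous section exists:

# podman run -it --rm --network=example alpine ip addr show eth0
# systemctl status io.podman.dhcp.service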
12.4. The macvlan plugin
Most container images do not have a DHCP client; the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server.
By default, the host system does not have network access to the container. To allow network connections from outside the host to the container, the container must have an IP address on the same network as the host. The macvlan plugin enables you to connect a container to the same network as the host.
This procedure only applies to rootfull containers. Rootless containers are not able to use the macvlan and dhcp plugins.
You can create a macvlan network using the podman network create --macvlan command.
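For example, the following creates a macvlan network and then lists the available networks; the network name mynet and the parent interface enp1s0 are placeholders for your environment:

# podman network create --macvlan enp1s0 mynet
# podman network ls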
Additional resources
- Leasing routable IP addresses with Podman containers
- podman-network-create man page
12.5. Switching the network stack from CNI to Netavark
Previously, containers were able to use DNS only when connected to a single Container Network Interface (CNI) network. Netavark is a network stack for containers. You can use Netavark with Podman and other Open Container Initiative (OCI) container management applications. The advanced network stack for Podman is compatible with advanced Docker functionalities. Now, containers in multiple networks can reach containers on any of those networks.
Netavark is capable of the following:
- Creating, managing, and removing network interfaces, including bridge and MACVLAN interfaces
- Configuring firewall settings, such as network address translation (NAT) and port mapping rules
- Supporting IPv4 and IPv6
- Improving support for containers in multiple networks (see the sketch after this list)
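The following is a minimal sketch of the multiple-network behavior, assuming the Netavark backend is active; the network names net1 and net2 and the container name web are placeholders, and the ubi8 image is used here because it provides getent for the name lookup:

# podman network create net1
# podman network create net2
# podman run -d --name=web --network=net1 registry.access.redhat.com/ubi8/ubi sleep infinity
# podman run --rm --network=net1 --network=net2 registry.access.redhat.com/ubi8/ubi getent hosts web

Because the second container is attached to both networks, it can resolve and reach the web container on net1.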
Prerequisites
- The container-tools module is installed.
Procedure
If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

# cp /usr/share/containers/containers.conf /etc/containers/

Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

network_backend="netavark"
If you have any containers or pods, reset the storage back to the initial state:
# podman system reset

Reboot the system:
# reboot
Verification
Verify that the network stack is changed to Netavark:
# cat /etc/containers/containers.conf
...
[network]
network_backend="netavark"
...
If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
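For example, the following should report netavark after the switch; the --format template assumes the NetworkBackend field that Podman 4.0 and later expose:

# podman info --format {{.Host.NetworkBackend}}
netavark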
Additional resources
- Podman 4.0’s new network stack: What you need to know
- podman-system-reset man page
12.6. Switching the network stack from Netavark to CNI
You can switch the network stack from Netavark to CNI.
The CNI network stack will be deprecated. Red Hat recommends using the Netavark network stack instead.
Prerequisites
- The container-tools module is installed.
Procedure
If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

# cp /usr/share/containers/containers.conf /etc/containers/

Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

network_backend="cni"
If you have any containers or pods, reset the storage back to the initial state:
# podman system reset

Reboot the system:
# reboot
Verification
Verify that the network stack is changed to CNI:
# cat /etc/containers/containers.conf
...
[network]
network_backend="cni"
...
If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
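As in the previous section, the podman info command can confirm the active backend; after switching it should report cni:

# podman info --format {{.Host.NetworkBackend}}
cni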
Additional resources
- Podman 4.0’s new network stack: What you need to know
- podman-system-reset man page