The libvirt-lxc packages are deprecated starting with Red Hat Enterprise Linux 7.1.
Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications.
The following sections provide an overview of tasks related to installation, configuration, and management of Linux containers. The material below focuses on tools provided by the libvirt library, which are useful for basic container-related operations.
The libvirt library provides a necessary infrastructure for general-purpose containers together with the virsh utility as a default command-line interface for managing guest domains, such as virtual machines and Linux containers.
There are two kinds of Linux Containers you can create: persistent and volatile. Persistent containers are preserved after reboot and are defined with an XML configuration file. Volatile (temporary) containers are deleted as soon as the contained application finishes; you can create them with the
virsh create command.
Connecting to the LXC Driver
To execute container-related commands correctly, libvirt must be connected to the LXC driver. This is not done by default, because each host can have only one default libvirt URI, and the KVM driver typically takes precedence over LXC. To temporarily change the driver to LXC, use the
-c (connect) argument before a command as follows (execute as root):
~]# virsh -c lxc:/// command
By specifying -c lxc:/// in front of a command, you change the connected driver to LXC. Because this change is temporary, the default URI is restored right after execution. All examples of container usage in this guide assume that LXC is not the default driver, and the above syntax is therefore used where necessary. However, you can avoid typing
-c lxc:/// before every command if you explicitly override the default URI for the libvirt session using the LIBVIRT_DEFAULT_URI environment variable.
To identify your default libvirt URI, type:
~]# virsh uri
In this case, the
qemu:///system URI is set as default, which means the KVM driver is connected.
Change the default setting for the libvirt session by typing:
~]# export LIBVIRT_DEFAULT_URI=lxc:///
Note that this change is not preserved after system reboot.
To verify your new configuration, type:
~]# virsh uri
The virsh Utility
The virsh utility is a general-purpose command-line interface for administration of virtualization domains. As such, it can be used to manage the capabilities of LXC domains. The virsh utility can be used, for example, to create, modify, and destroy containers, display information about existing containers, or manage resources, storage, and network connectivity of a container.
The following table describes virsh commands that are most often used in connection with Linux containers. For a complete list of virsh commands see the virsh manual page.
define | Creates a new container based on parameters in a supplied libvirt configuration file in XML format.
undefine | Deletes a container. If the container is running, it is converted to a transient container, which is removed when the contained application shuts down.
start | Starts a previously-defined container. With the --console option, it also connects to the container's virtual console.
autostart | Sets the container to start automatically on system boot.
create | Defines and starts a non-persistent container in one step. The temporary container is based on a libvirt configuration file. With the --console option, it also connects to the container's virtual console.
console | Connects to the virtual console of the container.
shutdown | Coordinates with the domain operating system to perform a graceful shutdown. The exact behavior can be configured in the container's XML definition.
destroy | Immediately terminates the container. Use it to shut down the container forcefully if it does not respond to the shutdown command.
edit | Opens the container's configuration file for editing and validates the changes before applying them.
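The commands in this table are all invoked against the LXC driver with the -c lxc:/// prefix. As an illustration only (this wrapper is not part of libvirt or this guide), the pattern can be captured in a short Python helper that shells out to virsh; virsh must be installed for the run function to succeed:

```python
import subprocess

def virsh_lxc_argv(*args):
    # Prefix every invocation with the LXC connection URI,
    # as recommended earlier in this section.
    return ["virsh", "-c", "lxc:///", *args]

def run_virsh_lxc(*args):
    # Raises CalledProcessError if virsh reports a failure.
    result = subprocess.run(virsh_lxc_argv(*args),
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example (requires a libvirt host):
#   run_virsh_lxc("define", "test-container.xml")
#   run_virsh_lxc("start", "test-container")
```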
Creating a Container
To create a Linux Container using the virsh utility, follow these steps:
Create a libvirt configuration file in XML format with the following required parameters:
<domain type='lxc'>
  <name>container_name</name>
  <memory>mem_limit</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
Here, replace container_name with a name for your container, and mem_limit with an initial memory limit for the container (in kibibytes by default). In libvirt, the virtualization type for containers is defined as exe. The
<init> parameter defines the path to the binary to spawn as the container's init (the process with PID 1). The last required parameter is the text console device, specified with the
<console> element.
Apart from the aforementioned required parameters, there are several other settings you can apply. For more information on these parameters and on the syntax and formatting of a libvirt XML configuration file, refer to the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
The following is an example of a libvirt configuration file:
<domain type='lxc'>
  <name>test-container</name>
  <memory>102400</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
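For scripted setups, the same minimal domain definition can be generated with Python's standard xml.etree.ElementTree module. This sketch is not part of the guide; the attribute quoting differs slightly (double quotes), but the resulting document is equivalent:

```python
import xml.etree.ElementTree as ET

def build_lxc_domain(name, mem_limit_kib, init="/bin/sh"):
    # Build the minimal required libvirt-lxc domain definition.
    domain = ET.Element("domain", type="lxc")
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory").text = str(mem_limit_kib)
    os_elem = ET.SubElement(domain, "os")
    ET.SubElement(os_elem, "type").text = "exe"    # container virtualization type
    ET.SubElement(os_elem, "init").text = init     # PID 1 inside the container
    devices = ET.SubElement(domain, "devices")
    ET.SubElement(devices, "console", type="pty")  # required text console
    return ET.tostring(domain, encoding="unicode")

print(build_lxc_domain("test-container", 102400))
```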
To import a new container to libvirt, use the following syntax:
~]# virsh -c lxc:/// define config_file
Here, config_file stands for the XML configuration file created in the previous step.
To import the
test-container.xml file to libvirt, type:
~]# virsh -c lxc:/// define test-container.xml
The following message is returned:
Domain test-container defined from test-container.xml
Starting, Connecting to, and Stopping a Container
To start a previously-defined container, use the following command as root:
~]# virsh -c lxc:/// start container_name
Replace container_name with a name of the container. Once a container is started, connect to it using the following command:
~]# virsh -c lxc:/// console container_name
Note that if a container uses the /bin/sh process as the init process with a PID of 1, exiting the shell will also shut down the container.
To stop a running container, execute the following command as root:
~]# virsh -c lxc:/// shutdown container_name
If a container is not responding, it can be shut down forcefully by executing:
~]# virsh -c lxc:/// destroy container_name
Modifying a Container
To modify any of the configuration settings of an existing container, run the following command as root:
~]# virsh -c lxc:/// edit container_name
With container_name, specify the container whose settings you wish to modify. The above command opens the XML configuration file of the specified container in a text editor and lets you change any of the settings. The default editor is vi; to change it, set the
EDITOR environment variable to your editor of choice.
The following example shows how the configuration file of the test-container looks when opened with virsh edit:
<domain type='lxc'>
  <name>test-container</name>
  <uuid>a99736bb-8a7e-4fc5-99dc-bd96f6116b1c</uuid>
  <memory unit='KiB'>102400</memory>
  <currentMemory unit='KiB'>102400</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
</domain>
Note that the configuration file opened by
virsh edit differs from the original configuration file that was used to create the container: it shows all possible settings that can be configured, not only the required ones. For instance, it is possible to modify the container's behavior on reboot or on crash.
Once the file has been edited, save the file and exit the editor. After doing so,
virsh edit automatically validates your modified configuration file and, in case of syntax errors, prompts you to open the file again. The modified configuration takes effect the next time the container boots. To apply the changes immediately, reboot the container (as root):
~]# virsh -c lxc:/// reboot container_name
Automatically Starting a Container on Boot
To start the container automatically on boot, use the following command as root:
~]# virsh -c lxc:/// autostart container_name
Replace container_name with a name of the container you want to start automatically on system boot. To disable this automatic start, type as root:
~]# virsh -c lxc:/// autostart --disable container_name
To start the test-container domain automatically at boot time, type:
~]# virsh -c lxc:/// autostart test-container
When the command is executed successfully, the following message appears:
Domain test-container marked as autostarted
Use the virsh dominfo command to verify the new setting:
~]# virsh -c lxc:/// dominfo test-container | grep Autostart
Autostart:      enable
Removing a Container
To remove an existing container, run the following command as root:
~]# virsh -c lxc:/// undefine container_name
Replace container_name with the name of the container to be removed. Undefining a container simply removes its configuration file; the container can then no longer be started. If a running container is undefined, it enters a transient state in which it has no configuration file on disk. Once a transient container is shut down, it cannot be started again.
The container is removed immediately after executing the
undefine command. virsh does not prompt you for confirmation before deleting the container. Think twice before executing the command, as the remove operation is not reversible.
Monitoring a Container
To view a simple list of all existing containers, both running and inactive, type the following command as root:
~]# virsh -c lxc:/// list --all
The output of the
virsh list --all command can look as follows:
 Id    Name                           State
----------------------------------------------------
 4369  httpd-container-001            running
 -     test-container                 shut off
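When scripting against virsh, tabular output like the listing above often needs to be parsed. The following sketch (a hypothetical helper, not part of libvirt) converts the output of virsh list --all into tuples, assuming the column layout shown in the example:

```python
def parse_virsh_list(output):
    # Convert `virsh list --all` output into (id, name, state) tuples;
    # an inactive domain has "-" in the Id column, which becomes None.
    rows = []
    lines = output.strip().splitlines()
    for line in lines[2:]:            # skip the header and the dashed rule
        parts = line.split(None, 2)   # state may contain spaces ("shut off")
        if len(parts) == 3:
            dom_id, name, state = parts
            rows.append((None if dom_id == "-" else int(dom_id), name, state))
    return rows

sample = """\
 Id    Name                           State
----------------------------------------------------
 4369  httpd-container-001            running
 -     test-container                 shut off
"""
print(parse_virsh_list(sample))
```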
Once you know the name of a container, or its process ID if it is running, view the metadata of this container by executing the following command:
~]# virsh -c lxc:/// dominfo container_name
Replace container_name with a name or PID of the container you wish to examine.
The following example shows metadata of the httpd-container-001 domain:
~]# virsh -c lxc:/// dominfo httpd-container-001
Id:             4369
Name:           httpd-container-001
UUID:           4e96844c-2bc6-43ab-aef9-8fb93de53095
OS Type:        exe
State:          running
CPU(s):         1
CPU time:       0.3s
Max memory:     524288 KiB
Used memory:    8880 KiB
Persistent:     yes
Autostart:      enable
Managed save:   unknown
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_lxc_net_t:s0 (enforcing)
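The key: value layout of dominfo output is straightforward to consume from a script. A minimal sketch (not part of libvirt; a shortened sample of the output above is used for illustration):

```python
def parse_dominfo(output):
    # Split each "Key:  value" line on the first colon only, so values
    # containing colons (such as SELinux labels) stay intact.
    info = {}
    for line in output.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    return info

sample = """\
Id:             4369
Name:           httpd-container-001
State:          running
Max memory:     524288 KiB
Autostart:      enable
"""
print(parse_dominfo(sample)["State"])
```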
For a live view of currently running Linux Containers, you can use the virt-top utility that provides a variety of statistics of virtualization systems. To use virt-top, first install it as root:
~]# yum install virt-top
To launch the utility, type:
~]# virt-top -c lxc:///
The range of provided statistics and operations is similar to the top utility. For more information, see the virt-top manual page.
The above commands observe the overall status and resource consumption of containers. To go beyond the container level and track individual applications running inside a container, first connect to the container with the
virsh console command. Then execute the usual monitoring commands such as top inside the container.
When running a large number of containers simultaneously, you may want to gain an overview of containerized processes without connecting to individual containers. In this case, use the
systemd-cgls command that groups all processes within a container into a cgroup named by the container. As an alternative, use the
machinectl command to get information about containers from the host system. First, list all running containers as shown in the following example:
~]# machinectl
MACHINE                          CONTAINER SERVICE
lxc-httpd-container-001          container libvirt-lxc
lxc-test-container               container libvirt-lxc

2 machines listed.
View the status of one or more containers by executing:
~]# machinectl status -l container_name
Replace container_name with the name of the container you wish to inspect. This command requires the lxc- prefix before the name, as shown in the output of the
machinectl command in the above example. The
-l option ensures that the output is not abbreviated.
Use the following command to see the status of the test-container:
~]# machinectl status -l lxc-test-container
lxc-test-container(73369262eca04dbcac288b6030b46b4c)
   Since: Wed 2014-02-05 06:46:50 MST; 1h 3min ago
  Leader: 2300
 Service: libvirt-lxc; class container
    Unit: machine-lxc\x2dtest\x2dcontainer.scope
          ├─12760 /usr/libexec/libvirt_lxc --name test-container --console 21 --security=selinux --h
          └─12763 /bin/sh
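Note the Unit line in the output above: systemd escapes the "-" characters of the machine name as \x2d when forming the scope unit name. The following sketch (a simplification that handles only this one escape, not the full systemd unit-name escaping rules) recovers the machine name from such a unit name:

```python
def machine_from_scope(unit):
    # Strip the "machine-" prefix and ".scope" suffix, then undo the
    # \x2d escape that systemd applies to "-" characters.
    name = unit[len("machine-"):-len(".scope")]
    return name.replace("\\x2d", "-")

print(machine_from_scope("machine-lxc\\x2dtest\\x2dcontainer.scope"))
```

Running this prints lxc-test-container, the name to pass to machinectl.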
Once you have found the PID of the containerized process, you can use standard tools on the host system to monitor what the process is doing. See the systemd-cgls(1) and machinectl(1) manual pages for more information.
Networking with Linux Containers
The guests created with virsh can by default reach all network interfaces available on the host system. If the container configuration file does not list any network interfaces, the network namespace is not activated, allowing the containerized applications to bind to TCP or UDP addresses and ports of the host operating system. It also allows applications to access UNIX domain sockets associated with the host. To deny the container access to UNIX domain sockets, add the
<privnet/> flag to the
<features> parameter of the container configuration file.
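For illustration, a domain definition with UNIX domain socket access disabled could look as follows (a sketch based on the libvirt domain XML format; only the <features> element is new compared to the earlier example):

```xml
<domain type='lxc'>
  <name>test-container</name>
  <memory>102400</memory>
  <features>
    <privnet/>
  </features>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
```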
With a network namespace, it is possible to dedicate a virtual network to the container. This network has to be previously defined with a configuration file in XML format stored in the
/etc/libvirt/qemu/networks/ directory. Also, the virtual network must be started with the
virsh net-start command. To find more detailed instructions on how to create and manage virtual networks, refer to chapters Network configuration and Managing virtual networks in Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide. To learn about general concepts and scenarios of virtual networking, see the chapter called Virtual Networking in the aforementioned guide.
To connect your container to a predefined virtual network, type as root:
~]# virsh attach-interface domain type source --mac mac --config
Replace domain with the name of the container that will use the network interface.
Replace type with either
network to connect the interface to a libvirt virtual network, or with
bridge if connecting directly to a network bridge device.
With source you specify the name of the source network interface.
Specify a mac address for the network interface with mac.
Add the --config option if you want to make the network attachment persistent. If it is not specified, your settings will not be preserved after a system reboot.
Find the complete list of
attach-interface parameters in the virsh manual page.
To disconnect the container from the virtual network, type as root:
~]# virsh detach-interface domain type --config
Here, domain stands for the name of the container, and type identifies the type of the network, as with the
attach-interface command above. The
--config option makes the detachment persistent.
A virtual network can either use Dynamic Host Configuration Protocol (DHCP), which automatically assigns TCP/IP information to client machines, or it can have a manually assigned static IP address. The httpd.service is used in the examples of container usage in this section; however, you can use sshd.service in the same manner without complications.
The configuration file for a virtual network named default is installed as part of the libvirt package, and is configured to start automatically when libvirtd is started. The default network uses dynamic IP address assignment and operates in NAT mode. Network Address Translation (NAT) allows only outbound connections, so the virtual machines and containers using the default network are not directly visible from the network.
As mentioned above, the libvirt package provides a default virtual network that is started automatically with libvirtd. To see the exact configuration, open the
/etc/libvirt/qemu/networks/default.xml file, or use the
virsh net-edit command. The default configuration file can look as follows:
<network>
  <name>default</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254" />
    </dhcp>
  </ip>
</network>
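The addresses in the default network follow standard IPv4 subnetting. As a quick aside (not from the guide), Python's ipaddress module can confirm that the DHCP range above lies inside the virbr0 subnet and count the leasable addresses:

```python
import ipaddress

# Values taken from the default.xml configuration shown above.
subnet = ipaddress.ip_network("192.168.122.1/255.255.255.0", strict=False)
start = ipaddress.ip_address("192.168.122.2")
end = ipaddress.ip_address("192.168.122.254")

assert start in subnet and end in subnet
leasable = int(end) - int(start) + 1
print(leasable)   # number of addresses available for DHCP leases
```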
To check if the network is running, type:
~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
With the virtual network defined and running, use the
attach-interface command to connect a container to this network. For example, to persistently connect httpd-container-001 to the default virtual network, type:
~]# virsh attach-interface httpd-container-001 network default --config
To verify that the network is working correctly, connect to the container and execute the usual network-monitoring commands, such as ping.
The default virtual network provided by libvirt operates in NAT mode, which makes it suitable mainly for testing purposes or for hosts with dynamically changing network connectivity, switching between Ethernet, Wi-Fi, and mobile connections. To expose your container to the LAN or WAN, connect it to a network bridge.
A network bridge is a link-layer device which forwards traffic between networks based on MAC addresses. It makes forwarding decisions based on a table of MAC addresses that it builds by listening to network traffic, thereby learning what hosts are connected to each network. A software bridge can be used within a Linux host to emulate a hardware bridge, especially in virtualization applications for sharing a NIC with one or more virtual NICs. For more information on network bridging, see the chapter called Configure Network Bridging in the Red Hat Enterprise Linux 7 Networking Guide.
Ethernet bridging is useful for machines with permanent wired LAN connection. Once the host networking is configured to have a bridge device, you can use this bridge for a virtual network. This requires creating a configuration file and then loading it into libvirt.
Imagine you have prepared a network bridge device called
br0 on your host operating system (see the chapter called Configure Network Bridging in Red Hat Enterprise Linux 7 Networking Guide ). To use this device to create a virtual network, create the
lan.xml file with the following content:
<network>
  <name>lan</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>
After creating a valid configuration file, you can enable the virtual network. Type as root:
~]# virsh net-define lan.xml
If the network was successfully defined, the following message is displayed:
Network lan defined from lan.xml
Start the network and set it to be started automatically:
~]# virsh net-start lan ~]# virsh net-autostart lan
To check if the network is running, type:
~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 lan                  active     yes           yes
With the virtual network prepared, attach it to the previously created container:
~]# virsh attach-interface httpd-container-002 bridge lan --config
To verify that the network is working correctly, connect to the container and execute the usual network-monitoring commands, such as ping.
The Linux macvtap driver provides an alternative way to configure a network bridge. It does not require any changes in network configuration on the host, but, on the other hand, it does not allow for connectivity between the host and guest operating systems, only between the guest and other non-local machines. To set up the network using macvtap, follow the same steps as in the above example. The only difference is in the network configuration file, where you need to specify an interface device.
The network configuration file for a macvtap bridge can look as follows:
<network>
  <name>lan02</name>
  <forward mode="bridge" />
  <interface dev="eth0" />
</network>
After creating the configuration file, start the network and connect the container to it as shown in the previous example.
You can find more information on macvtap in the section called Network interfaces in Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
Mounting Devices to a Container
To mount a device to the guest file system, use the general mounting syntax provided by
virsh. The following command requires a definition of the device in XML format. See the section called PCI devices in the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide to learn more about libvirt device configuration files. Type as root:
~]# virsh attach-device domain file --config
Replace domain with the name of the container you wish to attach the device to; file stands for the libvirt configuration file for this device. Add
--config to make this change persistent.
To detach a previously mounted device, type:
~]# virsh detach-device domain file --config
where domain, file, and
--config have the same meaning as with
attach-device described above.
In many scenarios, there is a need to attach an additional disk device to the container or to connect it to a virtual network. Therefore, libvirt provides more specific commands for mounting these types of devices. To learn about connecting the container to network interfaces, see Networking with Linux Containers above. To attach a disk to the container, type as root:
~]# virsh attach-disk domain source target --config
Replace domain with the name of the container; source stands for the path to the device to be mounted, while target defines how the mounted device is exposed to the guest. Add
--config to make this change persistent. There are several other parameters that can be defined with
attach-disk; for the complete list, refer to the virsh(1) manual page.
To detach a previously mounted disk, type:
~]# virsh detach-disk domain target --config
Here, domain, target, and
--config have the same meaning as with
attach-disk described above.
To learn more about using Linux Containers in Red Hat Enterprise Linux 7, refer to the following resources.
virsh(1) — The manual page of the virsh utility, the command-line interface for managing guest domains.
systemd-cgls(1) — The manual page lists options for the systemd-cgls command, which recursively shows the contents of the selected Linux control group hierarchy.
systemd-cgtop(1) — The manual page lists options for the systemd-cgtop command, which shows the top control groups of the local system, ordered by resource usage.
machinectl(1) — The manual page describes the capabilities of the machinectl utility, which is used to introspect and control registered virtual machines and containers.
Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide — This guide explains how to configure a Red Hat Enterprise Linux 7 host physical machine and how to install and configure guest virtual machines with different distributions, using the KVM hypervisor. Also included are PCI device configuration, SR-IOV, networking, storage, device and guest virtual machine management, as well as troubleshooting, compatibility, and restrictions.
Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide documents relevant information regarding the configuration and administration of network interfaces, networks and network services in Red Hat Enterprise Linux 7.