Chapter 6. Running System Containers
System containers provide a way to containerize services that need to run before the docker daemon is running. They use different technologies than Docker-formatted containers: ostree for storage, runc for runtime, skopeo for searching, and systemd for service management. Previously, such services were provided in the system as packages, or as part of the ostree in Atomic Host. Excluding applications from the Atomic Host system and containerizing them makes the system itself smaller. Red Hat provides the etcd and flannel services as system containers.
To use the system containers on Atomic Host, you need to have the atomic command-line tool version 1.12 or later, along with ostree and runc utilities (all of which are included on the latest version of Atomic Host). To use system containers on RHEL Server systems, you must be running at least RHEL 7.3.3 (because the ostree package was not available on RHEL server until that release).
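To verify that the prerequisites are in place on your host, you can query the tool versions directly (a quick check only; the package names shown are the usual ones and may differ on older releases):
# atomic --version
# rpm -q ostree runc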
Because they are not Docker-formatted containers, you do not use the docker command for container management. The atomic command-line tool and systemd are used to pull, install, and manage system containers. Here is a brief comparison between how you pull, install, and run docker containers and system containers.
docker
- docker pull rhel7/rsyslog
- atomic install rhel7/rsyslog
- atomic run rhel7/rsyslog
system containers
- atomic pull --storage=ostree rhel7/etcd
- atomic install --system [--set=VARIABLE] rhel7/etcd (you will notice this command also runs systemctl start etcd)
The atomic install command supports several options to configure the settings for system containers. The --set option is used to pass variables that you would normally set for the service. These variables are stored in the container's manifest.json file.
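For example, a minimal sketch of overriding one of those manifest.json variables at install time (ETCD_DATA_DIR is one of the variables listed later in "Configuring etcd"; the path shown here is only an illustration):
# atomic install --system --set ETCD_DATA_DIR="/var/lib/etcd/alternate.etcd" rhel7/etcd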
To uninstall a system image, use:
# atomic containers delete rhel7/etcd
# atomic uninstall rhel7/etcd
System containers use runc as the runtime, and their files are kept separately from Docker content: the container files are extracted to /var/lib/containers/atomic/$NAME and the corresponding systemd unit file is installed as /etc/systemd/system/$NAME.service.
Therefore, when you use docker images and docker ps you will only see the Docker-formatted containers. The atomic tool will show all containers on the system:
# atomic containers list -a
   CONTAINER ID   IMAGE              COMMAND                CREATED            STATUS    RUNTIME
   etcd           rhel7/etcd         /usr/bin/etcd-env.sh   2016-10-13 14:21   running   runc
   flannel        rhel7/flannel      /usr/bin/flanneld-ru   2016-10-13 15:12   failed    runc
   1cf730472572   rhel7/cockpit-ws   /container/atomic-ru   2016-10-13 17:55   exited    Docker
   9a2bb24e5978   rhel7/rsyslog      /bin/rsyslog.sh        2016-10-13 17:49   created   Docker
   34f95af8f8f9   rhel7/cockpit-ws   /container/atomic-ru   2016-09-27 19:10   exited    Docker
Note that unlike docker containers, where the services are managed by the docker daemon, with system containers you have to manage the dependencies between the services yourself. For example, etcd is a dependency of flannel: when you run flannel, it checks whether etcd is set up (if it is not, flannel will wait).
System containers require root privileges. Because runc requires root, containers also run as the root user.
6.1. Using the etcd System Container Image
6.1.1. Overview
The etcd service provides a highly-available key value store that can be used by applications that need to access and share configuration and service discovery information. Applications that use etcd include Kubernetes, flannel, OpenShift, fleet, vulcand, and locksmith.
The etcd container described here is what is referred to as a system container. A system container is designed to come up before the docker service or in a situation where no docker service is available. In this case, the etcd container can be used to bring up a keystore for the flannel system container, both of which can then be in place to provide networking services before the docker service comes up.
Prior to RHEL Atomic 7.3.2, there were two containerized versions of the etcd services maintained by Red Hat: etcd 2 (etcd container) and etcd 3 (etcd3 container). With 7.3.2, etcd 2 has been deprecated and etcd 3 is the only supported version of etcd. So the only available etcd container is:
- etcd: This is based on etcd version 3.
Along with the etcd 3 container, the etcd3 rpm package is also deprecated. Going forward, Red Hat expects to maintain only one version of etcd at a time. For RHEL Atomic 7.3.2, system containers in general and the etcd container specifically are supported as Tech Preview only.
Besides bypassing the docker service, this etcd container can also bypass the docker command and the storage area used to hold docker containers by default. To use the container, you need a combination of commands that include atomic (to pull, list, install, delete and uninstall the image), skopeo (to inspect the image), runc (to ultimately run the image) and systemctl (to manage the image among your other systemd services).
Here are some of the features of the etcd container:
- Supports atomic pull: Use the atomic pull command to pull the container to your system.
- Supports atomic install: Use the atomic install --system command to set up the etcd service to run as a systemd service.
- Configures the etcd service: When the etcd service starts, a set of ETCD environment variables are exported. Those variables identify the location of the etcd data directory and set the IP addresses and ports the etcd service listens on.
- System container: After you have used the atomic command to install the etcd container, you can use the systemd systemctl command to manage the service (see the brief illustration after this list).
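For instance, once the container is installed, routine service management is plain systemd (a brief illustration; the etcd unit name comes from the install step described in the next section):
# systemctl status etcd
# systemctl restart etcd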
6.1.2. Getting and Running the etcd System Container
To use an etcd system container image on a RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported etcd container is:
registry.access.redhat.com/rhel7/etcd
The procedure below illustrates how to pull, install, and run the etcd container.
Pull the etcd container: While logged into the RHEL Atomic system, get the etcd container by running the following command:
# atomic pull --storage=ostree registry.access.redhat.com/rhel7/etcd
Image rhel7/etcd is being pulled to ostree ...
Pulling layer 2bf01635e2a0f7ed3800c8cb3effc5ff46adc6b9b86f0e80743c956371efe553
Pulling layer 38bd6ce6e1f2271d48ecb41a70a86122060ea91871a154b37d54ec66f593706f
Pulling layer 852368668be3e36086ae7a47c8b9e40b5ca87819b3200bc83d7a2f95b73f0f12
Pulling layer e5d06327f2054d371f725243b619d66982c8d4589c1caa19bfcc23a93cf6b4d2
Pulling layer 82e7326c732857423e13163ff1e41ad63b3e2bddef8809175f89dec25f58b6ee
Pulling layer b65a93c9f67115dc4c9da8dfeee63b58ec52c6ea58ff7f727b00d932d1f4e8f5
This pulls the etcd system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won’t see the pulled etcd container image.
Install the etcd container: Type the following to do a default installation of the etcd container so it is set up as a systemd service.
Note: Before running atomic install, refer to "Configuring etcd" to see options you could add to the atomic install command to change it from the default install shown here.
# atomic install --system rhel7/etcd
Extracting to /var/lib/containers/atomic/etcd.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/etcd.conf
systemctl enable etcd
Start the etcd service: Use the systemctl command to start the installed etcd service as you would any other systemd service.
# systemctl start etcd
Check etcd with runc: To make sure the etcd container is running, you can use the runc list command as you would use docker ps to see containers running under docker:
# runc list
ID          PID         STATUS      BUNDLE                      CREATED
etcd        4521        running     /sysroot/ostree/deploy...   2016-10-25T22:58:13.756410403Z
Test that the etcd service is working: You can use the curl command to set and retrieve keys from your etcd service. This example assigns a value to a key called testkey, then retrieves that value:
# curl -L http://127.0.0.1:2379/v2/keys/testkey -XPUT -d value="testing my etcd"
{"action":"set","node":{"key":"/testkey","value":"testing my etcd","modifiedIndex":6,"createdIndex":6}}
# curl -L http://127.0.0.1:2379/v2/keys/testkey
{"action":"get","node":{"key":"/testkey","value":"testing my etcd","modifiedIndex":6,"createdIndex":6}}
Note that the first action does a set to set the key and the second does a get to return the value of the key.
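If you want to remove the test key when you are done, the same v2 keys API accepts a DELETE request (a sketch; etcd responds with an "action":"delete" JSON object):
# curl -L http://127.0.0.1:2379/v2/keys/testkey -XDELETE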
The "Configuring etcd" section shows ways of setting up the etcd service in different ways.
6.1.3. Configuring etcd
You can change how the etcd service is configured on the atomic install command line or, after it is running, using the runc command.
6.1.3.1. Configuring etcd during "atomic install"
The correct way to configure the etcd container image is when you first run atomic install. Settings that are defined initially in the /etc/etcd/etcd.conf file inside the container can be overridden on the atomic install command line using the --set option. For example, this shows how to reset the value of ETCD_ADVERTISE_CLIENT_URLS:
# atomic install --system --set ETCD_ADVERTISE_CLIENT_URLS="http://192.168.122.55:2379" rhel7/etcd
Here is the list of other values and settings in the etcd.conf file that you can change on the atomic install command line (an illustrative combined command follows the listing). See the etcd.conf.yaml.sample page for descriptions of these settings.
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#[profiling]
#ETCD_ENABLE_PPROF="false"
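As an illustration only (the member name and client URLs below are placeholders), several of these settings can typically be combined by repeating --set on a single atomic install command line:
# atomic install --system \
    --set ETCD_NAME=etcd1 \
    --set ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" \
    --set ETCD_ADVERTISE_CLIENT_URLS="http://192.168.122.55:2379" \
    rhel7/etcd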
6.1.3.2. Configuring etcd security settings
The etcd service is configured with authentication and encryption disabled by default. Because etcd is initially configured to listen to localhost only, the lack of security becomes much more of an issue when the etcd service is exposed to nodes that are outside of the local host. Remote attackers will have access to passwords and secret keys.
In general, here is what you need to do to configure a secure, multi-node etcd cluster service:
- Create TLS certificates and a signed key pair for every member in a cluster, as described in The etcd Security Model.
- Identify the certificates and keys in the /etc/etcd/etcd.conf file (see the sketch after this list).
- Open the firewall to allow access to TCP ports 2379 (client communication) and 2380 (server-to-server communication).
- Install and run the etcd service (see atomic install --system rhel7/etcd as described earlier).
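A sketch of what the TLS-related lines in /etc/etcd/etcd.conf might look like once your certificates are in place (the file paths here are hypothetical; substitute the certificates and keys you generated for your cluster):
ETCD_CERT_FILE="/etc/etcd/server.crt"
ETCD_KEY_FILE="/etc/etcd/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.crt"
ETCD_PEER_CERT_FILE="/etc/etcd/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.crt"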
6.1.3.3. Configuring etcd with "runc"
With the etcd container running, you can configure settings in the etcd container using the runc exec command. For example, you could run the etcdctl command inside the etcd container to change the network range set by the Network value in the etcd keystore (used later by the flannel service) with the following command:
# runc exec etcd etcdctl set /atomic.io/network/config '{"Network":"10.40.0.0/16"}'
# runc exec etcd etcdctl get /atomic.io/network/config
{"Network":"10.40.0.0/16"}
The example just shown illustrates the runc exec command running etcdctl set at first to set the Network value. After that, runc executes the etcdctl get command to get configuration information.
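The same pattern works for browsing what is stored under the flannel prefix. For example, a sketch using the etcdctl v2 ls command to list the keys written above:
# runc exec etcd etcdctl ls --recursive /atomic.io/network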
6.1.4. Tips for Running etcd Container
If you are done with the etcd container image, you can remove it with the atomic uninstall command:
# atomic uninstall etcd
For more information on system containers, see Introduction to System Containers.
6.2. Using the flannel System Container Image
6.2.1. Overview
The flannel service was designed to provide virtual subnets for use among container hosts. Using flannel, Kubernetes (or other container platforms) can ensure that each container pod has a unique address that is routable within a Kubernetes cluster. As a result, the job of finding ports and services between containers is simpler.
The flannel container described here is what is referred to as a system container. A system container is designed to come up before the docker service or in a situation where no docker service is available. In this case, the flannel container is meant to be brought up after the etcd service (also available as a system container) and before docker and kubernetes services to provide virtual subnets that the later services can leverage.
Besides bypassing the docker service, the flannel container can also bypass the docker command and the storage area used to hold docker containers by default. To use the container, you need a combination of commands that include atomic (to pull, list, install, delete and uninstall the image), skopeo (to inspect the image), runc (to ultimately run the image) and systemctl (to manage the image among your other systemd services).
For RHEL 7.3, system containers in general and the flannel container specifically are supported as Tech Preview only.
Here are some of the features of the flannel container:
- Supports atomic pull: Use the atomic pull --storage=ostree command to pull the container to the ostree storage area, instead of default docker storage, on your system.
- Supports atomic install: Use the atomic install --system command to set up the flannel service to run as a systemd service.
- Configures the flannel service: When the flannel service starts, configuration data are stored for flannel in the etcd keystore. To configure flannel, you can use the runc command to run an etcdctl command to configure flannel settings inside the etcd container.
- System container: After you have used the atomic command to install the flannel container, you can use the systemd systemctl command to manage the service.
6.2.2. Getting and Running the RHEL flannel System Container
To use the flannel system container image on a RHEL system, you need to pull it, install it and enable it, as described in the following procedure:
- Pull and run the etcd container: The flannel container is dependent on there being an available etcd keystore. See Using the etcd System Container Image for information on pulling, installing, and running the etcd system container before setting up the flannel system container.
Pull the flannel container: While logged into the RHEL system, get the RHEL flannel container by running the following command:
# atomic pull --storage=ostree rhel7/flannel
Image rhel7/flannel is being pulled to ostree ...
Pulling layer 2bf01635e2a0f7ed3800c8cb3effc5ff46adc6b9b86f0e80743c956371efe553
Pulling layer 38bd6ce6e1f2271d48ecb41a70a86122060ea91871a154b37d54ec66f593706f
...
This pulls the flannel system container from the Red Hat registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won’t see the pulled flannel container image.
Install the flannel container: Type the following to do a default installation of the flannel container so it is set up as a systemd service. See "Configuring flannel" to see options you could add to the atomic install command to change it from the default install shown here.
# atomic install --system rhel7/flannel
Extracting to /var/lib/containers/atomic/flannel.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/flannel.conf
systemctl enable flannel
Start the flannel service: Use the systemctl command to start the installed flannel service as you would any other systemd service.
# systemctl start flannel
Check etcd and flannel with runc: To make sure the flannel and etcd containers are running, you can use the runc list command as you would use docker ps to see containers running under docker:
# runc list
ID          PID         STATUS      BUNDLE                      CREATED
etcd        4521        running     /sysroot/ostree/deploy...   2016-10-25T22:58:13.756410403Z
flannel     6562        running     /sysroot/ostree/deploy...   2016-10-26T13:50:49.041148994Z
Test that the flannel service is working: If the flannel service is working properly, the next time the docker0 network interface comes up, it should pick up an address range from those assigned by flannel. After starting flannel and before restarting docker, run these commands:
# ip a | grep docker | grep inet
    inet 172.17.0.1/16 scope global docker0
# systemctl reboot
# ip a | grep docker | grep inet
    inet 10.40.4.1/24 scope global docker0
Note that the docker0 interface picks up an address in the address range assigned by flannel and will, going forward, assign containers to addresses in the 10.40.4.0/24 address range.
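Another way to confirm the lease flannel handed out is to look at the flannel subnet file. The path is controlled by the FLANNELD_SUBNET_FILE variable (listed in "Configuring flannel" below); /run/flannel/subnet.env is the usual default, and the values shown here are illustrative only, assuming the file is visible on the host:
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.40.0.0/16
FLANNEL_SUBNET=10.40.4.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false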
The "Configuring flannel" section shows ways of setting up the etcd service in different ways.
6.2.3. Configuring flannel
You can change how the flannel service is configured on the atomic install command line or, after it is running, using the runc command.
6.2.3.1. Configuring flannel during "atomic install"
Environment variables that are defined initially when the flannel container starts up can be overridden on the atomic install command line using the --set option. For example, this shows how to reset the value of FLANNELD_ETCD_ENDPOINTS:
# atomic install --system --set FLANNELD_ETCD_ENDPOINTS="http://192.168.122.55:2379" rhel7/flannel
This is how two of these variables are set by default:
- FLANNELD_ETCD_ENDPOINTS=http://127.0.0.1:2379: Identifies the location of the etcd service IP address and port number.
- FLANNELD_ETCD_PREFIX=/atomic.io/network: Identifies the location of flannel values in the etcd keystore.
Here is the list of other values that you can change on the atomic install command line (an illustrative command follows the list). See the Key Command Line Options and Environment Variables sections of the flannel GitHub page for descriptions of these settings.
- FLANNELD_PUBLIC_IP
- FLANNELD_ETCD_ENDPOINTS
- FLANNELD_ETCD_PREFIX
- FLANNELD_ETCD_KEYFILE
- FLANNELD_ETCD_CERTFILE
- FLANNELD_ETCD_CAFILE
- FLANNELD_IFACE
- FLANNELD_SUBNET_FILE
- FLANNELD_IP_MASQ
- FLANNELD_LISTEN
- FLANNELD_REMOTE
- FLANNELD_REMOTE_KEYFILE
- FLANNELD_REMOTE_CERTFILE
- FLANNELD_REMOTE_CAFILE
- FLANNELD_NETWORKS
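For example (the endpoint and interface name below are placeholders), more than one of these variables can be set on the same install command:
# atomic install --system \
    --set FLANNELD_ETCD_ENDPOINTS="http://192.168.122.55:2379" \
    --set FLANNELD_IFACE=eth0 \
    --set FLANNELD_IP_MASQ=true \
    rhel7/flannel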
6.2.3.2. Configuring flannel with "runc"
Flannel settings that are stored in the etcd keystore can be changed by executing etcdctl commands in the etcd container. Here’s an example of how to change the Network value in the etcd keystore so that flannel uses a different set of IP address ranges.
# runc exec etcd etcdctl set /atomic.io/network/config '{"Network":"10.40.0.0/16"}'
# runc exec etcd etcdctl get /atomic.io/network/config
{"Network":"10.40.0.0/16"}
The example just shown illustrates the runc exec command running etcdctl set at first to set the Network value. After that, runc executes the etcdctl get command to get configuration information.
6.2.4. Tips for Running flannel Container
If you are done with the flannel container image, you can remove it with the atomic uninstall command:
# atomic uninstall flannel
For more information on system containers, see Introduction to System Containers.
6.3. Using the ovirt-guest-agent System Container Image for Red Hat Virtualization
6.3.1. Overview
The ovirt-guest-agent container launches the Red Hat Virtualization (RHV) management agent. This container is made to be deployed on Red Hat Enterprise Linux virtual machines that are running in a RHV environment. The agent provides an interface to the RHV manager that supplies heart-beat and other run-time data from inside the guest VM. The RHV manager can send control commands to shutdown, restart and otherwise change the state of the virtual machine through the agent.
The ovirt-guest-agent is added automatically to the Red Hat Atomic Image for RHV, which is an OVA-formatted image made for RHV environments. You can download the image from the Red Hat Enterprise Linux Atomic Host download page. Or, you can get and run the container image manually on a RHEL Server or RHEL Atomic Host virtual machine you install yourself.
The ovirt-guest-agent container is a system container. System containers are designed to come up before the docker service or in a situation where no docker service is available. In this case, the ovirt-guest-agent allows the RHV manager to change the state of the virtual machine on which it is running whether the docker service is running or not.
Here are some of the features of the ovirt-guest-agent container:
- Supports atomic pull: Use the atomic pull command to pull the ovirt-guest-agent container to your system.
- Supports atomic install: Use the atomic install --system command to set up the ovirt-guest-agent service to run as a systemd service.
- System container: After you have used the atomic command to install the ovirt-guest-agent container, you can use the systemd systemctl command to manage the service.
Note that the ovirt-guest-agent container image is not made to run in environments other than a RHEL or RHEL Atomic virtual machine in a RHV environment.
6.3.2. Getting and Running the ovirt-guest-agent System Container
To use an ovirt-guest-agent system container image on a RHEL Server or RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported ovirt-guest-agent container is:
registry.access.redhat.com/rhev4/ovirt-guest-agent
The procedure below illustrates how to pull, install, and run the ovirt-guest-agent container.
Pull the ovirt-guest-agent container: While logged into the RHEL or RHEL Atomic system, get the ovirt-guest-agent container by running the following command:
# atomic pull --storage=ostree registry.access.redhat.com/rhev4/ovirt-guest-agent
This pulls the ovirt-guest-agent system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won’t see the pulled ovirt-guest-agent container image.
Install the ovirt-guest-agent container: Type the following to do a default installation of the ovirt-guest-agent container so it is set up as a systemd service.
# atomic install --system rhev4/ovirt-guest-agent
Extracting to /var/lib/containers/atomic/ovirt-guest-agent.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/ovirt-guest-agent.conf
systemctl enable ovirt-guest-agent
Start the ovirt-guest-agent service: Use the systemctl command to start and enable the installed ovirt-guest-agent service as you would any other systemd service.
# systemctl start ovirt-guest-agent
# systemctl enable ovirt-guest-agent
Check ovirt-guest-agent with runc: To make sure the ovirt-guest-agent container is running, you can use the runc list command as you would use docker ps to see containers running under docker:
# runc list
ID                  PID         STATUS      BUNDLE                   CREATED
ovirt-guest-agent   4521        running     /sysroot/ostree/de...    2017-04-07T21:01:07.279104535Z
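Because the agent was installed as a systemd service, its status and log output are available through systemd as well (the unit name ovirt-guest-agent comes from the install step above):
# systemctl status ovirt-guest-agent
# journalctl -u ovirt-guest-agent --since "10 minutes ago"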
6.3.3. Removing the ovirt-guest-agent Container and Image
If you are done with the ovirt-guest-agent container image, you can stop and remove the container, then uninstall the image:
# atomic containers delete ovirt-guest-agent
Do you wish to delete the following images?
   ID           NAME                IMAGE_NAME                   STORAGE
   ovirt-guest- ovirt-guest-agent   registry.access.redhat.com   ostree
Confirm (y/N) y
systemctl stop ovirt-guest-agent
systemctl disable ovirt-guest-agent
systemd-tmpfiles --remove /etc/tmpfiles.d/ovirt-guest-agent.conf

# atomic uninstall registry.access.redhat.com/rhev4/ovirt-guest-agent
Do you wish to delete the following images?
   IMAGE                                                 STORAGE
   registry.access.redhat.com/rhev4/ovirt-guest-agent    ostree
Confirm (y/N) y
For more information on system containers, see Introduction to System Containers.
6.4. Using the open-vm-tools System Container Image for VMware
6.4.1. Overview
The open-vm-tools container provides services and modules that allow VMware technology to manage and otherwise work with Red Hat Enterprise Linux and RHEL Atomic Host virtual machines running in VMware environments. Kernel modules included in this container are made to improve performance of RHEL systems running as VMware guests. Services provided by this container include:
- Graceful power operations
- Script execution on guests during power operations
- Enhanced guest automation via custom programs or file system operations
- Guest authentication
- Guest network, memory, and disk usage information collection
- Guest heartbeat generation, used to determine if guests are available
- Guest, host, and client desktop clock synchronization
- Host access to obtain file-system-consistent guest file system snapshots
- Guest script execution associated with quiescing guest file systems (pre-freeze and post-thaw)
- Guest customization opportunities after guests power up
- File folder sharing between VMware (Workstation or Fusion) and guest system
- Text, graphics, and file pasting between guests, hosts and client desktops
The open-vm-tools container is a system container, designed to come up before the docker service or in a situation where no docker service is available. In this case, the open-vm-tools container allows VMware technologies to manage the RHEL or RHEL Atomic virtual machines on which it is running whether the docker service is running or not.
Here are some of the features of the open-vm-tools container on the RHEL guest system:
- Supports atomic pull: Use the atomic pull command to pull the open-vm-tools container to your system.
- Supports atomic install: Use the atomic install --system command to set up the open-vm-tools service to run as a systemd service.
- System container: After you have used the atomic command to install the open-vm-tools container, you can use the systemd systemctl command to manage the service.
Note that the open-vm-tools container image is not made to run in environments other than a RHEL or RHEL Atomic virtual machine in a VMware environment.
6.4.2. Getting and Running the open-vm-tools System Container
To use an open-vm-tools system container image on a RHEL Server or RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported open-vm-tools container is:
registry.access.redhat.com/rhel7/open-vm-tools
The procedure below illustrates how to pull, install, and run the open-vm-tools container.
Pull the open-vm-tools container: While logged into the RHEL or RHEL Atomic system, get the open-vm-tools container by running the following command:
# atomic pull --storage=ostree registry.access.redhat.com/rhel7/open-vm-tools
This pulls the open-vm-tools system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won’t see the pulled open-vm-tools container image.
Install the open-vm-tools container: Type the following to do a default installation of the open-vm-tools container so it is set up as a systemd service.
# atomic install --system rhel7/open-vm-tools
Extracting to /var/lib/containers/atomic/open-vm-tools.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/open-vm-tools.conf
systemctl enable open-vm-tools
Start the open-vm-tools service: Use the systemctl command to start and enable the installed open-vm-tools service as you would any other systemd service.
# systemctl start open-vm-tools
# systemctl enable open-vm-tools
Check open-vm-tools with runc: To make sure the open-vm-tools container is running, you can use the runc list command as you would use docker ps to see containers running under docker:
# runc list
ID              PID         STATUS      BUNDLE                   CREATED
open-vm-tools   4521        running     /sysroot/ostree/de...    2017-04-07T18:03:01.913246491Z
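As with the other system containers, the service can also be inspected through systemd (the unit name open-vm-tools comes from the install step above):
# systemctl status open-vm-tools
# journalctl -u open-vm-tools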
6.4.3. Removing the open-vm-tools Container and Image
If you are done with the open-vm-tools container image, you can stop and remove the container, then uninstall the image:
# atomic containers delete open-vm-tools
Do you wish to delete the following images?
   ID           NAME            IMAGE_NAME                   STORAGE
   ovirt-guest- open-vm-tools   registry.access.redhat.com   ostree
Confirm (y/N) y
systemctl stop open-vm-tools
systemctl disable open-vm-tools
systemd-tmpfiles --remove /etc/tmpfiles.d/open-vm-tools.conf

# atomic uninstall registry.access.redhat.com/rhel7/open-vm-tools
Do you wish to delete the following images?
   IMAGE                                            STORAGE
   registry.access.redhat.com/rhel7/open-vm-tools   ostree
Confirm (y/N) y
To learn more about how the open-vm-tools container was built, refer to Containerizing open-vm-tools. Using the instructions in that article allows you to build your own open-vm-tools container, using custom configuration settings. For more information on system containers, see Introduction to System Containers.