Installing Red Hat CloudForms on OpenShift Container Platform

Red Hat CloudForms 4.6

How to install and configure Red Hat CloudForms on an OpenShift Container Platform environment

Red Hat CloudForms Documentation Team


This guide provides instructions on how to deploy and configure a Red Hat CloudForms appliance as multiple pods on an OpenShift Container Platform environment.
If you have a suggestion for improving this guide or have found an error, please submit a Bugzilla report against Red Hat CloudForms Management Engine for the Documentation component. Include specific details, such as the section number, guide name, and CloudForms version, so we can easily locate the content.

Chapter 1. Installing Red Hat CloudForms

Red Hat CloudForms can be installed on OpenShift Container Platform in a few steps.

This procedure uses a template to deploy a multi-pod CloudForms appliance with the database stored in a persistent volume on OpenShift Container Platform. It provides a step-by-step setup, including cluster administrative tasks as well as information and commands for the application developer using the deployment.

The ultimate goal of the deployment is to be able to deconstruct the CloudForms appliance into several containers running on a pod or a series of pods.

Running the CloudForms appliance in a series of pods has several advantages. For example, running each worker in a separate pod allows OpenShift Container Platform to manage worker processes and reduce worker memory consumption. OpenShift can also easily scale workers by adding or removing pods, and perform upgrades by using images.

There are two options for installing CloudForms on OpenShift:

  • During OpenShift Container Platform 3.7 installation:

    • When you install OpenShift Container Platform 3.7, you have the option to install CloudForms inside OpenShift at that time. This method leverages the Ansible installer to run and deploy the CloudForms template, instead of building the environment manually. See the OpenShift Container Platform 3.7 Release Notes for details.
  • Manual install on an existing OpenShift Container Platform environment:

    • Deploy CloudForms pods using the CloudForms template (.yaml file). This is the method described in this guide.

After deployment, you can configure the CloudForms environment to use any external authentication configurations supported by CloudForms.

1.1. Prerequisites

To successfully deploy a CloudForms appliance on OpenShift Container Platform, you need a functioning OpenShift Container Platform 3.7 installation with the following configured:

  • NFS or other compatible volume provider
  • A cluster-admin user
  • A regular user (such as an application developer)

OpenShift Container Platform 3.7 is required for this installation. Red Hat has not tested this procedure with earlier versions of OpenShift Container Platform.
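
Before you begin, you can confirm the cluster version and that schedulable nodes are available. For example:

$ oc version
$ oc get nodes    # run as a cluster-admin to confirm schedulable nodes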

1.1.1. Cluster Sizing

To avoid deployment failures due to resource starvation, Red Hat recommends the following minimum cluster size for a test environment:

  • 1 master node with at least 8 vCPUs and 12GB RAM
  • 2 schedulable nodes with at least 4 vCPUs and 8GB RAM
  • 25GB storage for CloudForms persistent volume use

These recommendations assume CloudForms is the only application running on this cluster. Alternatively, you can provision an infrastructure node to run registry, metrics, router, and logging pods.

Each CloudForms application pod will consume at least 3GB RAM on initial deployment (without providers added). RAM consumption increases depending on the appliance use. For example, after adding providers, expect higher resource consumption.

1.1.2. Limitations

The following limitations exist when deploying this version of CloudForms on OpenShift Container Platform 3.7:

  • This configuration cannot run on public OpenShift (OpenShift Online and OpenShift Dedicated environments) because of necessary privileges
  • The Embedded Ansible pod must run as a privileged pod
  • OpenShift cannot independently scale workers
  • A highly available database is not supported in PostgreSQL pods

1.1.3. Templates and Images

The CloudForms deployment uses .yaml template files to create the appliance, including cfme-template.yaml, which is the CloudForms template used for the deployment, and cfme-pv-db-example.yaml and cfme-pv-server-example.yaml, two persistent volume templates.

These templates are distributed in RPMs along with the Red Hat-provided image streams. To obtain the templates:

  1. Configure image streams as described in OpenShift Container Platform Installation and Configuration.
  2. After loading the image streams and templates, the templates will be available on your OpenShift system in /usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.7/cfme-templates.

The CloudForms template points to several image files to create the OpenShift pods that comprise the appliance. These image files are obtained from the Red Hat Container Catalog during deployment.
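
Once the image streams are loaded, you can confirm they are present in the openshift namespace; the exact stream names can vary by release:

$ oc get imagestreams -n openshift | grep cfme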

1.2. Preparing to Deploy CloudForms

To prepare for deploying the CloudForms appliance to OpenShift Container Platform, create a project, configure security contexts, and create persistent storage.

  1. As a regular user, log in to OpenShift:

    $ oc login -u <user> -p <password>
  2. Create a project with your desired parameters. The project name (<your_project> in this example) is mandatory, but <description> and <display_name> are optional:

    $ oc new-project <your_project> \
    --description="<description>" \
    --display-name="<display_name>"
  3. As the admin user, configure security context constraints (SCCs) for your OpenShift service accounts:

    1. Add the cfme-anyuid service account to the anyuid SCC:

      $ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:cfme-anyuid
    2. Add the cfme-orchestrator service account to the anyuid SCC:

      $ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:cfme-orchestrator
    3. Add the cfme-httpd service account to the anyuid SCC:

      $ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:cfme-httpd
    4. Add the cfme-privileged service account to the privileged SCC:

      $ oc adm policy add-scc-to-user privileged system:serviceaccount:<your-project>:cfme-privileged
  4. Verify the SCCs are added correctly to the service accounts and project:

    $ oc describe scc anyuid | grep Users
    Users:					system:serviceaccount:<your-project>:cfme-anyuid,system:serviceaccount:<your-project>:cfme-httpd,system:serviceaccount:<your-project>:cfme-orchestrator
    $ oc describe scc privileged | grep Users
    Users:					system:admin,system:serviceaccount:openshift-infra:build-controller,system:serviceaccount:management-infra:management-admin,system:serviceaccount:management-infra:inspector-admin,system:serviceaccount:logging:aggregated-logging-fluentd,system:serviceaccount:<your-project>:cfme-privileged

    For more information on SCCs, see the OpenShift documentation.

  5. Add the view and edit roles to the cfme-orchestrator service account:

    $ oc policy add-role-to-user view system:serviceaccount:<your-project>:cfme-orchestrator -n <your-project>
    $ oc policy add-role-to-user edit system:serviceaccount:<your-project>:cfme-orchestrator -n <your-project>
  6. As the admin user, prepare persistent storage for the deployment. (Skip this step if you have already configured persistent storage.)

    A basic CloudForms deployment needs at least two persistent volumes (PVs) to store CloudForms data. As the admin user, create two persistent volumes: one to host the CloudForms PostgreSQL database, and one to host the application data.

    Example NFS-backed volume templates are provided by cfme-pv-db-example.yaml and cfme-pv-server-example.yaml, available from GitHub.
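
    For reference, an NFS-backed persistent volume definition has the following general shape. The values below are illustrative only, not the exact contents of the Red Hat-provided templates:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: cfme-db
      spec:
        capacity:
          storage: 15Gi
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        nfs:
          path: /exports/cfme-db
          server: <nfs_host>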


    For NFS-backed volumes, ensure your NFS server firewall is configured to allow traffic on port 2049 (TCP) from the OpenShift cluster.
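
    For example, on a firewalld-based NFS server, you can open the port with the following commands (adjust for your firewall solution):

      # firewall-cmd --permanent --add-port=2049/tcp
      # firewall-cmd --reload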

    Red Hat recommends setting permissions for the pv-app (privileged pod volume) as 777, uid/gid 0 (owned by root). For more information on configuring persistent storage in OpenShift Container Platform, see the OpenShift Container Platform Installation and Configuration guide.

    1. Configure your NFS server host details within these files, and edit any other settings needed to match your environment.
    2. Run the following commands to create the two persistent volumes:

      $ oc create -f cfme-pv-db-example.yaml
      $ oc create -f cfme-pv-server-example.yaml
    3. Process the templates, setting the NFS_HOST parameter (mandatory) and any other parameters:

      $ oc process cfme-pv-db-example.yaml -p NFS_HOST=<nfs_host> | oc create -f -
      $ oc process cfme-pv-server-example.yaml -p NFS_HOST=<nfs_host> | oc create -f -

      The templates accept three parameters, but only NFS_HOST must be set; PV_SIZE and BASE_PATH have defaults that do not need editing unless desired:

      • PV_SIZE - Defaults to the recommended PV size for the App/DB template (5Gi/15Gi respectively)
      • BASE_PATH - Defaults to /exports
      • NFS_HOST - No default - Hostname or IP address of the NFS server
    4. Verify the persistent volumes were created successfully:

      $ oc get pv
      NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
      cfme-app   5Gi        RWO           Retain          Available                       16s
      cfme-db    15Gi       RWO           Retain          Available                       49s

      Red Hat recommends validating NFS share connectivity from an OpenShift node before attempting a deployment.
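
      For example, from an OpenShift node, you can check that the NFS exports are visible and mountable (showmount is provided by the nfs-utils package):

      $ showmount -e <nfs_host>
      $ sudo mount -t nfs <nfs_host>:/exports /mnt && sudo umount /mnt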

  7. Increase the maximum number of images imported per image stream.

    By default, OpenShift Container Platform can import five tags per image stream, but the CloudForms repositories contain more than five images for deployments.

    You can modify this setting on the master node at /etc/origin/master/master-config.yaml so OpenShift can import additional images.

    1. Add the following at the end of the /etc/origin/master/master-config.yaml file:

        imagePolicyConfig:
          maxImagesBulkImportedPerRepository: 100
    2. Restart the master service:

      $ systemctl restart atomic-openshift-master

1.3. Deploying the CloudForms Appliance

To deploy the appliance on OpenShift Container Platform, create the CloudForms template and verify it is available in your project.

  1. As a regular user, create the CloudForms template:

    $ oc create -f cfme-template.yaml
    template "cloudforms" created
  2. Verify the template is available with your project:

    $ oc get templates
    NAME         DESCRIPTION                                    PARAMETERS        OBJECTS
    cloudforms   CloudForms appliance with persistent storage   18 (1 blank)      12
  3. (Optional) Customize the template’s deployment parameters. Use the following command to see the available parameters and descriptions:

    $ oc process --parameters -n <your-project> cloudforms

    To customize the deployment configuration parameters, run:

    $ oc edit dc/<deployconfig_name>
  4. To deploy CloudForms from template using default settings, run:

    $ oc new-app --template=cloudforms

    Alternatively, to deploy CloudForms from a template using customized settings, add the -p option and the desired parameters to the command. For example:

    $ oc new-app --template=cloudforms -p DATABASE_VOLUME_CAPACITY=2Gi -p POSTGRESQL_MEM_LIMIT=4Gi -p APPLICATION_DOMAIN=<hostname>

    The APPLICATION_DOMAIN parameter specifies the hostname used to reach the CloudForms application, which eventually constructs the route to the CloudForms pod. If you do not specify the APPLICATION_DOMAIN parameter, the CloudForms application will not be accessible after the deployment; however, this can be fixed by changing the route. For more information on OpenShift template parameters, see the OpenShift Container Platform Developer Guide.
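
    For example, if you omitted APPLICATION_DOMAIN, one way to restore access afterward is to patch the route's host; the route name and hostname below are illustrative:

    $ oc patch route cloudforms -p '{"spec":{"host":"cloudforms.apps.example.com"}}'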

1.3.1. Deploying the CloudForms Appliance Using an External Database

Before attempting to deploy CloudForms using an external database, ensure the following conditions are satisfied (an illustrative setup sketch follows the list):

  • Your OpenShift cluster can access the external PostgreSQL server
  • The CloudForms user, password, and role have been created on the external PostgreSQL server
  • The intended CloudForms database is created, and ownership has been assigned to the CloudForms user
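
As an illustration only, the user and database conditions above might be satisfied on the PostgreSQL server with commands along these lines, here using the CloudForms default names (root/vmdb_production) and a placeholder password:

$ psql -U postgres -c "CREATE ROLE root SUPERUSER LOGIN PASSWORD '<password>'"
$ psql -U postgres -c "CREATE DATABASE vmdb_production OWNER root"

Grant only the privileges your security policy allows; SUPERUSER is shown for brevity.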

To deploy the appliance:

  1. Import the CloudForms external database template:

    $ oc create -f templates/cfme-template-ext-db.yaml
  2. Launch the deployment with the following command. The database server IP address is required, and the other settings must match your remote PostgreSQL server.

    $ oc new-app --template=cloudforms-ext-db -p DATABASE_IP=<server_ip> -p DATABASE_USER=<user> -p DATABASE_PASSWORD=<password> -p DATABASE_NAME=<database_name>

1.4. Verifying the Configuration

Verify the deployment was successful by running the following commands as a regular user under the CloudForms project:


The first deployment can take several minutes to complete while OpenShift downloads the necessary images.

  1. Confirm the CloudForms pod is bound to the correct security context constraints:

    1. List and obtain the name of the cfme-app pod:

      $ oc get pod
      NAME                 READY     STATUS    RESTARTS   AGE
      cloudforms-0         1/1       Running   0          4m
      httpd-1-w486v        1/1       Running   0          4m
      memcached-1-4xtjc    1/1       Running   0          4m
      postgresql-1-n5tm6   1/1       Running   0          4m
    2. Export the configuration of the pod:

      $ oc export pod <cfme_pod_name>
    3. Examine the output to verify that the openshift.io/scc annotation has the value anyuid:
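
      For example, you can check the annotation directly and expect output similar to the following:

      $ oc get pod <cfme_pod_name> -o yaml | grep "openshift.io/scc"
          openshift.io/scc: anyuid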

  2. Verify the persistent volumes are attached to the postgresql and cfme-app pods:

    $ oc volume pods --all
      pvc/cfme-pgdb-claim (allocated 2GiB) as cfme-pgdb-volume
        mounted at /var/lib/pgsql/data
      secret/default-token-2se06 as default-token-2se06
        mounted at /var/run/secrets/
      pvc/cfme (allocated 2GiB) as cfme-app-volume
        mounted at /persistent
      secret/default-token-9q4ge as default-token-9q4ge
        mounted at /var/run/secrets/
  3. Check the readiness of the CloudForms pod:


    Allow approximately five minutes once pods are in the running state for CloudForms to start responding on HTTPS.

    $ oc describe pods <cfme_pod_name>
      Type      Status
      Ready     True
  4. After you have successfully validated your CloudForms deployment, disable automatic image change triggers to prevent unintended upgrades.

    By default, on initial deployments the automatic image change trigger is enabled. This could potentially start an unintended upgrade on a deployment if a newer image is found in the ImageStream.

    Disable the automatic image change triggers for CloudForms deployment configurations (DCs) on each project with the following commands:

    $ oc set triggers dc --manual -l app=cloudforms
    deploymentconfig "memcached" updated
    deploymentconfig "postgresql" updated
    $ oc set triggers dc --from-config --auto -l app=cloudforms
    deploymentconfig "memcached" updated
    deploymentconfig "postgresql" updated

    The configuration change trigger is kept enabled; to have full control of your deployments, you can alternatively turn it off. See the OpenShift Container Platform Developer Guide for more information on deployment triggers.
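
    To review the resulting trigger configuration at any time, run oc set triggers with no mutation flags:

    $ oc set triggers dc -l app=cloudforms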

1.5. Logging into CloudForms

As part of the deployment, a route to the CloudForms appliance is created for HTTPS access. Once the pods have been successfully deployed, you can log into CloudForms.

You can obtain the CloudForms host address from the project in the OpenShift user interface, or by opening a shell on the pod and getting the route information.

  1. To open a shell on the pod, run:

    $ oc rsh <pod_name> bash -l
  2. Get the route information:

    $ oc get routes
    NAME         HOST/PORT                   PATH                SERVICE      TERMINATION   LABELS
    cloudforms   <application_domain>                            cloudforms:443-tcp   passthrough   app=cloudforms
  3. Navigate to the reported host (the HOST/PORT value) in a web browser.
  4. Enter the default CloudForms credentials (Username: admin | Password: smartvm) for the initial login.
  5. Click Login.

Chapter 2. Configuring Red Hat CloudForms

After installing CloudForms and running it for the first time, you must perform some basic configuration. To configure CloudForms, you must at a minimum:

  1. Add a disk to the infrastructure hosting your appliance.
  2. Configure the database.

Configure the CloudForms appliance using the internal appliance console.

2.1. Accessing the Appliance Console

  1. Start the appliance and open a terminal console.
  2. Enter the appliance_console command. The Red Hat CloudForms appliance summary screen displays.
  3. Press Enter to manually configure settings.
  4. Press the number for the item you want to change, and press Enter. The options for your selection are displayed.
  5. Follow the prompts to make the changes.
  6. Press Enter to accept a setting where applicable.

The CloudForms appliance console automatically logs out after five minutes of inactivity.

2.2. Configuring a Database

CloudForms uses a database to store information about the environment. Before using CloudForms, configure the database options for it; CloudForms provides the following two options for database configuration:

  • Install an internal PostgreSQL database to the appliance
  • Configure the appliance to use an external PostgreSQL database

2.2.1. Configuring an Internal Database


Before installing an internal database, add a disk to the infrastructure hosting your appliance. See the documentation specific to your infrastructure for instructions on adding a disk. Because a storage disk usually cannot be added while a virtual machine is running, Red Hat recommends adding the disk before starting the appliance. Red Hat CloudForms only supports installation of an internal VMDB on blank disks; installation will fail if the disks are not blank.

  1. Start the appliance and open a terminal console.
  2. Enter the appliance_console command. The Red Hat CloudForms appliance summary screen displays.
  3. Press Enter to manually configure settings.
  4. Select 5) Configure Database from the menu.
  5. You are prompted to create or fetch an encryption key.

    • If this is the first Red Hat CloudForms appliance, choose 1) Create key.
    • If this is not the first Red Hat CloudForms appliance, choose 2) Fetch key from remote machine to fetch the key from the first appliance. For worker and multi-region setups, use this option to copy the key from another appliance.


      All CloudForms appliances in a multi-region deployment must use the same key.

  6. Choose 1) Create Internal Database for the database location.
  7. Choose a disk for the database. This can be either a disk you attached previously, or a partition on the current disk.


    Red Hat recommends using a separate disk for the database.

    If there is an unpartitioned disk attached to the virtual machine, the dialog will show options similar to the following:

    1) /dev/vdb: 20480
    2) Don't partition the disk
    • Enter 1 to choose /dev/vdb for the database location. This option creates a logical volume using this device and mounts the volume to the appliance in a location appropriate for storing the database. The default location is /var/opt/rh/rh-postgresql95/lib/pgsql, which can be found in the environment variable $APPLIANCE_PG_MOUNT_POINT.
    • Enter 2 to continue without partitioning the disk. A second prompt will confirm this choice. Selecting this option results in using the root filesystem for the data directory (not advised in most cases).
  8. Enter Y or N for Should this appliance run as a standalone database server?

    • Select Y to configure the appliance as a database-only appliance. As a result, the appliance is configured as a basic PostgreSQL server, without a user interface.
    • Select N to configure the appliance with the full administrative user interface.
  9. When prompted, enter a unique number to create a new region.


    Creating a new region destroys any existing data on the chosen database.

  10. Create and confirm a password for the database.

Red Hat CloudForms then configures the internal database.

2.2.2. Configuring an External Database

Depending on your setup, you might choose to configure the appliance to use an external PostgreSQL database. For example, a single region can only have one database. However, a region can be segmented into multiple zones, such as a database zone, user interface zone, and reporting zone, where each zone provides a specific function. The appliances in these zones must be configured to use an external database.

The postgresql.conf file used with Red Hat CloudForms databases requires specific settings for correct operation. For example, it must correctly reclaim table space, control session timeouts, and format the PostgreSQL server log for improved system support. Due to these requirements, Red Hat recommends that external Red Hat CloudForms databases use a postgresql.conf file based on the standard file used by the Red Hat CloudForms appliance.

Ensure you configure the settings in the postgresql.conf to suit your system. For example, customize the shared_buffers setting according to the amount of real storage available in the external system hosting the PostgreSQL instance. In addition, depending on the aggregate number of appliances expected to connect to the PostgreSQL instance, it may be necessary to alter the max_connections setting.
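
As a rough illustration only (size these values for your own hardware and appliance count), the relevant postgresql.conf settings look like this:

shared_buffers = 1GB        # illustrative; tune to the memory of the database host
max_connections = 1000      # illustrative; scale with the number of connecting appliances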

  • Red Hat CloudForms 4.6 requires PostgreSQL version 9.5, matching the rh-postgresql95 packages used on the appliance.
  • Because the postgresql.conf file controls the operation of all databases managed by a single instance of PostgreSQL, do not mix Red Hat CloudForms databases with other types of databases in a single PostgreSQL instance.
  1. Start the appliance and open a terminal console.
  2. Enter the appliance_console command. The Red Hat CloudForms appliance summary screen displays.
  3. Press Enter to manually configure settings.
  4. Select 5) Configure Database from the menu.
  5. You are prompted to create or fetch a security key.

    • If this is the first Red Hat CloudForms appliance, choose 1) Create key.
    • If this is not the first Red Hat CloudForms appliance, choose 2) Fetch key from remote machine to fetch the key from the first appliance.


      All CloudForms appliances in a multi-region deployment must use the same key.

  6. Choose 2) Create Region in External Database for the database location.
  7. Enter the database hostname or IP address when prompted.
  8. Enter the database name or leave blank for the default (vmdb_production).
  9. Enter the database username or leave blank for the default (root).
  10. Enter the chosen database user’s password.
  11. Confirm the configuration if prompted.

Red Hat CloudForms will then configure the external database.

2.3. Configuring a Worker Appliance

You can use multiple appliances to facilitate horizontal scaling, as well as for dividing up work by roles. Accordingly, configure an appliance to handle work for one or many roles, with workers within the appliance carrying out the duties for which they are configured. You can configure a worker appliance through the terminal. The following steps demonstrate how to join a worker appliance to an appliance that already has a region configured with a database.

  1. Start the appliance and open a terminal console.
  2. Enter the appliance_console command. The Red Hat CloudForms appliance summary screen displays.
  3. Press Enter to manually configure settings.
  4. Select 5) Configure Database from the menu.
  5. You are prompted to create or fetch a security key. Since this is not the first Red Hat CloudForms appliance, choose 2) Fetch key from remote machine. For worker and multi-region setups, use this option to copy the key from another appliance.


    All CloudForms appliances in a multi-region deployment must use the same key.

  6. Choose 3) Join Region in External Database for the database location.
  7. Enter the database hostname or IP address when prompted.
  8. Enter the port number or leave blank for the default (5432).
  9. Enter the database name or leave blank for the default (vmdb_production).
  10. Enter the database username or leave blank for the default (root).
  11. Enter the chosen database user’s password.
  12. Confirm the configuration if prompted.

Chapter 3. Managing Red Hat CloudForms with OpenShift

This section includes common tasks to manage your Red Hat CloudForms deployment from OpenShift.

3.1. Scaling CloudForms Appliances

StatefulSets in OpenShift manage the deployment and scaling of a set of pods (in this case, CloudForms appliances). StatefulSets provide each pod with a stable, unique identity and ensure that pods start in a defined order.


Each new replica (server) consumes a persistent volume. Before scaling, ensure you have enough persistent volumes available.
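
For example, you can confirm that unclaimed persistent volumes exist before scaling:

$ oc get pv | grep Available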

The following example shows scaling using StatefulSets:

Example: Scaling to two replicas

$ oc scale statefulset cloudforms --replicas=2
statefulset "cloudforms" scaled
$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
cloudforms-0           1/1       Running   0          34m
cloudforms-1           1/1       Running   0          5m
memcached-1-mzeer    1/1       Running   0          1h
postgresql-1-dufgp   1/1       Running   0          1h

The newly created replicas join the existing CloudForms region. Each new pod is numbered in the order it is deployed, starting with 0 and increasing sequentially. For example, replicas in a StatefulSet are numbered cloudforms-0, cloudforms-1, and so on.

3.2. Creating a Backup

Create a persistent volume for backups using the PV backup template (cfme-pv-backup-example.yaml) in case you need to restore to a previous version.

  1. Create the persistent volume for the backup:

    $ oc create -f cfme-pv-backup-example.yaml
  2. Create the backup persistent volume claim (PVC):

    $ oc create -f cfme-backup-pvc.yaml
  3. Verify the persistent volume claim was created:

    $ oc get pvc
  4. Back up secrets, such as database encryption keys and credentials.


    Be sure to store the backed-up secrets in a secure location.

    $ oc get secret -o yaml --export=true > secrets.yaml
    $ oc get pvc -o yaml --export=true > pvc.yaml
  5. Initiate the database backup:

    $ oc create -f cfme-backup-job.yaml

This step creates a container that connects to the database pod and performs the backup using pg_basebackup.
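
You can monitor the backup job with standard commands; the pod name below is a placeholder:

$ oc get pods | grep backup
$ oc logs <backup_pod_name>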

3.3. Restoring to a Backup

You can restore to a database backup created in Section 3.2, “Creating a Backup” using the restore template, cfme-restore-job.yaml.

The restore job will look for cfme-backup and cfme-postgresql PVs by default, and the latest successful backup will be restored by default. If existing data is found on the cfme-postgresql volume, it will be renamed and left on the volume.


You must perform a database restore in an offline environment: all pods must be scaled down to 0 and not running.

  1. Scale all pods down to 0:

    $ oc scale dc --all --replicas=0
    $ oc scale statefulset --all --replicas=0
  2. To initiate the database restore, create the restore template:

    $ oc create -f cfme-restore-job.yaml

After the restore job is complete, you can scale the pods back up.
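
For example, assuming the default names used in this guide, scaling back up might look like the following (bring the database up first):

$ oc scale dc postgresql --replicas=1
$ oc scale dc memcached --replicas=1
$ oc scale dc httpd --replicas=1
$ oc scale statefulset cloudforms --replicas=1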

3.4. Uninstalling Red Hat CloudForms from a Project

If no longer needed, you can uninstall the Red Hat CloudForms pods from your project. Note that the following commands do not remove SCC permissions or the project itself.


Use this procedure if only Red Hat CloudForms exists in the project.

  1. Inside the project, run the following as a regular user:

    $ oc delete all --all
  2. Wait approximately 30 seconds for the command to process, then run:

    $ oc delete pvc --all

Chapter 4. Troubleshooting Deployment

Under normal circumstances, the deployment process takes approximately 10 minutes. If the deployment is unsuccessful, examining deployment events and pod logs can help identify any issues.

  1. As a regular user, first retry the failed deployment:

    $ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    cloudforms-1-deploy  0/1       Error     0          25m
    memcached-1-yasfq    1/1       Running   0          24m
    postgresql-1-wfv59   1/1       Running   0          24m
    $ oc deploy cloudforms --retry
    Retried #1
    Use 'oc logs -f dc/cloudforms' to track its progress.
  2. Allow a few seconds for the failed pod to get re-scheduled, then check events and logs:

    $ oc describe pods <pod-name>
      FirstSeen	LastSeen	Count	From							SubobjectPath			Type		Reason		Message
      ---------	--------	-----	----							-------------			--------	------		-------
    15m		15m		1	{kubelet}	spec.containers{cloudforms}	Warning		Unhealthy	Readiness probe failed: Get dial tcp getsockopt: connection refused

    Liveness and readiness probe failures, like in the output above, indicate the pod is taking longer than expected to come online. In this case, check the pod logs.

  3. As the cfme-app container is systemd based, use oc rsh instead of oc logs to obtain journal dumps:

    $ oc rsh <pod-name> journalctl -x
  4. Transferring all logs from the cfme-app pod to a directory on the host for further examination can be useful for troubleshooting. Transfer the logs with the oc rsync command:

    $ oc rsync <pod-name>:/persistent/container-deploy/log <local_destination>
    receiving incremental file list
    sent 72 bytes  received 1881 bytes  1302.00 bytes/sec
    total size is 1585  speedup is 0.81

Appendix A. Appendix

A.1. Appliance Console Command-Line Interface (CLI)

Currently, the appliance_console_cli feature is a subset of the full functionality of the appliance_console itself, and covers functions most likely to be scripted using the command-line interface (CLI).

  1. After starting the Red Hat CloudForms appliance, log in with a user name of root and the default password of smartvm. This displays the Bash prompt for the root user.
  2. Enter the appliance_console_cli or appliance_console_cli --help command to see a list of options available with the command, or simply enter appliance_console_cli --option <argument> directly to use a specific option.

Table A.1. Database Configuration Options


--region (-r)

region number (create a new region in the database - requires database credentials passed)

--internal (-i)

internal database (create a database on the current appliance)


--dbdisk

database disk device path (for configuring an internal database)

--hostname (-h)

database hostname


--port

database port (defaults to 5432)

--username (-U)

database username (defaults to root)

--password (-p)

database password

--dbname (-d)

database name (defaults to vmdb_production)

Table A.2. v2_key Options


--key (-k)

create a new v2_key

--fetch-key (-K)

fetch the v2_key from the given host

--force-key (-f)

create or fetch the key even if one exists


--sshlogin

ssh username for fetching the v2_key (defaults to root)


--sshpassword

ssh password for fetching the v2_key

Table A.3. IPA Server Options


--host (-H)

set the appliance hostname to the given name

--ipaserver (-e)

IPA server FQDN

--ipaprincipal (-n)

IPA server principal (default: admin)

--ipapassword (-w)

IPA server password

--ipadomain (-o)

IPA server domain (optional). Will be based on the appliance domain name if not specified.

--iparealm (-l)

IPA server realm (optional). Will be based on the domain name of the ipaserver if not specified.

--uninstall-ipa (-u)

uninstall IPA client

  • External authentication through an IPA server can be configured using the appliance_console_cli, in addition to using Configure External Authentication (httpd) in the appliance_console.
  • Specifying --host updates the hostname of the appliance. If this step was already performed via the appliance_console, and the necessary updates were made to /etc/hosts (if DNS is not properly configured), the --host option can be omitted.

Table A.4. Certificate Options


--ca (-c)

CA name used for certmonger (default: ipa)

--postgres-client-cert (-g)

install certs for postgres client


--postgres-server-cert

install certs for postgres server


--http-cert

install certs for http server (to create certs/httpd* values for a unique key)

--extauth-opts (-x)

external authentication options


The certificate options augment the functionality of the certmonger tool, enabling creation of a certificate signing request (CSR) and specifying to certmonger the directories in which to store the keys.

Table A.5. Other Options


--logdisk (-l)

log disk path


--tmpdisk

initialize the given device for temp storage (volume mounted at /var/www/miq_tmp)

--verbose (-v)

print more debugging info

Example Usage

$ ssh root@<appliance_ip_address>

To create a new database locally on the server using /dev/sdb:

# appliance_console_cli --internal --dbdisk /dev/sdb --region 0 --password smartvm

To copy the v2_key from a host to the local machine:

# appliance_console_cli --fetch-key <hostname> --sshlogin root --sshpassword smartvm

You could combine the two to join a region, where <hostname> is the appliance hosting the database:

# appliance_console_cli --fetch-key <hostname> --sshlogin root --sshpassword smartvm --hostname <hostname> --password mydatabasepassword

To configure external authentication:

# appliance_console_cli --host <appliance_hostname> \
                        --iparealm TEST.COMPANY.COM \
                        --ipaprincipal admin \
                        --ipapassword smartvm1

To uninstall external authentication:

# appliance_console_cli  --uninstall-ipa