Installing OpenShift on OpenStack

Red Hat OpenStack Platform 14

A Guide to Installing OpenShift on OpenStack Bare Metal

OpenStack Documentation Team

Abstract

This document describes how to deploy OpenShift clusters on bare metal. Deploying on bare metal increases the number of container applications hosted per node, improves CPU performance, and lowers disk and network latency.

Chapter 1. Installing OpenShift on OpenStack

You can use Red Hat OpenStack director to deploy Red Hat OpenShift Container Platform (OCP) clusters onto bare metal nodes.

1.1. Prerequisites

  • Ensure that you have installed the OpenStack overcloud.
  • Run the hardware and network requirements validations.

You can run the hardware and network requirements validations in one of two ways:

  • From Red Hat OpenStack director.
  • From the command line.

Running the validations from Red Hat OpenStack director

To run the validations from director:

  1. In Red Hat OpenStack director, to open the Validations panel, click the validations icon at the top right of the window.
  2. To search for the OpenShift validations, type the word OpenShift in the validations search field. There are two OpenShift validations:

    • Network requirements
    • Hardware requirements
  3. To run an OpenShift validation, select the required validation from the list and click the play icon.

Running the validations from the command line

To run the hardware requirements validation from the command line:

$ openstack action execution run tripleo.validations.run_validation '{"validation": "openshift-hw-requirements", "plan": "overcloud"}'

To run the network requirements validation:

$ openstack action execution run tripleo.validations.run_validation '{"validation": "openshift-nw-requirements", "plan": "overcloud"}'
Note

If the validations fail, you can still attempt to install OpenShift. However, it is recommended that you fulfill the requirements of the validation before you install OpenShift.

Note

For both commands, the plan name “overcloud” is the default plan used in a director installation. If you are working with your own set of heat templates, use the name you chose when creating your custom plan.

1.2. Deploy OCP nodes using director

You can use director to deploy Red Hat OpenShift Container Platform (OCP) clusters onto bare metal nodes. Director deploys the operating system onto the nodes and then uses openshift-ansible to configure OCP. You can also use director to manage the bare metal nodes.

Director installs OCP services using composable roles for OpenShiftMaster, OpenShiftWorker, and OpenShiftInfra. When you import a bare metal node using instackenv.json, you can tag it to use a certain composable role. For more information on using Composable Roles, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/roles.

1.2.1. The OCP roles

The OpenShiftMaster role consists of the following services:

ServicesDefault:
- OS::TripleO::Services::ContainerImagePrepare
- OS::TripleO::Services::Docker
- OS::TripleO::Services::HAproxy
- OS::TripleO::Services::Keepalived
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::OpenShift::Master
- OS::TripleO::Services::Rhsm
- OS::TripleO::Services::Sshd
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::TripleoPackages

The OpenShiftWorker role consists of the following services:

ServicesDefault:
- OS::TripleO::Services::Docker
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::OpenShift::GlusterFS
- OS::TripleO::Services::OpenShift::Worker
- OS::TripleO::Services::Rhsm
- OS::TripleO::Services::Sshd
- OS::TripleO::Services::TripleoFirewall

The OpenShiftInfra role is a type of worker role that runs only infrastructure pods. It consists of the following services:

ServicesDefault:
- OS::TripleO::Services::Docker
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::OpenShift::GlusterFS
- OS::TripleO::Services::OpenShift::Infra
- OS::TripleO::Services::Rhsm
- OS::TripleO::Services::Sshd
- OS::TripleO::Services::TripleoFirewall

1.2.2. Define the OCP roles

Use this procedure to generate and review the OCP roles:

  1. Generate the OCP roles:

    $ openstack overcloud roles generate -o /home/stack/openshift_roles_data.yaml OpenShiftMaster OpenShiftWorker OpenShiftInfra
  2. View the OCP roles:

    $ openstack overcloud role list

    The result should include entries for OpenShiftMaster, OpenShiftWorker, and OpenShiftInfra.

  3. View more information about the OpenShiftMaster role:

    $ openstack overcloud role show OpenShiftMaster

1.2.3. Configure the container registry

After you deploy the undercloud, you must configure director to locate the container registry.

  1. Generate a /home/stack/containers-prepare-parameter.yaml file:

    $ openstack tripleo container image prepare default \
      --local-push-destination \
      --output-env-file containers-prepare-parameter.yaml

    For example, edit /home/stack/containers-prepare-parameter.yaml and add the following settings. Adapt these settings to suit your deployment:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ceph_image: rhceph-3-rhel7
          ceph_namespace: registry.access.redhat.com/rhceph
          ceph_tag: latest
          name_prefix: openstack-
          name_suffix: ''
          namespace: registry.access.redhat.com/rhosp14
          neutron_driver: null
          openshift_cluster_monitoring_image: ose-cluster-monitoring-operator
          openshift_cluster_monitoring_namespace: registry.access.redhat.com/openshift3
          openshift_cluster_monitoring_tag: v3.11
          openshift_cockpit_image: registry-console
          openshift_cockpit_namespace: registry.access.redhat.com/openshift3
          openshift_cockpit_tag: v3.11
          openshift_configmap_reload_image: ose-configmap-reloader
          openshift_configmap_reload_namespace: registry.access.redhat.com/openshift3
          openshift_configmap_reload_tag: v3.11
          openshift_etcd_image: etcd
          openshift_etcd_namespace: registry.access.redhat.com/rhel7
          openshift_etcd_tag: latest
          openshift_gluster_block_image: rhgs-gluster-block-prov-rhel7
          openshift_gluster_image: rhgs-server-rhel7
          openshift_gluster_namespace: registry.access.redhat.com/rhgs3
          openshift_gluster_tag: latest
          openshift_grafana_namespace: registry.access.redhat.com/openshift3
          openshift_grafana_tag: v3.11
          openshift_heketi_image: rhgs-volmanager-rhel7
          openshift_heketi_namespace: registry.access.redhat.com/rhgs3
          openshift_kube_rbac_proxy_image: ose-kube-rbac-proxy
          openshift_kube_rbac_proxy_namespace: registry.access.redhat.com/openshift3
          openshift_kube_rbac_proxy_tag: v3.11
          openshift_kube_state_metrics_image: ose-kube-state-metrics
          openshift_kube_state_metrics_namespace: registry.access.redhat.com/openshift3
          openshift_kube_state_metrics_tag: v3.11
          openshift_namespace: registry.access.redhat.com/openshift3
          openshift_oauth_proxy_tag: v3.11
          openshift_prefix: ose
          openshift_prometheus_alertmanager_tag: v3.11
          openshift_prometheus_config_reload_image: ose-prometheus-config-reloader
          openshift_prometheus_config_reload_namespace: registry.access.redhat.com/openshift3
          openshift_prometheus_config_reload_tag: v3.11
          openshift_prometheus_node_exporter_tag: v3.11
          openshift_prometheus_operator_image: ose-prometheus-operator
          openshift_prometheus_operator_namespace: registry.access.redhat.com/openshift3
          openshift_prometheus_operator_tag: v3.11
          openshift_prometheus_tag: v3.11
          openshift_tag: v3.11
          tag: latest
        tag_from_label: '{version}-{release}'

1.2.4. Create the OCP profiles

This procedure describes how to enroll a physical node as an OpenShift node.

  1. Create a flavor for each OCP role. Adjust these values to suit your requirements:

    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 --swap 500 m1.OpenShiftMaster
    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 --swap 500 m1.OpenShiftWorker
    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 --swap 500 m1.OpenShiftInfra
  2. Map the flavors to the required profile:

    openstack flavor set --property "capabilities:profile"="OpenShiftMaster" --property "capabilities:boot_option"="local" m1.OpenShiftMaster
    openstack flavor set --property "capabilities:profile"="OpenShiftWorker" --property "capabilities:boot_option"="local" m1.OpenShiftWorker
    openstack flavor set --property "capabilities:profile"="OpenShiftInfra" --property "capabilities:boot_option"="local" m1.OpenShiftInfra
  3. Add your nodes to instackenv.json. You must define them to use the capabilities field. For example:

    {
      "arch":"x86_64",
      "cpu":"4",
      "disk":"60",
      "mac":[
              "00:0c:29:9f:5f:05"
      ],
      "memory":"16384",
      "pm_type":"ipmi",
      "capabilities":"profile:OpenShiftMaster",
      "name": "OpenShiftMaster_1"
    },
    {
      "arch":"x86_64",
      "cpu":"4",
      "disk":"60",
      "mac":[
              "00:0c:29:91:b9:2d"
      ],
      "memory":"16384",
      "pm_type":"ipmi",
      "capabilities":"profile:OpenShiftWorker",
      "name": "OpenShiftWorker_1"
    },
    {
      "arch":"x86_64",
      "cpu":"4",
      "disk":"60",
      "mac":[
              "00:0c:29:91:b9:6a"
      ],
      "memory":"16384",
      "pm_type":"ipmi",
      "capabilities":"profile:OpenShiftInfra",
      "name": "OpenShiftInfra_1"
    }
  4. Import and introspect the OCP nodes as you normally would for your deployment. For example:

    openstack overcloud node import ~/instackenv.json
    openstack overcloud node introspect --all-manageable --provide
  5. Verify that the correct profile is assigned to each overcloud node:
$ openstack overcloud profiles list
+--------------------------------------+--------------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name          | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+--------------------+-----------------+-----------------+-------------------+
| 72b2b1fc-6ba4-4779-aac8-cc47f126424d | openshift-worker01 | available       | OpenShiftWorker |                   |
| d64dc690-a84d-42dd-a88d-2c588d2ee67f | openshift-worker02 | available       | OpenShiftWorker |                   |
| 74d2fd8b-a336-40bb-97a1-adda531286d9 | openshift-worker03 | available       | OpenShiftWorker |                   |
| 0eb17ec6-4e5d-4776-a080-ca2fdcd38e37 | openshift-infra02  | available       | OpenShiftInfra  |                   |
| 92603094-ba7c-4294-a6ac-81f8271ce83e | openshift-infra03  | available       | OpenShiftInfra  |                   |
| b925469f-72ec-45fb-a403-b7debfcf59d3 | openshift-master01 | available       | OpenShiftMaster |                   |
| 7e9e80f4-ad65-46e1-b6b4-4cbfa2eb7ea7 | openshift-master02 | available       | OpenShiftMaster |                   |
| c2bcdd3f-38c3-491b-b971-134cab9c4171 | openshift-master03 | available       | OpenShiftMaster |                   |
| ece0ef2f-6cc8-4912-bc00-ffb3561e0e00 | openshift-infra01  | available       | OpenShiftInfra  |                   |
| d3a17110-88cf-4930-ad9a-2b955477aa6c | openshift-custom01 | available       | None            |                   |
| 07041e7f-a101-4edb-bae1-06d9964fc215 | openshift-custom02 | available       | None            |                   |
+--------------------------------------+--------------------+-----------------+-----------------+-------------------+
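Before you import the nodes, you can sanity-check that every entry in instackenv.json carries a profile capability. The following is a minimal sketch that uses python3 on the undercloud; the inline sample stands in for your own file, which you would read with json.load instead:

```shell
# Check that each node entry defines a profile capability before import.
# The inline sample below is illustrative; read your own instackenv.json in practice.
python3 - <<'EOF'
import json

sample = '''{"nodes": [
  {"name": "OpenShiftMaster_1", "capabilities": "profile:OpenShiftMaster"},
  {"name": "OpenShiftWorker_1", "capabilities": "profile:OpenShiftWorker"},
  {"name": "OpenShiftInfra_1", "capabilities": "profile:OpenShiftInfra"}
]}'''

for node in json.loads(sample)["nodes"]:
    caps = node.get("capabilities", "")
    # The capabilities string may hold several comma-separated entries;
    # keep only the value that follows "profile:".
    profile = caps.split("profile:", 1)[1].split(",")[0] if "profile:" in caps else "NO PROFILE"
    print(f"{node['name']}: {profile}")
EOF
```

Any node that prints NO PROFILE would not be matched to a flavor during deployment.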

1.2.5. Define the OpenShift environment

Create the openshift_env.yaml file. This file defines the OpenShift-related settings that director will later apply as part of the openstack overcloud deploy procedure. Update these values to suit your deployment:

parameter_defaults:
# By default, director assigns the VIPs randomly from the allocation pool.
# Use the FixedIPs parameters to set the VIPs to predictable IP addresses before starting the deployment.

CloudName: openshift.localdomain
PublicVirtualFixedIPs: [{'ip_address':'10.0.0.200'}]

CloudNameInternal: internal.openshift.localdomain
InternalApiVirtualFixedIPs: [{'ip_address':'172.17.1.200'}]

CloudDomain: openshift.localdomain

## Required for CNS deployments only
OpenShiftInfraParameters:
  OpenShiftGlusterDisks:
    - /dev/vdb

## Required for CNS deployments only
OpenShiftWorkerParameters:
  OpenShiftGlusterDisks:
    - /dev/vdb
    - /dev/vdc

NtpServer: ["clock.redhat.com","clock2.redhat.com"]

ControlPlaneDefaultRoute: 192.168.24.1
EC2MetadataIp: 192.168.24.1
ControlPlaneSubnetCidr: 24

# The DNS server below should have entries for resolving {internal,public,apps}.openshift.localdomain names
DnsServers:
   - 10.0.0.90

OpenShiftGlobalVariables:

    openshift_master_identity_providers:
    - name: 'htpasswd_auth'
      login: 'true'
      challenge: 'true'
      kind: 'HTPasswdPasswordIdentityProvider'
    openshift_master_htpasswd_users:
      sysadmin: '$apr1$jpBOUqeU$X4jUsMyCHOOp8TFYtPq0v1'

    #openshift_master_cluster_hostname should match the CloudNameInternal parameter
    openshift_master_cluster_hostname: internal.openshift.localdomain

    #openshift_master_cluster_public_hostname should match the CloudName parameter
    openshift_master_cluster_public_hostname: public.openshift.localdomain

    openshift_master_default_subdomain: apps.openshift.localdomain
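The DnsServers entry must resolve the internal, public, and apps names derived from CloudDomain. As a quick pre-flight check, you can test resolution with getent; the hostnames below match the example configuration and should be replaced with your own domain:

```shell
# Confirm that the names referenced in openshift_env.yaml resolve.
# These hostnames match the sample configuration above.
for name in internal.openshift.localdomain \
            public.openshift.localdomain \
            apps.openshift.localdomain; do
  if getent hosts "$name" > /dev/null; then
    echo "$name resolves"
  else
    echo "$name DOES NOT resolve" >&2
  fi
done
```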

For custom networks or custom interfaces, you must use custom network interface templates:

resource_registry:
  OS::TripleO::OpenShiftMaster::Net::SoftwareConfig: /home/stack/master-nic.yaml
  OS::TripleO::OpenShiftWorker::Net::SoftwareConfig: /home/stack/worker-nic.yaml
  OS::TripleO::OpenShiftInfra::Net::SoftwareConfig: /home/stack/infra-nic.yaml

1.2.6. Register overcloud nodes to the OpenShift repository

Your overcloud nodes require access to the OpenShift repository to install OCP packages. For information on how to configure RHSM in your director-based deployment, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/ansible-based-registration. To make the OpenShift packages available to your nodes, add an entry for rhel-7-server-ose-3.11-rpms to your rhsm.yaml file:

resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/extraconfig/services/rhsm.yaml

parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
      - rhel-7-server-ose-3.11-rpms
    rhsm_pool_ids: "8a85f37c63842fef0166949e5f9c4be0"
    rhsm_method: "portal"
    rhsm_username: yourusername
    rhsm_password: yourpassword
    rhsm_autosubscribe: true

Alternatively, use an activation key. The activation key must include enough subscriptions to enable the required repositories:

resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/extraconfig/services/rhsm.yaml

parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
      - rhel-7-server-ose-3.11-rpms
    rhsm_activation_key: "activation-key"
    rhsm_org_id: "1234567"
    rhsm_pool_ids: "8a85f9833e1404a6023e4cddf95a0599"
    rhsm_method: "portal"
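After deployment, you can confirm on any overcloud node that the OpenShift repository was enabled by the registration. This is a suggested check, assuming you have SSH access to the node:

```shell
# On an overcloud node, confirm that the OpenShift repository is enabled.
# Prints the repository entry if registration succeeded.
sudo subscription-manager repos --list-enabled | grep rhel-7-server-ose-3.11-rpms
```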

1.2.7. Deploy OCP nodes

As a result of the previous steps, you have the following new YAML files:

  • openshift_env.yaml
  • openshift_roles_data.yaml
  • containers-prepare-parameter.yaml

For custom network deployments, you might also need NIC and network templates, for example:

  • master-nic.yaml
  • infra-nic.yaml
  • worker-nic.yaml
  • network_data_openshift.yaml

Add these YAML files to your openstack overcloud deploy command. For example, for CNS deployments:

$ openstack overcloud deploy \
--stack openshift \
--templates \
-r /home/stack/openshift_roles_data.yaml \
-n /usr/share/openstack-tripleo-heat-templates/network_data_openshift.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/openshift.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/openshift-cns.yaml \
-e /home/stack/openshift_env.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/rhsm.yaml

For example, for non-CNS deployments:

$ openstack overcloud deploy \
--stack openshift \
--templates \
-r /home/stack/openshift_roles_data.yaml \
-n /usr/share/openstack-tripleo-heat-templates/network_data_openshift.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/openshift.yaml \
-e /home/stack/openshift_env.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/rhsm.yaml

For deployments with custom networks or interfaces, you must specify the custom templates. For example:

$ openstack overcloud deploy \
--stack openshift \
--templates \
-r /home/stack/openshift_roles_data.yaml \
-n /home/stack/network_data_openshift.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/openshift.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/openshift-cns.yaml \
-e /home/stack/openshift_env.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/custom-nics.yaml

1.2.8. Review the OCP deployment

When the overcloud deploy procedure completes, review the state of your OCP nodes.

  1. List all of your bare metal nodes. You should see your master and worker nodes.

    $ openstack baremetal node list
  2. Locate the OpenShift node:

    $ openstack server list
  3. SSH to one of the OpenShift master nodes. For example:

    $ ssh heat-admin@192.168.122.43
  4. Change to the root user:

    $ sudo -i
  5. Review the container orchestration configuration:

    $ cat .kube/config
  6. Log in to OCP:

    $ oc login -u admin
  7. Review existing projects:

    $ oc get projects
  8. Review the OCP status:

    $ oc status
  9. Log out of OCP:

    $ oc logout
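In addition to oc status, a quick way to confirm that all master, infra, and worker nodes joined the cluster is to list the nodes and the core pods while logged in as admin. This is a suggested extra check, not part of the original procedure:

```shell
# Every node should report a Ready status.
oc get nodes
# The registry and router pods should be Running in the default project.
oc get pods -n default
```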

1.2.9. Deploy a test app using OCP

This procedure describes how to create a test application in your new OCP deployment.

  1. Log in as a developer:

    $ oc login -u developer
    Logged into "https://192.168.64.3:8443" as "developer" using existing credentials.
    
    You have one project on this server: "myproject"
    
    Using project "myproject".
  2. Create a new project:

    $ oc new-project test-project
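From here, you can deploy a sample application into the new project to confirm that scheduling and routing work. The image name below is an assumption for illustration; any image available to your cluster works:

```shell
# Deploy a sample container image into the test project and expose it.
# openshift/hello-openshift is an example image, not a requirement.
oc new-app openshift/hello-openshift --name=hello-test
oc expose service/hello-test
# Watch the pod start; press Ctrl+C to stop watching.
oc get pods -w
```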

Additional resources

  • For more information about installing OpenShift Container Platform clusters, see Installing Clusters.
  • For more information about configuring OpenShift Container Platform clusters, see Configuring Clusters.