Chapter 7. Multiple OpenShift Deployments

7.1. Prerequisites

The prerequisites described in Section 3.1, “Prerequisites for Provisioning” also apply when deploying an additional OCP environment into AWS. The checklist below summarizes the tasks to complete before deploying another OCP cluster.

  • Create subdomain
  • Map subdomain NS records to root domain
  • Configure authentication
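As a sketch of the first two checklist items, the subdomain can be created as a Route 53 hosted zone with the AWS CLI and then delegated from the root domain. The zone name matches the example used later in this chapter; the `<zone_id>` placeholder is an assumption to be replaced with the ID returned by the first command.

```shell
# Create a hosted zone for the new subdomain (zone name is an example).
aws route53 create-hosted-zone \
    --name prod.sysdeseng.com \
    --caller-reference "prod-$(date +%s)"

# Show the name servers assigned to the new zone; add these as an NS
# record set for prod.sysdeseng.com in the root sysdeseng.com zone.
aws route53 get-hosted-zone --id <zone_id> \
    --query 'DelegationSet.NameServers'

# Verify the delegation once the NS records have propagated.
dig NS prod.sysdeseng.com +short
```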

7.1.1. SSH Configuration

The .ssh/config file must reflect both the existing environment and the new environment. In the example below, dev is the existing deployment and prod is the new deployment.

Host dev
     Hostname                   bastion.dev.sysdeseng.com
     User                       ec2-user
     StrictHostKeyChecking      no
     ProxyCommand               none
     CheckHostIP                no
     ForwardAgent               yes
     IdentityFile               /home/<user>/.ssh/id_rsa

Host *.dev.sysdeseng.com
     ProxyCommand               ssh ec2-user@dev -W %h:%p
     User                       ec2-user
     IdentityFile               /home/<user>/.ssh/id_rsa

Host prod
     Hostname                   bastion.prod.sysdeseng.com
     User                       ec2-user
     StrictHostKeyChecking      no
     ProxyCommand               none
     CheckHostIP                no
     ForwardAgent               yes
     IdentityFile               /home/<user>/.ssh/id_rsa

Host *.prod.sysdeseng.com
     ProxyCommand               ssh ec2-user@prod -W %h:%p
     User                       ec2-user
     IdentityFile               /home/<user>/.ssh/id_rsa
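With the stanzas above in place, connections to hosts in either environment are proxied through the matching bastion automatically. A brief usage sketch (the instance hostname is an assumption for illustration):

```shell
# Connects directly to the prod bastion host.
ssh prod

# Matches the *.prod.sysdeseng.com stanza, so the connection is
# tunneled through the prod bastion via the ProxyCommand.
ssh ose-master01.prod.sysdeseng.com
```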

7.2. Deploying the Environment

Using the ose-on-aws.py script to deploy another OCP cluster is almost identical to the process defined in Section 3.1, “Prerequisites for Provisioning”; the important difference is the --stack-name option. If ose-on-aws.py is launched with the same stack name as a previously deployed environment, the CloudFormation facts will be overwritten, breaking the existing deployment.

Note

Verify the existing stack name by browsing to the AWS console and selecting the CloudFormation service before proceeding with the steps below.
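The existing stack names can also be listed from the command line with the AWS CLI; a sketch, assuming the AWS credentials below are already exported:

```shell
# List the names of all successfully created or updated
# CloudFormation stacks in the current region.
aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --query 'StackSummaries[].StackName' \
    --output text
```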

$ export AWS_ACCESS_KEY_ID=<key_id>
$ export AWS_SECRET_ACCESS_KEY=<access_key>
$ ./ose-on-aws.py --stack-name=prod --rhsm-user=rhsm-user --rhsm-password=rhsm-password --public-hosted-zone=prod.sysdeseng.com --keypair=OSE-key --github-client-secret=47a0c41f0295b451834675ed78aecfb7876905f9 --github-organization=openshift --github-organization=RHSyseng --github-client-id=3a30415d84720ad14abc --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core"

Example of Greenfield Deployment Values

The value stack_name: prod below ensures that the existing dev deployment will not be compromised.

Configured values:
    stack_name: prod
    ami: ami-10251c7a
    region: us-east-1
    master_instance_type: m4.large
    node_instance_type: t2.medium
    app_instance_type: t2.medium
    bastion_instance_type: t2.micro
    keypair: OSE-key
    create_key: no
    key_path: /dev/null
    create_vpc: yes
    vpc_id: None
    private_subnet_id1: None
    private_subnet_id2: None
    private_subnet_id3: None
    public_subnet_id1: None
    public_subnet_id2: None
    public_subnet_id3: None
    byo_bastion: no
    bastion_sg: /dev/null
    console port: 443
    deployment_type: openshift-enterprise
    openshift_sdn: redhat/openshift-ovs-subnet
    public_hosted_zone: prod.sysdeseng.com
    app_dns_prefix: apps
    apps_dns: apps.prod.sysdeseng.com
    rhsm_user: rhsm-user
    rhsm_password:
    rhsm_pool: Red Hat OpenShift Container Platform, Standard, 2-Core
    containerized: False
    s3_bucket_name: prod-ocp-registry-prod
    s3_username: prod-s3-openshift-user
    github_client_id:
    github_client_secret: *
    github_organization: openshift,RHSyseng
    deploy_openshift_metrics: true
    deploy_openshift_logging: true