Chapter 3. Deploying OpenShift

This chapter focuses on Phases 1 and 2 of the process. The prerequisites defined below are required for a successful deployment of the infrastructure and the installation of OpenShift.

3.1. Prerequisites for Provisioning

The script and playbooks provided within the git repository deploy infrastructure, install and configure OpenShift, and scale the router and registry. The playbooks create specific roles, policies, and users required for cloud provider configuration in OpenShift and management of a newly created S3 bucket to manage container images.

3.1.1. Tooling Prerequisites

This section describes how the environment should be configured to use Ansible to provision the infrastructure, install OpenShift, and perform post installation tasks.

Note

The following tasks should be performed on the workstation that the Ansible playbooks will be launched from. The workstation can be a virtual machine, a physical workstation, a Vagrant VM, or an instance running in the cloud. The most important requirement is that the system is running RHEL 7.

3.1.1.1. Ansible Setup

Install the following packages on the system performing the provisioning of AWS infrastructure and installation of OpenShift.

Note

It is important to disable the EPEL repository to avoid package conflicts. If the Ansible package is installed from EPEL, the version may be incompatible with OpenShift.

$ rpm -q python-2.7
$ subscription-manager repos --enable rhel-7-server-optional-rpms
$ subscription-manager repos --enable rhel-7-server-ose-3.5-rpms
$ subscription-manager repos --enable rhel-7-fast-datapath-rpms
$ yum -y install ansible atomic-openshift-utils
$ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ yum -y install python2-boto \
                 python2-boto3 \
                 pyOpenSSL \
                 git \
                 python-netaddr \
                 python-click \
                 python-httplib2
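
Once the packages above are installed, the EPEL repository can be disabled again so that later yum transactions do not pull in conflicting package versions. A minimal sketch, assuming the repository file installed by epel-release lives at the default location of /etc/yum.repos.d/epel.repo:

$ sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo

If an individual EPEL package is needed later, it can still be installed with yum --enablerepo=epel install <package> without leaving the repository permanently enabled.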

3.1.1.2. Git Repository

3.1.1.3. GitHub Repositories

The code in the openshift-ansible-contrib repository referenced below handles the installation of OpenShift and the accompanying infrastructure. The openshift-ansible-contrib repository is not explicitly supported by Red Hat, but the Reference Architecture team performs testing to ensure the code operates as defined and is secure.

3.1.1.4. Directory Setup

Create a local directory to house the cloned repository, then clone the openshift-ansible-contrib repository into it.

$ mkdir -p /home/<user>/git
$ cd /home/<user>/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git

To verify the repository was cloned, the tree command can be used to display all of the contents of the git repository.

$ yum -y install tree
$ tree /home/<user>/git/

... content abbreviated ...

|-- openshift-ansible-contrib

3.1.2. Authentication

As mentioned in the previous section, authentication for the reference architecture deployment is handled by GitHub OAuth. The steps below describe both the process of creating an organization and the configuration steps required for GitHub authentication.

3.1.2.1. Create an Organization

An existing organization can be used with GitHub authentication. If an organization does not exist, one must be created. The ose-on-aws.py script requires an organization to be defined because, if no organization is provided, any GitHub user can log in to the OpenShift environment. GitHub users must be added to the organization with at least the member role; the owner role also works.

Follow the directions in the link provided to create an organization. https://help.github.com/articles/creating-a-new-organization-from-scratch/

3.1.2.2. Configuring OAuth

Browse to https://github.com/settings/applications/new and log in to GitHub.

The image below provides an example configuration. Insert values that will be used during the OpenShift deployment.

Figure 3.1. GitHub OAuth Application

  • Insert an Application name
  • Insert a Homepage URL (This will be the URL used when accessing OpenShift)
  • Insert an Application description (Optional)
  • Insert an Authorization callback URL (The entry will be the Homepage URL + /oauth2callback/github)
  • Click Register application

Figure 3.2. GitHub OAuth Client ID


A Client ID and Client Secret will be presented. These values will be used as variables during the installation of OpenShift.

3.1.3. DNS

In this reference implementation guide, a domain called sysdeseng.com was purchased through AWS and is managed by Route53. In the example below, sysdeseng.com will be the hosted zone used for the installation of OpenShift. Follow the instructions below to add the main hosted zone.

  • From the main AWS dashboard, in the Networking section click Route53

    • Click Hosted Zones

      • Click Create Hosted Zone

        • Input a Domain Name: sysdeseng.com
        • Input a Comment: Public Zone for RH Reference Architecture
        • Type: Public Hosted Zone

          • Click Create

A subdomain can also be used; the same steps listed above apply when using a subdomain. Once the Public Zone is created, select the radio button for the domain, copy the Name Servers from the right, and add those records to the external registrar or to the top-level domain in Route53. Ensure that the Name Server (NS) records are copied to the root domain or name resolution will not work for the subdomain.
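
Once the delegation is in place, name resolution for the hosted zone can be spot checked from the provisioning workstation. A quick check, assuming the dig utility (provided by the bind-utils package) is installed and sysdeseng.com is the hosted zone:

$ dig +short NS sysdeseng.com

The output should list the Route53 name servers assigned to the hosted zone. If nothing is returned, revisit the NS records at the registrar or root domain before continuing.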

3.1.4. SSH

3.1.4.1. SSH Configuration

Before beginning the deployment of the AWS infrastructure and the deployment of OpenShift, a specific SSH configuration must be in place to ensure that SSH traffic passes through the bastion instance. If this configuration is not in place, the deployment of the infrastructure will be successful but the deployment of OpenShift will fail. Use the domain or subdomain configured during Section 2.10, “Route53” to fill in the values below. For example, since the domain sysdeseng.com was used, the bastion will be bastion.sysdeseng.com and the wildcard will be *.sysdeseng.com.

Note

The following task should be performed on the server that the Ansible playbooks will be launched from.

$ cat /home/<user>/.ssh/config

Host bastion
     HostName                 bastion.sysdeseng.com
     User                     ec2-user
     StrictHostKeyChecking    no
     ProxyCommand             none
     CheckHostIP              no
     ForwardAgent             yes
     IdentityFile             /home/<user>/.ssh/id_rsa

Host *.sysdeseng.com
     ProxyCommand             ssh ec2-user@bastion -W %h:%p
     user                     ec2-user
     IdentityFile             /home/<user>/.ssh/id_rsa

Table 3.1. SSH Configuration

Option                   Purpose

Host bastion             Configuration alias
HostName                 Hostname of the bastion instance
User                     Remote user to access the bastion instance
StrictHostKeyChecking    Automatically add new host keys to the known hosts file
ProxyCommand             Not required for the bastion
CheckHostIP              Key checking is against the hostname rather than IP
ForwardAgent             Used to forward the SSH connection
IdentityFile             Key used to access the bastion instance

Host *.sysdeseng.com     Wildcard for all *.sysdeseng.com instances
ProxyCommand             SSH command used to jump from the bastion host to another host in the environment
IdentityFile             Key used for all *.sysdeseng.com instances

Note

In the event an environment needs to be redeployed, the entries for the previously deployed hosts in ~/.ssh/known_hosts must be removed; otherwise the installation will not occur because SSH fails with "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!".
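
Rather than editing the file by hand, the stale entries can be removed with ssh-keygen. A minimal sketch, assuming the default known_hosts location and the sysdeseng.com host names used in this guide; repeat for any other previously deployed host names:

$ ssh-keygen -R bastion.sysdeseng.com
$ ssh-keygen -R bastion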

3.1.5. AWS Authentication

3.1.5.1. AWS Configuration

The AWS Access Key ID and Secret Access Key must be exported on the workstation executing the Ansible playbooks. This account must have the ability to create IAM users, IAM Policies, and S3 buckets.

If the Access Key ID and Secret Access Key were not already created, follow the steps provided by AWS.

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

To export the Access Key ID and Secret Access Key, perform the following on the workstation performing the deployment of AWS and OpenShift:

$ export AWS_ACCESS_KEY_ID=<key_id>
$ export AWS_SECRET_ACCESS_KEY=<access_key>
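
The exported credentials can be verified before any playbooks are launched. A quick sanity check, assuming the python2-boto3 package installed earlier is available, calls the STS GetCallerIdentity API using the exported keys:

$ python -c 'import boto3; print(boto3.client("sts", region_name="us-east-1").get_caller_identity()["Arn"])'

If the command prints the ARN of the IAM user, the keys are valid; an error indicates the exported values are incorrect or the account lacks access.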

3.1.6. Red Hat Subscription

The installation of OpenShift Container Platform (OCP) requires a valid Red Hat subscription. During the installation of OpenShift the Ansible redhat_subscription module will attempt to register the instances. The script will fail if the OpenShift entitlements are exhausted. For the installation of OCP on AWS the following items are required:

Red Hat Subscription Manager User: < Red Hat Username >

Red Hat Subscription Manager Password: < Red Hat Password >

Subscription Name: Red Hat OpenShift Container Platform, Standard, 2-Core

The items above are examples and should reflect subscriptions relevant to the account performing the installation. There are several variants of the OpenShift Subscription Name, so it is advised to visit https://access.redhat.com/management/subscriptions to find the specific Subscription Name, as the value will be used below during the deployment.
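
The exact Subscription Name or Pool ID can also be looked up from the command line on a system already registered with Red Hat Subscription Management. A minimal example, assuming a reasonably recent subscription-manager that supports the --matches filter (otherwise omit the filter and search the full listing):

$ subscription-manager list --available --matches="*OpenShift*"

The Subscription Name or Pool ID field from the output is the value passed to the --rhsm-pool option described later in this chapter.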

3.2. Provisioning the Environment

Within the openshift-ansible-contrib git repository is a python script called ose-on-aws.py that launches AWS resources and installs OpenShift on the new resources. Intelligence is built into the playbooks to allow certain variables to be set using options provided by the ose-on-aws.py script. The script allows for deployment into an existing environment (brownfield) or a new environment (greenfield) using a series of Ansible playbooks. Once the Ansible playbooks begin, the installation automatically flows from the AWS deployment to the OpenShift deployment and post installation tasks.

Note

The ose-on-aws.py script does not validate EC2 instance limits. Using a web browser, log in to an AWS account that has access to deploy EC2 instances and select EC2. Next, select Limits to view the current instance limits for the AWS account.

3.2.1. The ose-on-aws.py Script

The ose-on-aws.py script contains many configuration options, such as the ability to change the AMI, the instance sizes, and the ability to use an already deployed bastion host. The region can be changed, but keep in mind that the AMI may need to be changed if the Red Hat Cloud Access gold image AMI ID is different in that region. The Cloudformation stack name is also configurable; setting --stack-name to a unique value ensures that one Cloudformation stack does not overwrite another. The script creates both an auto-generated S3 bucket name and an IAM user account to be used for the registry. These values can be changed before launching. To see all of the available options, use --help.

Note

The ose-on-aws.py script requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be exported as environment variables.

$ ./ose-on-aws.py --help
  --stack-name TEXT               Cloudformation stack name. Must be unique
                                  [default: openshift-infra]
  --console-port INTEGER RANGE    OpenShift web console port  [default: 443]
  --deployment-type [origin|openshift-enterprise]
                                  OpenShift deployment type  [default:
                                  openshift-enterprise]
  --openshift-sdn TEXT            OpenShift SDN (redhat/openshift-ovs-subnet,
                                  redhat/openshift-ovs-multitenant, or other
                                  supported SDN)  [default: redhat/openshift-
                                  ovs-subnet]
  --region TEXT                   ec2 region  [default: us-east-1]
  --ami TEXT                      ec2 ami  [default: ami-a33668b4]
  --master-instance-type TEXT     ec2 instance type  [default: m4.xlarge]
  --node-instance-type TEXT       ec2 instance type  [default: t2.large]
  --app-instance-type TEXT        ec2 instance type  [default: t2.large]
  --bastion-instance-type TEXT    ec2 instance type  [default: t2.micro]
  --keypair TEXT                  ec2 keypair name
  --create-key TEXT               Create SSH keypair  [default: no]
  --key-path TEXT                 Path to SSH public key. Default is /dev/null
                                  which will skip the step  [default:
                                  /dev/null]
  --create-vpc TEXT               Create VPC  [default: yes]
  --vpc-id TEXT                   Specify an already existing VPC
  --private-subnet-id1 TEXT       Specify a Private subnet within the existing
                                  VPC
  --private-subnet-id2 TEXT       Specify a Private subnet within the existing
                                  VPC
  --private-subnet-id3 TEXT       Specify a Private subnet within the existing
                                  VPC
  --public-subnet-id1 TEXT        Specify a Public subnet within the existing
                                  VPC
  --public-subnet-id2 TEXT        Specify a Public subnet within the existing
                                  VPC
  --public-subnet-id3 TEXT        Specify a Public subnet within the existing
                                  VPC
  --public-hosted-zone TEXT       hosted zone for accessing the environment
  --app-dns-prefix TEXT           application dns prefix  [default: apps]
  --rhsm-user TEXT                Red Hat Subscription Management User
  --rhsm-password TEXT            Red Hat Subscription Management Password
  --rhsm-pool TEXT                Red Hat Subscription Management Pool ID or
                                  Subscription Name
  --byo-bastion TEXT              skip bastion install when one exists within
                                  the cloud provider  [default: no]
  --bastion-sg TEXT               Specify Bastion Security group used with
                                  byo-bastion  [default: /dev/null]
  --containerized TEXT            Containerized installation of OpenShift
                                  [default: False]
  --s3-bucket-name TEXT           Bucket name for S3 for registry
  --github-client-id TEXT         GitHub OAuth ClientID
  --github-client-secret TEXT     GitHub OAuth Client Secret
  --github-organization TEXT      GitHub Organization
  --s3-username TEXT              S3 user for registry access
  --deploy-openshift-metrics [true|false]
                                  Deploy OpenShift Metrics
  --no-confirm                    Skip confirmation prompt
  -h, --help                      Show this message and exit.
  -v, --verbose

The -v option is available as well. It causes Ansible to run more verbosely, providing a more in-depth output of the steps occurring while running ose-on-aws.py.

3.2.2. Containerized Deployment

The OCP installation playbooks allow OpenShift to be installed in containers. These containers can run on either Atomic Host or RHEL. If the containerized installation of OpenShift is preferred, specify --containerized=true while running ose-on-aws.py. When using Atomic Host, the --containerized=true option must be specified or the installation will fail. Also, when using Atomic Host, ensure the AMI being used has Docker 1.10 installed.

3.2.3. OpenShift Metrics

OpenShift Cluster Metrics can be deployed at installation time. By passing --deploy-openshift-metrics=true when running ose-on-aws.py, the components required for metrics are deployed. This includes dynamically creating a PVC that is used for persistent metrics storage. The URL of the Hawkular Metrics endpoint is also set on the masters and defined by an OpenShift route. The default is to not deploy metrics at installation time.
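
As an illustration, a greenfield run that also deploys the metrics stack differs from the existing-keypair example in Section 3.2.5 only by the final option; all other values are the same placeholders used later in this chapter:

$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key --deploy-openshift-metrics=true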

3.2.4. SDN Selection

Two software-defined networks are provided as choices when deploying OpenShift on AWS: openshift-ovs-subnet and openshift-ovs-multitenant. By default, openshift-ovs-subnet is configured as the SDN.

  • The ovs-subnet plug-in is the original plug-in which provides a "flat" pod network where every pod can communicate with every other pod and service.
  • The ovs-multitenant plug-in provides OpenShift Container Platform project level isolation for pods and services. Each project receives a unique Virtual Network ID (VNID) that identifies traffic from pods assigned to the project. Pods from different projects cannot send packets to or receive packets from pods and services of a different project.
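
The multitenant plug-in is selected at provisioning time by passing --openshift-sdn=redhat/openshift-ovs-multitenant to ose-on-aws.py. After the installation completes, the active plug-in can be confirmed from a master; a quick check, assuming the oc client is logged in with cluster-admin privileges:

$ oc get clusternetwork default -o yaml

The pluginName field in the output shows which SDN plug-in the cluster is using.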

3.2.5. Greenfield Deployment

For deployments of OpenShift into a new environment, ose-on-aws.py creates instances, load balancers, Route53 entries, and IAM users, and an SSH key can be supplied to be uploaded and used with the new instances. Once the values have been entered into the ose-on-aws.py script, all values are presented and the script prompts to continue with the values or exit. By default, the Red Hat gold image AMI (Section 2.11, “Amazon Machine Images”) is used when provisioning instances but can be changed when executing ose-on-aws.py. The keypair in the example below, OSE-key, is the keypair name as it appears within the AWS EC2 dashboard. If a keypair has not been created and uploaded to AWS, perform the steps below to create, upload, and name the SSH keypair.

Create a Public/Private key

If a user does not currently have a public and private SSH key perform the following.

$ ssh-keygen -t rsa -N '' -f /home/user/.ssh/id_rsa

A message similar to the following will be presented, indicating the key has been successfully created:

Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
e7:97:c7:e2:0e:f9:0e:fc:c4:d7:cb:e5:31:11:92:14 user@sysdeseng.rdu.redhat.com
The key's randomart image is:
+--[ RSA 2048]----+
|             E.  |
|            . .  |
|             o . |
|              . .|
|        S .    . |
|         + o o ..|
|          * * +oo|
|           O +..=|
|           o*  o.|
+-----------------+

Deploy the Environment Using the New Key

To deploy the environment using the newly created public/private SSH key, which does not yet exist within AWS, perform the following.

$ export AWS_ACCESS_KEY_ID=<key_id>
$ export AWS_SECRET_ACCESS_KEY=<access_key>
$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key --create-key=yes --key-path=/home/user/.ssh/id_rsa.pub

If an SSH key has already been uploaded to AWS specify the name of the keypair as it appears within the AWS EC2 dashboard.

$ export AWS_ACCESS_KEY_ID=<key_id>
$ export AWS_SECRET_ACCESS_KEY=<access_key>
$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key

Example of Greenfield Deployment values

Configured values:
    stack_name: dev
    ami: ami-a33668b4
    region: us-east-1
    master_instance_type: m4.xlarge
    node_instance_type: t2.large
    app_instance_type: t2.large
    bastion_instance_type: t2.micro
    keypair: OSE-key
    create_key: no
    key_path: /dev/null
    create_vpc: yes
    vpc_id: None
    private_subnet_id1: None
    private_subnet_id2: None
    private_subnet_id3: None
    public_subnet_id1: None
    public_subnet_id2: None
    public_subnet_id3: None
    byo_bastion: no
    bastion_sg: /dev/null
    console port: 443
    deployment_type: openshift-enterprise
    openshift_sdn: redhat/openshift-ovs-subnet
    public_hosted_zone: sysdeseng.com
    app_dns_prefix: apps
    apps_dns: apps.sysdeseng.com
    rhsm_user: rhsm_user
    rhsm_password: *******
    rhsm_pool: Red Hat OpenShift Container Platform, Standard, 2-Core
    containerized: False
    s3_bucket_name: dev-ocp-registry-sysdeseng
    s3_username: dev-s3-openshift-user
    github_client_id: *******
    github_client_secret: *******
    github_organization: openshift,RHSyseng
    deploy_openshift_metrics: false


Continue using these values? [y/N]:

3.2.6. Brownfield Deployment

The ose-on-aws.py script also allows deployment into an existing environment in which a VPC and at least six subnets, three public and three private, already exist. The private subnets must be able to connect externally, which requires a NAT gateway to be deployed. Before running the brownfield deployment, ensure that a NAT gateway is deployed and the proper route table entries are created (private subnets, 0.0.0.0/0 → nat-xxxx; public subnets, 0.0.0.0/0 → igw-xxxx). By default, the Red Hat gold image AMI is used when provisioning instances but can be changed when executing ose-on-aws.py.

Running the following will prompt for subnets and the VPC to deploy the instances and OpenShift.

$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key --create-vpc=no

Specify the VPC ID: vpc-11d06976
Specify the first Private subnet within the existing VPC: subnet-3e406466
Specify the second Private subnet within the existing VPC: subnet-66ae905b
Specify the third Private subnet within the existing VPC: subnet-4edfd438
Specify the first Public subnet within the existing VPC: subnet-1f416547
Specify the second Public subnet within the existing VPC: subnet-c2ae90ff
Specify the third Public subnet within the existing VPC: subnet-1ddfd46b

In the case that a bastion instance has already been deployed, the --byo-bastion=yes option within ose-on-aws.py skips deployment of a new bastion instance.

Note

If the bastion instance is already deployed, supply the security group id of the bastion security group. The existing bastion host must be in the same AWS region as the deployment, and it must be resolvable by the hostname bastion through either an A record or a CNAME.

$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key --create-vpc=no \
                  --byo-bastion=yes --bastion-sg=sg-a34ff3af

Specify the VPC ID: vpc-11d06976
Specify the first Private subnet within the existing VPC: subnet-3e406466
Specify the second Private subnet within the existing VPC: subnet-66ae905b
Specify the third Private subnet within the existing VPC: subnet-4edfd438
Specify the first Public subnet within the existing VPC: subnet-1f416547
Specify the second Public subnet within the existing VPC: subnet-c2ae90ff
Specify the third Public subnet within the existing VPC: subnet-1ddfd46b

As stated in the Greenfield deployment, the option exists to not use the Red Hat Cloud Access provided gold image AMI. Using the same command from above, the --ami= option allows the default value to be changed.

$ ./ose-on-aws.py --stack-name=dev --public-hosted-zone=sysdeseng.com \
                  --rhsm-user=rhsm-user --rhsm-password=rhsm-password \
                  --rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" \
                  --github-client-secret=gh-secret --github-client-id=gh-client-id \
                  --github-organization=openshift --github-organization=RHSyseng \
                  --keypair=OSE-key --create-vpc=no \
                  --byo-bastion=yes --bastion-sg=sg-a34ff3af \
                  --ami=ami-2051294a

Specify the VPC ID: vpc-11d06976
Specify the first Private subnet within the existing VPC: subnet-3e406466
Specify the second Private subnet within the existing VPC: subnet-66ae905b
Specify the third Private subnet within the existing VPC: subnet-4edfd438
Specify the first Public subnet within the existing VPC: subnet-1f416547
Specify the second Public subnet within the existing VPC: subnet-c2ae90ff
Specify the third Public subnet within the existing VPC: subnet-1ddfd46b

Example of Brownfield Deployment values

    stack_name: dev
    ami: ami-a33668b4
    region: us-east-1
    master_instance_type: m4.xlarge
    node_instance_type: t2.large
    app_instance_type: t2.large
    bastion_instance_type: t2.micro
    keypair: OSE-key
    create_key: no
    key_path: /dev/null
    create_vpc: no
    vpc_id: vpc-11d06976
    private_subnet_id1: subnet-3e406466
    private_subnet_id2: subnet-66ae905b
    private_subnet_id3: subnet-4edfd438
    public_subnet_id1: subnet-1f416547
    public_subnet_id2: subnet-c2ae90ff
    public_subnet_id3: subnet-1ddfd46b
    byo_bastion: no
    bastion_sg: /dev/null
    console port: 443
    deployment_type: openshift-enterprise
    openshift_sdn: redhat/openshift-ovs-subnet
    public_hosted_zone: sysdeseng.com
    app_dns_prefix: apps
    apps_dns: apps.sysdeseng.com
    rhsm_user: rhsm_user
    rhsm_password: *******
    rhsm_pool: Red Hat OpenShift Container Platform, Standard, 2-Core
    containerized: False
    s3_bucket_name: dev-ocp-registry-sysdeseng
    s3_username: dev-s3-openshift-user
    github_client_id: *******
    github_client_secret: *******
    github_organization: openshift,RHSyseng
    deploy_openshift_metrics: false

Continue using these values? [y/N]:

3.3. Post Ansible Deployment

Once the playbooks have successfully completed, the next step is to perform the tasks defined in Chapter 4, Operational Management. In the event that OpenShift failed to install, follow the steps in the Appendix to restart the installation of OpenShift.
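
Before moving on, a basic health check can confirm the cluster is responding. A minimal example, run from one of the master instances (reached through the bastion as configured earlier) and assuming the oc client is authenticated as a cluster administrator:

$ oc get nodes

All masters and nodes should be listed with a Ready status.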

3.4. Post Provisioning Results

At this point, the infrastructure and Red Hat OpenShift Container Platform have been deployed. Log in to the AWS console and check for the following resources:

  • 3 Master nodes
  • 3 Infrastructure nodes
  • 2 Application nodes
  • 1 Unique VPC with the required components
  • 8 Security groups
  • 2 Elastic IPs
  • 1 NAT Gateway
  • 1 Key pair
  • 3 ELBs
  • 2 IAM roles
  • 2 IAM Policies
  • 1 S3 Bucket
  • 1 IAM user
  • 1 Zone in Route53

Information is also available in the CloudFormation outputs. This information describes some of the currently deployed items such as the subnets and security groups. These outputs can be used by the add-node.py, add-cns-storage.py, and add-crs-storage.py scripts to auto-populate variables like security groups and other required values.
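
The same outputs can also be retrieved from the command line instead of the console. An example, assuming the AWS CLI is installed and configured with the credentials exported earlier and the default stack name of openshift-infra:

$ aws cloudformation describe-stacks --stack-name openshift-infra \
      --region us-east-1 --query 'Stacks[0].Outputs'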


Table 3.2. Cloudformation Outputs

Key              Value                                              Description

PrivateSubnet1   subnet-ea3f24b1                                    Private Subnet 1
PrivateSubnet2   subnet-1a02dd52                                    Private Subnet 2
S3UserAccessId   AKIAJVDMBDLYBJGNHXSA                               AWSAccessKeyId of user
PrivateSubnet3   subnet-22adea1e                                    Private Subnet 3
S3Bucket         openshift-infra-ocp-registry-sysdeseng             Name of S3 bucket
InfraLb          openshift-infra-InfraElb-1X0YK42S95B28             Infrastructure ELB name
S3UserSecretKey  bbiabom7XPbNyGUJf1Dy8EDB8cSyo4z9y5PiZCY+           AWSSecretKey of new S3
InfraSGId        sg-9f456de0                                        Infra Node SG id
BastionSGId      sg-53456d2c                                        Bastion SG id
NodeSGId         sg-85466efa                                        Node SG id
NodeARN          openshift-infra-NodeInstanceProfile-7WF689M7WLT1   ARN for the Node instance profile
StackVpc         vpc-9c0b00fa                                       VPC that was created


At this point, the OpenShift public URL will be available using the public hosted zone URL provided while running ose-on-aws.py. For example, https://openshift-master.sysdeseng.com.
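
Availability of the console can be checked from the provisioning workstation before logging in through a browser. A quick check, assuming curl is installed and the default console port of 443 is in use (-k skips validation of the self-signed certificate):

$ curl -k https://openshift-master.sysdeseng.com/healthz

A response of ok indicates the master API behind the console URL is serving requests.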

Note

When installing using this method, the browser certificate must be accepted three times due to the number of masters in the cluster.