Appendix B. Installation Failure

In the event of an OpenShift installation failure, perform the following steps. The first step is to create an inventory file, which is then used to run the uninstall playbook.

B.1. Inventory

The static inventory is used with the uninstall playbook to identify OpenShift nodes. Modify the inventory below to match the deployed environment or use the Ansible playbook located at openshift-ansible-contrib/reference-architecture/aws-ansible/playbooks/create-inventory-file.yaml to create an inventory.

Automated inventory creation

# cd /home/USER/git/openshift-ansible-contrib/reference-architecture/aws-ansible
# ansible-playbook -i inventory/aws/hosts -e 'region=us-east-1 stack_name=openshift-infra github_client_secret=c3cd9271ffb9f7258e135fcf3ea3a358cffa46b1 github_organization=["openshift"] console_port=443 wildcard_zone=apps.sysdeseng.com public_hosted_zone=sysdeseng.com' playbooks/create-inventory-file.yaml

Manual creation

vi /home/user/inventory
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
debug_level=2
openshift_debug_level="{{ debug_level }}"
openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
openshift_master_debug_level="{{ master_debug_level | default(debug_level, true) }}"
openshift_master_access_token_max_seconds=2419200
openshift_master_api_port=443
openshift_master_console_port=443
osm_cluster_network_cidr=172.16.0.0/16
openshift_registry_selector="role=infra"
openshift_router_selector="role=infra"
openshift_hosted_router_replicas=3
openshift_hosted_registry_replicas=3
openshift_master_cluster_method=native
openshift_node_local_quota_per_fsgroup=512Mi
openshift_cloudprovider_kind=aws
openshift_master_cluster_hostname=internal-openshift-master.sysdeseng.com
openshift_master_cluster_public_hostname=openshift-master.sysdeseng.com
osm_default_subdomain=apps.sysdeseng.com
openshift_master_default_subdomain=apps.sysdeseng.com
osm_default_node_selector="role=app"
deployment_type=openshift-enterprise
os_sdn_network_plugin_name="redhat/openshift-ovs-subnet"
openshift_master_identity_providers=[{'name': 'github', 'mapping_method': 'claim', 'clientID': 's3taasdgdt34tq', 'clientSecret': 'asfgasfag34qg3q4gq43gv', 'login': 'true', 'challenge': 'false', 'kind': 'GitHubIdentityProvider', 'organizations': 'openshift' }]
osm_use_cockpit=true
containerized=false
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey="{{ hostvars['localhost']['s3user_id'] }}"
openshift_hosted_registry_storage_s3_secretkey="{{ hostvars['localhost']['s3user_secret'] }}"
openshift_hosted_registry_storage_s3_bucket="{{ hostvars['localhost']['s3_bucket_name'] }}"
openshift_hosted_registry_storage_s3_region="{{ hostvars['localhost']['region'] }}"
openshift_hosted_registry_storage_s3_chunksize=26214400
openshift_hosted_registry_storage_s3_rootdirectory=/registry
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true

[masters]
ose-master01.sysdeseng.com openshift_node_labels="{'role': 'master'}"
ose-master02.sysdeseng.com openshift_node_labels="{'role': 'master'}"
ose-master03.sysdeseng.com openshift_node_labels="{'role': 'master'}"

[etcd]
ose-master01.sysdeseng.com
ose-master02.sysdeseng.com
ose-master03.sysdeseng.com

[nodes]
ose-master01.sysdeseng.com openshift_node_labels="{'role': 'master'}"
ose-master02.sysdeseng.com openshift_node_labels="{'role': 'master'}"
ose-master03.sysdeseng.com openshift_node_labels="{'role': 'master'}"
ose-infra-node01.sysdeseng.com openshift_node_labels="{'role': 'infra'}"
ose-infra-node02.sysdeseng.com openshift_node_labels="{'role': 'infra'}"
ose-infra-node03.sysdeseng.com openshift_node_labels="{'role': 'infra'}"
ose-app-node01.sysdeseng.com openshift_node_labels="{'role': 'app'}"
ose-app-node02.sysdeseng.com openshift_node_labels="{'role': 'app'}"
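
Before pointing a playbook at the inventory, it can be worth sanity-checking that each group contains the expected hosts. The helper below is a hypothetical sketch, not part of the reference architecture; it counts the host lines in one inventory section:

```shell
# count_hosts GROUP FILE: count non-empty host lines in one inventory
# section, which runs from its [header] to the next header or blank line.
count_hosts() {
  awk -v g="[$1]" '
    $0 == g  {f = 1; next}   # entered the target section
    /^\[|^$/ {f = 0}         # a new header or a blank line ends it
    f && NF  {n++}           # count the non-empty host lines
    END {print n + 0}' "$2"
}
```

For the example inventory above, `count_hosts nodes /home/user/inventory` would print 8 (three masters, three infrastructure nodes, and two application nodes).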

B.2. Running the Uninstall Playbook

The uninstall playbook removes OpenShift-related packages, etcd, and any certificates that were created during the failed installation.

ansible-playbook -i /home/user/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
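Because the reinstall should only proceed once the nodes are clean, a small wrapper can gate the next step on the playbook's exit status. This is a sketch, not part of the reference architecture; the inventory path matches the manual example above:

```shell
# Run the uninstall playbook against the given inventory, then report
# whether it is safe to move on to relaunching the installation.
run_uninstall() {
  ansible-playbook -i "$1" \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
}

if run_uninstall /home/user/inventory; then
  echo "uninstall complete - safe to relaunch the installation"
else
  echo "uninstall failed - inspect the Ansible output before retrying" >&2
fi
```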

B.3. Manually Launching the Installation of OpenShift

The playbook below is the same playbook that is run once the deployment of the AWS resources has completed. Replace rhsm_user and rhsm_password, set stack_name, set wildcard_zone and public_hosted_zone to match the information in Route53, and optionally modify the AWS region in the event us-east-1 was not used.

ansible-playbook -i inventory/aws/hosts -e 'region=us-east-1 stack_name=openshift-infra
keypair=OSE-key public_hosted_zone=sysdeseng.com wildcard_zone=apps.sysdeseng.com
console_port=443 deployment_type=openshift-enterprise rhsm_user=RHSM_USER
rhsm_password=RHSM_PASSWORD rhsm_pool="Red Hat OpenShift Container Platform, Standard, 2-Core"
containerized=False github_client_id=e76865557b0417387b35 github_client_secret=c3cd9271ffb9f7258e135fcf3ea3a358cffa46b1
github_organization=["openshift"]' playbooks/openshift-install.yaml

Also, the ose-on-aws.py script can be executed again, but this must be done with caution. If any of the variables passed to ose-on-aws.py are changed, the CloudFormation stack may update, causing the AWS components to change.

./ose-on-aws.py --rhsm-user=RHSM_USER --rhsm-password=RHSM_PASSWORD --public-hosted-zone=sysdeseng.com
--rhsm-pool="Red Hat OpenShift Container Platform, Standard, 2-Core" --keypair=OSE-key
--master-instance-type=t2.medium --stack-name=tag --github-client-id=e76865557b0417387b35
--github-organization=openshift --github-client-secret=c3cd9271ffb9f7258e135fcf3ea3a358cffa46b1