Appendix D. Troubleshooting CNS Deployment Failures

If the CNS deployment process fails, use the following command to clean up all the resources that were created during the current installation:

# cns-deploy -n <project_name> -g topology.json --abort
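
If the abort completes successfully, the heketi and GlusterFS pods should no longer be present in the project. Assuming the oc client is still logged in, a quick way to confirm this is:

# oc get pods -n <project_name>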

There are a few recurring reasons why the deployment might fail:

- The current OpenShift user does not have permission in the current project.
- The OpenShift app nodes do not have connectivity to the Red Hat Registry to download the GlusterFS container images.
- The firewall rules on the EC2 app nodes or the AWS security groups are blocking traffic on one or more ports.
- The initialization of the block devices referenced in the topology fails, typically for one of the two reasons described below.

A set of diagnostic checks covering the first three causes appears at the end of this appendix. If a disk contains unexpected partitioning structures, use the following command to completely wipe the disk on the EC2 nodes being used for the CNS cluster deployment:

# sgdisk --zap-all /dev/<block-device>
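
After wiping, it can be worth confirming that the kernel no longer sees any partitions on the device before rerunning the installer. A quick, non-destructive check:

# lsblk /dev/<block-device>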

If the device specified is already part of an LVM volume group (potentially due to a previous failed run of the cns-deploy installer), remove it with the following commands. This must be done on all EC2 nodes referenced in the topology.json file.

# lvremove -y vg_xxxxxxxxxxxxxxxx
# pvremove /dev/<block-device>
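
The name of the leftover volume group is not always obvious. Assuming the LVM tools are present on the node, the pvs command lists each physical volume alongside the volume group it belongs to, which identifies the vg_xxxxxxxxxxxxxxxx name to pass to lvremove:

# pvs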
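
Finally, the following checks can help narrow down the first three failure causes before retrying the deployment. The project name and ports shown are illustrative and should be adapted to the environment. To confirm the current user and whether they have permission to act in the project (oc auth can-i is available on recent oc clients):

# oc whoami
# oc auth can-i create pods -n <project_name>

To confirm that an app node can reach the Red Hat Registry, run the following on each EC2 app node:

# curl -v https://registry.access.redhat.com

To confirm that GlusterFS traffic is not being blocked, verify that ports 2222, 24007, 24008, and the brick port range (49152 and above) are open in both the node firewall and the AWS security groups; verify the exact port list against the CNS installation documentation. For example:

# iptables -L -n | grep -E '2222|24007|24008'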