Chapter 7. Deployment
This section describes how to use Red Hat OpenStack Platform director to deploy OpenStack and Ceph so that Ceph OSD services and Nova compute services run together on the same servers.
7.1. Verify Ironic Nodes are Available
The following command verifies that all Ironic nodes are powered off, available for provisioning, and not in maintenance mode:
[stack@hci-director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| d4f73b0b-c55a-4735-9176-9cb063a08bc1 | m630_slot13 | None          | power off   | available          | False       |
| b5cd14dd-c305-4ce2-9f54-ef1e4e88f2f1 | m630_slot14 | None          | power off   | available          | False       |
| 706adf7a-b3ed-49b8-8101-0b8f28a1b8ad | m630_slot15 | None          | power off   | available          | False       |
| c38b7728-63e4-4e6d-acbe-46d49aee049f | r730xd_u29  | None          | power off   | available          | False       |
| 7a2b3145-636b-4ed3-a0ff-f0b2c9f09df4 | r730xd_u31  | None          | power off   | available          | False       |
| 5502a6a0-0738-4826-bb41-5ec4f03e7bfa | r730xd_u33  | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
[stack@hci-director ~]$
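Rather than reading the table by eye, the same check can be scripted. The following is a minimal sketch, assuming the openstack client's value output formatter; it prints only nodes that are not ready:

# List any Ironic node that is not "available" or that is in maintenance
# mode; no output means every node is ready for provisioning.
# (Power state is omitted here because its value contains a space.)
openstack baremetal node list -f value -c Name -c "Provisioning State" -c Maintenance \
  | awk '$2 != "available" || $3 != "False" { print }'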
7.2. Run the Deploy Command
The following command deploys the overcloud described in this reference architecture.
time openstack overcloud deploy --templates \
-r ~/custom-templates/custom-roles.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e ~/custom-templates/network.yaml \
-e ~/custom-templates/ceph.yaml \
-e ~/custom-templates/compute.yaml \
-e ~/custom-templates/layout.yaml
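Because the deployment can take the better part of an hour, it is often convenient to keep a log of the output in addition to watching the terminal. The following is the same command wrapped with tee; the log file name is an arbitrary choice:

# Same deployment command, with all output also captured to a log file.
time openstack overcloud deploy --templates \
 -r ~/custom-templates/custom-roles.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
 -e ~/custom-templates/network.yaml \
 -e ~/custom-templates/ceph.yaml \
 -e ~/custom-templates/compute.yaml \
 -e ~/custom-templates/layout.yaml \
 2>&1 | tee ~/overcloud_deploy.log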
7.2.1. Deployment Command Details
There are many options passed in the command above. This subsection goes through each option in detail.
time openstack overcloud deploy --templates \
The above calls the openstack overcloud deploy command and uses the default location of the templates in /usr/share/openstack-tripleo-heat-templates/. The time command is used to measure how long the deployment takes.
-r ~/custom-templates/custom-roles.yaml
The -r flag, or its long form --roles-file, overrides the default roles_data.yaml in the --templates directory. This is necessary because that file was copied and the new OsdCompute role was added to the copy, as described in Section 5.3, “Hyper Converged Role Definition”.
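For reference, creating such a roles file starts from a copy of the stock file. This is only a sketch of the copy step; the OsdCompute role content itself is covered in Section 5.3 and is not repeated here:

# Copy the default roles file shipped with the templates, then append the
# OsdCompute role definition described in Section 5.3 to the copy.
cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml \
   ~/custom-templates/custom-roles.yaml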
The next set of options passed is the following:
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
Passing --templates makes the deployment use the Heat templates in /usr/share/openstack-tripleo-heat-templates/, but the three environment files above, though they reside in that directory, are not used by the deployment by default. Thus, they need to be passed explicitly. Each of these Heat environment files performs the following function:
- puppet-pacemaker.yaml - Configures controller node services in a highly available Pacemaker cluster
- network-isolation.yaml - Configures network isolation for the different services; its parameters are passed by the custom template network.yaml
- storage-environment.yaml - Configures Ceph as a storage backend; its parameter_defaults are passed by the custom template ceph.yaml
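Because a mistyped -e path fails the deployment, a quick existence check of the environment files can save a wasted run. A minimal sketch:

# Report any environment file that does not exist before deploying.
for f in \
    /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
    /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
do
    [ -f "$f" ] || echo "missing: $f"
done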
The following options include the environment files under ~/custom-templates that were defined in Chapter 5, Define the Overcloud, and in Chapter 6, Resource Isolation and Tuning:
-e ~/custom-templates/network.yaml \
-e ~/custom-templates/ceph.yaml \
-e ~/custom-templates/compute.yaml \
-e ~/custom-templates/layout.yaml
The details of each environment file are covered in the following sections:
- network.yaml - is explained in Section 5.2, “Network Configuration”
- ceph.yaml - is explained in Section 5.4, “Ceph Configuration”
- compute.yaml - is explained in Chapter 6, Resource Isolation and Tuning
- layout.yaml - is explained in Section 5.5, “Overcloud Layout”
The order of the above arguments is significant: parameters set in a later environment file override those set by an earlier one, as the sketch below illustrates.
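The override behavior can be demonstrated with two toy environment files that set the same parameter. The file names and the ExampleParameter name below are hypothetical, chosen only for illustration:

# Two hypothetical environment files that set the same Heat parameter.
cat > ~/custom-templates/first.yaml <<'EOF'
parameter_defaults:
  ExampleParameter: from-first
EOF
cat > ~/custom-templates/second.yaml <<'EOF'
parameter_defaults:
  ExampleParameter: from-second
EOF
# If passed as "-e first.yaml -e second.yaml", the deployment uses
# "from-second": the later environment file wins.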
7.3. Verify the Deployment Succeeded
- Watch deployment progress and look for failures in a separate console window
heat resource-list -n5 overcloud | egrep -i 'fail|progress'
- Run openstack server list to view IP addresses for the overcloud servers
[stack@hci-director ~]$ openstack server list
+--------------------------------------+-------------------------+--------+-----------------------+----------------+
| ID                                   | Name                    | Status | Networks              | Image Name     |
+--------------------------------------+-------------------------+--------+-----------------------+----------------+
| fc8686c1-a675-4c89-a508-cc1b34d5d220 | overcloud-controller-2  | ACTIVE | ctlplane=192.168.1.37 | overcloud-full |
| 7c6ae5f3-7e18-4aa2-a1f8-53145647a3de | overcloud-osd-compute-2 | ACTIVE | ctlplane=192.168.1.30 | overcloud-full |
| 851f76db-427c-42b3-8e0b-e8b4b19770f8 | overcloud-controller-0  | ACTIVE | ctlplane=192.168.1.33 | overcloud-full |
| e2906507-6a06-4c4d-bd15-9f7de455e91d | overcloud-controller-1  | ACTIVE | ctlplane=192.168.1.29 | overcloud-full |
| 0f93a712-b9eb-4f42-bc05-f2c8c2edfd81 | overcloud-osd-compute-0 | ACTIVE | ctlplane=192.168.1.32 | overcloud-full |
| 8f266c17-ff39-422e-a935-effb219c7782 | overcloud-osd-compute-1 | ACTIVE | ctlplane=192.168.1.24 | overcloud-full |
+--------------------------------------+-------------------------+--------+-----------------------+----------------+
[stack@hci-director ~]$
- Wait for the overcloud deploy to complete. For this reference architecture, it took approximately 45 minutes.
2016-12-20 23:25:04Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud CREATE_COMPLETE

Started Mistral Workflow. Execution ID: aeca4d71-56b4-4c72-a980-022623487c05
/home/stack/.ssh/known_hosts updated.
Original contents retained as /home/stack/.ssh/known_hosts.old
Overcloud Endpoint: http://10.19.139.46:5000/v2.0
Overcloud Deployed

real    44m24.800s
user    0m4.171s
sys     0m0.346s
[stack@hci-director ~]$
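The wait can also be scripted by polling Heat for the stack status instead of watching the console. A minimal sketch, assuming the stack is named overcloud as above:

# Poll the overcloud stack every minute until it succeeds or fails.
while true; do
    status=$(openstack stack show overcloud -f value -c stack_status)
    echo "$(date '+%H:%M:%S') ${status}"
    case "$status" in
        CREATE_COMPLETE) echo "overcloud deployed"; break ;;
        CREATE_FAILED)   echo "overcloud deploy failed"; break ;;
    esac
    sleep 60
done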
7.4. Configure Controller Pacemaker Fencing
Fencing is the process of isolating a node to protect a cluster and its resources. Without fencing, a faulty node can cause data corruption in a cluster. Appendix F, Example Fencing Script, provides a script to configure each controller node’s IPMI interface as a fence device.
Prior to running configure_fence.sh, be sure to update it to replace PASSWORD with the actual IPMI password. For example, the following:
$SSH_CMD $i 'sudo pcs stonith create $(hostname -s)-ipmi fence_ipmilan pcmk_host_list=$(hostname -s) ipaddr=$(sudo ipmitool lan print 1 | awk " /IP Address / { print \$4 } ") login=root passwd=PASSWORD lanplus=1 cipher=1 op monitor interval=60s'
would become:
$SSH_CMD $i 'sudo pcs stonith create $(hostname -s)-ipmi fence_ipmilan pcmk_host_list=$(hostname -s) ipaddr=$(sudo ipmitool lan print 1 | awk " /IP Address / { print \$4 } ") login=root passwd=p@55W0rd! lanplus=1 cipher=1 op monitor interval=60s'
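The edit can also be made non-interactively. A sketch using sed, reusing the example password above:

# Replace the PASSWORD placeholder in the fencing script in place.
sed -i 's/passwd=PASSWORD/passwd=p@55W0rd!/' configure_fence.sh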
An example of running the configure_fence.sh script as the stack user on the undercloud is below:
- Use configure_fence.sh to enable fencing
[stack@hci-director ~]$ ./configure_fence.sh enable
OS_PASSWORD=41485c25159ef92bc375e5dd9eea495e5f47dbd0
OS_AUTH_URL=http://192.168.1.1:5000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
OS_NO_CACHE=True
192.168.1.34
192.168.1.32
192.168.1.31
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: tripleo_cluster
 dc-version: 1.1.13-10.el7_2.2-44eb2dd
 have-watchdog: false
 maintenance-mode: false
 redis_REPL_INFO: overcloud-controller-2
 stonith-enabled: true
[stack@hci-director ~]$
- Verify fence devices are configured with pcs status
[stack@hci-director ~]$ ssh heat-admin@192.168.1.34 "sudo pcs status | grep -i fence"
 overcloud-controller-0-ipmi   (stonith:fence_ipmilan):   Started overcloud-controller-2
 overcloud-controller-1-ipmi   (stonith:fence_ipmilan):   Started overcloud-controller-0
 overcloud-controller-2-ipmi   (stonith:fence_ipmilan):   Started overcloud-controller-0
[stack@hci-director ~]$
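The same check can be repeated against every controller to confirm that each node reports the fence devices. A minimal sketch using the controller ctlplane IP addresses shown in the configure_fence.sh output above:

# Confirm each controller sees all three IPMI fence devices as Started.
for ip in 192.168.1.34 192.168.1.32 192.168.1.31; do
    echo "--- ${ip} ---"
    ssh heat-admin@"${ip}" "sudo pcs status | grep -i fence"
done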
The configure_fence.sh script, and the steps above for configuring it, come from the reference architecture Deploying Red Hat Enterprise Linux OpenStack Platform 7 with RHEL-OSP Director 7.1.