Chapter 6. Installing the Overcloud
Our Undercloud is now installed with the Red Hat Enterprise Linux OpenStack Platform director configured. In this chapter, we use the director to create our Overcloud environment. To help users at various levels, we provide two different installation scenarios to create an Overcloud. Each scenario varies in complexity and topics.
Table 6.1. Scenario Overview

| Scenario | Level | Topics |
|---|---|---|
| Basic Overcloud | Medium | CLI tool usage, node registration, manual node tagging, basic network isolation, plan-based Overcloud creation |
| Advanced Overcloud | High | CLI tool usage, node registration, automatic node tagging based on hardware, Ceph Storage setup, advanced network isolation, Overcloud creation, high availability fencing configuration |
6.1. Basic Scenario: Creating a Small Overcloud with NFS Storage
This scenario creates a small enterprise-level OpenStack Platform environment. This scenario consists of two nodes in the Overcloud: one Controller node and one Compute node. Both machines are bare metal systems using IPMI for power management. This scenario focuses on the command line tools to demonstrate the director's ability to create a small production-level Red Hat Enterprise Linux OpenStack Platform environment that can scale Compute nodes in the future.
Workflow
- Create a node definition template and register blank nodes in the director.
- Inspect hardware of all nodes.
- Manually tag nodes into roles.
- Create flavors and tag them into roles.
- Create Heat templates to isolate the External network.
- Create the Overcloud environment using the default Heat template collection and the additional network isolation templates.
Requirements
- The director node created in Chapter 3, Installing the Undercloud
- Two bare metal machines. These machines must comply with the requirements set for the Controller and Compute nodes. These nodes do not require an operating system because the director copies a Red Hat Enterprise Linux 7 image to each node.
- One network connection for our Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. For this example, we use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:
Table 6.2. Provisioning Network IP Assignments

| Node Name | IP Address | MAC Address | IPMI IP Address |
|---|---|---|---|
| Director | 192.0.2.1 | aa:aa:aa:aa:aa:aa | |
| Controller | DHCP defined | bb:bb:bb:bb:bb:bb | 192.0.2.205 |
| Compute | DHCP defined | cc:cc:cc:cc:cc:cc | 192.0.2.206 |

- One network connection for our External network. All Controller nodes must connect to this network. For this example, we use 10.1.1.0/24 for the External network.
- All other network types use the Provisioning network for OpenStack services.
- This scenario also uses an NFS share on a separate server on the Provisioning network. The IP Address for this server is 192.0.2.230.
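Before the deployment, the NFS server must export shares matching the paths used later in the storage environment file. As a sketch only, assuming export paths of /cinder and /glance on the server at 192.0.2.230 (the client subnet and mount options here are illustrative assumptions, not requirements), the server's /etc/exports might contain:

```
/cinder 192.0.2.0/24(rw,sync,no_root_squash)
/glance 192.0.2.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, run exportfs -r on the NFS server to refresh the export table.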
6.1.1. Registering Nodes for the Basic Overcloud
In this section, we create a node definition template. This file (instackenv.json) is a JSON format file that contains the hardware and power management details for our two nodes.
This template uses the following attributes:
- mac - A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
- pm_type - The power management driver to use. This example uses the IPMI driver (pxe_ipmitool).
- pm_user, pm_password - The IPMI username and password.
- pm_addr - The IP address of the IPMI device.
- cpu - The number of CPUs on the node.
- memory - The amount of memory in MB.
- disk - The size of the hard disk in GB.
- arch - The system architecture.
For example:
{
  "nodes":[
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "cpu":"4",
      "memory":"6144",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"pxe_ipmitool",
      "pm_user":"admin",
      "pm_password":"p@55w0rd!",
      "pm_addr":"192.0.2.205"
    },
    {
      "mac":[
        "cc:cc:cc:cc:cc:cc"
      ],
      "cpu":"4",
      "memory":"6144",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"pxe_ipmitool",
      "pm_user":"admin",
      "pm_password":"p@55w0rd!",
      "pm_addr":"192.0.2.206"
    }
  ]
}
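A malformed template causes the import to fail, so it can help to validate the JSON syntax first. A minimal check using the json.tool module from Python's standard library:

```shell
# Validate the JSON syntax of the node definition template.
# Exits non-zero and prints the parse error if the file is malformed.
python -m json.tool ~/instackenv.json > /dev/null && echo "instackenv.json is valid JSON"
```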
Note
For more supported power management types and their options, see Appendix C, Power Management Drivers.
After creating the template, save the file to the stack user's home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:
$ openstack baremetal import --json ~/instackenv.json
This imports the template and registers each node from the template into the director.
Assign the kernel and ramdisk images to all nodes:
$ openstack baremetal configure boot
The nodes are now registered and configured in the director. View a list of these nodes in the CLI using the following command:
$ openstack baremetal list
6.1.2. Inspecting the Hardware of Nodes
After registering the nodes, we inspect the hardware attributes of each node. Run the following command to start the introspection process:
$ openstack baremetal introspection bulk start
Monitor the progress of the introspection using the following command in a separate terminal window:
$ sudo journalctl -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq -u openstack-ironic-conductor -f
Important
Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
Alternatively, perform a single introspection on each node individually. Set the node to maintenance mode, perform the introspection, then revert the node out of maintenance mode:
$ ironic node-set-maintenance [NODE UUID] true
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-maintenance [NODE UUID] false
6.1.3. Manually Tagging the Nodes
After registering and inspecting the hardware of each node, we tag them into specific profiles. These profile tags match our nodes to flavors, and in turn the flavors are assigned to a deployment role. For the Basic Deployment scenario, we tag them manually since there are only two nodes. For a larger number of nodes, use the Automated Health Check (AHC) Tools in the Advanced Deployment Scenario. See Section 6.2.3, “Automatically Tagging Nodes with Automated Health Check (AHC) Tools” for more details about the Automated Health Check (AHC) Tools.
To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag our two nodes to use a compute profile and a controller profile respectively, use the following commands:

$ ironic node-update 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles. These commands also set the boot_option:local parameter, which defines the boot mode for each node.
Important
The director currently does not support UEFI boot mode.
6.1.4. Creating Flavors for the Basic Scenario
The director also needs a set of hardware profiles, or flavors, for the registered nodes. In this scenario, we create one flavor each for the Controller and Compute nodes.
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 control
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute
This creates two flavors for your nodes: control and compute. We also set the additional properties for each flavor.
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
The capabilities:boot_option property sets the boot mode for the flavor and the capabilities:profile property defines the profile to use. This links to the same tag on each respective node tagged in Section 6.1.3, “Manually Tagging the Nodes”.
Important
Unused roles also require a default flavor named baremetal. Create this flavor if it does not exist:
$ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 baremetal
6.1.5. Configuring NFS Storage
This section describes configuring the Overcloud to use an NFS share. The installation and configuration process is based on the modification of an existing environment file in the Heat template collection.
The Heat template collection contains a set of environment files in /usr/share/openstack-tripleo-heat-templates/environments/. These are environment templates to help with custom configuration of some of the supported features in a director-created Overcloud. This includes an environment file to help configure storage, located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml. Copy this file to the stack user's template directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
The environment file contains parameters to configure different storage options for OpenStack's block and image storage components, Cinder and Glance. In this example, we configure the Overcloud to use an NFS share. Modify the following parameters:
- CinderEnableIscsiBackend - Enables the iSCSI backend. Set to false.
- CinderEnableRbdBackend - Enables the Ceph Storage backend. Set to false.
- CinderEnableNfsBackend - Enables the NFS backend. Set to true.
- NovaEnableRbdBackend - Enables Ceph Storage for Nova ephemeral storage. Set to false.
- GlanceBackend - Defines the backend to use for Glance. Set to file to use file-based storage for images. The Overcloud saves these files in a mounted NFS share for Glance.
- CinderNfsMountOptions - The NFS mount options for the volume storage.
- CinderNfsServers - The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
- GlanceFilePcmkManage - Enables Pacemaker to manage the share for image storage. If disabled, the Overcloud stores images in the Controller node's file system. Set to true.
- GlanceFilePcmkFstype - Defines the file system type that Pacemaker uses for image storage. Set to nfs.
- GlanceFilePcmkDevice - The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
- GlanceFilePcmkOptions - The NFS mount options for the image storage.
The environment file's options should look similar to the following:
parameters:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'
  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.0.2.230:/cinder'
  GlanceFilePcmkManage: true
  GlanceFilePcmkFstype: 'nfs'
  GlanceFilePcmkDevice: '192.0.2.230:/glance'
  GlanceFilePcmkOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
Important
Include the context=system_u:object_r:glance_var_lib_t:s0 option in the GlanceFilePcmkOptions parameter to allow Glance access to the /var/lib directory. Without this SELinux context, Glance fails to write to the mount point.
These parameters are integrated as part of the Heat template collection. Setting them as such creates two NFS mount points for Cinder and Glance to use.
Save this file for inclusion in the Overcloud creation.
6.1.6. Isolating the External Network
The director provides methods to configure isolated overcloud networks. This means the Overcloud environment separates network traffic types into different networks, which in turn assigns network traffic to specific network interfaces or bonds. After configuring isolated networks, the director configures the OpenStack services to use the isolated networks. If no isolated networks are configured, all services run on the Provisioning network.
This scenario uses two separate networks:
- Network 1 - Provisioning network. The Internal API, Storage, Storage Management, and Tenant networks use this network too.
- Network 2 - External network. This network will use a dedicated interface for connecting outside of the Overcloud.
The following sections show how to create Heat templates to isolate the External network from the rest of the services. For more examples of network configuration, see Appendix F, Network Interface Template Examples.
6.1.6.1. Creating Custom Interface Templates
The Overcloud network configuration requires a set of the network interface templates. You customize these templates to configure the node interfaces on a per role basis. These templates are standard Heat templates in YAML format (see Chapter 5, Understanding Heat Templates). The director contains a set of example templates to get you started:
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.
For the Basic Overcloud scenario, we use the default single NIC example configuration. Copy the default configuration directory into the stack user's home directory as nic-configs.
$ cp -r /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans ~/templates/nic-configs
This creates a local set of Heat templates that define a single network interface configuration for the External network. Each template contains the standard parameters, resources, and outputs sections. For our purposes, we only edit the resources section. Each resources section begins with the following:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
This creates a request for the os-apply-config command and os-net-config subcommand to configure the network properties for a node. The network_config section contains our custom interface configuration arranged in a sequence based on type, which includes the following:
- interface

  Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3").

      - type: interface
        name: nic2

- vlan

  Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

      - type: vlan
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}

- ovs_bond

  Defines a bond in Open vSwitch. A bond joins two or more interfaces together to help with redundancy and increase bandwidth.

      - type: ovs_bond
        name: bond1
        members:
          - type: interface
            name: nic2
          - type: interface
            name: nic3

- ovs_bridge

  Defines a bridge in Open vSwitch. A bridge connects multiple interface, bond, and vlan objects together.

      - type: ovs_bridge
        name: {get_input: bridge_name}
        members:
          - type: ovs_bond
            name: bond1
            members:
              - type: interface
                name: nic2
                primary: true
              - type: interface
                name: nic3
          - type: vlan
            device: bond1
            vlan_id: {get_param: ExternalNetworkVlanID}
            addresses:
              - ip_netmask: {get_param: ExternalIpSubnet}
See Appendix E, Network Interface Parameters for a full list of parameters for each of these items.
For the Basic Scenario, modify each interface template to move the External network to nic2. This ensures we use the second network interface on each node for the External network. For example, for the templates/nic-configs/controller.yaml template:
network_config:
  - type: ovs_bridge
    name: {get_input: bridge_name}
    use_dhcp: true
    members:
      - type: interface
        name: nic1
        # force the MAC address of the bridge to this interface
        primary: true
      - type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
      - type: vlan
        vlan_id: {get_param: StorageMgmtNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageMgmtIpSubnet}
      - type: vlan
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
  - type: interface
    name: nic2
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      - ip_netmask: 0.0.0.0/0
        next_hop: {get_param: ExternalInterfaceDefaultRoute}
The above example creates a new interface (nic2) and reassigns the External network addresses and routes to the new interface.
For more examples of network interface templates, see Appendix F, Network Interface Template Examples.
Note that many of these parameters use the get_param function. We define these in an environment file we create specifically for our networks.
Important
Unused interfaces can cause unwanted default routes and network loops. For example, your template might contain a network interface (nic4) that does not use any IP assignments for OpenStack services but still uses DHCP and/or a default route. To avoid network conflicts, remove any unused interfaces from ovs_bridge devices and disable the DHCP and default route settings:

- type: interface
  name: nic4
  use_dhcp: false
  defroute: false
6.1.6.2. Creating a Basic Overcloud Network Environment Template
The network environment file describes the Overcloud's network environment and points to the network interface configuration files from the previous section. We define the subnets for our network along with IP address ranges. We customize these values for the local environment.
This scenario uses the following network environment file saved as
/home/stack/templates/network-environment.yaml
:
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  ExternalNetCidr: 10.1.1.0/24
  ExternalAllocationPools: [{'start': '10.1.1.2', 'end': '10.1.1.50'}]
  ExternalNetworkVlanID: 100
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.1.1.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
The resource_registry section contains links to the network interface templates for each node role. Note that the ExternalAllocationPools parameter only defines a small range of IP addresses. This is so we can later define a separate range of floating IP addresses.

The parameter_defaults section contains a list of parameters that define the network options for each network type. For a full reference of these options, see Appendix G, Network Environment Options.
The External network hosts the Horizon dashboard and Public API. If using the External network for both cloud administration and floating IPs, make sure there is room for a pool of IPs to use as floating IPs for VM instances. In our example, we only assign IPs from 10.1.1.2 to 10.1.1.50 to the External network, which leaves IP addresses from 10.1.1.51 and above free to use for floating IP addresses. Alternately, place the floating IP network on a separate VLAN and configure the Overcloud after creation to use it.
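A mismatch between ExternalNetCidr and the allocation pool only surfaces later as a deployment failure, so it can be worth sanity-checking the ranges before deploying. A minimal check, assuming python3 with the standard library ipaddress module is available on the Undercloud host:

```shell
# Check that the External allocation pool sits inside the External CIDR.
python3 -c '
import ipaddress
net = ipaddress.ip_network("10.1.1.0/24")
start = ipaddress.ip_address("10.1.1.2")
end = ipaddress.ip_address("10.1.1.50")
assert start in net and end in net and int(start) <= int(end)
print("ExternalAllocationPools is inside ExternalNetCidr")
'
```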
This scenario only defines the options for the External network. All other traffic types are automatically assigned to the Provisioning network.
Important
Changing the network configuration after creating the Overcloud can cause configuration problems due to the availability of resources. For example, if a user changes a subnet range for a network in the network isolation templates, the reconfiguration might fail due to the subnet already being used.
6.1.7. Creating the Basic Overcloud
The final stage in creating your OpenStack environment is to run the necessary commands that create it. The default plan installs one Controller node and one Compute node.
Note
The Red Hat Customer Portal contains a lab to help validate your configuration before creating the Overcloud. This lab is available at https://access.redhat.com/labs/ospec/ and instructions for this lab are available at https://access.redhat.com/labsinfo/ospec.
Run the following command to start the Basic Overcloud creation:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/storage-environment.yaml \
  --control-flavor control \
  --compute-flavor compute \
  --ntp-server pool.ntp.org \
  --neutron-network-type vxlan \
  --neutron-tunnel-types vxlan
This command contains the following additional options:

- --templates - Creates the Overcloud using the Heat template collection located in /usr/share/openstack-tripleo-heat-templates.
- -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - The -e option adds an additional environment file to the Overcloud plan. In this case, it is an environment file that initializes network isolation configuration.
- -e /home/stack/templates/network-environment.yaml - Adds the network environment file we created in Section 6.1.6.2, “Creating a Basic Overcloud Network Environment Template”.
- -e /home/stack/templates/storage-environment.yaml - Adds the storage environment file we created in Section 6.1.5, “Configuring NFS Storage”.
- --control-flavor control - Use a specific flavor for the Controller nodes.
- --compute-flavor compute - Use a specific flavor for the Compute nodes.
- --ntp-server pool.ntp.org - Use an NTP server for time synchronization. This is useful for keeping the Controller node cluster in synchronization.
- --neutron-network-type vxlan - Use Virtual Extensible LAN (VXLAN) for Neutron networking in the Overcloud.
- --neutron-tunnel-types vxlan - Use Virtual Extensible LAN (VXLAN) for Neutron tunneling in the Overcloud.
Note
For a full list of options, run:
$ openstack help overcloud deploy
See also Appendix I, Deployment Parameters for parameter examples.
The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc    # Initializes the stack user to use the CLI commands
$ heat stack-list --show-nested
The heat stack-list --show-nested command shows the current stage of the Overcloud creation.
Warning
Any environment files added to the Overcloud using the -e option become part of your Overcloud's stack definition. The director requires these environment files for re-deployment and post-deployment functions in Chapter 7, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your Overcloud.
To modify the Overcloud configuration later, change the parameters in the custom environment files and Heat templates, then run the openstack overcloud deploy command again. Do not edit the Overcloud configuration directly, as the director overrides any such manual configuration when it updates the Overcloud stack.
Warning
Do not run openstack overcloud deploy as a background process. The Overcloud creation might hang in mid-deployment if started as a background process.
6.1.8. Accessing the Basic Overcloud
The director generates a file to configure and authenticate interactions with your Overcloud from the Undercloud. The director saves this file, overcloudrc, in your stack user's home directory. Run the following command to use this file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your Overcloud from the director host's CLI. To return to interacting with the director's host, run the following command:
$ source ~/stackrc
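The overcloudrc file sets the standard OpenStack client environment variables. As an illustration only (the director generates the actual values, which differ per deployment), the file contains entries along these lines:

```
export OS_USERNAME=admin
export OS_PASSWORD=<generated password>
export OS_AUTH_URL=http://<Overcloud Public API IP>:5000/v2.0/
export OS_TENANT_NAME=admin
```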
6.1.9. Completing the Basic Overcloud
This concludes the creation of the Basic Overcloud. For post-creation functions, see Chapter 7, Performing Tasks after Overcloud Creation.