6.2. Advanced Scenario: Creating a Large Overcloud with Ceph Storage Nodes
This scenario creates a large enterprise-level OpenStack Platform environment consisting of:
- Three Controller nodes with high availability
- Three Compute nodes
- Three Red Hat Ceph Storage nodes in a cluster
Workflow
- Create a node definition template and register blank nodes in the director.
- Inspect hardware and benchmark all nodes.
- Use the Automated Health Check (AHC) Tools to define policies that automatically tag nodes into roles.
- Create flavors and tag them into roles.
- Use an environment file to configure Ceph Storage.
- Create Heat templates to isolate all networks.
- Create the Overcloud environment using the default Heat template collection and the additional network isolation templates.
- Add fencing information for each Controller node in the high-availability cluster.
Requirements
- The director node created in Chapter 3, Installing the Undercloud
- Nine bare metal machines. These machines must comply with the requirements set for Controller, Compute, and Ceph Storage nodes. These nodes do not require an operating system because the director copies a Red Hat Enterprise Linux 7 image to each node.
- One network connection for our Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. For this example, we use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:
Table 6.3. Provisioning Network IP Assignments

| Node Name | IP Address | MAC Address | IPMI IP Address |
|---|---|---|---|
| Director | 192.0.2.1 | aa:aa:aa:aa:aa:aa | None required |
| Controller 1 | DHCP defined | b1:b1:b1:b1:b1:b1 | 192.0.2.205 |
| Controller 2 | DHCP defined | b2:b2:b2:b2:b2:b2 | 192.0.2.206 |
| Controller 3 | DHCP defined | b3:b3:b3:b3:b3:b3 | 192.0.2.207 |
| Compute 1 | DHCP defined | c1:c1:c1:c1:c1:c1 | 192.0.2.208 |
| Compute 2 | DHCP defined | c2:c2:c2:c2:c2:c2 | 192.0.2.209 |
| Compute 3 | DHCP defined | c3:c3:c3:c3:c3:c3 | 192.0.2.210 |
| Ceph 1 | DHCP defined | d1:d1:d1:d1:d1:d1 | 192.0.2.211 |
| Ceph 2 | DHCP defined | d2:d2:d2:d2:d2:d2 | 192.0.2.212 |
| Ceph 3 | DHCP defined | d3:d3:d3:d3:d3:d3 | 192.0.2.213 |

- Each Overcloud node uses the remaining two network interfaces in a bond to serve networks in tagged VLANs. The following network assignments apply to this bond:
Table 6.4. Network Subnet and VLAN Assignments

| Network Type | Subnet | VLAN |
|---|---|---|
| Internal API | 172.16.0.0/24 | 201 |
| Tenant | 172.17.0.0/24 | 202 |
| Storage | 172.18.0.0/24 | 203 |
| Storage Management | 172.19.0.0/24 | 204 |
| External / Floating IP | 10.1.1.0/24 | 100 |
6.2.1. Registering Nodes for the Advanced Overcloud
The director requires a node definition template. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for our nine nodes. The template uses the following attributes:
- mac
- A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
- pm_type
- The power management driver to use. This example uses the IPMI driver (pxe_ipmitool).
- pm_user, pm_password
- The IPMI username and password.
- pm_addr
- The IP address of the IPMI device.
- cpu
- The number of CPUs on the node.
- memory
- The amount of memory in MB.
- disk
- The size of the hard disk in GB.
- arch
- The system architecture.
{ "nodes":[ { "mac":[ "b1:b1:b1:b1:b1:b1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.205" }, { "mac":[ "b2:b2:b2:b2:b2:b2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.206" }, { "mac":[ "b3:b3:b3:b3:b3:b3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.207" }, { "mac":[ "c1:c1:c1:c1:c1:c1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.208" }, { "mac":[ "c2:c2:c2:c2:c2:c2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.209" }, { "mac":[ "c3:c3:c3:c3:c3:c3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.210" }, { "mac":[ "d1:d1:d1:d1:d1:d1" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.211" }, { "mac":[ "d2:d2:d2:d2:d2:d2" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.212" }, { "mac":[ "d3:d3:d3:d3:d3:d3" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.213" } ] }
Note
After creating the template, save the file to the stack user's home directory as instackenv.json, then import it into the director. Use the following command to accomplish this:
$ openstack baremetal import --json ~/instackenv.json

This imports the template and registers each node in the director. Next, assign the kernel and ramdisk images to all nodes:

$ openstack baremetal configure boot

The nodes are now registered and configured in the director. View a list of these nodes in the CLI:

$ openstack baremetal list
6.2.2. Inspecting the Hardware of Nodes
Important
The benchmarking analysis in this scenario requires the discovery_runbench option set to true when initially configuring the director (see Section 3.6, "Configuring the Director"). If you need to enable benchmarking after installing the director, edit /httpboot/discoverd.ipxe and set the RUNBENCH kernel parameter to 1.
Run the following command to inspect the hardware attributes of each node:

$ openstack baremetal introspection bulk start

Monitor the progress of the introspection in a separate terminal window:

$ sudo journalctl -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq -u openstack-ironic-conductor -f
Important
Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.

Alternatively, perform a single introspection on each node individually: set the node to maintenance mode, perform the introspection, then move the node out of maintenance mode:

$ ironic node-set-maintenance [NODE UUID] true
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-maintenance [NODE UUID] false
6.2.3. Automatically Tagging Nodes with Automated Health Check (AHC) Tools
Install the Automated Health Check (AHC) Tools package:

$ sudo yum install -y ahc-tools

The package contains two tools:
- ahc-report, which provides reports from the benchmark tests.
- ahc-match, which tags nodes into specific roles based on policies.
Important
These tools require the Ironic and Swift credentials set in the /etc/ahc-tools/ahc-tools.conf file. These are the same credentials in /etc/ironic-discoverd/discoverd.conf. Use the following commands to copy and tailor the configuration file for /etc/ahc-tools/ahc-tools.conf:
$ sudo -i
# mkdir /etc/ahc-tools
# sed 's/\[discoverd/\[ironic/' /etc/ironic-discoverd/discoverd.conf > /etc/ahc-tools/ahc-tools.conf
# chmod 0600 /etc/ahc-tools/ahc-tools.conf
# exit
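The sed command above renames the [discoverd] section header to [ironic]. A quick way to confirm the rewrite succeeded is to list the section headers in the new file:

$ sudo grep '^\[' /etc/ahc-tools/ahc-tools.conf

The output should show an [ironic] section rather than [discoverd].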
6.2.3.1. ahc-report
The ahc-report script produces various reports about your nodes. To view a full report, use the --full option.
$ sudo ahc-report --full
The ahc-report command can also focus on specific parts of a report. For example, use the --categories option to categorize nodes based on their hardware (processors, network interfaces, firmware, memory, and various hardware controllers). This also groups together nodes with similar hardware profiles. For example, the Processors section for our two example nodes might list the following:
######################
##### Processors #####
2 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'7884BC95-6EF8-4447-BDE5-D19561718B29']

[(u'cpu', u'logical', u'number', u'4'),
 (u'cpu', u'physical', u'number', u'4'),
 (u'cpu', u'physical_0', u'flags', u'fpu fpu_exception wp de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx x86-64 rep_good nopl pni cx16 hypervisor lahf_lm'),
 (u'cpu', u'physical_0', u'frequency', u'2000000000'),
 (u'cpu', u'physical_0', u'physid', u'0'),
 (u'cpu', u'physical_0', u'product', u'Intel(R) Xeon(TM) CPU E3-1271v3 @ 3.6GHz'),
 (u'cpu', u'physical_0', u'vendor', u'GenuineIntel'),
 (u'cpu', u'physical_1', u'flags', u'fpu fpu_exception wp de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx x86-64 rep_good nopl pni cx16 hypervisor lahf_lm'),
 (u'cpu', u'physical_1', u'frequency', u'2000000000'),
 (u'cpu', u'physical_1', u'physid', u'1'),
 (u'cpu', u'physical_1', u'product', u'Intel(R) Xeon(TM) CPU E3-1271v3 @ 3.6GHz'),
 (u'cpu', u'physical_1', u'vendor', u'GenuineIntel')
 ...
]
The ahc-report tool also identifies the outliers in your node collection. Use the --outliers switch to enable this:
$ sudo ahc-report --outliers

Group 0 : Checking logical disks perf
standalone_randread_4k_KBps   : INFO    : sda   : Group performance : min=45296.00, mean=53604.67, max=67923.00, stddev=12453.21
standalone_randread_4k_KBps   : ERROR   : sda   : Group's variance is too important :   23.23% of 53604.67 whereas limit is set to 15.00%
standalone_randread_4k_KBps   : ERROR   : sda   : Group performance : UNSTABLE
standalone_read_1M_IOps       : INFO    : sda   : Group performance : min= 1199.00, mean= 1259.00, max= 1357.00, stddev=   85.58
standalone_read_1M_IOps       : INFO    : sda   : Group performance = 1259.00   : CONSISTENT
standalone_randread_4k_IOps   : INFO    : sda   : Group performance : min=11320.00, mean=13397.33, max=16977.00, stddev= 3113.39
standalone_randread_4k_IOps   : ERROR   : sda   : Group's variance is too important :   23.24% of 13397.33 whereas limit is set to 15.00%
standalone_randread_4k_IOps   : ERROR   : sda   : Group performance : UNSTABLE
standalone_read_1M_KBps       : INFO    : sda   : Group performance : min=1231155.00, mean=1292799.67, max=1393152.00, stddev=87661.11
standalone_read_1M_KBps       : INFO    : sda   : Group performance = 1292799.67 : CONSISTENT
...
In this example, ahc-report marked the standalone_randread_4k_KBps and standalone_randread_4k_IOps disk metrics as unstable because the standard deviation across all nodes was higher than the allowable threshold. This could happen if our two nodes have a significant difference in disk transfer rates.
After reviewing the reports, use the ahc-match command to assign nodes to specific roles.
6.2.3.2. ahc-match
The ahc-match command applies a set of policies to your Overcloud plan to help assign nodes to certain roles. Before using this command, create a set of policies to match suitable nodes to roles.
The ahc-tools package installs a set of policy files under /etc/ahc-tools/edeploy. This includes:
- state - The state file, which outlines the number of nodes for each role.
- compute.specs, control.specs - Policy files for matching Compute and Controller nodes.
- compute.cmdb.sample, control.cmdb.sample - Sample Configuration Management Database (CMDB) files, which contain key/value settings for RAID and BIOS ready-state configuration (Dell DRAC only).
State File

The state file indicates the number of nodes for each role. The default configuration file shows:
[('control', '1'), ('compute', '*')]
This means ahc-match assigns one Controller node and any number of Compute nodes. For this scenario, edit this file:
[('control', '3'), ('ceph-storage', '3'), ('compute', '*')]
Policy Files

The compute.specs and control.specs files list the assignment rules for each respective role. The file contents are in a tuple format, such as:
[
 ('cpu', 'logical', 'number', 'ge(2)'),
 ('disk', '$disk', 'size', 'gt(4)'),
 ('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
 ('memory', 'total', 'size', 'ge(4294967296)'),
]
The following matchers are available:
- network() - The network interface is in the specified network.
- gt(), ge() - Greater than (or equal).
- lt(), le() - Lower than (or equal).
- in() - The item to match must be in a specified set.
- regexp() - Match a regular expression.
- or(), and(), not() - Boolean functions. or() and and() take two parameters; not() takes one parameter.
For example, use the standalone_randread_4k_KBps and standalone_randread_4k_IOps values from Section 6.2.3.1, "ahc-report" to limit the Controller role to nodes with disk access rates higher than the average rate. The rules for each would be:
[
 ('disk', '$disk', 'standalone_randread_4k_KBps', 'gt(53604)'),
 ('disk', '$disk', 'standalone_randread_4k_IOps', 'gt(13397)')
]
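Putting this together, a control.specs that combines the generic rules shown earlier with these performance thresholds might look like the following. This is a sketch; the thresholds come from the example report above and are not general recommendations:

[
 ('cpu', 'logical', 'number', 'ge(2)'),
 ('disk', '$disk', 'size', 'gt(4)'),
 ('disk', '$disk', 'standalone_randread_4k_KBps', 'gt(53604)'),
 ('disk', '$disk', 'standalone_randread_4k_IOps', 'gt(13397)'),
 ('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
 ('memory', 'total', 'size', 'ge(4294967296)'),
]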
For this scenario, also create a new policy file, ceph-storage.specs, with a profile specifically for Red Hat Ceph Storage. Ensure these new filenames (without extension) are included in the state file.
Ready-State Files (Dell DRAC only)

The ready-state configuration prepares bare metal resources for deployment, including BIOS and RAID settings for predefined profiles.

To define a BIOS setting, specify each setting and its target value in the bios_settings key. For example:

[
 {
  'bios_settings': {'ProcVirtualization': 'Enabled', 'ProcCores': 4}
 }
]

There are two ways to define the RAID configuration:
- List the IDs of the physical disks - Provide a list of physical disk IDs using the following attributes: controller, size_gb, raid_level, and the list of physical_disks. The controller should be the FQDD of the RAID controller that the DRAC card assigns. Similarly, the list of physical_disks should be the FQDDs of the physical disks that the DRAC card assigns. For example:

[
 {
  'logical_disks': [
   {'controller': 'RAID.Integrated.1-1',
    'size_gb': 100,
    'physical_disks': [
     'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
     'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
     'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'],
    'raid_level': '5'},
  ]
 }
]
- Let Ironic assign physical disks to the RAID volume - The following attributes are required: controller, size_gb, raid_level, and number_of_physical_disks. The controller should be the FQDD of the RAID controller that the DRAC card assigns. For example:

[
 {
  'logical_disks': [
   {'controller': 'RAID.Integrated.1-1',
    'size_gb': 50,
    'raid_level': '1',
    'number_of_physical_disks': 2},
  ]
 }
]
Running the Matching Tool

After defining the policies, run the ahc-match tool to assign your nodes:
$ sudo ahc-match
This command matches all nodes against the policies defined in /etc/ahc-tools/edeploy/state. When a node matches a role, ahc-match adds the role to the node in Ironic as a capability. For example:
$ ironic node-show b73fb5fa-1a2c-49c6-b38e-8de41e3c0532 | grep properties -A2
| properties | {u'memory_mb': u'6144', u'cpu_arch': u'x86_64', u'local_gb': u'40', |
| | u'cpus': u'4', u'capabilities': u'profile:control,boot_option:local'} |
| instance_uuid | None |
The director uses the profile tag from each node to match to roles and flavors with the same tag.
After matching, reconfigure the nodes to account for the new properties:

$ instack-ironic-deployment --configure-nodes
6.2.4. Creating Hardware Profiles
Use the following commands to create a flavor for each role:

$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 control
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 ceph-storage
Important
Map each flavor to the correct profile; the director uses these tags to match nodes to flavors during the deployment. Set the following properties on each flavor:

$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="ceph-storage" ceph-storage

The capabilities:boot_option property sets the boot mode for the flavor, and capabilities:profile defines the profile to use.
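To confirm the properties took effect, inspect one of the flavors; the properties field should list the cpu_arch and capabilities values set above:

$ openstack flavor show control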
Important
The deployment also requires a default flavor named baremetal. Create this flavor if it does not exist:
$ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 baremetal
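There should now be four flavors (control, compute, ceph-storage, and baremetal). Listing them is a quick sanity check before moving on:

$ openstack flavor list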
6.2.5. Configuring Ceph Storage
Copy the storage-environment.yaml environment file from the Heat template collection to the stack user's templates directory:

$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.

Modify the following options in the copy of storage-environment.yaml:
- CinderEnableIscsiBackend
- Enables the iSCSI backend. Set to false.
- CinderEnableRbdBackend
- Enables the Ceph Storage backend. Set to true.
- CinderEnableNfsBackend
- Enables the NFS backend. Set to false.
- NovaEnableRbdBackend
- Enables Ceph Storage for Nova ephemeral storage. Set to true.
- GlanceBackend
- Defines the backend to use for Glance. Set to rbd to use Ceph Storage for images.
Note
The storage-environment.yaml file also contains some options to configure Ceph Storage directly through Heat. However, these options are not necessary in this scenario, since the director creates these nodes and automatically defines the configuration values.

This scenario does, however, customize the disk layout of the Ceph Storage nodes. Add the following snippet to the file:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:

Use the ceph::profile::params::osds parameter to map the relevant journal partitions and disks. For example, a Ceph node with four disks might have the following assignments:
- /dev/sda - The root disk containing the Overcloud image
- /dev/sdb - The disk containing the journal partitions. This is usually a solid state disk (SSD) to aid with system performance.
- /dev/sdc and /dev/sdd - The OSD disks
ceph::profile::params::osds:
  '/dev/sdc':
    journal: '/dev/sdb'
  '/dev/sdd':
    journal: '/dev/sdb'
To store the journals on the OSD disks themselves, define the disks without journal parameters:
ceph::profile::params::osds:
  '/dev/sdb': {}
  '/dev/sdc': {}
  '/dev/sdd': {}
When complete, the storage-environment.yaml file's options should look similar to the following:

parameters:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdc':
        journal: '/dev/sdb'
      '/dev/sdd':
        journal: '/dev/sdb'
We have now configured storage-environment.yaml so that when we deploy the Overcloud, the Ceph Storage nodes use our disk mapping and custom settings. We include this file in our deployment to initiate our storage requirements.
Important
The Ceph Storage OSD and journal disks require GPT disk labels. Convert each of these disks to GPT before deployment:

# parted [device] mklabel gpt
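For the example layout above, that means relabeling /dev/sdb, /dev/sdc, and /dev/sdd on each Ceph node, and not /dev/sda, which holds the Overcloud image. A sketch of the loop; note that this destroys any existing data on those disks:

# for dev in /dev/sdb /dev/sdc /dev/sdd; do parted -s $dev mklabel gpt; done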
6.2.6. Isolating all Networks into VLANs
- Network 1 - Provisioning
- Network 2 - Internal API
- Network 3 - Tenant Networks
- Network 4 - Storage
- Network 5 - Storage Management
- Network 6 - External and Floating IP (mapped after Overcloud creation)
6.2.6.1. Creating Custom Interface Templates
The director provides a set of example Heat templates for network interface configuration:
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per-role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per-role basis.

This scenario uses the bonded NIC templates from /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans. Copy this directory to the stack user's home directory:
$ cp -r /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans ~/templates/nic-configs
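The copy contains one template per node role. The filenames below match the templates referenced by the network environment file in the next section:

$ ls ~/templates/nic-configs/
ceph-storage.yaml  cinder-storage.yaml  compute.yaml  controller.yaml  swift-storage.yaml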
Each template in this directory contains parameters, resources, and output sections. For our purposes, we only edit the resources section. Each resources section begins with the following:

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
This creates a request for the os-apply-config command and os-net-config subcommand to configure the network properties for a node. The network_config section contains our custom interface configuration arranged in a sequence based on type, which includes the following:
- interface
- Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3").
- type: interface
  name: nic2
- vlan
- Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

- type: vlan
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
- ovs_bond
- Defines a bond in Open vSwitch. A bond joins two or more interfaces together to help with redundancy and increase bandwidth.

- type: ovs_bond
  name: bond1
  members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3
- ovs_bridge
- Defines a bridge in Open vSwitch. A bridge connects multiple interface, bond, and vlan objects together.

- type: ovs_bridge
  name: {get_input: bridge_name}
  members:
    - type: ovs_bond
      name: bond1
      members:
        - type: interface
          name: nic2
          primary: true
        - type: interface
          name: nic3
    - type: vlan
      device: bond1
      vlan_id: {get_param: ExternalNetworkVlanID}
      addresses:
        - ip_netmask: {get_param: ExternalIpSubnet}
- linux_bridge
- Defines a Linux bridge. Similar to an Open vSwitch bridge, it connects multiple interface, bond, and vlan objects together.

- type: linux_bridge
  name: bridge1
  members:
    - type: interface
      name: nic1
      primary: true
- type: vlan
  device: bridge1
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
For example, the /home/stack/templates/nic-configs/controller.yaml template uses the following network_config:
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      - ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
  - type: ovs_bridge
    name: {get_input: bridge_name}
    dns_servers: {get_param: DnsServers}
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}
        routes:
          - ip_netmask: 0.0.0.0/0
            next_hop: {get_param: ExternalInterfaceDefaultRoute}
      - type: vlan
        device: bond1
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageMgmtNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageMgmtIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
This template defines a bridge (usually the external bridge named br-ex) and creates a bonded interface called bond1 from two numbered interfaces: nic2 and nic3. The bridge also contains a number of tagged VLAN devices, which use bond1 as a parent device.
The template retrieves most of these values through the get_param function. We define these values in an environment file we create specifically for our networks.
Important
In some situations, a node might contain an unused interface (for example, nic4) that does not use any IP assignments for OpenStack services but still uses DHCP and/or a default route. To avoid network conflicts, remove any unused interfaces from ovs_bridge devices and disable the DHCP and default route settings:

- type: interface
  name: nic4
  use_dhcp: false
  defroute: false
6.2.6.2. Creating an Advanced Overcloud Network Environment File
Create a network environment file (/home/stack/templates/network-environment.yaml) that describes the Overcloud's network environment and points to the network interface templates from the previous section. Note that the VLAN IDs match the assignments from Table 6.4:

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ExternalNetCidr: 10.1.1.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '10.1.1.10', 'end': '10.1.1.50'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.1.1.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  InternalApiNetworkVlanID: 201
  TenantNetworkVlanID: 202
  StorageNetworkVlanID: 203
  StorageMgmtNetworkVlanID: 204
  ExternalNetworkVlanID: 100
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  BondInterfaceOvsOptions: "bond_mode=balance-slb"
The resource_registry section contains links to the network interface templates for each node role.

The parameter_defaults section contains a list of parameters that define the network options for each network type. For a full reference of these options, see Appendix G, Network Environment Options.

The BondInterfaceOvsOptions option provides options for our bonded interface using nic2 and nic3. For more information on bonding options, see Appendix H, Bonding Options.
6.2.6.3. Assigning OpenStack Services to Isolated Networks
Assign OpenStack services to the isolated networks by adding the ServiceNetMap parameter to the network environment file (/home/stack/templates/network-environment.yaml). The ServiceNetMap parameter determines the network types used for each service:
...
parameter_defaults:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage_mgmt
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    # Define which network will be used for hostname resolution
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
Setting a service's entry to storage places that service on the Storage network instead of the Storage Management network. This means you only need to define a set of parameter_defaults for the Storage network and not the Storage Management network.
6.2.7. Enabling SSL/TLS on the Overcloud
Enabling SSL/TLS
Copy the enable-tls.yaml environment file from the Heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml ~/templates/.
Edit this file and make the following changes:

parameter_defaults:
- SSLCertificate:
- Copy the contents of the certificate file into the SSLCertificate parameter. For example:

parameter_defaults:
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----

Important
The certificate contents require the same indentation level for all new lines.

- SSLKey:
- Copy the contents of the private key into the SSLKey parameter. For example:

parameter_defaults:
  ...
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO9hkJZnGP6qb6wtYUoy1bVP7
    ...
    ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4XpZUC7yhqPaU
    -----END RSA PRIVATE KEY-----

Important
The private key contents require the same indentation level for all new lines.

- EndpointMap:
- The EndpointMap contains a mapping of the services using HTTPS and HTTP communication. If using DNS for SSL communication, leave this section with the defaults. However, if using an IP address for the SSL certificate's common name (see Appendix B, SSL/TLS Certificate Configuration), replace all instances of CLOUDNAME with IP_ADDRESS. Use the following command to accomplish this:

$ sed -i 's/CLOUDNAME/IP_ADDRESS/' ~/templates/enable-tls.yaml
Important
Do not substitute IP_ADDRESS or CLOUDNAME with actual values. Heat replaces these variables with the appropriate values during the Overcloud creation.
resource_registry:

- OS::TripleO::NodeTLSData:
- Change the resource URL for OS::TripleO::NodeTLSData: to an absolute URL:

resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml
Injecting a Root Certificate

If the certificate signer is not in the default trust store on the Overcloud image, inject the certificate authority into the Overcloud image. Copy the inject-trust-anchor.yaml environment file from the Heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/.
Edit this file and make the following changes:

parameter_defaults:
- SSLRootCertificate:
- Copy the contents of the root certificate authority file into the SSLRootCertificate parameter. For example:

parameter_defaults:
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----
Important
The certificate authority contents require the same indentation level for all new lines.
resource_registry:

- OS::TripleO::NodeTLSCAData:
- Change the resource URL for OS::TripleO::NodeTLSCAData: to an absolute URL:

resource_registry:
  OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml
Configuring DNS Endpoints

If using a DNS hostname to access the Overcloud through SSL/TLS, create a new environment file (~/templates/cloudname.yaml) to define the hostname of the Overcloud's endpoints. Use the following parameters:
parameter_defaults:

- CloudName:
- The DNS hostname for the Overcloud endpoints.
- DnsServers:
- A list of DNS servers to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API.

For example:
parameter_defaults:
  CloudName: overcloud.example.com
  DnsServers: ["10.0.0.1"]
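After creating the DNS record, you can confirm the name resolves to the Public API address from the director host. A quick check, assuming the bind-utils package provides the dig command:

$ dig +short overcloud.example.com @10.0.0.1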
Adding Environment Files During Overcloud Creation

The deployment command (openstack overcloud deploy) in Section 6.2.9, "Creating the Advanced Overcloud" uses the -e option to add environment files. Add the environment files from this section in the following order:

- The environment file to enable SSL/TLS (enable-tls.yaml)
- The environment file to set the DNS hostname (cloudname.yaml)
- The environment file to inject the root certificate authority (inject-trust-anchor.yaml)

For example:
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml
6.2.8. Registering the Overcloud
Method 1 - Command Line

The deployment command (openstack overcloud deploy) uses a set of options to define your registration details. The table in Appendix I, Deployment Parameters contains these options and their descriptions. Include these options when running the deployment command in Section 6.2.9, "Creating the Advanced Overcloud". For example:
# openstack overcloud deploy --templates --rhel-reg --reg-method satellite --reg-sat-url http://example.satellite.com --reg-org MyOrg --reg-activation-key MyKey --reg-force [...]
Method 2 - Environment File

Copy the registration files from the Heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration ~/templates/.
Edit ~/templates/rhel-registration/environment-rhel-registration.yaml and modify the following values to suit your registration method and details.
- rhel_reg_method
- Choose the registration method: either portal, satellite, or disable.
- rhel_reg_type
- The type of unit to register. Leave blank to register as a system.
- rhel_reg_auto_attach
- Automatically attach compatible subscriptions to this system. Set to true to enable.
- The service level to use for auto attachment.
- rhel_reg_release
- Use this parameter to set a release version for auto attachment. Leave blank to use the default from Red Hat Subscription Manager.
- rhel_reg_pool_id
- The subscription pool ID to use. Use this if not auto-attaching subscriptions.
- rhel_reg_sat_url
- The base URL of the Satellite server to register Overcloud nodes. Use the Satellite's HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The Overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.
- rhel_reg_server_url
- The hostname of the subscription service to use. The default is the Customer Portal Subscription Management service at subscription.rhn.redhat.com. If this option is not used, the system is registered with Customer Portal Subscription Management. The subscription server URL uses the form https://hostname:port/prefix.
- rhel_reg_base_url
- The base URL of the content delivery server to use to receive updates. The default is https://cdn.redhat.com. Since Satellite 6 hosts its own content, the URL must be used for systems registered with Satellite 6. The base URL for content uses the form https://hostname:port/prefix.
. - rhel_reg_org
- The organization to use for registration.
- rhel_reg_environment
- The environment to use within the chosen organization.
- rhel_reg_repos
- A comma-separated list of repositories to enable.
- rhel_reg_activation_key
- The activation key to use for registration.
- rhel_reg_user, rhel_reg_password
- The username and password for registration. If possible, use activation keys for registration.
- rhel_reg_machine_name
- The machine name. Leave blank to use the hostname of the node.
- rhel_reg_force
- Set to true to force your registration options, for example, when re-registering nodes.
The deployment command (openstack overcloud deploy) in Section 6.2.9, "Creating the Advanced Overcloud" uses the -e option to add environment files. Add both ~/templates/rhel-registration/environment-rhel-registration.yaml and ~/templates/rhel-registration/rhel-registration-resource-registry.yaml. For example:
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml -e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml
Important
Registration is set as the OS::TripleO::NodeExtraConfig Heat resource. This means you can only use this resource for registration. See Section 10.2, "Customizing Overcloud Pre-Configuration" for more information.
6.2.9. Creating the Advanced Overcloud
Note
If you enabled SSL/TLS or Overcloud registration, remember to also include the corresponding environment files from Section 6.2.7 and Section 6.2.8 with the -e option.

Run the following command to create the Advanced Overcloud:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org --neutron-network-type vxlan --neutron-tunnel-types vxlan
The command uses the following options:

- --templates - Creates the Overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-templates.
- -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an environment file that initializes network isolation configuration.
- -e ~/templates/network-environment.yaml - Adds the network environment file from Section 6.2.6.2, "Creating an Advanced Overcloud Network Environment File".
- -e ~/templates/storage-environment.yaml - Adds the custom environment file that initializes our storage configuration.
- --control-scale 3 - Scales the Controller nodes to three.
- --compute-scale 3 - Scales the Compute nodes to three.
- --ceph-storage-scale 3 - Scales the Ceph Storage nodes to three.
- --control-flavor control - Uses a specific flavor for the Controller nodes.
- --compute-flavor compute - Uses a specific flavor for the Compute nodes.
- --ceph-storage-flavor ceph-storage - Uses a specific flavor for the Ceph Storage nodes.
- --ntp-server pool.ntp.org - Uses an NTP server for time synchronization. This is useful for keeping the Controller node cluster in synchronization.
- --neutron-network-type vxlan - Uses Virtual Extensible LAN (VXLAN) for Neutron networking in the Overcloud.
- --neutron-tunnel-types vxlan - Uses VXLAN for Neutron tunneling in the Overcloud.
Note
For a full list of options, run:

$ openstack help overcloud deploy
The Overcloud creation process begins and the director provisions your nodes. To monitor its progress, open a separate terminal as the stack user and run:

$ source ~/stackrc  # Initializes the stack user to use the CLI commands
$ heat stack-list --show-nested

The heat stack-list --show-nested command shows the current stage of the Overcloud creation.
Warning
Any environment files added to the Overcloud using the -e option become part of your Overcloud's stack definition. The director requires these environment files for re-deployment and post-deployment functions in Chapter 7, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your Overcloud.
To modify the Overcloud configuration later, modify your custom environment files and Heat templates, then run the openstack overcloud deploy command again. Do not edit the Overcloud configuration directly, as such manual configuration gets overridden by the director's configuration when updating the Overcloud stack with the director.

It also helps to save the full deployment command in a script file, for example deploy-overcloud.sh:
#!/bin/bash
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  -t 150 \
  --control-scale 3 \
  --compute-scale 3 \
  --ceph-storage-scale 3 \
  --swift-storage-scale 0 \
  --block-storage-scale 0 \
  --compute-flavor compute \
  --control-flavor control \
  --ceph-storage-flavor ceph-storage \
  --swift-storage-flavor swift-storage \
  --block-storage-flavor block-storage \
  --ntp-server pool.ntp.org \
  --neutron-network-type vxlan \
  --neutron-tunnel-types vxlan \
  --libvirt-type qemu
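Make the script executable and run it from a normal foreground terminal session:

$ chmod +x deploy-overcloud.sh
$ ./deploy-overcloud.sh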
Warning
Do not run openstack overcloud deploy as a background process. The Overcloud creation might hang in mid-deployment if started as a background process.
6.2.10. Accessing the Advanced Overcloud
The director generates a file, overcloudrc, in your stack user's home directory to configure and authenticate interactions with your Overcloud. Run the following command to use this file:

$ source ~/overcloudrc

This loads the environment variables needed to interact with your Overcloud from the director host. To return to interacting with the director, run:

$ source ~/stackrc
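With overcloudrc sourced, a quick way to confirm the Overcloud responds is to list its Compute services; the three Compute nodes should report as up once the deployment settles:

$ source ~/overcloudrc
$ nova service-list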
6.2.11. Fencing the Controller Nodes
Note
Log in to each node as the heat-admin user from the stack user on the director. The Overcloud creation automatically copies the stack user's SSH key to each node's heat-admin user.
Verify you have a running cluster with pcs status:
$ sudo pcs status
Cluster name: openstackHA
Last updated: Wed Jun 24 12:40:27 2015
Last change: Wed Jun 24 11:36:18 2015
Stack: corosync
Current DC: lb-c1a2 (2) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
141 Resources configured
Verify that fencing (stonith) is disabled with pcs property show:
$ sudo pcs property show
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: openstackHA
dc-version: 1.1.12-a14efad
have-watchdog: false
stonith-enabled: false
Table 6.5. Fence Agents

| Device | Type |
|---|---|
| fence_ipmilan | The Intelligent Platform Management Interface (IPMI) |
| fence_idrac, fence_drac5 | Dell Remote Access Controller (DRAC) |
| fence_ilo | Integrated Lights-Out (iLO) |
| fence_ucs | Cisco UCS. For more information, see Configuring Cisco Unified Computing System (UCS) Fencing on an OpenStack High Availability Environment |
| fence_xvm, fence_virt | Libvirt and SSH |
The rest of this section uses the IPMI agent (fence_ipmilan) as an example. View the full list of options the IPMI agent accepts:

$ sudo pcs stonith describe fence_ipmilan

Each node requires a stonith device in Pacemaker. Use the following commands for the cluster:
Note
The second command in each example prevents the node from asking to fence itself.
For Controller node 0:

$ sudo pcs stonith create my-ipmilan-for-controller01 fence_ipmilan pcmk_host_list=overcloud-controller-0 ipaddr=192.0.2.205 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller01 avoids overcloud-controller-0

For Controller node 1:

$ sudo pcs stonith create my-ipmilan-for-controller02 fence_ipmilan pcmk_host_list=overcloud-controller-1 ipaddr=192.0.2.206 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller02 avoids overcloud-controller-1

For Controller node 2:

$ sudo pcs stonith create my-ipmilan-for-controller03 fence_ipmilan pcmk_host_list=overcloud-controller-2 ipaddr=192.0.2.207 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller03 avoids overcloud-controller-2
Run the following command to see all stonith resources:

$ sudo pcs stonith show

Run the following command to view a specific stonith resource:

$ sudo pcs stonith show [stonith-name]
Finally, set the stonith-enabled property to true to enable fencing:

$ sudo pcs property set stonith-enabled=true

Verify the property:

$ sudo pcs property show