Chapter 5. Define the Overcloud
The plan for the overcloud is defined in a set of Heat templates. This chapter covers the details of creating each Heat template used to define the overcloud in this reference architecture. As an alternative to creating new Heat templates, it is possible to directly download and modify the Heat templates used in this reference architecture, as described in Chapter 2, Technical Summary.
5.1. Custom Environment Templates
The installation of the undercloud, covered in Section 4.1, “Deploy the Undercloud”, creates a directory of TripleO Heat Templates in /usr/share/openstack-tripleo-heat-templates. No direct customization of the TripleO Heat Templates shipped with Red Hat OpenStack Platform director is necessary; instead, a separate directory called ~/custom-templates is used to override default template values. Create the directory for the custom templates.
mkdir ~/custom-templates
The rest of this chapter consists of creating YAML files in the above directory to define the overcloud.
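For orientation, the listing below sketches the set of files this chapter builds up under ~/custom-templates. It is an assumption based on the files named in the following sections; the directory will only match it once every step in this chapter has been completed.

$ find ~/custom-templates -type f | sort
/home/stack/custom-templates/ceph.yaml
/home/stack/custom-templates/custom-roles.yaml
/home/stack/custom-templates/first-boot-template.yaml
/home/stack/custom-templates/layout.yaml
/home/stack/custom-templates/network.yaml
/home/stack/custom-templates/nic-configs/compute-nics.yaml
/home/stack/custom-templates/nic-configs/controller-nics.yaml
/home/stack/custom-templates/post-deploy-template.yaml
/home/stack/custom-templates/wipe-disk.sh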
5.2. Network Configuration
In this section, the following three files are added to the ~/custom-templates directory to define how the networks used in the reference architecture should be configured by Red Hat OpenStack Platform director:
- ~/custom-templates/network.yaml
- ~/custom-templates/nic-configs/controller-nics.yaml
- ~/custom-templates/nic-configs/compute-nics.yaml
This section describes how to create new versions of the above files. It is possible to copy example files and then modify them based on the details of the environment. Complete copies of the above files, as used in this reference architecture, may be found in Appendix: Custom Heat Templates. They may also be found online; see Appendix G, GitHub Repository of Example Files, for more details.
5.2.1. Assign OpenStack Services to Isolated Networks
Create a new file in ~/custom-templates called network.yaml and add content to this file.
- Add a resource_registry that includes two network templates

The resource_registry section contains references to the network configuration templates for the controller/monitor and compute/OSD nodes. The first three lines of network.yaml contain the following:
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/controller-nics.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/compute-nics.yaml
The controller-nics.yaml and compute-nics.yaml files in the nic-configs directory above are created in the next section. Add the above to reference the empty files in the meantime.
- Add the parameter defaults for the Neutron bridge mappings and tenant network
Within the network.yaml, add the following parameters:
parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-tenant'
  NeutronNetworkType: 'vxlan'
  NeutronTunnelType: 'vxlan'
  NeutronExternalNetworkBridge: "''"
The above defines the bridge mappings associated with the logical networks and enables tenants to use VXLAN.
- Add parameter defaults based on the networks to be created

The Heat templates referenced in the resource_registry require parameters to define the specifics of each network; for example, the IP address range and VLAN ID for the storage network to be created. Under the parameter_defaults section of network.yaml, define the parameters for each network. The value of the parameters should be based on the networks that OpenStack deploys. For this reference architecture, the parameter values may be found in the Network section of Appendix: Environment Details. These values are then supplied to the parameter_defaults section as follows:
  # Internal API used for private OpenStack Traffic
  InternalApiNetCidr: 192.168.2.0/24
  InternalApiAllocationPools: [{'start': '192.168.2.10', 'end': '192.168.2.200'}]
  InternalApiNetworkVlanID: 4049

  # Tenant Network Traffic - will be used for VXLAN over VLAN
  TenantNetCidr: 192.168.3.0/24
  TenantAllocationPools: [{'start': '192.168.3.10', 'end': '192.168.3.200'}]
  TenantNetworkVlanID: 4050

  # Public Storage Access - e.g. Nova/Glance <--> Ceph
  StorageNetCidr: 172.16.1.0/24
  StorageAllocationPools: [{'start': '172.16.1.10', 'end': '172.16.1.200'}]
  StorageNetworkVlanID: 4046

  # Private Storage Access - i.e. Ceph background cluster/replication
  StorageMgmtNetCidr: 172.16.2.0/24
  StorageMgmtAllocationPools: [{'start': '172.16.2.10', 'end': '172.16.2.200'}]
  StorageMgmtNetworkVlanID: 4047

  # External Networking Access - Public API Access
  ExternalNetCidr: 10.19.137.0/21

  # Leave room for floating IPs in the External allocation pool (if required)
  ExternalAllocationPools: [{'start': '10.19.139.37', 'end': '10.19.139.48'}]

  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.19.143.254

  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.1.1

  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.168.1.1

  # Define the DNS servers (maximum 2) for the Overcloud nodes
  DnsServers: ["10.19.143.247","10.19.143.248"]
For more information on the above directives, see section 6.2, Isolating Networks, of the Red Hat document Advanced Overcloud Customization.
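Before moving on, it can be helpful to confirm that network.yaml is syntactically valid YAML. The quick check below is only a sketch, not part of the reference architecture, and it assumes a Python interpreter with PyYAML is available on the undercloud node.

python -c 'import yaml; yaml.safe_load(open("/home/stack/custom-templates/network.yaml"))' \
  && echo "network.yaml parses cleanly"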
5.2.2. Define Server NIC Configurations
Within the network.yaml file, references were made to controller and compute Heat templates which need to be created in ~/custom-templates/nic-configs/. Complete the following steps to create these files:
- Create a nic-configs directory within the ~/custom-templates directory
mkdir ~/custom-templates/nic-configs
- Copy the appropriate sample network interface configurations
Red Hat OpenStack Platform director contains a directory of network interface configuration templates for the following four scenarios:
$ ls /usr/share/openstack-tripleo-heat-templates/network/config/
bond-with-vlans  multiple-nics  single-nic-linux-bridge-vlans  single-nic-vlans
$
In this reference architecture, VLANs are trunked onto a single NIC. The following commands copy the single-nic-vlans examples to create the compute-nics.yaml and controller-nics.yaml files:
cp /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml ~/custom-templates/nic-configs/compute-nics.yaml
cp /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/controller.yaml ~/custom-templates/nic-configs/controller-nics.yaml
- Modify the Controller NICs template
Modify ~/custom-templates/nic-configs/controller-nics.yaml based on the hardware that hosts the controller as described in 6.2. Creating a Network Environment File of the Red Hat document Advanced Overcloud Customization.
- Modify the Compute NICs template
Modify ~/custom-templates/nic-configs/compute-nics.yaml as described in the previous step. Extend the provided template to include the StorageMgmtIpSubnet and StorageMgmtNetworkVlanID attributes of the storage management network. When defining the interface entries of the storage network, consider setting the MTU to 9000 (jumbo frames) for improved storage performance. An example of these additions to compute-nics.yaml includes the following:
- type: interface
  name: em2
  use_dhcp: false
  mtu: 9000
- type: vlan
  device: em2
  mtu: 9000
  use_dhcp: false
  vlan_id: {get_param: StorageMgmtNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: StorageMgmtIpSubnet}
- type: vlan
  device: em2
  mtu: 9000
  use_dhcp: false
  vlan_id: {get_param: StorageNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: StorageIpSubnet}
To prevent network misconfigurations from taking overcloud nodes out of production, network changes such as MTU settings must be made during the initial deployment; Red Hat OpenStack Platform director cannot yet apply them retroactively to an existing deployment. Thus, if this setting is desired, it should be set before deploying.
If the above setting is made, all network switch ports between servers using the interface with the new MTU must be updated to support jumbo frames. If this change is not made on the switch, problems may manifest at the application layer that could prevent the Ceph cluster from reaching quorum. If the setting above was made and these types of problems are observed, verify that all hosts on the jumbo-frame network can communicate at the desired MTU with a command like ping -M do -s 8972 172.16.1.11.
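A small loop makes this check easier to run against every storage host. The sketch below uses the storage network IPs assigned later in this reference architecture (172.16.1.200 through 172.16.1.205); substitute the addresses actually in use in your environment.

# Verify that 9000-byte frames (8972 bytes of ICMP payload plus headers) pass to each storage host
for ip in 172.16.1.200 172.16.1.201 172.16.1.202 172.16.1.203 172.16.1.204 172.16.1.205; do
    echo "== $ip =="
    ping -c 3 -M do -s 8972 $ip
done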
Complete versions of compute-nics.yaml and controller-nics.yaml, as used in this reference architecture, may be found in Appendix: Custom Resources and Parameters. They may also be found online; see Appendix G, GitHub Repository of Example Files, for more details.
5.3. Hyper Converged Role Definition
This section covers how composable roles are used to create a new role called OsdCompute, which offers both Nova compute and Ceph OSD services.
Red Hat OpenStack Platform director 10 ships with the environment file /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml, which merges the OSD service into the compute service for a hyper-converged compute role. This reference architecture does not use that template and instead composes a new role. Using composable roles allows overcloud deployments to mix compute nodes without OSDs and compute nodes with OSDs. In addition, it allows for converged Compute/OSD nodes with differing OSD counts, and it allows the same deployment to contain OSD servers that do not run compute services. All of this is possible, provided that a new role is composed for each.
5.3.1. Composable Roles
By default the overcloud consists of five roles: Controller, Compute, BlockStorage, ObjectStorage, and CephStorage. Each role consists of a list of services. As of Red Hat OpenStack Platform 10, the services that are deployed per role may be seen in /usr/share/openstack-tripleo-heat-templates/roles_data.yaml.
It is possible to make a copy of roles_data.yaml and then define a new role within it consisting of any mix of available services found under other roles. This reference architecture follows this procedure to create a new role called OsdCompute. For more information about Composable Roles themselves, see the Composable Roles section of the Red Hat document Advanced Overcloud Customization.
5.3.2. Custom Template
Copy the roles data file to the custom templates directory for modification.
cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ~/custom-templates/custom-roles.yaml
Edit ~/custom-templates/custom-roles.yaml and add the following to the bottom of the file to define a new role called OsdCompute.
- name: OsdCompute
  CountDefault: 0
  HostnameFormatDefault: '%stackname%-osd-compute-%index%'
  ServicesDefault:
    - OS::TripleO::Services::CephOSD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::VipHosts
The list of services under ServicesDefault is identical to the list of services under the Compute role, except the CephOSD service has been added to the list. The CountDefault of 0 ensures that no nodes from this new role are deployed unless explicitly requested, and the HostnameFormatDefault defines what each node should be called when deployed, e.g. overcloud-osd-compute-0, overcloud-osd-compute-1, etc.
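A quick way to double-check that the new role really is the Compute role plus CephOSD is to diff the two service lists. The command below is only a convenience sketch; it assumes PyYAML is available on the undercloud and that custom-roles.yaml follows the roles_data.yaml layout shown above. If the role was composed as described, the only difference reported should be OS::TripleO::Services::CephOSD.

python -c '
import yaml
roles = yaml.safe_load(open("/home/stack/custom-templates/custom-roles.yaml"))
svc = dict((r["name"], set(r["ServicesDefault"])) for r in roles)
print("Only in OsdCompute: %s" % sorted(svc["OsdCompute"] - svc["Compute"]))
print("Only in Compute: %s" % sorted(svc["Compute"] - svc["OsdCompute"]))
'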
In Chapter 7, Deployment, the new ~/custom-templates/custom-roles.yaml, which contains the above, is passed to the openstack overcloud deploy command.
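As a preview of that step, the custom roles file is supplied alongside the other custom templates. The exact command used in this reference architecture appears in Chapter 7; the sketch below is an assumption about how the roles file and environment files are passed, including the -r (--roles-file) option of the client in this release.

# Sketch only: the full deploy command is given in Chapter 7
openstack overcloud deploy --templates \
  -r /home/stack/custom-templates/custom-roles.yaml \
  -e /home/stack/custom-templates/network.yaml \
  -e /home/stack/custom-templates/ceph.yaml \
  -e /home/stack/custom-templates/layout.yaml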
5.4. Ceph Configuration
In this section, the following three files are added to the ~/custom-templates directory to define how Ceph, as used in the reference architecture, should be configured by Red Hat OpenStack Platform director:
- ~/custom-templates/ceph.yaml
- ~/custom-templates/first-boot-template.yaml
- ~/custom-templates/post-deploy-template.yaml
This section describes how to create new versions of the above files. It is possible to copy example files and then modify them based on the details of the environment. Complete copies of the above files, as used in this reference architecture, may be found in Appendix: Custom Heat Templates. They may also be found online; see Appendix G, GitHub Repository of Example Files, for more details.
Create a file in ~/custom-templates/ called ceph.yaml. In the next two subsections, content is added to this file.
5.4.1. Set the Resource Registry for Ceph
To set the resource registry for Ceph, add a resource_registry section to ceph.yaml that includes a first-boot template and a post-deploy template.
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/custom-templates/first-boot-template.yaml
  OS::TripleO::NodeExtraConfigPost: /home/stack/custom-templates/post-deploy-template.yaml
The first-boot-template.yaml and post-deploy-template.yaml files above are used to configure Ceph during the deployment and are created in the next subsection.
5.4.1.1. Create the Firstboot Template
A Ceph deployment may fail to add an OSD or OSD journal disk under either of the following conditions:
- The disk has an FSID from a previous Ceph install
- The disk does not have a GPT disk label
The conditions above are avoided by preparing a disk with the following commands.
- Erase all GPT and MBR data structures, including the FSID, with sgdisk -Z $disk
- Convert an MBR or BSD disklabel disk to a GPT disk with sgdisk -g $disk
Red Hat OpenStack Platform director is configured to run the above commands on all disks, except the root disk, when initially deploying a server that hosts OSDs by using a firstboot Heat template.
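To see whether a given disk would trigger either condition before relying on the firstboot wipe, it can be inspected by hand. The commands below are only a sketch, not part of the reference architecture; they assume the parted and util-linux packages are installed, and /dev/sdb is used purely as an example device.

DISK=/dev/sdb   # example device; repeat for each candidate OSD or journal disk

# Is the disk already labeled GPT, MBR (msdos), or unpartitioned?
parted -s $DISK print | grep "Partition Table"

# Any leftover Ceph data or journal partitions from a previous install?
lsblk -o NAME,FSTYPE,PARTLABEL $DISK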
Create a file called ~/custom-templates/first-boot-template.yaml whose content is the following:
heat_template_version: 2014-10-16

description: >
  Wipe and convert all disks to GPT (except the disk containing the root file system)

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: wipe_disk}

  wipe_disk:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: wipe-disk.sh}

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
Create a file called ~/custom-templates/wipe-disk.sh, to be called by the above, whose content is the following:
#!/usr/bin/env bash
if [[ `hostname` = *"ceph"* ]] || [[ `hostname` = *"osd-compute"* ]]
then
  echo "Number of disks detected: $(lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}' | wc -l)"
  for DEVICE in `lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}'`
  do
    ROOTFOUND=0
    echo "Checking /dev/$DEVICE..."
    echo "Number of partitions on /dev/$DEVICE: $(expr $(lsblk -n /dev/$DEVICE | awk '{print $7}' | wc -l) - 1)"
    for MOUNTS in `lsblk -n /dev/$DEVICE | awk '{print $7}'`
    do
      if [ "$MOUNTS" = "/" ]
      then
        ROOTFOUND=1
      fi
    done
    if [ $ROOTFOUND = 0 ]
    then
      echo "Root not found in /dev/${DEVICE}"
      echo "Wiping disk /dev/${DEVICE}"
      sgdisk -Z /dev/${DEVICE}
      sgdisk -g /dev/${DEVICE}
    else
      echo "Root found in /dev/${DEVICE}"
    fi
  done
fi
Both first-boot-template.yaml and wipe-disk.sh are derivative works of section 2.9, Formatting Ceph Storage Node Disks to GPT, of the Red Hat document Red Hat Ceph Storage for the Overcloud. The wipe-disk.sh script has been modified to wipe all disks, except the one mounted at /, but only if the hostname matches the pattern *ceph* or *osd-compute*.
The firstboot Heat template, which is run by cloud-init when a node is first deployed, deletes data. If any data from a previous Ceph install is present, it will be deleted. If this is not desired, comment out the OS::TripleO::NodeUserData line with a # in the ~/custom-templates/ceph.yaml file.
5.4.1.2. Create the Post Deploy Template
Post-deploy scripts may be used to run arbitrary shell scripts after the configuration built into TripleO, mostly implemented in Puppet, has run. Because some of the configuration done in Chapter 6, Resource Isolation and Tuning is not presently configurable in the Puppet triggered by Red Hat OpenStack Platform director, a Heat template to run a shell script is put in place in this section and modified later.
Create a file called ~/custom-templates/post-deploy-template.yaml whose content is the following:
heat_template_version: 2014-10-16

parameters:
  servers:
    type: json

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: OSD_NUMA_INTERFACE
      config: |
        #!/usr/bin/env bash
        {
          echo "TODO: pin OSDs to the NUMA node of $OSD_NUMA_INTERFACE"
        } 2>&1 > /root/post_deploy_heat_output.txt

  ExtraDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      input_values:
        OSD_NUMA_INTERFACE: 'em2'
      actions: ['CREATE']
When an overcloud is deployed with the above Heat environment template, the following would be found on each node.
[root@overcloud-osd-compute-2 ~]# cat /root/post_deploy_heat_output.txt
TODO: pin OSDs to the NUMA node of em2
[root@overcloud-osd-compute-2 ~]#
The OSD_NUMA_INTERFACE variable and the embedded shell script are modified in Section 6.2, “Ceph NUMA Pinning” so that, instead of logging that the OSDs need to be NUMA pinned, the script modifies the systemd unit file for the OSD service and restarts the OSD services so that they are NUMA pinned.
5.4.2. Set the Parameter Defaults for Ceph
Add a parameter_defaults section to ~/custom-templates/ceph.yaml under the resource_registry section defined in the previous subsection.
5.4.2.1. Add parameter defaults for Ceph OSD tunables
As described in the Red Hat Ceph Storage Strategies Guide, the following OSD values may be tuned to affect the performance of a Red Hat Ceph Storage cluster.
- Journal size (journal_size)
- Placement Groups (pg_num)
- Placement Group for placement purpose (pgp_num)
- Number of replicas for objects in the pool (default_size)
- Minimum number of written replicas for objects in a pool in order to acknowledge a write operation to the client (default_min_size)
- Recovery operations to be run in the event of OSD loss (recovery_max_active and recovery_op_priority)
- Backfill operations to be run in the event of OSD loss (max_backfills)
All of these values are set for the overcloud deployment by using the following in the parameter_defaults section of ~/custom-templates/ceph.yaml. These parameters are passed as ExtraConfig when they benefit both the Ceph OSD and Monitor nodes, or passed as extra configuration only for the custom role, OsdCompute, by using OsdComputeExtraConfig.
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_pg_num: 256
    ceph::profile::params::osd_pool_default_pgp_num: 256
    ceph::profile::params::osd_pool_default_size: 3
    ceph::profile::params::osd_pool_default_min_size: 2
    ceph::profile::params::osd_recovery_op_priority: 2
  OsdComputeExtraConfig:
    ceph::profile::params::osd_journal_size: 5120
The values provided above are reasonable example values for the size of the deployment in this reference architecture. See the Red Hat Ceph Storage Strategies Guide to determine the appropriate values for a larger number of OSDs. The recovery and backfill options above were chosen deliberately for a hyper-converged deployment, and details on these values are covered in Section 6.3, “Reduce Ceph Backfill and Recovery Operations”.
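After a deployment, it can be reassuring to confirm that these tunables actually landed in the rendered Ceph configuration. The check below is only a sketch; it assumes the parameters are written into /etc/ceph/ceph.conf on the overcloud nodes and that heat-admin is the SSH user, as is typical for director deployments. The control plane IP shown is a placeholder.

# Run from the undercloud against one of the deployed osd-compute nodes (IP is an example)
ssh heat-admin@192.168.1.21 "grep -E 'osd_pool_default|journal_size|recovery_op_priority' /etc/ceph/ceph.conf"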
5.4.2.2. Add parameter defaults for Ceph OSD block devices
In this subsection, a list of block devices is defined. The list should be appended directly to the OsdComputeExtraConfig section defined in the previous subsection.
The Compute/OSD servers used for this reference architecture have the following disks for Ceph:
- Twelve 1117GB SAS hard disks presented as /dev/{sda, sdb, …, sdl} are used for OSDs
- Three 400GB SATA SSD disks presented as /dev/{sdm, sdn, sdo} are used for OSD journals
To configure Red Hat OpenStack Platform director to create four partitions on each SSD, each to be used as a Ceph journal by one of the hard disks, the following list may be defined in Heat (it is not necessary to specify the partition number):
    ceph::profile::params::osds:
      '/dev/sda':
        journal: '/dev/sdm'
      '/dev/sdb':
        journal: '/dev/sdm'
      '/dev/sdc':
        journal: '/dev/sdm'
      '/dev/sdd':
        journal: '/dev/sdm'
      '/dev/sde':
        journal: '/dev/sdn'
      '/dev/sdf':
        journal: '/dev/sdn'
      '/dev/sdg':
        journal: '/dev/sdn'
      '/dev/sdh':
        journal: '/dev/sdn'
      '/dev/sdi':
        journal: '/dev/sdo'
      '/dev/sdj':
        journal: '/dev/sdo'
      '/dev/sdk':
        journal: '/dev/sdo'
      '/dev/sdl':
        journal: '/dev/sdo'
The above should be added under parameter_defaults, within OsdComputeExtraConfig (below ceph::profile::params::osd_journal_size: 5120), in the ~/custom-templates/ceph.yaml file. The complete file may be found in Appendix: Custom Resources and Parameters. It may also be found online; see Appendix G, GitHub Repository of Example Files, for more details.
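Once the overcloud is deployed, the resulting layout can be confirmed on an osd-compute node: each SSD should carry four journal partitions and each hard disk one data partition. The commands below are only a sketch; ceph-disk is assumed to be the provisioning tool in this release.

# On a deployed osd-compute node
lsblk /dev/sdm /dev/sdn /dev/sdo        # expect four journal partitions per SSD
ceph-disk list | grep -E 'journal|osd'  # maps each OSD data disk to its journal partition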
5.5. Overcloud Layout
This section creates a new custom template called layout.yaml to define the following properties:
- For each node type, how many of those nodes should be deployed?
- For each node, what specific server should be used?
- For each isolated network per node, which IP addresses should be assigned?
- Which isolated networks should the OsdCompute role use?
It also passes other parameters which, in prior versions of Red Hat OpenStack Platform, would be passed through the command line.
5.5.1. Configure the ports for both roles to use a pool of IPs
Create the file ~/custom-templates/layout.yaml and add the following to it:
resource_registry:
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
  OS::TripleO::OsdCompute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
  OS::TripleO::OsdCompute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
  OS::TripleO::OsdCompute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
  OS::TripleO::OsdCompute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
The above defines the ports that are used by the Controller role and OsdCompute role. Each role has four lines of very similar template includes, but the Controller lines override what is already defined by default for the Controller role, in order to get its IPs from a list defined later in this file. In contrast, the OsdCompute role has no default ports to override, as it is a new custom role; the ports that it uses are defined in this Heat template. In both cases, the IPs that each port receives are defined in a list under ControllerIPs or OsdComputeIPs, which will be added to the layout.yaml file in the next section.
5.5.2. Define the node counts and other parameters
This subsection defines for each node type, how many of which should be deployed. Under the resource_registry
of ~/custom-templates/layout.yaml, add the following:
parameter_defaults:
  NtpServer: 10.5.26.10
  ControllerCount: 3
  ComputeCount: 0
  CephStorageCount: 0
  OsdComputeCount: 3
In prior versions of Red Hat OpenStack Platform, the above parameters would have been passed on the command line; e.g. what was --ntp-server can now be passed in a Heat template as NtpServer, as in the example above.
The above indicates that three Controller nodes and three OsdCompute nodes are deployed. In Section 8.2, “Adding Compute/Red Hat Ceph Storage Nodes”, the value of OsdComputeCount is set to four to deploy an additional OsdCompute node. For this deployment, it is necessary to set the number of Compute nodes to zero because the CountDefault for the Compute role is 1, as can be verified by reading /usr/share/openstack-tripleo-heat-templates/roles_data.yaml (a quick check is sketched below). It is not necessary to set CephStorageCount to 0, because no CephStorage nodes are deployed by default. However, this parameter is included in this example to demonstrate how to add separate Ceph OSD servers that do not offer Nova compute services; similarly, ComputeCount could be changed to deploy Nova compute servers that do not offer Ceph OSD services. The way to mix hyper-converged Compute/OSD nodes with non-hyper-converged Compute or Ceph OSD nodes in a single deployment is to increase these counts and define the properties of either standard role.
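A quick way to confirm the default counts without reading the whole file is to print each role's name and CountDefault. This is only a convenience sketch and assumes PyYAML is available on the undercloud; reading roles_data.yaml directly works just as well.

python -c '
import yaml
for role in yaml.safe_load(open("/usr/share/openstack-tripleo-heat-templates/roles_data.yaml")):
    print("%s: %s" % (role["name"], role.get("CountDefault", 0)))
'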
5.5.3. Configure scheduler hints to control node placement and IP assignment
In Section 4.2, “Register and Introspect Hardware”, each node is assigned a capabilities profile of either controller-X or osd-compute-Y, where X or Y is the number of the physical node. These labels are used by the scheduler hints feature in Red Hat OpenStack Platform director to define specific node placement and ensure that a particular physical node is always deployed to the overcloud with the same assignment; e.g. to ensure that the server in rack U33 is always osd-compute-2.
This section contains an example of using node-specific placement and predictable IPs. These two aspects of this reference architecture are not tightly coupled with all hyper-converged deployments; it is possible to perform a hyper-converged deployment where director assigns the IPs from a pool in no specific order and each node gets a different hostname per deployment. Red Hat does recommend the use of network isolation, e.g. having the internal_api and storage_mgmt traffic on separate networks.
Append the following to layout.yaml under the parameter_defaults stanza to implement the predictable node placement described above.
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  CephStorageSchedulerHints:
    'capabilities:node': 'ceph-storage-%index%'
  OsdComputeSchedulerHints:
    'capabilities:node': 'osd-compute-%index%'
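The capabilities:node value must match the node capability set on each Ironic node in Section 4.2. The loop below is a sketch of one way to review those assignments from the undercloud; the openstack baremetal client syntax shown is assumed for this release.

# List the node capability assigned to each registered baremetal node
for node in $(openstack baremetal node list -f value -c Name); do
    echo -n "$node: "
    openstack baremetal node show $node -f value -c properties | grep -o "node:[a-z-]*-[0-9]*"
done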
Add the following to layout.yaml under the parameter_defaults stanza above to ensure that each node gets a specific IP.
  ControllerIPs:
    internal_api:
      - 192.168.2.200
      - 192.168.2.201
      - 192.168.2.202
    tenant:
      - 192.168.3.200
      - 192.168.3.201
      - 192.168.3.202
    storage:
      - 172.16.1.200
      - 172.16.1.201
      - 172.16.1.202
    storage_mgmt:
      - 172.16.2.200
      - 172.16.2.201
      - 172.16.2.202
  OsdComputeIPs:
    internal_api:
      - 192.168.2.203
      - 192.168.2.204
      - 192.168.2.205
    tenant:
      - 192.168.3.203
      - 192.168.3.204
      - 192.168.3.205
    storage:
      - 172.16.1.203
      - 172.16.1.204
      - 172.16.1.205
    storage_mgmt:
      - 172.16.2.203
      - 172.16.2.204
      - 172.16.2.205
The above specifies that the following predictable IP assignment will happen for each deploy:
controller-0 will have the IPs
- 192.168.2.200
- 192.168.3.200
- 172.16.1.200
- 172.16.2.200
controller-1 will have the IPs
- 192.168.2.201
- 192.168.3.201
- 172.16.1.201
- 172.16.2.201
and so on for the controller nodes and:
osd-compute-0 will have the IPs
- 192.168.2.203
- 192.168.3.203
- 172.16.1.203
- 172.16.2.203
osd-compute-1 will have the IPs
- 192.168.2.204
- 192.168.3.204
- 172.16.1.204
- 172.16.2.204
and so on for the osd-compute nodes.
For more information on assigning node specific identification, see section 7.1. Assigning Specific Node IDs of the Red Hat document Advanced Overcloud Customization.
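After the deployment described in Chapter 7, the placement and addressing can be spot-checked from the undercloud. The commands below are a sketch; they assume heat-admin is the SSH user on the overcloud nodes, and the control plane IP shown is a placeholder.

# Hostnames should follow the %stackname%-osd-compute-%index% pattern
openstack server list -f value -c Name -c Networks

# On a given node, confirm the predictable isolated-network IPs were applied
ssh heat-admin@192.168.1.23 "ip -4 addr show | grep -E '192.168.2|192.168.3|172.16.1|172.16.2'"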