Chapter 6. Configuring Advanced Customizations for the Overcloud
This chapter follows on from Chapter 5, Configuring Basic Overcloud Requirements. At this point, the director has registered the nodes and configured the necessary services for Overcloud creation. Now you can customize your Overcloud using the methods in this chapter.
The examples in this chapter are optional steps for configuring the Overcloud. These steps are only required to provide the Overcloud with additional functionality. Use only the steps that apply to the needs of your environment.
6.1. Understanding Heat Templates
The custom configurations in this chapter use Heat templates and environment files to define certain aspects of the Overcloud, such as network isolation and network interface configuration. This section provides a basic introduction to heat templates so that you can understand the structure and format of these templates in the context of the Red Hat OpenStack Platform director.
6.1.1. Heat Templates
The director uses Heat Orchestration Templates (HOT) as a template format for its Overcloud deployment plan. Templates in HOT format are mostly expressed in YAML format. The purpose of a template is to define and create a stack, which is a collection of resources that heat creates, and the configuration of the resources. Resources are objects in OpenStack and can include compute resources, network configuration, security groups, scaling rules, and custom resources.
The structure of a Heat template has three main sections:
- Parameters - These are settings passed to heat, which provide a way to customize a stack, and any default values for parameters without passed values. These are defined in the parameters section of a template.
- Resources - These are the specific objects to create and configure as part of a stack. OpenStack contains a set of core resources that span across all components. These are defined in the resources section of a template.
- Outputs - These are values passed from heat after the stack's creation. You can access these values either through the heat API or client tools. These are defined in the outputs section of a template.
Here is an example of a basic heat template:
heat_template_version: 2013-05-23

description: >
  A very basic Heat template.

parameters:
  key_name:
    type: string
    default: lars
    description: Name of an existing key pair to use for the instance

  flavor:
    type: string
    description: Instance type for the instance to be created
    default: m1.small

  image:
    type: string
    default: cirros
    description: ID or name of the image to use for the instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: My Cirros Instance
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }

outputs:
  instance_name:
    description: Get the instance's name
    value: { get_attr: [ my_instance, name ] }
This template uses the resource type OS::Nova::Server to create an instance called my_instance with a particular flavor, image, and key. The stack returns the value of instance_name, which in this case is My Cirros Instance.
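For example, assuming the template is saved as my_template.yaml, you could create a stack from it and then retrieve the output value with the heat client. The stack name my_stack is arbitrary:

$ heat stack-create -f my_template.yaml my_stack
$ heat output-show my_stack instance_name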
When Heat processes a template, it creates a stack for the template and a set of child stacks for resource templates. This creates a hierarchy of stacks that descend from the main stack you define with your template. You can view the stack hierarchy using the following command:
$ heat stack-list --show-nested
6.1.2. Environment Files
An environment file is a special type of template that provides customization for your Heat templates. This includes three key parts:
- Resource Registry - This section defines custom resource names, linked to other heat templates. This essentially provides a method to create custom resources that do not exist within the core resource collection. These are defined in the resource_registry section of an environment file.
- Parameters - These are common settings you apply to the top-level template's parameters. For example, if you have a template that deploys nested stacks, such as resource registry mappings, the parameters only apply to the top-level template and not the templates for the nested resources. Parameters are defined in the parameters section of an environment file.
- Parameter Defaults - These parameters modify the default values for parameters in all templates. For example, if you have a Heat template that deploys nested stacks, such as resource registry mappings, the parameter defaults apply to all templates: both the top-level template and those defining all nested resources. The parameter defaults are defined in the parameter_defaults section of an environment file.
Use parameter_defaults instead of parameters when creating custom environment files for your Overcloud, so that your parameters apply to all stack templates for the Overcloud.
An example of a basic environment file:
resource_registry:
  OS::Nova::Server::MyServer: myserver.yaml

parameter_defaults:
  NetworkName: my_network

parameters:
  MyIP: 192.168.0.1
For example, this environment file (my_env.yaml) might be included when creating a stack from a certain Heat template (my_template.yaml). The my_env.yaml file creates a new resource type called OS::Nova::Server::MyServer. The myserver.yaml file is a Heat template file that provides an implementation for this resource type that overrides any built-in ones. You can include the OS::Nova::Server::MyServer resource in your my_template.yaml file.
The MyIP parameter applies only to the main Heat template that deploys along with this environment file. In this example, it only applies to the parameters in my_template.yaml.
The NetworkName parameter applies to both the main Heat template (in this example, my_template.yaml) and the templates associated with resources included in the main template, such as the OS::Nova::Server::MyServer resource and its myserver.yaml template in this example.
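To illustrate, a minimal sketch of my_template.yaml that uses this custom resource might look like the following. The my_server resource name is hypothetical, and the MyIP parameter is declared so that the environment file's parameters section has something to set:

heat_template_version: 2013-05-23

parameters:
  MyIP:
    type: string

resources:
  my_server:
    type: OS::Nova::Server::MyServer

You would then include the environment file when creating the stack:

$ heat stack-create -f my_template.yaml -e my_env.yaml my_stack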
6.1.3. Core Overcloud Heat Templates
The director contains a core heat template collection for the Overcloud. This collection is stored in /usr/share/openstack-tripleo-heat-templates.
There are many heat templates and environment files in this collection. However, the main files and directories to note in this template collection are:
- overcloud.yaml - This is the main template file used to create the Overcloud environment.
- overcloud-resource-registry-puppet.yaml - This is the main environment file used to create the Overcloud environment. It provides a set of configurations for Puppet modules stored on the Overcloud image. After the director writes the Overcloud image to each node, heat starts the Puppet configuration for each node using the resources registered in this environment file.
- environments - A directory that contains example environment files to apply to your Overcloud deployment.
6.2. Isolating Networks
The director provides methods to configure isolated Overcloud networks. This means the Overcloud environment separates network traffic types into different networks, which in turn assigns network traffic to specific network interfaces or bonds. After configuring isolated networks, the director configures the OpenStack services to use the isolated networks. If no isolated networks are configured, all services run on the Provisioning network.
This example uses separate networks for all services:
- Network 1 - Provisioning
- Network 2 - Internal API
- Network 3 - Tenant Networks
- Network 4 - Storage
- Network 5 - Storage Management
- Network 6 - Management
- Network 7 - External and Floating IP (mapped after Overcloud creation)
In this example, each Overcloud node uses two network interfaces in a bond to serve networks in tagged VLANs. The following network assignments apply to this bond:
Table 6.1. Network Subnet and VLAN Assignments
| Network Type | Subnet | VLAN |
| Internal API | 172.16.0.0/24 | 201 |
| Tenant | 172.17.0.0/24 | 202 |
| Storage | 172.18.0.0/24 | 203 |
| Storage Management | 172.19.0.0/24 | 204 |
| Management | 172.20.0.0/24 | 205 |
| External / Floating IP | 10.1.1.0/24 | 100 |
For more examples of network configuration, see Section 3.2, “Planning Networks”.
6.2.1. Creating Custom Interface Templates
The Overcloud network configuration requires a set of network interface templates. You customize these templates to configure the node interfaces on a per role basis. These templates are standard Heat templates in YAML format (see Section 6.1.1, “Heat Templates”). The director contains a set of example templates to get you started:
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/multiple-nics - Directory containing templates for multiple NIC configuration using one NIC per role.
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-linux-bridge-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis, using a Linux bridge instead of an Open vSwitch bridge.
For this example, use the default bonded NIC example configuration as a basis. Copy the version located at /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans.
$ cp -r /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans ~/templates/nic-configs
This creates a local set of heat templates that define a bonded network interface configuration for each role. Each template contains the standard parameters, resources, and output sections. For this example, you would only edit the resources section. Each resources section begins with the following:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
This creates a request for the os-apply-config command and os-net-config subcommand to configure the network properties for a node. The network_config section contains your custom interface configuration arranged in a sequence based on type, which includes the following:
- interface

  Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3").

  - type: interface
    name: nic2

- vlan

  Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}

- ovs_bond

  Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth.

  - type: ovs_bond
    name: bond1
    members:
      - type: interface
        name: nic2
      - type: interface
        name: nic3

- ovs_bridge

  Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond, and vlan objects together.

  - type: ovs_bridge
    name: {get_input: bridge_name}
    members:
      - type: ovs_bond
        name: bond1
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}

- linux_bond

  Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Make sure to include the kernel-based bonding options in the bonding_options parameter. For more information on Linux bonding options, see 4.5.1. Bonding Module Directives in the Red Hat Enterprise Linux 7 Networking Guide.

  - type: linux_bond
    name: bond1
    members:
      - type: interface
        name: nic2
      - type: interface
        name: nic3
    bonding_options: "mode=802.3ad"

- linux_bridge

  Defines a Linux bridge, which connects multiple interface, linux_bond, and vlan objects together.

  - type: linux_bridge
    name: bridge1
    addresses:
      - ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    members:
      - type: interface
        name: nic1
        primary: true
  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    device: bridge1
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      - ip_netmask: 0.0.0.0/0
        default: true
        next_hop: {get_param: ExternalInterfaceDefaultRoute}
See Appendix E, Network Interface Parameters for a full list of parameters for each of these items.
For this example, you use the default bonded interface configuration. For example, the /home/stack/templates/nic-configs/controller.yaml template uses the following network_config:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: nic1
              use_dhcp: false
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: ovs_bridge
              name: {get_input: bridge_name}
              dns_servers: {get_param: DnsServers}
              members:
                - type: ovs_bond
                  name: bond1
                  ovs_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    - type: interface
                      name: nic2
                      primary: true
                    - type: interface
                      name: nic3
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: ExternalNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: ExternalIpSubnet}
                  routes:
                    - default: true
                      next_hop: {get_param: ExternalInterfaceDefaultRoute}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: InternalApiIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: StorageIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: StorageMgmtIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: TenantIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: ManagementNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: ManagementIpSubnet}

The Management network section is commented out in the network interface Heat templates. Uncomment this section to enable the Management network.
This template defines a bridge (usually the external bridge named br-ex) and creates a bonded interface called bond1 from two numbered interfaces: nic2 and nic3. The bridge also contains a number of tagged VLAN devices, which use bond1 as a parent device. The template also includes an interface that connects back to the director (nic1).
For more examples of network interface templates, see Appendix F, Network Interface Template Examples.
Note that many of these parameters use the get_param function. You define these parameters in an environment file you create specifically for your networks.
Unused interfaces can cause unwanted default routes and network loops. For example, your template might contain a network interface (nic4) that does not use any IP assignments for OpenStack services but still uses DHCP and/or a default route. To avoid network conflicts, remove any unused interfaces from ovs_bridge devices and disable the DHCP and default route settings:
- type: interface
  name: nic4
  use_dhcp: false
  defroute: false
6.2.2. Creating a Network Environment File
The network environment file is a Heat environment file that describes the Overcloud’s network environment and points to the network interface configuration templates from the previous section. You can define the subnets and VLANs for your network along with IP address ranges. You can then customize these values for the local environment.
The director contains a set of example environment files to get you started. Each environment file corresponds to the example network interface files in /usr/share/openstack-tripleo-heat-templates/network/config/:
- /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Example environment file for single NIC with VLANs configuration in the single-nic-vlans network interface directory. Environment files for disabling the External network (net-single-nic-with-vlans-no-external.yaml) or enabling IPv6 (net-single-nic-with-vlans-v6.yaml) are also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml - Example environment file for bonded NIC configuration in the bond-with-vlans network interface directory. Environment files for disabling the External network (net-bond-with-vlans-no-external.yaml) or enabling IPv6 (net-bond-with-vlans-v6.yaml) are also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-multiple-nics.yaml - Example environment file for a multiple NIC configuration in the multiple-nics network interface directory. An environment file for enabling IPv6 (net-multiple-nics-v6.yaml) is also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-linux-bridge-with-vlans.yaml - Example environment file for single NIC with VLANs configuration using a Linux bridge instead of an Open vSwitch bridge, which uses the single-nic-linux-bridge-vlans network interface directory.
This scenario uses a modified version of the /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml file. Copy this file to the stack user’s templates directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml /home/stack/templates/network-environment.yaml
Modify the environment file so that it contains parameters related to your environment, especially:
- The resource_registry section should contain modified links to the custom network interface templates for each node role. See Section 6.1.2, “Environment Files”.
- The parameter_defaults section should contain a list of parameters that define the network options for each network type. For a full reference of these options, see Appendix G, Network Environment Options.
For example:
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ManagementNetCidr: 172.20.0.0/24
  ExternalNetCidr: 10.1.1.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  ManagementAllocationPools: [{'start': '172.20.0.10', 'end': '172.20.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '10.1.1.10', 'end': '10.1.1.50'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.1.1.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  InternalApiNetworkVlanID: 201
  TenantNetworkVlanID: 202
  StorageNetworkVlanID: 203
  StorageMgmtNetworkVlanID: 204
  ManagementNetworkVlanID: 205
  ExternalNetworkVlanID: 100
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  BondInterfaceOvsOptions: "bond_mode=balance-slb"

This scenario defines options for each network. All network types use an individual VLAN and subnet for assigning IP addresses to hosts and virtual IPs. The VLAN IDs match the assignments in Table 6.1. In the example above, the allocation pool for the Internal API network starts at 172.16.0.10 and ends at 172.16.0.200, so static and virtual IPs are assigned from 172.16.0.10 upwards to 172.16.0.200 on VLAN 201.
The External network hosts the Horizon dashboard and Public API. If using the External network for both cloud administration and floating IPs, make sure there is room for a pool of IPs to use as floating IPs for VM instances. In this example, only IPs from 10.1.1.10 to 10.1.1.50 are assigned to the External network, which leaves IP addresses from 10.1.1.51 and above free to use for floating IP addresses. Alternatively, place the floating IP network on a separate VLAN and configure the Overcloud after creation to use it.
The BondInterfaceOvsOptions option provides options for the bonded interface using nic2 and nic3. For more information on bonding options, see Appendix H, Open vSwitch Bonding Options.
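For example, assuming your environment favors simple failover over load distribution, a minimal alternative is the active-backup mode:

parameter_defaults:
  BondInterfaceOvsOptions: "bond_mode=active-backup"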
Changing the network configuration after creating the Overcloud can cause configuration problems due to the availability of resources. For example, if a user changes a subnet range for a network in the network isolation templates, the reconfiguration might fail due to the subnet already being in use.
6.2.3. Assigning OpenStack Services to Isolated Networks
Each OpenStack service is assigned to a default network type in the resource registry. These services are then bound to IP addresses within the network type’s assigned network. Although the OpenStack services are divided among these networks, the number of actual physical networks might differ as defined in the network environment file. You can reassign OpenStack services to different network types by defining a new network map in your network environment file (/home/stack/templates/network-environment.yaml). The ServiceNetMap parameter determines the network types used for each service.
For example, you can reassign the Storage Management network services to the Storage Network by modifying the ServiceNetMap:
parameter_defaults:
  ...
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: ctlplane
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    NovaLibvirtNetwork: internal_api
    SwiftMgmtNetwork: storage # Change from 'storage_mgmt'
    SwiftProxyNetwork: storage
    SaharaApiNetwork: internal_api
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage # Change from 'storage_mgmt'
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
    NovaColdMigrationNetwork: ctlplane
  ...
Changing these parameters to storage places these services on the Storage network instead of the Storage Management network. This means you only need to define a set of parameter_defaults for the Storage network and not the Storage Management network.
6.2.4. Selecting Networks to Deploy
The settings in the resource_registry section of the environment file for networks and ports do not ordinarily need to be changed. However, you can change the list of networks if only a subset of the networks is desired.
When specifying custom networks and ports, do not include the environments/network-isolation.yaml file on the deployment command line. Instead, specify all the networks and ports in the network environment file.
In order to use isolated networks, the servers must have IP addresses on each network. You can use neutron in the Undercloud to manage IP addresses on the isolated networks, so you will need to enable neutron port creation for each network. You can override the resource registry in your environment file.
First, this is the complete set of networks and ports that can be deployed:
resource_registry:
  # This section is usually not modified, if in doubt stick to the defaults
  # TripleO overcloud networks
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: /usr/share/openstack-tripleo-heat-templates/network/management.yaml
  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::TenantVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Network::Ports::ManagementVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::SwiftStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::BlockStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
The first section of this file has the resource registry declaration for the OS::TripleO::Network::* resources. By default these resources point at a noop.yaml file that does not create any networks. By pointing these resources at the YAML files for each network, you enable the creation of these networks.
The next several sections create the IP addresses for the nodes in each role. The controller nodes have IPs on each network. The compute and storage nodes each have IPs on a subset of the networks.
To deploy without one of the pre-configured networks, disable the network definition and the corresponding port definition for the role. For example, all references to storage_mgmt.yaml could be replaced with noop.yaml:
resource_registry:
  # This section is usually not modified, if in doubt stick to the defaults
  # TripleO overcloud networks
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/noop.yaml
  OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::TenantVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
parameter_defaults:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: ctlplane # Admin connection for Undercloud
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage # Changed from storage_mgmt
    SwiftProxyNetwork: storage
    SaharaApiNetwork: internal_api
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage # Changed from storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
When you use noop.yaml, no networks or ports are created, so the services on the Storage Management network default to the Provisioning network. You can change this in the ServiceNetMap to move the Storage Management services to another network, such as the Storage network.
6.3. Controlling Node Placement
The default behavior for the director is to randomly select nodes for each role, usually based on their profile tag. However, the director provides the ability to define specific node placement. This is a useful method to:
- Assign specific node IDs, e.g. controller-0, controller-1, etc.
- Assign custom hostnames
- Assign specific IP addresses
- Assign specific Virtual IP addresses
Manually setting predictable IP addresses, virtual IP addresses, and ports for a network alleviates the need for allocation pools. However, it is recommended to retain allocation pools for each network to ease scaling new nodes. Make sure that any statically defined IP addresses fall outside the allocation pools. For more information on setting allocation pools, see Section 6.2.2, “Creating a Network Environment File”.
6.3.1. Assigning Specific Node IDs
This procedure assigns node IDs to specific nodes. Examples of node IDs include controller-0, controller-1, compute-0, compute-1, and so forth.
The first step is to assign the ID as a per-node capability that the Nova scheduler matches on deployment. For example:
$ ironic node-update <id> replace properties/capabilities='node:controller-0,boot_option:local'
This assigns the capability node:controller-0 to the node. Repeat this pattern using a unique continuous index, starting from 0, for all nodes. Make sure all nodes for a given role (Controller, Compute, or each of the storage roles) are tagged in the same way or else the Nova scheduler will not match the capabilities correctly.
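For example, a hypothetical sketch for tagging three Controller nodes in one pass, assuming NODE_UUIDS is a shell array holding their Ironic node UUIDs in the desired order:

$ for i in 0 1 2; do
    ironic node-update ${NODE_UUIDS[$i]} replace properties/capabilities="node:controller-$i,boot_option:local"
  done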
The next step is to create a Heat environment file (for example, scheduler_hints_env.yaml) that uses scheduler hints to match the capabilities for each node. For example:
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
To use these scheduler hints, include the scheduler_hints_env.yaml environment file with the overcloud deploy command during Overcloud creation.
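For example, assuming you saved the file to the stack user's templates directory, the deployment command might look like the following (other options elided):

$ openstack overcloud deploy --templates -e ~/templates/scheduler_hints_env.yaml [OTHER OPTIONS]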
The same approach is possible for each role via these parameters:
- ControllerSchedulerHints for Controller nodes.
- NovaComputeSchedulerHints for Compute nodes.
- BlockStorageSchedulerHints for Block Storage nodes.
- ObjectStorageSchedulerHints for Object Storage nodes.
- CephStorageSchedulerHints for Ceph Storage nodes.
Node placement takes priority over profile matching. To avoid scheduling failures, use the default baremetal flavor for deployment and not the flavors designed for profile matching (compute, control, etc). For example:
$ openstack overcloud deploy ... --control-flavor baremetal --compute-flavor baremetal ...
6.3.2. Assigning Custom Hostnames
In combination with the node ID configuration in Section 6.3.1, “Assigning Specific Node IDs”, the director can also assign a specific custom hostname to each node. This is useful when you need to define where a system is located (e.g. rack2-row12), match an inventory identifier, or other situations where a custom hostname is desired.
To customize node hostnames, use the HostnameMap parameter in an environment file, such as the scheduler_hints_env.yaml file from Section 6.3.1, “Assigning Specific Node IDs”. For example:
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  HostnameMap:
    overcloud-controller-0: overcloud-controller-prod-123-0
    overcloud-controller-1: overcloud-controller-prod-456-0
    overcloud-controller-2: overcloud-controller-prod-789-0
    overcloud-compute-0: overcloud-compute-prod-abc-0
Define the HostnameMap in the parameter_defaults section. Each mapping key is the original hostname that Heat defines using the HostnameFormat parameters (e.g. overcloud-controller-0), and its value is the desired custom hostname for that node (e.g. overcloud-controller-prod-123-0).
Using this method in combination with the node ID placement ensures each node has a custom hostname.
6.3.3. Assigning Predictable IPs
For further control over the resulting environment, the director can also assign specific IPs to Overcloud nodes on each network. Use the environments/ips-from-pool-all.yaml environment file in the core Heat template collection. Copy this file to the stack user's templates directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-all.yaml ~/templates/.
There are two major sections in the ips-from-pool-all.yaml file.
The first is a set of resource_registry references that override the defaults. These tell the director to use a specific IP for a given port on a node type. Modify each resource to use the absolute path of its respective template. For example:
OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external_from_pool.yaml
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
The default configuration sets all networks on all node types to use pre-assigned IPs. To allow a particular network or node type to use default IP assignment instead, simply remove the resource_registry entries related to that node type or network from the environment file.
The second section is parameter_defaults, where the actual IP addresses are assigned. Each node type has an associated parameter:
- ControllerIPs for Controller nodes.
- NovaComputeIPs for Compute nodes.
- CephStorageIPs for Ceph Storage nodes.
- BlockStorageIPs for Block Storage nodes.
- SwiftStorageIPs for Object Storage nodes.
Each parameter is a map of network names to a list of addresses. Each network type must have at least as many addresses as there will be nodes on that network. The director assigns addresses in order: the first node of each type receives the first address on each respective list, the second node receives the second address on each respective list, and so forth.
For example, if an Overcloud will contain three Ceph Storage nodes, the CephStorageIPs parameter might look like:
CephStorageIPs:
  storage:
    - 172.16.1.100
    - 172.16.1.101
    - 172.16.1.102
  storage_mgmt:
    - 172.16.3.100
    - 172.16.3.101
    - 172.16.3.102
The first Ceph Storage node receives two addresses: 172.16.1.100 and 172.16.3.100. The second receives 172.16.1.101 and 172.16.3.101, and the third receives 172.16.1.102 and 172.16.3.102. The same pattern applies to the other node types.
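As another illustration, a hypothetical NovaComputeIPs map for two Compute nodes might look like the following, with the addresses deliberately chosen outside the example allocation pools from Section 6.2.2:

NovaComputeIPs:
  internal_api:
    - 172.16.0.251
    - 172.16.0.252
  tenant:
    - 172.17.0.251
    - 172.17.0.252
  storage:
    - 172.18.0.251
    - 172.18.0.252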
Make sure the chosen IP addresses fall outside the allocation pools for each network defined in your network environment file (see Section 6.2.2, “Creating a Network Environment File”). For example, make sure the internal_api assignments fall outside of the InternalApiAllocationPools range. This avoids conflicts with any IPs chosen automatically. Likewise, make sure the IP assignments do not conflict with the VIP configuration, either for standard predictable VIP placement (see Section 6.3.4, “Assigning Predictable Virtual IPs”) or external load balancing (see Section 6.5, “Configuring External Load Balancing”).
To apply this configuration during a deployment, include the environment file with the openstack overcloud deploy command. If using network isolation (see Section 6.2, “Isolating Networks”), include this file after the network-isolation.yaml file. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/ips-from-pool-all.yaml [OTHER OPTIONS]
6.3.4. Assigning Predictable Virtual IPs
In addition to defining predictable IP addresses for each node, the director also provides a similar ability to define predictable Virtual IPs (VIPs) for clustered services. To accomplish this, edit the network environment file from Section 6.2.2, “Creating a Network Environment File” and add the VIP parameters in the parameter_defaults section:
parameter_defaults:
  ...
  # Predictable VIPs
  ControlFixedIPs: [{'ip_address':'192.168.201.101'}]
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]
  PublicVirtualFixedIPs: [{'ip_address':'10.1.1.9'}]
  StorageVirtualFixedIPs: [{'ip_address':'172.18.0.9'}]
  StorageMgmtVirtualFixedIPs: [{'ip_address':'172.19.0.9'}]
  RedisVirtualFixedIPs: [{'ip_address':'172.16.0.8'}]
Select these IPs from outside of their respective allocation pool ranges. For example, select an IP address for InternalApiVirtualFixedIPs that is not within the InternalApiAllocationPools range.
6.4. Configuring Containerized Compute Nodes
The director provides an option to integrate services from OpenStack’s containerization project (kolla) into the Overcloud’s Compute nodes. This includes creating Compute nodes that use Red Hat Enterprise Linux Atomic Host as a base operating system and individual containers to run different OpenStack services.
Containerized Compute nodes are a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. For more information on the support scope for features marked as technology previews, see https://access.redhat.com/support/offerings/techpreview/.
The director’s core Heat template collection includes environment files to aid the configuration of containerized Compute nodes. These files include:
- docker.yaml - The main environment file for configuring containerized Compute nodes.
- docker-network.yaml - The environment file for containerized Compute node networking without network isolation.
- docker-network-isolation.yaml - The environment file for containerized Compute nodes using network isolation.
6.4.1. Examining the Containerized Compute Environment File (docker.yaml)
The docker.yaml file is the main environment file for the containerized Compute node configuration. It includes the following entries in the resource_registry:
resource_registry:
  OS::TripleO::ComputePostDeployment: ../docker/compute-post.yaml
  OS::TripleO::NodeUserData: ../docker/firstboot/install_docker_agents.yaml
- OS::TripleO::NodeUserData

  Provides a Heat template that uses custom configuration on first boot. In this case, it installs the openstack-heat-docker-agents container on the Compute nodes when they first boot. This container provides a set of initialization scripts to configure the containerized Compute node and Heat hooks to communicate with the director.

- OS::TripleO::ComputePostDeployment

  Provides a Heat template with a set of post-configuration resources for Compute nodes. This includes a software configuration resource that provides a set of tags to Puppet:

  ComputePuppetConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      options:
        enable_hiera: True
        enable_facter: False
        tags: package,file,concat,file_line,nova_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2
      inputs:
        - name: tripleo::packages::enable_install
          type: Boolean
          default: True
      outputs:
        - name: result
      config:
        get_file: ../puppet/manifests/overcloud_compute.pp

  These tags define the Puppet modules to pass to the openstack-heat-docker-agents container.
The docker.yaml file includes a parameter called NovaImage that replaces the standard overcloud-full image with a different image (atomic-image) when provisioning Compute nodes. See Section 6.4.2, “Uploading the Atomic Host Image” for instructions on uploading this new image.
The docker.yaml file also includes a parameter_defaults section that defines the Docker registry and images to use for the Compute node services. You can modify this section to use a local registry instead of the default registry.access.redhat.com. See Section 6.4.3, “Using a Local Registry” for instructions on configuring a local registry.
6.4.2. Uploading the Atomic Host Image
The director requires a copy of the Cloud Image for Red Hat Enterprise Linux 7 Atomic Host imported into its image store as atomic-image. This is because the Compute node requires this image for the base OS during the provisioning phase of the Overcloud creation.
Download a copy of the Cloud Image from the Red Hat Enterprise Linux 7 Atomic Host product page (https://access.redhat.com/downloads/content/271/ver=/rhel---7/7.2.2-2/x86_64/product-software) and save it to the images subdirectory in the stack user’s home directory.
Once the image download completes, import the image into the director as the stack user.
$ glance image-create --name atomic-image --file ~/images/rhel-atomic-cloud-7.2-12.x86_64.qcow2 --disk-format qcow2 --container-format bare
This imports the image alongside the other Overcloud images.
$ glance image-list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 27b5bad7-f8b2-4dd8-9f69-32dfe84644cf | atomic-image           |
| 08c116c6-8913-427b-b5b0-b55c18a01888 | bm-deploy-kernel       |
| aec4c104-0146-437b-a10b-8ebc351067b9 | bm-deploy-ramdisk      |
| 9012ce83-4c63-4cd7-a976-0c972be747cd | overcloud-full         |
| 376e95df-c1c1-4f2a-b5f3-93f639eb9972 | overcloud-full-initrd  |
| 0b5773eb-4c64-4086-9298-7f28606b68af | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
6.4.3. Using a Local Registry
The default configuration uses Red Hat’s container registry for image downloads. However, as an optional step, you can use a local registry to conserve bandwidth during the Overcloud creation process.
You can use an existing local registry or install a new one. To install a new registry, use the instructions in Chapter 2. Get Started with Docker Formatted Container Images in Getting Started with Containers.
Pull the required images into your registry:
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-compute:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-data:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-libvirt:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-neutron-openvswitch-agent:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-vswitchd:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-db-server:latest
$ sudo docker pull registry.access.redhat.com/rhosp9_tech_preview/openstack-heat-docker-agents:latest
After pulling the images, tag them with the proper registry host:
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-compute:latest localhost:8787/registry.access.redhat.com/openstack-nova-compute:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-data:latest localhost:8787/registry.access.redhat.com/openstack-data:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-libvirt:latest localhost:8787/registry.access.redhat.com/openstack-nova-libvirt:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-neutron-openvswitch-agent:latest localhost:8787/registry.access.redhat.com/openstack-neutron-openvswitch-agent:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-vswitchd:latest localhost:8787/registry.access.redhat.com/openstack-openvswitch-vswitchd:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-db-server:latest localhost:8787/registry.access.redhat.com/openstack-openvswitch-db-server:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-heat-docker-agents:latest localhost:8787/registry.access.redhat.com/openstack-heat-docker-agents:latest
Push them to the registry:
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-nova-compute:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-data:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-nova-libvirt:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-neutron-openvswitch-agent:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-openvswitch-vswitchd:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-openvswitch-db-server:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-heat-docker-agents:latest
Create a copy of the main docker.yaml environment file in the templates subdirectory:
$ cp /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml ~/templates/.
Edit the file and modify the resource_registry to use absolute paths:
resource_registry:
  OS::TripleO::ComputePostDeployment: /usr/share/openstack-tripleo-heat-templates/docker/compute-post.yaml
  OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/docker/firstboot/install_docker_agents.yaml
Set DockerNamespace in parameter_defaults to your registry URL. Also set DockerNamespaceIsRegistry to true. For example:
parameter_defaults:
  DockerNamespace: registry.example.com:8787/registry.access.redhat.com
  DockerNamespaceIsRegistry: true
Your local registry now holds the required docker images, and the containerized Compute configuration is set to use that registry.
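As a quick sanity check, you can list the repositories the registry now serves, assuming it exposes the standard Docker Registry v2 API on port 8787:

$ curl http://localhost:8787/v2/_catalog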
6.4.4. Including Environment Files in the Overcloud Deployment
When running the Overcloud creation, include the main environment file (docker.yaml) and the network environment file (docker-network.yaml) for the containerized Compute nodes along with the openstack overcloud deploy command. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-network.yaml [OTHER OPTIONS] ...
The containerized Compute nodes also function in an Overcloud with network isolation. This also requires the main environment file along with the network isolation file (docker-network-isolation.yaml). Add these files before the network isolation files from Section 6.2, “Isolating Networks”. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml [OTHER OPTIONS] ...
The director creates an Overcloud with containerized Compute nodes.
6.5. Configuring External Load Balancing
An Overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node. It is also possible to use an external load balancer to perform this distribution. For example, an organization might use their own hardware-based load balancer to handle traffic distribution to the Controller nodes.
For more information about configuring external load balancing, see the dedicated External Load Balancing for the Overcloud guide for full instructions.
6.6. Configuring IPv6 Networking
By default, the Overcloud uses Internet Protocol version 4 (IPv4) to configure its service endpoints. However, the Overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. The director includes a set of environment files to help with creating IPv6-based Overclouds.
For more information about configuring IPv6 in the Overcloud, see the dedicated IPv6 Networking for the Overcloud guide for full instructions.
6.7. Configuring NFS Storage
This section describes configuring the Overcloud to use an NFS share. The installation and configuration process is based on the modification of an existing environment file in the core Heat template collection.
The core heat template collection contains a set of environment files in /usr/share/openstack-tripleo-heat-templates/environments/. These environment templates help with custom configuration of some of the supported features in a director-created Overcloud. This includes an environment file to help configure storage. This file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml. Copy this file to the stack user’s template directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
The environment file contains parameters to help configure different storage options for OpenStack's block and image storage components, cinder and glance. In this example, you configure the Overcloud to use an NFS share. Modify the following parameters:
- CinderEnableIscsiBackend - Enables the iSCSI backend. Set to false.
- CinderEnableRbdBackend - Enables the Ceph Storage backend. Set to false.
- CinderEnableNfsBackend - Enables the NFS backend. Set to true.
- NovaEnableRbdBackend - Enables Ceph Storage for Nova ephemeral storage. Set to false.
- GlanceBackend - Defines the back end to use for Glance. Set to file to use file-based storage for images. The Overcloud saves these files in a mounted NFS share for Glance.
- CinderNfsMountOptions - The NFS mount options for the volume storage.
- CinderNfsServers - The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
- GlanceFilePcmkManage - Enables Pacemaker to manage the share for image storage. If disabled, the Overcloud stores images in the Controller node's file system. Set to true.
- GlanceFilePcmkFstype - Defines the file system type that Pacemaker uses for image storage. Set to nfs.
- GlanceFilePcmkDevice - The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
- GlanceFilePcmkOptions - The NFS mount options for the image storage.
The environment file’s options should look similar to the following:
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'
  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.0.2.230:/cinder'
  GlanceFilePcmkManage: true
  GlanceFilePcmkFstype: 'nfs'
  GlanceFilePcmkDevice: '192.0.2.230:/glance'
  GlanceFilePcmkOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
Include the context=system_u:object_r:glance_var_lib_t:s0 option in the GlanceFilePcmkOptions parameter to allow glance to access the /var/lib directory. Without this SELinux context, glance fails to write to the mount point.
These parameters are integrated as part of the heat template collection. Setting them as shown creates two NFS mount points for cinder and glance to use.
Save this file for inclusion in the Overcloud creation.
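For example, include the copied file with the -e option when running the deployment command later in this guide:
$ openstack overcloud deploy --templates -e ~/templates/storage-environment.yaml [OTHER OPTIONS]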
6.8. Configuring Ceph Storage
The director provides two main methods for integrating Red Hat Ceph Storage into an Overcloud.
- Creating an Overcloud with its own Ceph Storage Cluster
- The director has the ability to create a Ceph Storage Cluster during the creation of the Overcloud. The director creates a set of Ceph Storage nodes that use Ceph OSDs to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud's Controller nodes. This means if an organization creates an Overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service.
- Integrating an Existing Ceph Storage Cluster into an Overcloud
- If you already have an existing Ceph Storage Cluster, you can integrate this during an Overcloud deployment. This means you manage and scale the cluster outside of the Overcloud configuration.
For more information about configuring Overcloud Ceph Storage, see the dedicated Red Hat Ceph Storage for the Overcloud guide for full instructions on both scenarios.
6.9. Configuring Third Party Storage
The director includes a couple of environment files to help configure third-party storage providers. This includes:
- Dell Storage Center
Deploys a single Dell Storage Center back end for the Block Storage (cinder) service.
The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.
See the Dell Storage Center Back End Guide for full configuration information.
- Dell EqualLogic
Deploys a single Dell EqualLogic back end for the Block Storage (cinder) service.
The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-eqlx-config.yaml.
See the Dell EqualLogic Back End Guide for full configuration information.
- NetApp Block Storage
Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.
The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml.
See the NetApp Block Storage Back End Guide for full configuration information.
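As with other environment files in this chapter, include the relevant file with the -e option when deploying. For example, a sketch using the NetApp file after you set its parameters for your environment:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml [...]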
6.10. Enabling SSL/TLS on the Overcloud
By default, the Overcloud uses unencrypted endpoints for its services. To enable SSL/TLS for the Public API endpoints, the Overcloud configuration requires an additional environment file.
This process only enables SSL/TLS for Public API endpoints. The Internal and Admin APIs remain unencrypted.
This process requires network isolation to define the endpoints for the Public API. See Section 6.2, “Isolating Networks” for instructions on network isolation.
Ensure you have a private key and certificate authority created. See Appendix A, SSL/TLS Certificate Configuration for more information on creating a valid SSL/TLS key and certificate authority file.
6.10.1. Enabling SSL/TLS
Copy the enable-tls.yaml environment file from the Heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml ~/templates/.
Edit this file and make the following changes for these parameters:
- SSLCertificate
- Copy the contents of the certificate file into the SSLCertificate parameter. For example:

parameter_defaults:
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----

Important: The certificate contents require the same indentation level for all new lines.
- SSLKey
- Copy the contents of the private key into the SSLKey parameter. For example:

parameter_defaults:
  ...
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO9hkJZnGP6qb6wtYUoy1bVP7
    ...
    ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4XpZUC7yhqPaU
    -----END RSA PRIVATE KEY-----

Important: The private key contents require the same indentation level for all new lines.
- EndpointMap
- The EndpointMap contains a mapping of the services using HTTPS and HTTP communication. If using DNS for SSL communication, leave this section with the defaults. However, if using an IP address for the SSL certificate's common name (see Appendix A, SSL/TLS Certificate Configuration), replace all instances of CLOUDNAME with IP_ADDRESS. Use the following command to accomplish this:

$ sed -i 's/CLOUDNAME/IP_ADDRESS/' ~/templates/enable-tls.yaml

Important: Do not substitute IP_ADDRESS or CLOUDNAME for actual values. Heat replaces these variables with the appropriate value during the Overcloud creation.
- OS::TripleO::NodeTLSData
- Change the resource path for OS::TripleO::NodeTLSData to an absolute path:

resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml
6.10.2. Injecting a Root Certificate
If the certificate signer is not in the default trust store on the Overcloud image, you must inject the certificate authority into the Overcloud image. Copy the inject-trust-anchor.yaml environment file from the heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/.
Edit this file and make the following changes for these parameters:
- SSLRootCertificate
- Copy the contents of the root certificate authority file into the SSLRootCertificate parameter. For example:

parameter_defaults:
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----

Important: The certificate authority contents require the same indentation level for all new lines.
- OS::TripleO::NodeTLSCAData
- Change the resource path for OS::TripleO::NodeTLSCAData to an absolute path:

resource_registry:
  OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml
6.10.3. Configuring DNS Endpoints
If using a DNS hostname to access the Overcloud through SSL/TLS, create a new environment file (~/templates/cloudname.yaml) to define the hostname of the Overcloud’s endpoints. Use the following parameters:
- CloudName
- The DNS hostname of the Overcloud endpoints.
- DnsServers
- A list of DNS servers to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API.
An example of the contents for this file:
parameter_defaults:
  CloudName: overcloud.example.com
  DnsServers: ["10.0.0.1"]
6.10.4. Adding Environment Files During Overcloud Creation
The deployment command (openstack overcloud deploy) in Chapter 7, Creating the Overcloud uses the -e option to add environment files. Add the environment files from this section in the following order:
- The environment file to enable SSL/TLS (enable-tls.yaml)
- The environment file to set the DNS hostname (cloudname.yaml)
- The environment file to inject the root certificate authority (inject-trust-anchor.yaml)
For example:
$ openstack overcloud deploy --templates [...] -e ~/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml
6.11. Configuring Base Parameters
The Overcloud’s Heat templates contain a set of base parameters that you configure prior to running the openstack overcloud deploy command. Include these base parameters in the parameter_defaults section of an environment file and include the environment file with the openstack overcloud deploy command.
For a full list of base parameters for the Overcloud, see Appendix D, Base Parameters.
Example 1: Configuring the Timezone
Add the following entry to an environment file to set your timezone to Japan:
parameter_defaults:
  TimeZone: 'Japan'
Example 2: Disabling Layer 3 High Availability (L3HA)
If you need to disable Layer 3 High Availability (L3HA) for OpenStack Networking, add this to an environment file:
parameter_defaults:
  NeutronL3HA: False
Example 3: Configuring the Telemetry Dispatcher
The OpenStack Telemetry (ceilometer) service includes a new component for time series data storage (gnocchi). In Red Hat OpenStack Platform, you can switch the default Ceilometer dispatcher to use this new component instead of the standard database. You accomplish this with the CeilometerMeterDispatcher parameter, which you set to either:
- database - Use the standard database for the Ceilometer dispatcher. This is the default option.
- gnocchi - Use the new time series database for the Ceilometer dispatcher.
To switch to the time series database, add the following to an environment file:
parameter_defaults:
  CeilometerMeterDispatcher: gnocchi
When installing Red Hat OpenStack Platform 9 with the CeilometerMeterDispatcher parameter set to gnocchi, restart the openstack-ceilometer-collector service after the installation completes to make sure that the collector communicates with gnocchi successfully. To restart the service, use the following command:
$ sudo pcs resource restart openstack-ceilometer-collector-clone
This step is necessary because keystone initialization, which registers the gnocchi user with keystone, runs at the very end of the installation process. As a result, the gnocchi user is not in the keystone catalog when the collector starts. Restarting the collector after the installation enables ceilometer to communicate with gnocchi and dispatch the data accordingly. The next release of Red Hat OpenStack Platform will solve this problem.
Example 4: Configuring RabbitMQ File Descriptor Limit
For certain configurations, you might need to increase the file descriptor limit for the RabbitMQ server. Set the RabbitFDLimit parameter to the limit you require:
parameter_defaults:
  RabbitFDLimit: 65536
6.12. Registering the Overcloud
The Overcloud provides a method to register nodes to either the Red Hat Content Delivery Network, a Red Hat Satellite 5 server, or a Red Hat Satellite 6 server. You can either achieve this through environment files or the command line.
6.12.1. Method 1: Command Line
The deployment command (openstack overcloud deploy) uses a set of options to define your registration details. The table in Section 7.1, “Setting Overcloud Parameters” contains these options and their descriptions. Include these options when running the deployment command in Chapter 7, Creating the Overcloud. For example:
# openstack overcloud deploy --templates --rhel-reg --reg-method satellite --reg-sat-url http://example.satellite.com --reg-org MyOrg --reg-activation-key MyKey --reg-force [...]
6.12.2. Method 2: Environment File
Copy the registration files from the Heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration ~/templates/.
Edit the ~/templates/rhel-registration/environment-rhel-registration.yaml file and modify the following values to suit your registration method and details:
- rhel_reg_method
- Choose the registration method. Either portal, satellite, or disable.
- rhel_reg_type
- The type of unit to register. Leave blank to register as a system.
- rhel_reg_auto_attach
- Automatically attach compatible subscriptions to this system. Set to true to enable.
- rhel_reg_service_level
- The service level to use for auto attachment.
- rhel_reg_release
- Use this parameter to set a release version for auto attachment. Leave blank to use the default from Red Hat Subscription Manager.
- rhel_reg_pool_id
- The subscription pool ID to use. Use this if not auto-attaching subscriptions.
- rhel_reg_sat_url
- The base URL of the Satellite server to register Overcloud nodes. Use the Satellite's HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The Overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.
- rhel_reg_server_url
- The hostname of the subscription service to use. The default is the Customer Portal Subscription Management service, subscription.rhn.redhat.com. If this option is not used, the system is registered with Customer Portal Subscription Management. The subscription server URL uses the form of https://hostname:port/prefix.
- rhel_reg_base_url
- The hostname of the content delivery server to use to receive updates. The default is https://cdn.redhat.com. Since Satellite 6 hosts its own content, the URL must be used for systems registered with Satellite 6. The base URL for content uses the form of https://hostname:port/prefix.
- rhel_reg_org
- The organization to use for registration.
- rhel_reg_environment
- The environment to use within the chosen organization.
- rhel_reg_repos
- A comma-separated list of repositories to enable. See Section 2.5, “Repository Requirements” for repositories to enable.
- rhel_reg_activation_key
- The activation key to use for registration.
- rhel_reg_user; rhel_reg_password
- The username and password for registration. If possible, use activation keys for registration.
- rhel_reg_machine_name
- The machine name. Leave this blank to use the hostname of the node.
- rhel_reg_force
- Set to true to force your registration options. For example, when re-registering nodes.
- rhel_reg_sat_repo
- The repository containing Red Hat Satellite 6's management tools, such as katello-agent. Check that the repository name corresponds to your Red Hat Satellite version and that the repository is synchronized on the Satellite server. For example, rhel-7-server-satellite-tools-6.2-rpms corresponds to Red Hat Satellite 6.2.
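For example, a Satellite 6 registration using an activation key might look similar to the following. The URL, organization, key, and repository values shown are illustrative placeholders for your own environment:

parameter_defaults:
  rhel_reg_method: "satellite"
  rhel_reg_sat_url: "http://satellite.example.com"
  rhel_reg_org: "MyOrg"
  rhel_reg_activation_key: "MyKey"
  rhel_reg_sat_repo: "rhel-7-server-satellite-tools-6.2-rpms"
  rhel_reg_force: "true"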
The deployment command (openstack overcloud deploy) in Chapter 7, Creating the Overcloud uses the -e option to add environment files. Add both ~/templates/rhel-registration/environment-rhel-registration.yaml and ~/templates/rhel-registration/rhel-registration-resource-registry.yaml. For example:
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml -e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml
Registration is set as the OS::TripleO::NodeExtraConfig Heat resource. This means you can use this resource only for registration. See Section 6.14, “Customizing Overcloud Pre-Configuration” for more information.
6.13. Customizing Configuration on First Boot
The director provides a mechanism to perform configuration on all nodes upon the initial creation of the Overcloud. The director achieves this through cloud-init, which you can call using the OS::TripleO::NodeUserData resource type.
In this example, you will update the nameserver with a custom IP address on all nodes. You must first create a basic heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node's resolv.conf with a specific nameserver. You can use the OS::Heat::MultipartMime resource type to send the configuration script.
heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: nameserver_config}

  nameserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "nameserver 192.168.1.1" >> /etc/resolv.conf

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
Next, create an environment file (/home/stack/templates/firstboot.yaml) that registers your heat template as the OS::TripleO::NodeUserData resource type.
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml
To add the first boot configuration, add the environment file to the stack when first creating the Overcloud. For example:
$ openstack overcloud deploy --templates -e /home/stack/templates/firstboot.yaml
The -e applies the environment file to the Overcloud stack.
This adds the configuration to all nodes when they are first created and boot for the first time. Subsequent inclusions of these templates, such as when updating the Overcloud stack, do not run these scripts.
You can register the OS::TripleO::NodeUserData resource to only one heat template. Subsequent usage overrides the heat template to use.
6.14. Customizing Overcloud Pre-Configuration
The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a set of resources to apply custom configuration after the first boot completes and before the core configuration begins. These resources include:
- OS::TripleO::ControllerExtraConfigPre
- Additional configuration applied to Controller nodes before the core Puppet configuration.
- OS::TripleO::ComputeExtraConfigPre
- Additional configuration applied to Compute nodes before the core Puppet configuration.
- OS::TripleO::CephStorageExtraConfigPre
- Additional configuration applied to CephStorage nodes before the core Puppet configuration.
- OS::TripleO::NodeExtraConfig
- Additional configuration applied to all node roles before the core Puppet configuration.
In this example, you first create a basic heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node’s resolv.conf with a variable nameserver.
heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  server:
    type: string
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  ExtraPreConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraPreDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ExtraPreConfig}
      server: {get_param: server}
      actions: ['CREATE','UPDATE']
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}

outputs:
  deploy_stdout:
    description: Deployment reference, used to trigger pre-deploy on changes
    value: {get_attr: [ExtraPreDeployment, deploy_stdout]}
In this example, the resources section contains the following:
- ExtraPreConfig
- This defines a software configuration. In this example, it defines a Bash script, and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- ExtraPreDeployment
- This executes the software configuration from the ExtraPreConfig resource. Note the following:
  - The server parameter is provided by the parent template and is mandatory in templates for this hook.
  - input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update, which ensures the resource reapplies on subsequent Overcloud updates.
Next, create an environment file (/home/stack/templates/pre_config.yaml) that registers your heat template as the OS::TripleO::NodeExtraConfig resource type.
resource_registry:
  OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1
To apply the configuration, add the environment file to the stack when creating or updating the Overcloud. For example:
$ openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml
This applies the configuration to all nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates.
You can register each of these resources to only one Heat template. Subsequent usage overrides the heat template to use per resource.
6.15. Customizing Overcloud Post-Configuration
A situation might occur where you have completed the creation of your Overcloud but want to add additional configuration, either on initial creation or on a subsequent update of the Overcloud. In this case, you use the OS::TripleO::NodeExtraConfigPost resource to apply configuration using the standard OS::Heat::SoftwareConfig types. This applies additional configuration after the main Overcloud configuration completes.
In this example, you first create a basic heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node’s resolv.conf with a variable nameserver.
heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  servers:
    type: json
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: ExtraConfig}
      servers: {get_param: servers}
      actions: ['CREATE','UPDATE']
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}
In this example, the resources section contains the following:
- ExtraConfig
- This defines a software configuration. In this example, it defines a Bash script, and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- ExtraDeployments
- This executes the software configuration from the ExtraConfig resource. Note the following:
  - The servers parameter is provided by the parent template and is mandatory in templates for this hook.
  - input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update, which ensures the resource reapplies on subsequent Overcloud updates.
Next, create an environment file (/home/stack/templates/post_config.yaml) that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type.
resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1
To apply the configuration, add the environment file to the stack when creating or updating the Overcloud. For example:
$ openstack overcloud deploy --templates -e /home/stack/templates/post_config.yaml
This applies the configuration to all nodes after the core configuration completes on either initial Overcloud creation or subsequent updates.
You can register the OS::TripleO::NodeExtraConfigPost to only one heat template. Subsequent usage overrides the heat template to use.
6.16. Customizing Puppet Configuration Data
The Heat template collection contains a set of parameters to pass extra configuration to certain node types. These parameters save the configuration as hieradata for the node’s Puppet configuration. These parameters are:
- ExtraConfig
- Configuration to add to all nodes.
- controllerExtraConfig
- Configuration to add to all Controller nodes.
- NovaComputeExtraConfig
- Configuration to add to all Compute nodes.
- BlockStorageExtraConfig
- Configuration to add to all Block Storage nodes.
- ObjectStorageExtraConfig
- Configuration to add to all Object Storage nodes.
- CephStorageExtraConfig
- Configuration to add to all Ceph Storage nodes.
To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese:
parameter_defaults:
  NovaComputeExtraConfig:
    nova::compute::reserved_host_memory: 1024
    nova::compute::vnc_keymap: ja
Include this environment file when running openstack overcloud deploy.
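For example, assuming you saved the parameters above in an environment file named ~/templates/extra_config.yaml (the filename is arbitrary):
$ openstack overcloud deploy --templates -e ~/templates/extra_config.yaml [OTHER OPTIONS]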
You can only define each parameter once. Subsequent usage overrides previous values.
6.17. Applying Custom Puppet Configuration
In certain circumstances, you might need to install and configure additional components on your Overcloud nodes. You can achieve this with a custom Puppet manifest that applies to nodes after the main configuration completes. As a basic example, you might intend to install motd on each node. To accomplish this, first create a Heat template (/home/stack/templates/custom_puppet_config.yaml) that launches the Puppet configuration.
heat_template_version: 2014-10-16

description: >
  Run Puppet extra configuration to set new MOTD

parameters:
  servers:
    type: json

resources:
  ExtraPuppetConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: motd.pp}
      group: puppet
      options:
        enable_hiera: True
        enable_facter: False

  ExtraPuppetDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: ExtraPuppetConfig}
      servers: {get_param: servers}
This includes the /home/stack/templates/motd.pp file within the template and passes it to nodes for configuration. The motd.pp file itself contains the Puppet classes to install and configure motd.
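For illustration only, a minimal motd.pp might use a single file resource similar to the following sketch; the message content here is an assumption, and you should use the Puppet classes appropriate to your environment:

# Illustrative only: write a static message of the day.
file { '/etc/motd':
  ensure  => file,
  mode    => '0644',
  content => "This node is part of a director-deployed Overcloud.\n",
}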
Next, create an environment file (/home/stack/templates/puppet_post_config.yaml) that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type.
resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml
And finally include this environment file when creating or updating the Overcloud stack:
$ openstack overcloud deploy --templates -e /home/stack/templates/puppet_post_config.yaml
This applies the configuration from motd.pp to all nodes in the Overcloud.
6.18. Using Customized Core Heat Templates
When creating the Overcloud, the director uses a core set of heat templates. You can copy the standard heat templates into a local directory and use these templates for creating your Overcloud.
Copy the heat template collection in /usr/share/openstack-tripleo-heat-templates to the stack user’s templates directory:
$ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates/my-overcloud
This creates a clone of the Overcloud Heat templates. When running openstack overcloud deploy, use the --templates option to specify your local template directory. This occurs later in this guide (see Chapter 7, Creating the Overcloud).
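For example, a sketch using the copy created above:
$ openstack overcloud deploy --templates ~/templates/my-overcloud [OTHER OPTIONS]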
The director uses the default template directory (/usr/share/openstack-tripleo-heat-templates) if you specify the --templates option without a directory.
Red Hat provides updates to the heat template collection over subsequent releases. Using a modified template collection can lead to a divergence between your custom copy and the original copy in /usr/share/openstack-tripleo-heat-templates. Red Hat recommends using the methods described in the preceding sections instead of modifying the heat template collection.
If creating a copy of the heat template collection, you should track changes to the templates using a version control system such as git.
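For example, one way to initialize tracking for the copied collection:
$ cd ~/templates/my-overcloud
$ git init
$ git add .
$ git commit -m "Initial copy of the core heat template collection"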