Chapter 5. Deploying Red Hat Hyperconverged Infrastructure for Cloud Using the Command-line Interface
As a technician, you can deploy and manage the Red Hat Hyperconverged Infrastructure for Cloud solution using the command-line interface.
5.1. Prerequisites
- Verify that all the requirements are met.
- Installing the undercloud
5.2. Preparing the Nodes Before Deploying the Overcloud Using the Command-line Interface
As a technician, before you can deploy the overcloud, the undercloud needs to understand the hardware being used in the environment.
The Red Hat OpenStack Platform director (RHOSP-d) is also known as the undercloud.
Prerequisites
- Verify that all the requirements are met.
- Installing the undercloud
5.2.1. Registering and Introspecting the Hardware
The Red Hat OpenStack Platform director (RHOSP-d) runs an introspection process on each node and collects data about the node’s hardware. This introspection data is stored on the RHOSP-d node, and is used for various purposes, such as benchmarking and root disk assignments.
Prerequisites
- Complete the software installation of the RHOSP-d node.
- The MAC addresses for the network interface cards (NICs).
- The IPMI user name and password.
Procedure
Do the following steps on the RHOSP-d node, as the stack user:
Create the osd-compute flavor:

[stack@director ~]$ openstack flavor create --id auto --ram 2048 --disk 40 --vcpus 2 osd-compute
[stack@director ~]$ openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="osd-compute" osd-compute
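Optionally, confirm that the flavor was created with the expected properties. This is a minimal verification sketch; the exact output format depends on the client version:

[stack@director ~]$ openstack flavor show osd-compute -c name -c ram -c disk -c vcpus -c properties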
Create and populate a host definition file for the Ironic service to manage the nodes.
Create the instackenv.json host definition file:

[stack@director ~]$ touch ~/instackenv.json
Add a definition block for each node between the nodes stanza square brackets ({"nodes": []}), using this template:

{
  "pm_password": "$IPMI_USER_PASSWORD",
  "name": "$NODE_NAME",
  "pm_user": "$IPMI_USER_NAME",
  "pm_addr": "$IPMI_IP_ADDR",
  "pm_type": "pxe_ipmitool",
  "mac": [
    "$NIC_MAC_ADDR"
  ],
  "arch": "x86_64",
  "capabilities": "node:$NODE_ROLE-INSTANCE_NUM,boot_option:local"
},

Replace…

- $IPMI_USER_PASSWORD with the IPMI password.
- $NODE_NAME with a descriptive name of the node. This is an optional parameter.
- $IPMI_USER_NAME with the IPMI user name that has access to power the node on or off.
- $IPMI_IP_ADDR with the IPMI IP address.
- $NIC_MAC_ADDR with the MAC address of the network card handling the PXE boot.
- $NODE_ROLE-INSTANCE_NUM with the node's role, along with a node number. This solution uses two roles: control and osd-compute.

Example

{
  "nodes": [
    {
      "pm_password": "AbC1234",
      "name": "m630_slot1",
      "pm_user": "ipmiadmin",
      "pm_addr": "10.19.143.61",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "c8:1f:66:65:33:41"
      ],
      "arch": "x86_64",
      "capabilities": "node:control-0,boot_option:local"
    },
    {
      "pm_password": "AbC1234",
      "name": "m630_slot2",
      "pm_user": "ipmiadmin",
      "pm_addr": "10.19.143.62",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "c8:1f:66:65:33:42"
      ],
      "arch": "x86_64",
      "capabilities": "node:osd-compute-0,boot_option:local"
    },
    ... Continue adding node definition blocks for each node in the initial deployment here.
  ]
}

Note: The osd-compute role is a custom role that is created in a later step. To predictably control node placement, add these nodes in order. For example:

[stack@director ~]$ grep capabilities ~/instackenv.json
    "capabilities": "node:control-0,boot_option:local"
    "capabilities": "node:control-1,boot_option:local"
    "capabilities": "node:control-2,boot_option:local"
    "capabilities": "node:osd-compute-0,boot_option:local"
    "capabilities": "node:osd-compute-1,boot_option:local"
    "capabilities": "node:osd-compute-2,boot_option:local"
Import the nodes into the Ironic database:
[stack@director ~]$ openstack baremetal import ~/instackenv.json
Verify that the openstack baremetal import command populated the Ironic database with all the nodes:

[stack@director ~]$ openstack baremetal node list
Assign the bare metal boot kernel and RAMdisk images to all the nodes:
[stack@director ~]$ openstack baremetal configure boot
To start the nodes, collect their hardware data, and store the information in the Ironic database, run the following command:
[stack@director ~]$ openstack baremetal introspection bulk start
Note: Bulk introspection can take a long time to complete, depending on the number of nodes imported. Setting the inspection_runbench value to false in the ~/undercloud.conf file speeds up the bulk introspection process, but the sysbench and fio benchmark data will not be collected, and that data can be useful to the RHOSP-d.

Verify that the introspection process completes without errors for all the nodes:
[stack@director ~]$ openstack baremetal introspection bulk status
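If the bulk status output reports an error for a node, you can check that node individually. This is a minimal sketch; replace the node name with the name or UUID used in instackenv.json:

[stack@director ~]$ openstack baremetal introspection status osd-compute-0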
Additional Resources
- For more information on assigning node identification parameters, see the Controlling Node Placement chapter of the RHOSP Advanced Overcloud Customization Guide.
5.2.2. Setting the Root Device
The Red Hat OpenStack Platform director (RHOSP-d) must identify the root disk to provision the nodes. By default, Ironic images the first block device, which is typically /dev/sda. Follow this procedure to change the root disk device according to the disk configuration of the Compute/OSD nodes.
This procedure will use the following Compute/OSD node disk configuration as an example:
- OSD: 12 x 1TB SAS disks presented as /dev/[sda, sdb, …, sdl] block devices
- OSD Journal: 3 x 400GB SATA SSD disks presented as /dev/[sdm, sdn, sdo] block devices
- Operating System: 2 x 250GB SAS disks configured in RAID 1 presented as the /dev/sdp block device
Because /dev/sda will be used by an OSD, Ironic must use /dev/sdp, the RAID 1 disk, as the root disk instead. During the hardware introspection process, Ironic stores the world wide name (WWN) and size of each block device.
Prerequisites
- Complete the hardware introspection procedure.
Procedure
Run one of the following commands on the RHOSP-d node.
Configure the root disk device to use the smallest root device:

[stack@director ~]$ openstack baremetal configure boot --root-device=smallest

or

Configure the root disk device to use the disk's by-path name:

[stack@director ~]$ openstack baremetal configure boot --root-device=disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0
Ironic applies this root device directive to all nodes within its database.
Verify the correct root disk device was set:
openstack baremetal introspection data save $NODE_NAME_or_UUID | jq .

Replace…

- $NODE_NAME_or_UUID with the host name or UUID of the node.
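The introspection data is large; to focus on the stored disk information, you can filter it with jq. This is a sketch and assumes the data follows the usual ironic-inspector layout, with block devices listed under inventory.disks:

openstack baremetal introspection data save $NODE_NAME_or_UUID | jq '.inventory.disks[] | {name, size, wwn}'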
Additional Resources
- For more information, see the Defining the Root Disk for Nodes section in the RHOSP Director Installation and Usage Guide.
5.2.3. Verifying that Ironic’s Disk Cleaning is Working
To verify that Ironic's disk cleaning feature is working, toggle the node's provisioning state, then observe whether the node enters the cleaning state.
Prerequisites
- Installing the undercloud.
Procedure
Set the node’s state to manage:
openstack baremetal node manage $NODE_NAME
Example
[stack@director ~]$ openstack baremetal node manage osdcompute-0
Set the node’s state to provide:
openstack baremetal node provide $NODE_NAME
Example
[stack@director ~]$ openstack baremetal node provide osdcompute-0
Check the node status:
openstack baremetal node list
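The node should pass through the cleaning state and return to available. A minimal sketch for watching the transition; the --fields option, where supported, trims the output to the relevant columns:

[stack@director ~]$ watch -n 10 'openstack baremetal node list --fields name provision_state'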
Additional Resources
- For more information, see the RHOSP-d Installation and Usage Guide.
5.3. Configuring a Container Image Source
As a technician, you can containerize the overcloud, but this first requires access to a registry with the required container images. Here you can find information on how to prepare the registry and the overcloud configuration to use container images for Red Hat OpenStack Platform.
There are several methods for configuring the overcloud to use a registry, based on the use case.
5.3.1. Registry Methods
Red Hat Hyperconverged Infrastructure for Cloud supports the following registry types. Choose one of the following methods:
- Remote Registry
  The overcloud pulls container images directly from registry.access.redhat.com. This method is the easiest for generating the initial configuration. However, each overcloud node pulls each image directly from the Red Hat Container Catalog, which can cause network congestion and slower deployment. In addition, all overcloud nodes require internet access to the Red Hat Container Catalog.
- Local Registry
  Create a local registry on the undercloud, synchronize the images from registry.access.redhat.com, and have the overcloud pull the container images from the undercloud. This method allows you to store a registry internally, which can speed up the deployment and decrease network congestion. However, the undercloud only acts as a basic registry and provides limited life cycle management for container images.
5.3.2. Including Additional Container Images for Red Hat OpenStack Platform Services
Red Hat Hyperconverged Infrastructure for Cloud uses additional services besides the core Red Hat OpenStack Platform services. These additional services require additional container images, and you enable them with their corresponding environment files. The environment files enable the composable containerized services in the overcloud, and the director needs to know which services are enabled in order to prepare their images.
Prerequisites
- A running undercloud.
Procedure
As the stack user, on the undercloud node, use the openstack overcloud container image prepare command to include the additional services.

Include the following environment file using the -e option:

- Ceph Storage Cluster: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml

Include the following --set options for Red Hat Ceph Storage:

- --set ceph_namespace - Defines the namespace for the Red Hat Ceph Storage container image.
- --set ceph_image - Defines the name of the Red Hat Ceph Storage container image. Use the image name rhceph-3-rhel7.
- --set ceph_tag - Defines the tag to use for the Red Hat Ceph Storage container image. When --tag-from-label is specified, the versioned tag is discovered starting from this tag.
Run the image prepare command:
Example
[stack@director ~]$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --tag-from-label {version}-{release} \
  ...

Note: These options are passed in addition to any other options that need to be passed to the openstack overcloud container image prepare command.
5.3.3. Using the Red Hat Registry as a Remote Registry Source
Red Hat hosts the overcloud container images on registry.access.redhat.com. Pulling the images from a remote registry is the simplest method because the registry is already set up, and all you need is the URL and namespace of the images you want to pull.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Access to the Internet.
Procedure
To pull the images directly from registry.access.redhat.com in the overcloud deployment, an environment file is required to specify the image parameters. The following command automatically creates this environment file:

(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/custom-templates/overcloud_images.yaml

Note: Use the -e option to include any environment files for optional services.

This creates an overcloud_images.yaml environment file, which contains the image locations, on the undercloud. Include this file with all future upgrade and deployment operations.
5.3.4. Using the Undercloud as a Local Registry
You can configure a local registry on the undercloud to store overcloud container images. This method involves the following:
- The director pulls each image from registry.access.redhat.com.
- The director creates the overcloud.
- During the overcloud creation, the nodes pull the relevant images from the undercloud.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Access to the Internet.
Procedure
Create a template to pull the images to the local registry:
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-images-file /home/stack/local_registry_images.yaml

Use the -e option to include any environment files for optional services.

Note: This version of the openstack overcloud container image prepare command targets the registry on registry.access.redhat.com to generate an image list. It uses different values than the openstack overcloud container image prepare command used in a later step.

This creates a file called local_registry_images.yaml with the container image information. Pull the images using the local_registry_images.yaml file:

(undercloud) [stack@director ~]$ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml \
  --verbose
NoteThe container images consume approximately 10 GB of disk space.
Find the namespace of the local images. The namespace uses the following pattern:
<REGISTRY IP ADDRESS>:8787/rhosp13
Use the IP address of the undercloud, which you previously set with the local_ip parameter in the undercloud.conf file. Alternatively, you can obtain the full namespace with the following command:

(undercloud) [stack@director ~]$ docker images | grep -v redhat.com | grep -o '^.*rhosp13' | sort -u
Create a template for using the images in the local registry on the undercloud. For example:

(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=192.168.24.1:8787/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/custom-templates/overcloud_images.yaml

- Use the -e option to include any environment files for optional services.
- If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, and --set ceph_tag.

Note: This version of the openstack overcloud container image prepare command targets the registry on the undercloud. It uses different values than the openstack overcloud container image prepare command used in a previous step.

This creates an overcloud_images.yaml environment file, which contains the image locations on the undercloud. Include this file with all future upgrade and deployment operations.
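Optionally, verify that the images were uploaded to the undercloud registry before pointing the overcloud at the new namespace. This is a sketch; it assumes the registry listens on the undercloud local_ip shown above and exposes the standard Docker Registry v2 API:

(undercloud) [stack@director ~]$ curl -s http://192.168.24.1:8787/v2/_catalog | jq .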
Next Steps
- Prepare the overcloud for an upgrade.
Additional Resources
- See Section 4.2 in the Red Hat OpenStack Platform Fast Forward Upgrades Guide for more information.
5.4. Defining the Overcloud Using the Command-line Interface
As a technician, you can create a customizable set of TripleO Heat templates which defines the overcloud.
5.4.1. Prerequisites
- Verify that all the requirements are met.
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
The high-level steps for defining the Red Hat Hyperconverged Infrastructure for Cloud overcloud are:
- Creating a Directory for Custom Templates
- Configuring the Overcloud Networks
- Creating the Controller and ComputeHCI Roles
- Configuring Red Hat Ceph Storage for the overcloud
- Configuring the Overcloud Node Profile Layouts
5.4.2. Creating a Directory for the Custom Templates
The installation of the Red Hat OpenStack Platform director (RHOSP-d) creates a set of TripleO Heat templates. These TripleO Heat templates are located in the /usr/share/openstack-tripleo-heat-templates/ directory. Red Hat recommends copying these templates before customizing them.
Prerequisites
- Deploy the undercloud.
Procedure
Do the following step on the command-line interface of the RHOSP-d node.
Create new directories for the custom templates:
[stack@director ~]$ mkdir -p ~/custom-templates/nic-configs
5.4.3. Configuring the Overcloud Networks
This procedure customizes the network configuration files for isolated networks and assigns them to the Red Hat OpenStack Platform (RHOSP) services.
Prerequisites
- Verify that all the network requirements are met.
Procedure
Do the following steps on the RHOSP director node, as the stack user.
Choose the Compute NIC configuration template applicable to the environment:
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-linux-bridge-vlans/compute.yaml
- /usr/share/openstack-tripleo-heat-templates/network/config/multiple-nics/compute.yaml
- /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/compute.yaml

Note: See the README.md in each template's respective directory for details about the NIC configuration.
Create a new directory within the ~/custom-templates/ directory, if it does not already exist:

[stack@director ~]$ mkdir -p ~/custom-templates/nic-configs
Copy the chosen template to the ~/custom-templates/nic-configs/ directory and rename it to compute-hci.yaml:

Example
[stack@director ~]$ cp /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/compute.yaml ~/custom-templates/nic-configs/compute-hci.yaml
Add the following definition, if it does not already exist, to the parameters: section of the ~/custom-templates/nic-configs/compute-hci.yaml file:

  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number

Map StorageMgmtNetworkVlanID to a specific NIC on each node. For example, if you chose to trunk VLANs to a single NIC (single-nic-vlans/compute.yaml), then add the following entry to the network_config: section of ~/custom-templates/nic-configs/compute-hci.yaml:

  - type: vlan
    device: em2
    mtu: 9000
    use_dhcp: false
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}

Important: Red Hat recommends setting the mtu to 9000 when mapping a NIC to StorageMgmtNetworkVlanID. This MTU setting provides a measurable performance improvement for Red Hat Ceph Storage. For more details, see Configuring Jumbo Frames in the Red Hat OpenStack Platform Advanced Overcloud Customization guide. A sketch for verifying jumbo frames end to end is shown at the end of this procedure.

Create a new file in the custom templates directory:
[stack@director ~]$ touch ~/custom-templates/network.yaml
Open and edit the network.yaml file.

Add the resource_registry section:

resource_registry:

Add the following two lines under the resource_registry: section:

  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/controller-nics.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/compute-nics.yaml
These two lines point the RHOSP services to the network configurations of the Controller/Monitor and Compute/OSD nodes respectively.
Add the parameter_defaults section:

parameter_defaults:

Add the following default parameters for the Neutron bridge mappings for the tenant network:

  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-tenant'
  NeutronNetworkType: 'vxlan'
  NeutronTunnelType: 'vxlan'
  NeutronExternalNetworkBridge: "''"
This defines the bridge mappings assigned to the logical networks and enables the tenants to use vxlan.

The two TripleO Heat templates referenced in step 2b require parameters to define each network. Under the parameter_defaults section, add the following lines:

  # Internal API used for private OpenStack Traffic
  InternalApiNetCidr: $IP_ADDR_CIDR
  InternalApiAllocationPools: [{'start': '$IP_ADDR_START', 'end': '$IP_ADDR_END'}]
  InternalApiNetworkVlanID: $VLAN_ID

  # Tenant Network Traffic - will be used for VXLAN over VLAN
  TenantNetCidr: $IP_ADDR_CIDR
  TenantAllocationPools: [{'start': '$IP_ADDR_START', 'end': '$IP_ADDR_END'}]
  TenantNetworkVlanID: $VLAN_ID

  # Public Storage Access - Nova/Glance <--> Ceph
  StorageNetCidr: $IP_ADDR_CIDR
  StorageAllocationPools: [{'start': '$IP_ADDR_START', 'end': '$IP_ADDR_END'}]
  StorageNetworkVlanID: $VLAN_ID

  # Private Storage Access - Ceph cluster/replication
  StorageMgmtNetCidr: $IP_ADDR_CIDR
  StorageMgmtAllocationPools: [{'start': '$IP_ADDR_START', 'end': '$IP_ADDR_END'}]
  StorageMgmtNetworkVlanID: $VLAN_ID

  # External Networking Access - Public API Access
  ExternalNetCidr: $IP_ADDR_CIDR
  # Leave room for floating IPs in the External allocation pool (if required)
  ExternalAllocationPools: [{'start': '$IP_ADDR_START', 'end': '$IP_ADDR_END'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: $IP_ADDRESS

  # Gateway router for the provisioning network (or undercloud IP)
  ControlPlaneDefaultRoute: $IP_ADDRESS
  # The IP address of the EC2 metadata server, this is typically the IP of the undercloud
  EC2MetadataIp: $IP_ADDRESS
  # Define the DNS servers (maximum 2) for the Overcloud nodes
  DnsServers: ["$DNS_SERVER_IP","$DNS_SERVER_IP"]

Replace…

- $IP_ADDR_CIDR with the appropriate IP address and net mask (CIDR).
- $IP_ADDR_START with the appropriate starting IP address.
- $IP_ADDR_END with the appropriate ending IP address.
- $IP_ADDRESS with the appropriate IP address.
- $VLAN_ID with the appropriate VLAN identification number for the corresponding network.
- $DNS_SERVER_IP with the appropriate IP addresses for defining two DNS servers, separated by a comma (,).

See the appendix for an example network.yaml file.
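If you set mtu: 9000 on the Storage Management VLAN as recommended above, verify after the overcloud is deployed that jumbo frames actually pass end to end between two nodes; every switch port in the path must also allow the larger frame size. A minimal sketch, assuming 172.16.2.201 is another node's Storage Management IP (8972 bytes of ICMP payload plus headers fills a 9000-byte frame, and -M do forbids fragmentation):

$ ping -M do -s 8972 -c 3 172.16.2.201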
Additional Resources
- For more information on Isolating Networks, see the Red Hat OpenStack Platform Advanced Overcloud Customization Guide.
5.4.4. Creating the Controller and ComputeHCI Roles
The overcloud has five default roles: Controller, Compute, BlockStorage, ObjectStorage, and CephStorage. Each role contains a list of services. You can mix these services to create a custom deployable role.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following step on the Red Hat OpenStack Platform director node, as the stack user.
Generate a custom roles_data_custom.yaml file that includes the Controller and ComputeHCI roles:

[stack@director ~]$ openstack overcloud roles generate -o ~/custom-templates/roles_data_custom.yaml Controller ComputeHCI
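Optionally, confirm that the ComputeHCI role is available to the roles generate command (where the client supports listing roles) and that the generated file contains both roles. A quick sketch:

[stack@director ~]$ openstack overcloud roles list
[stack@director ~]$ grep -E '^- name: (Controller|ComputeHCI)' ~/custom-templates/roles_data_custom.yaml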
Additional Resources
- See Section 5.6, “Deploying the Overcloud Using the Command-line Interface” on using this custom role.
5.4.5. Setting the Red Hat Ceph Storage Parameters
This procedure defines what Red Hat Ceph Storage (RHCS) OSD parameters to use.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Open the ~/custom-templates/ceph.yaml file.

Add all the Ceph OSD parameters to the parameter_defaults section:

Example

parameter_defaults:
  CephPoolDefaultSize: 3
  CephPoolDefaultPgNum: $NUM
  CephAnsibleDisksConfig:
    osd_scenario: non-collocated
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
    dedicated_devices:
      - /dev/sdh
      - /dev/sdh
      - /dev/sdh
      - /dev/sdh
      - /dev/sdi
      - /dev/sdi
      - /dev/sdi
  CephAnsibleExtraConfig:
    osd_scenario: non-collocated
    osd_objectstore: filestore
    ceph_osd_docker_memory_limit: 5g
    ceph_osd_docker_cpu_limit: 1
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1

Replace…

- $NUM with the calculated values from the Ceph PG calculator.
For this step, the following Compute/OSD node disk configuration is used as an example:

- OSD: 7 x 1TB SAS disks presented as /dev/[sda, sdb, …, sdg] block devices
- OSD Journal: 2 x 400GB SATA SSD disks presented as /dev/[sdh, sdi] block devices
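The $NUM placeholder comes from the Ceph PG calculator. As a rough sketch of the arithmetic behind it (an approximation only; the calculator distributes the total across the pools you plan to create and rounds to powers of two), a commonly used rule of thumb for the cluster total is (number of OSDs x 100) / replica count. For three Compute/OSD nodes with 7 OSDs each and CephPoolDefaultSize: 3:

$ echo $(( 3 * 7 * 100 / 3 ))    # 700, which the calculator rounds to a nearby power of two, for example 512 or 1024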
Additional Resources
- For more details on tuning Ceph OSD parameters, see the Red Hat Ceph Storage Storage Strategies Guide.
5.4.6. Configuring the Overcloud Nodes Layout
The overcloud node layout defines how many nodes of each type to deploy, which pool of IP addresses to assign to them, and other parameters.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Create the layout.yaml file in the custom templates directory:

[stack@director ~]$ touch ~/custom-templates/layout.yaml
Open the layout.yaml file for editing.

Add the resource registry section by adding the following line:
resource_registry:
Add the following lines under the resource_registry section for configuring the Controller and ComputeHCI roles to use a pool of IP addresses:

  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
Add a new section for the parameter defaults called parameter_defaults and include the following parameters underneath this section:

parameter_defaults:
  NtpServer: $NTP_IP_ADDR
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHCIHostnameFormat: 'compute-hci-%index%'
  ControllerCount: 3
  ComputeHCICount: 3
  OvercloudComputeFlavor: compute
  OvercloudComputeHCIFlavor: osd-compute

Replace…

- $NTP_IP_ADDR with the IP address of the NTP source. Time synchronization is very important!

Example

parameter_defaults:
  NtpServer: 10.5.26.10
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHCIHostnameFormat: 'compute-hci-%index%'
  ControllerCount: 3
  ComputeHCICount: 3
  OvercloudComputeFlavor: compute
  OvercloudComputeHCIFlavor: osd-compute
The value of 3 for the ControllerCount and ComputeHCICount parameters means three Controller/Monitor nodes and three Compute/OSD nodes will be deployed.
Under the parameter_defaults section, add two scheduler hints, one called ControllerSchedulerHints and the other called ComputeHCISchedulerHints. Under each scheduler hint, add the node name format for predictable node placement, as follows:

  ControllerSchedulerHints:
    'capabilities:node': 'control-%index%'
  ComputeHCISchedulerHints:
    'capabilities:node': 'osd-compute-%index%'

Under the parameter_defaults section, add the required IP addresses for each node profile, for example:

Example

  ControllerIPs:
    internal_api:
      - 192.168.2.200
      - 192.168.2.201
      - 192.168.2.202
    tenant:
      - 192.168.3.200
      - 192.168.3.201
      - 192.168.3.202
    storage:
      - 172.16.1.200
      - 172.16.1.201
      - 172.16.1.202
    storage_mgmt:
      - 172.16.2.200
      - 172.16.2.201
      - 172.16.2.202
  ComputeHCIIPs:
    internal_api:
      - 192.168.2.203
      - 192.168.2.204
      - 192.168.2.205
    tenant:
      - 192.168.3.203
      - 192.168.3.204
      - 192.168.3.205
    storage:
      - 172.16.1.203
      - 172.16.1.204
      - 172.16.1.205
    storage_mgmt:
      - 172.16.2.203
      - 172.16.2.204
      - 172.16.2.205

From this example, node control-0 would have the following IP addresses: 192.168.2.200, 192.168.3.200, 172.16.1.200, and 172.16.2.200.
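Before moving on, it can save a failed deployment later to confirm that the custom templates are at least syntactically valid YAML. A minimal sketch using the Python interpreter already present on the director node (the file list is an example; adjust it to the templates you created):

[stack@director ~]$ for f in ~/custom-templates/{network,ceph,layout}.yaml; do
    python -c "import yaml; yaml.safe_load(open('$f'))" && echo "$f: OK"
done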
5.4.7. Additional Resources
- See the Red Hat OpenStack Platform Advanced Overcloud Customization Guide for more information.
5.5. Isolating Resources and Tuning the Overcloud Using the Command-line Interface
Resource contention between Red Hat OpenStack Platform (RHOSP) and Red Hat Ceph Storage (RHCS) might cause a degradation of either service. Therefore, isolating system resources is important in the Red Hat Hyperconverged Infrastructure for Cloud solution.
Likewise, tuning the overcloud is equally important for a more predictable performance outcome for a given workload.
To isolate resources and tune the overcloud, you will continue to refine the custom templates created in previous chapters.
5.5.1. Prerequisites
- Build the overcloud foundation by defining the overcloud.
5.5.2. Reserving CPU and Memory Resources for Hyperconverged Nodes
By default, the Nova Compute service parameters do not take into account the colocation of Ceph OSD services on the same node. Hyperconverged nodes need to be tuned in order to maintain stability and maximize the number of possible instances. Using a plan environment file allows you to set resource constraints for the Nova Compute service on hyperconverged nodes. Plan environment files define workflows, and the Red Hat OpenStack Platform director (RHOSP-d) executes the plan file with the OpenStack Workflow (Mistral) service.
The RHOSP-d also provides a default plan environment file specifically for configuring resource constraints on hyperconverged nodes:
/usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml
Using the -p parameter invokes a plan environment file during the overcloud deployment.
This plan environment file will direct the OpenStack Workflow to:
- Retrieve hardware introspection data.
- Calculate optimal CPU and memory constraints for Compute on hyper-converged nodes based on that data.
- Autogenerate the necessary parameters to configure those constraints.
In the plan-environment-derived-params.yaml plan environment file, the hci_profile_config option defines several CPU and memory allocation workload profiles. The hci_profile parameter sets which workload profile is enabled.
Here is the default hci_profile:
Default Example
hci_profile: default
hci_profile_config:
default:
average_guest_memory_size_in_mb: 2048
average_guest_cpu_utilization_percentage: 50
many_small_vms:
average_guest_memory_size_in_mb: 1024
average_guest_cpu_utilization_percentage: 20
few_large_vms:
average_guest_memory_size_in_mb: 4096
average_guest_cpu_utilization_percentage: 80
nfv_default:
average_guest_memory_size_in_mb: 8192
average_guest_cpu_utilization_percentage: 90
The default profile in the above example assumes that the average guest uses 2 GB of memory and 50% of its CPUs.
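For reference, the workflow derives the Nova reserved memory and CPU allocation ratio from the introspected hardware and the chosen profile. The following is only a sketch of that arithmetic with assumed constants (3 GB of memory and one core reserved per OSD, 0.5 GB of overhead per guest); see Appendix F, Changing Nova Reserved Memory and CPU Allocation Manually, for the authoritative calculation. The example node size (256 GB of RAM, 56 cores, 12 OSDs) is hypothetical:

awk 'BEGIN {
  mem_gb = 256; cores = 56; osds = 12;            # example node, not taken from introspection
  guest_mem_gb = 2; guest_cpu_util = 0.5;         # values from the default workload profile
  left_over = mem_gb - 3 * osds;                  # memory left after reserving 3 GB per OSD
  guests = int(left_over / (guest_mem_gb + 0.5)); # guests that fit, with 0.5 GB overhead each
  printf "reserved_host_memory_mb ~ %d\n", 1024 * (3 * osds + guests * 0.5);
  printf "cpu_allocation_ratio    ~ %.2f\n", ((cores - osds) / guest_cpu_util) / cores;
}'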
You can create a custom workload profile for the environment by adding a new profile to the hci_profile_config section. You can enable this custom workload profile by setting the hci_profile parameter to the profile’s name.
Custom Example
hci_profile: my_workload
hci_profile_config:
default:
average_guest_memory_size_in_mb: 2048
average_guest_cpu_utilization_percentage: 50
many_small_vms:
average_guest_memory_size_in_mb: 1024
average_guest_cpu_utilization_percentage: 20
few_large_vms:
average_guest_memory_size_in_mb: 4096
average_guest_cpu_utilization_percentage: 80
nfv_default:
average_guest_memory_size_in_mb: 8192
average_guest_cpu_utilization_percentage: 90
my_workload:
average_guest_memory_size_in_mb: 131072
average_guest_cpu_utilization_percentage: 100
The my_workload profile assumes that the average guest will use 128 GB of RAM and 100% of the CPUs allocated to the guest.
Additional Resources
- See the Red Hat OpenStack Platform Hyper-converged Infrastructure Guide for more information.
5.5.3. Applying Resource Isolation to the Ceph OSDs
Limiting the amount of CPU and memory for each Ceph OSD is important so that resources remain free for the Nova Compute processes. The ceph_osd_docker_memory_limit option corresponds to the docker run --memory option, and the ceph_osd_docker_cpu_limit option corresponds to the docker run --cpu-quota option.
Example
CephAnsibleExtraConfig:
ceph_osd_docker_memory_limit: 5g
ceph_osd_docker_cpu_limit: 1
Because these settings impose a hard limit per OSD, it is possible for an OSD to run out of memory. Ceph OSDs consume extra memory during backfilling, so test the deployment with the expected workload. Include OSD removal in your testing to ensure the default settings of 5 GB of memory and 1 vCPU are appropriate for this deployment.
With hyper-converged nodes running both the Nova Compute and Ceph OSD services, determinism is improved by pinning Ceph to one of the available NUMA nodes, specifically the NUMA node that handles the network IRQ and the storage controller IRQ. Building on the previous example, adding the ceph_osd_docker_cpuset_cpus and ceph_osd_docker_cpuset_mems options sets the NUMA affinity for the OSD processes.
Example
CephAnsibleExtraConfig:
ceph_osd_docker_memory_limit: 5g
ceph_osd_docker_cpu_limit: 1
ceph_osd_docker_cpuset_cpus: "0,2,4,6,8,10,12,14"
ceph_osd_docker_cpuset_mems: "0"
The ceph_osd_docker_cpuset_cpus option corresponds to the docker run --cpuset-cpus option, and the ceph_osd_docker_cpuset_mems option corresponds to the docker run --cpuset-mems option. These options are applied when the OSDs are started.
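To decide which CPUs and memory node to list in these options, check which NUMA node owns the storage NIC and which CPUs belong to that node. A minimal sketch, run on a Compute/OSD node; em2 is the assumed storage interface from the earlier NIC example, and a value of -1 means the device reports no NUMA locality:

$ cat /sys/class/net/em2/device/numa_node
$ lscpu | grep -i numa
$ numactl --hardware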
CephAnsibleExtraConfig is not additive. This means that you may only use it once in your Heat environment files.
5.5.4. Tuning the Backfilling and Recovery Operations for Ceph
Ceph uses a backfilling and recovery process to rebalance the storage cluster whenever an OSD is removed. This is done to keep multiple copies of the data, according to the placement group policy. These two operations use system resources, so when a Ceph storage cluster is under load, its performance drops as Ceph diverts resources to backfill and recovery. To maintain acceptable performance when an OSD is removed, reduce the priority of the backfill and recovery operations. The trade-off for reducing the priority is that there are fewer data replicas for a longer period of time, which puts the data at slightly greater risk.
The three variables to modify are:
- osd_recovery_max_active - The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster.
- osd_max_backfills - The maximum number of backfills allowed to or from a single OSD.
- osd_recovery_op_priority - The priority set for recovery operations. It is relative to osd client op priority.
Because the osd_recovery_max_active and osd_max_backfills parameters already default to the correct values (3 and 1 respectively), there is no need to add them to the ceph.yaml file. Add them to the ceph.yaml file only if you want to override those defaults.
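If you do override them, the values go under CephConfigOverrides in the same ceph.yaml file used in Section 5.4.5, for example (the values shown here are the defaults; adjust them to your own tuning):

parameter_defaults:
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1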
Additional Resources
- For more information on the OSD configurable parameters, see the Red Hat Ceph Storage Configuration Guide.
5.5.5. Additional Resources
- See Table 5.2 Deployment Parameters in the Red Hat OpenStack Platform 10 Director Installation and Usage Guide for more information on the overcloud parameters.
- See Customizing Virtual Machine Settings for more information.
- See Section 5.6.4, "Running the Deploy Command" for details on running the openstack overcloud deploy command.
5.6. Deploying the Overcloud Using the Command-line Interface
As a technician, you can deploy the overcloud nodes so the Nova Compute and the Ceph OSD services are colocated on the same node.
5.6.1. Prerequisites
5.6.2. Verifying the Available Nodes for Ironic
Before deploying the overcloud nodes, verify that the nodes are powered off and available. The nodes cannot be in maintenance mode.
Procedure
Do the following step on the Red Hat OpenStack Platform director node, as the stack user.
Run the following command to verify all nodes are powered off, and available:
[stack@director ~]$ openstack baremetal node list
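A quick sketch for narrowing the output to the columns that matter here, where the client supports the --fields option; every node should show power off, the available provision state, and maintenance set to False:

[stack@director ~]$ openstack baremetal node list --fields name power_state provision_state maintenance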
5.6.3. Configuring the Controller for Pacemaker Fencing
Isolating a node in a cluster so that data corruption does not happen is called fencing. Fencing protects the integrity of the cluster and its resources.
Prerequisites
- An IPMI user and password
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Generate the fencing Heat environment file:
[stack@director ~]$ openstack overcloud generate fencing --ipmi-lanplus instackenv.json --output fencing.yaml
- Include the fencing.yaml file with the openstack overcloud deploy command.
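After the overcloud is deployed, you can confirm that the fence devices were created by checking Pacemaker on one of the Controller nodes. This is a sketch; replace the IP address with a Controller's control plane address from openstack server list, and expect the stonith resource names to vary by deployment:

[stack@director ~]$ ssh heat-admin@192.168.24.10 'sudo pcs status | grep -i -A2 stonith'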
Additional Resources
- For more information, see the Deploying Red Hat Enterprise Linux OpenStack Platform 7 with Red Hat OpenStack Platform director.
5.6.4. Running the Deploy Command
After all the customization and tuning, it is time to deploy the overcloud.
The deployment of the overcloud can take a long time to finish, depending on the size of the deployment.
Prerequisites
Procedure
Do the following step on the Red Hat OpenStack Platform director (RHOSP-d) node, as the stack user.
Run the following command:
[stack@director ~]$ time openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --stack overcloud \
  -p /usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml \
  -r /home/stack/custom-templates/roles_data_custom.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e /home/stack/custom-templates/overcloud_images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/custom-templates/network.yaml \
  -e ~/custom-templates/ceph.yaml \
  -e ~/custom-templates/layout.yaml \
  -e /home/stack/fencing.yaml
Command Details

- The time command is used to tell you how long the deployment takes.
- The openstack overcloud deploy command does the actual deployment.
- The --templates argument uses the default directory (/usr/share/openstack-tripleo-heat-templates/) containing the TripleO Heat templates to deploy.
- The -p argument points to the plan environment file for HCI deployments. See Section 5.5.2, "Reserving CPU and Memory Resources for Hyperconverged Nodes" for more details.
- The -r argument points to the roles file and overrides the default roles_data.yaml file.
- The -e argument points to an explicit template file to use during the deployment.
- The network-isolation.yaml file configures network isolation for different services, whose parameters are passed by the custom template, network.yaml. This file will be created automatically when the deployment starts.
- The network.yaml file is explained in Section 5.4.3, "Configuring the Overcloud Networks".
- The ceph.yaml file is explained in Section 5.4.5, "Setting the Red Hat Ceph Storage Parameters".
- The compute.yaml file is explained in Appendix F, Changing Nova Reserved Memory and CPU Allocation Manually.
- The layout.yaml file is explained in Section 5.4.6, "Configuring the Overcloud Nodes Layout".
- The fencing.yaml file is explained in Section 5.6.3, "Configuring the Controller for Pacemaker Fencing".

Important: The order of the arguments matters. The custom template files will override the default template files.

Note: Optionally, add the --rhel-reg, --reg-method, and --reg-org options if you want to use the RHOSP-d node as a software repository for package installations.
- Wait for the overcloud deployment to finish.
Additional Resources
- See Table 5.2 Deployment Parameters in the Red Hat OpenStack Platform 13 Director Installation and Usage Guide for more information on the overcloud parameters.
5.6.5. Verifying a Successful Overcloud Deployment
It is important to verify that the overcloud deployment was successful.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node in a separate console session, as the stack user.
Watch the deployment process and look for failures:
[stack@director ~]$ heat resource-list -n5 overcloud | egrep -i 'fail|progress'
After the deployment finishes, view the IP addresses for the overcloud nodes:
[stack@director ~]$ openstack server list
Example Output from a successful overcloud deployment:
2016-12-20 23:25:04Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud CREATE_COMPLETE

Started Mistral Workflow. Execution ID: aeca4d71-56b4-4c72-a980-022623487c05
/home/stack/.ssh/known_hosts updated.
Original contents retained as /home/stack/.ssh/known_hosts.old
Overcloud Endpoint: http://10.19.139.46:5000/v2.0
Overcloud Deployed
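As a further spot check, you can confirm the Ceph cluster health from one of the Controller/Monitor nodes. This is a sketch only; it assumes the node is reachable as heat-admin on the control plane network and that ceph-ansible named the monitor container ceph-mon-<short hostname>, which may differ in your environment:

[stack@director ~]$ ssh heat-admin@192.168.24.11 'sudo docker exec ceph-mon-$(hostname -s) ceph -s'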
