Chapter 8. Director operator deployment scenario: Overcloud with external Ceph Storage
You can use the director Operator to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.
Prerequisites
- An external Red Hat Ceph Storage cluster.
8.1. Creating a data volume for the base operating system
You must create a data volume on the OpenShift Container Platform (OCP) cluster to store the base operating system image for your Controller virtual machines.
Prerequisites
- Download a Red Hat Enterprise Linux 8.4 QCOW2 image to your workstation. You can download this image from the Product Download section of the Red Hat Customer Portal.
- Install the virtctl client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation with the following commands:

  $ sudo subscription-manager repos --enable=cnv-4.10-for-rhel-8-x86_64-rpms
  $ sudo dnf install -y kubevirt-virtctl
- Install the virt-customize client tool on your workstation. You can install this tool on a Red Hat Enterprise Linux workstation with the following command:

  $ dnf install -y libguestfs-tools-c
Procedure
The default QCOW2 image that you have downloaded from access.redhat.com does not use biosdev predictable network interface names. Modify the image with virt-customize to use biosdev predictable network interface names:

$ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
$ sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub' --truncate /etc/machine-id
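If you want to preview what this modification does before running it against an image, the sed expression can be exercised on a sample kernelopts line outside the image. The sample kernel arguments below are illustrative, not taken from a real image:

```shell
# Illustrative only: show the effect of the sed expression used by
# virt-customize above. It removes "net.ifnames=0 " from the kernel
# arguments, so the guest falls back to biosdev predictable
# interface names.
sample='kernelopts=root=/dev/vda1 ro net.ifnames=0 crashkernel=auto'
echo "$sample" | sed -e 's/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/'
# prints: kernelopts=root=/dev/vda1 ro crashkernel=auto
```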
Upload the image to OpenShift Virtualization with virtctl:

$ virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=<local path to image> --storage-class <storage class> --insecure
Choose a suitable size for your environment; the minimum size is 50GB. For the --storage-class option, choose a storage class from your cluster. View a list of storage classes with the following command:

$ oc get storageclass
When you create the OpenStackControlPlane resource and individual OpenStackVmSet resources, set the baseImageVolumeName parameter to the data volume name:

...
spec:
  ...
  baseImageVolumeName: openstack-base-img
...
8.2. Adding authentication details for your remote Git repository
The director Operator stores rendered Ansible playbooks to a remote Git repository and uses this repository to track changes to the overcloud configuration. You can use any Git repository that supports SSH authentication. You must provide details for the Git repository as an OpenShift Secret resource named git-secret.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Prepare a remote Git repository for the director Operator to store the generated configuration for your overcloud.
- Prepare an SSH key pair. Upload the public key to the Git repository and keep the private key available to add to the git-secret Secret resource.
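The key-pair preparation step above can be sketched as follows, assuming your Git host accepts ed25519 keys (use `-t rsa -b 4096` if it does not); the scratch directory, file name, and key comment here are illustrative:

```shell
# Generate a dedicated key pair for the director Operator in a scratch
# directory (path, file name, and comment are illustrative).
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C director-operator -f "$keydir/git_ssh_identity"
# Upload git_ssh_identity.pub to the Git server; keep git_ssh_identity
# as the private key for the git-secret Secret resource.
ls "$keydir"
```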
Procedure
Create the Secret resource:
$ oc create secret generic git-secret -n openstack --from-file=git_ssh_identity=<path_to_private_SSH_key> --from-literal=git_url=<git_server_URL>
The git-secret Secret resource contains two key-value pairs:

git_ssh_identity
    The private key to access the Git repository. The --from-file option stores the content of the private SSH key file.
git_url
    The SSH URL of the git repository to store the configuration. The --from-literal option stores the URL that you enter for this key.
Verification
View the Secret resource:
$ oc get secret/git-secret -n openstack
8.3. Setting the root password for nodes
To access the root user with a password on each node, you can set a root password in a Secret resource named userpassword.
Setting the root password for nodes is optional. If you do not set a root password, you can still log into nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
Procedure
Convert your chosen password to a base64 value:
$ echo -n "p@ssw0rd!" | base64
cEBzc3cwcmQh
Note: The -n option removes the trailing newline from the echo output.

Create a file named openstack-userpassword.yaml on your workstation. Include the following resource specification for the Secret in the file:

apiVersion: v1
kind: Secret
metadata:
  name: userpassword
  namespace: openstack
data:
  NodeRootPassword: "cEBzc3cwcmQh"
Set the NodeRootPassword parameter to your base64 encoded password.

Create the userpassword Secret:

$ oc create -f openstack-userpassword.yaml -n openstack
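As a quick sanity check, you can confirm that the encoded value decodes back to the intended password before you rely on the Secret. This sketch reuses the example password from above:

```shell
# Round-trip the example password. A stray trailing newline in the
# encoded value is a common cause of login failures, hence echo -n.
encoded=$(echo -n 'p@ssw0rd!' | base64)
echo "$encoded"              # cEBzc3cwcmQh
echo "$encoded" | base64 -d  # p@ssw0rd!
```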
Enter the userpassword Secret in passwordSecret when you create OpenStackControlPlane or OpenStackBaremetalSet:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  passwordSecret: <userpassword>
- Replace <userpassword> with the name of your userpassword Secret.
8.4. Creating an overcloud control plane network with OpenStackNetConfig
You must define at least one control plane network for your overcloud in OpenStackNetConfig. In addition to IP address assignment, the network definition includes the mapping information for OpenStackNetAttachment. OpenShift Virtualization uses this information to attach any virtual machines to the network.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
Procedure
Create a file named osnetconfig.yaml on your workstation. Include the resource specification for the control plane network, which is named ctlplane. For example, the specification for a control plane that uses a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 172.22.0.250
        allocationStart: 172.22.0.100
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
      attachConfiguration: br-osp
  # optional: configure static IP mapping for the networks per node.
  # If not set, a random mapping is created.
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120
    compute-0:
      ipReservations:
        ctlplane: 172.22.0.140

Set the following values in the networks specification:
name
    Set to the name of the control plane network, which is Control.
nameLower
    Set to the lower-case name of the control plane network, which is ctlplane.
subnets
    Set the subnet specifications.
subnets.name
    Set the name of the control plane subnet, which is ctlplane.
subnets.attachConfiguration
    Set the reference to the attach configuration to use.
subnets.ipv4
    Details of the ipv4 subnet, including allocationStart, allocationEnd, cidr, gateway, and an optional list of routes (with destination and nexthop).
For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD:

$ oc describe crd openstacknetconfig
Save the file when you have finished configuring the network specification.
Create the control plane network:
$ oc create -f osnetconfig.yaml -n openstack
Verification
View the resource for the control plane network:
$ oc get openstacknetconfig/openstacknetconfig
8.5. Creating VLAN networks for network isolation with OpenStackNetConfig
You must create additional networks to implement network isolation for your composable networks. To accomplish this network isolation, you can place your composable networks on individual VLAN networks. In addition to IP address assignment, the OpenStackNetConfig resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to VLAN networks.
To use the default Red Hat OpenStack Platform networks, you must create an OpenStackNetConfig resource which defines each network.
Table 8.1. Default Red Hat OpenStack Platform networks
| Network | VLAN | CIDR | Allocation |
|---|---|---|---|
| External | 10 | 10.0.0.0/24 | 10.0.0.10 - 10.0.0.250 |
| InternalApi | 20 | 172.17.0.0/24 | 172.17.0.10 - 172.17.0.250 |
| Storage | 30 | 172.18.0.0/24 | 172.18.0.10 - 172.18.0.250 |
| StorageMgmt | 40 | 172.19.0.0/24 | 172.19.0.10 - 172.19.0.250 |
| Tenant | 50 | 172.20.0.0/24 | 172.20.0.10 - 172.20.0.250 |
To use different networking details for each network, you must create a custom network_data.yaml file.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
Procedure
Create a file named openstacknetconfig.yaml on your workstation. Include the resource specification for the VLAN networks. For example, the specification for internal API, storage, storage mgmt, tenant, and external networks that manage VLAN-tagged traffic over the Linux bridges br-ex and br-osp connected to the enp6s0 and enp7s0 Ethernet devices on each worker node is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 172.22.0.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 172.22.0.250
        allocationStart: 172.22.0.10
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1500
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
      vlan: 30
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1500
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
      vlan: 40
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1500
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
      vlan: 50
      attachConfiguration: br-osp

When you use VLAN for network isolation with linux-bridge, the following happens:

- The director Operator creates a Node Network Configuration Policy for the bridge interface specified in the resource, which uses nmstate to configure the bridge on worker nodes.
- The director Operator creates a Network Attach Definition for each network, which defines the Multus CNI plugin configuration. When you specify the VLAN ID on the Network Attach Definition, the Multus CNI plugin enables vlan-filtering on the bridge.
- The director Operator attaches a dedicated interface for each network on a virtual machine. This means that the network template for the OpenStackVMSet is a multi-NIC network template.
Set the following values in the resource specification:

metadata.name
    Set to the name of the OpenStackNetConfig resource.
spec
    Set the network attachment configuration and the network specifics. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD:

$ oc describe crd openstacknetconfig
Save the file when you have finished configuring the network specification.
Create the network configuration:
$ oc apply -f openstacknetconfig.yaml -n openstack
Verification
View the OpenStackNetConfig API and created child resources:
$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
If you see errors, check the underlying network-attachment-definition resources and node network configuration policies:

$ oc get network-attachment-definitions -n openstack
$ oc get nncp
8.6. Creating a control plane with OpenStackControlPlane
The overcloud control plane contains the main Red Hat OpenStack Platform services that manage overcloud functionality. The control plane usually consists of 3 Controller nodes and can scale to other control plane-based composable roles. When you use composable roles, each service must run on exactly 3 additional dedicated nodes and the total number of nodes in the control plane must be odd to maintain Pacemaker quorum.
The OpenStackControlPlane custom resource creates control plane-based nodes as virtual machines within OpenShift Virtualization.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.
Procedure
Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. For example, the specification for a control plane that consists of 3 Controller nodes is as follows:

apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientNetworks:
  - ctlplane
  - internal_api
  - external
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 3
      networks:
      - ctlplane
      - internal_api
      - external
      - tenant
      - storage
      - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 100
        baseImageVolumeName: openstack-base-img
        # storageClass must support RWX to be able to live migrate VMs
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        # When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
        # specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks.
        # With virtual machine disks, RBD block mode volumes are more efficient and provide better
        # performance than Ceph FS or RBD filesystem-mode PVCs.
        # To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and
        # VolumeMode: Block.
        storageVolumeMode: Filesystem
      # optional: configure additional disks to be attached to the VMs.
      # They must be configured manually inside the VMs where they are used.
      additionalDisks:
      - name: datadisk
        diskSize: 500
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
  openStackRelease: "16.2"

Set the following values in the resource specification:
metadata.name
    Set to the name of the overcloud control plane, which is overcloud.
metadata.namespace
    Set to the director Operator namespace, which is openstack.
spec
    Set the configuration for the control plane. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackcontrolplane CRD:

$ oc describe crd openstackcontrolplane
Save the file when you have finished configuring the control plane specification.
Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
Wait until OCP creates the resources related to the OpenStackControlPlane resource.
As a part of the OpenStackControlPlane resource, the director Operator also creates an OpenStackClient pod that you can access through a remote shell and run RHOSP commands.
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set:
$ oc get openstackvmsets -n openstack
View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization:
$ oc get virtualmachines
Test access to the openstackclient remote shell:

$ oc rsh -n openstack openstackclient
8.7. Creating directories for templates and environment files
Create directories on your workstation to store your custom templates and environment files, which you upload to ConfigMaps in OpenShift Container Platform (OCP).
Procedure
Create a directory for your custom templates:
$ mkdir custom_templates
Create a directory for your custom environment files:
$ mkdir custom_environment_files
8.8. Custom NIC heat template for Compute nodes
The following example is a heat template that contains NIC configuration for the Compute bare metal nodes.
heat_template_version: rocky
description: >
Software Config to drive os-net-config to configure VLANs for the Compute role.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ControlPlaneSubnetCidr:
default: ''
description: >
The subnet CIDR of the control plane network. (The parameter is
automatically resolved from the ctlplane subnet's cidr attribute.)
type: string
ControlPlaneDefaultRoute:
default: ''
description: The default route of the control plane network. (The parameter
is automatically resolved from the ctlplane subnet's gateway_ip attribute.)
type: string
ControlPlaneStaticRoutes:
default: []
description: >
Routes for the ctlplane network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
ControlPlaneMtu:
default: 1500
description: The maximum transmission unit (MTU) size(in bytes) that is
guaranteed to pass through the data path of the segments in the network.
(The parameter is automatically resolved from the ctlplane network's mtu attribute.)
type: number
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageNetworkVlanID:
default: 30
description: Vlan ID for the storage network traffic.
type: number
StorageMtu:
default: 1500
description: The maximum transmission unit (MTU) size(in bytes) that is
guaranteed to pass through the data path of the segments in the
Storage network.
type: number
StorageInterfaceRoutes:
default: []
description: >
Routes for the storage network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
InternalApiMtu:
default: 1500
description: The maximum transmission unit (MTU) size(in bytes) that is
guaranteed to pass through the data path of the segments in the
InternalApi network.
type: number
InternalApiInterfaceRoutes:
default: []
description: >
Routes for the internal_api network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
TenantNetworkVlanID:
default: 50
description: Vlan ID for the tenant network traffic.
type: number
TenantMtu:
default: 1500
description: The maximum transmission unit (MTU) size(in bytes) that is
guaranteed to pass through the data path of the segments in the
Tenant network.
type: number
TenantInterfaceRoutes:
default: []
description: >
Routes for the tenant network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
ExternalMtu:
default: 1500
description: The maximum transmission unit (MTU) size(in bytes) that is
guaranteed to pass through the data path of the segments in the
External network.
type: number
DnsServers: # Override this via parameter_defaults
default: []
description: >
DNS servers to use for the Overcloud (2 max for some implementations).
If not set the nameservers configured in the ctlplane subnet's
dns_nameservers attribute will be used.
type: comma_delimited_list
DnsSearchDomains: # Override this via parameter_defaults
default: []
description: A list of DNS search domains to be added (in order) to resolv.conf.
type: comma_delimited_list
resources:
MinViableMtu:
# This resource resolves the minimum viable MTU for interfaces, bonds and
# bridges that carry multiple VLANs. Each VLAN may have different MTU. The
# bridge, bond or interface must have an MTU to allow the VLAN with the
# largest MTU.
type: OS::Heat::Value
properties:
type: number
value:
yaql:
expression: $.data.max()
data:
- {get_param: ControlPlaneMtu}
- {get_param: StorageMtu}
- {get_param: InternalApiMtu}
- {get_param: TenantMtu}
- {get_param: ExternalMtu}
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic4
mtu:
get_attr: [MinViableMtu, value]
use_dhcp: false
dns_servers:
get_param: DnsServers
domain:
get_param: DnsSearchDomains
addresses:
- ip_netmask:
list_join:
- /
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
list_concat_unique:
- get_param: ControlPlaneStaticRoutes
- - default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
mtu:
get_param: StorageMtu
device: nic4
vlan_id:
get_param: StorageNetworkVlanID
addresses:
- ip_netmask:
get_param: StorageIpSubnet
routes:
list_concat_unique:
- get_param: StorageInterfaceRoutes
- type: vlan
mtu:
get_param: InternalApiMtu
device: nic4
vlan_id:
get_param: InternalApiNetworkVlanID
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
routes:
list_concat_unique:
- get_param: InternalApiInterfaceRoutes
- type: ovs_bridge
# This will default to br-ex, anything else requires specific
# bridge mapping entries for it to be used.
name: bridge_name
mtu:
get_param: ExternalMtu
use_dhcp: false
members:
- type: interface
name: nic3
mtu:
get_param: ExternalMtu
use_dhcp: false
primary: true
- type: vlan
mtu:
get_param: TenantMtu
vlan_id:
get_param: TenantNetworkVlanID
addresses:
- ip_netmask:
get_param: TenantIpSubnet
routes:
list_concat_unique:
- get_param: TenantInterfaceRoutes
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl

This configuration maps the networks to the following bridges and interfaces:

| Networks | Bridge | Interface |
|---|---|---|
| Control Plane, Storage, Internal API | None | nic4 |
| External, Tenant | br-ex (bridge_name) | nic3 |
You can modify this configuration to suit the NIC configuration of your bare metal nodes.
To use this template in your deployment, copy the contents of the example to net-config-two-nic-vlan-compute.yaml in your custom_templates directory on your workstation.
8.9. Adding custom templates to the overcloud configuration
Archive your custom templates into a tarball file so that you can include these templates as a part of your overcloud deployment.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Create the custom templates that you want to apply to provisioned nodes.
Procedure
Navigate to the location of your custom templates:
$ cd ~/custom_templates
Archive the templates into a tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
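Before you load the tarball into a ConfigMap, you can list its contents to confirm that only the intended templates are included. A self-contained sketch using a scratch directory and the template name from section 8.8:

```shell
# Build a sample tarball in a scratch directory and list its contents.
demo=$(mktemp -d)
echo 'heat_template_version: rocky' > "$demo/net-config-two-nic-vlan-compute.yaml"
tar -C "$demo" -czf "$demo/custom-config.tar.gz" net-config-two-nic-vlan-compute.yaml
tar -tzf "$demo/custom-config.tar.gz"
# prints: net-config-two-nic-vlan-compute.yaml
```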
Create the tripleo-tarball-config ConfigMap and use the tarball as data:

$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
Verification
View the ConfigMap:
$ oc get configmap/tripleo-tarball-config -n openstack
8.10. Custom environment file for configuring networking in the director Operator
The following example is an environment file that maps the network software configuration resources to the NIC templates for your overcloud.

resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: net-config-two-nic-vlan-compute.yaml
Add any additional network configuration in a parameter_defaults section.
To use this template in your deployment, copy the contents of the example to network-environment.yaml in your custom_environment_files directory on your workstation.
8.11. Custom environment file for configuring external Ceph Storage usage in the director Operator
To integrate with an external Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example.
resource_registry:
OS::TripleO::Services::CephExternal: deployment/ceph-ansible/ceph-external.yaml
parameter_defaults:
# needed for now because of the repo used to create tripleo-deploy image
CephAnsibleRepo: "rhelosp-ceph-4-tools"
CephAnsiblePlaybookVerbosity: 3
NovaEnableRbdBackend: true
GlanceBackend: rbd
CinderEnableRbdBackend: true
CinderBackupBackend: ceph
CinderEnableIscsiBackend: false
# Change the following values for your external ceph cluster
NovaRbdPoolName: vms
CinderRbdPoolName: volumes
CinderBackupRbdPoolName: backups
GlanceRbdPoolName: images
CephClientUserName: openstack
CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
CephExternalMonHost: 172.16.1.7, 172.16.1.8, 172.16.1.9
# Change the ContainerImageRegistryCredentials to a registry service account
# OR remove ContainerImageRegistryCredentials and set ContainerCephDaemonImage
# to a local registry not requiring authentication.
ContainerCephDaemonImage: registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ContainerImageRegistryCredentials:
registry.redhat.io:
my_username: my_password
This configuration contains parameters that enable the CephExternal and CephClient services on your nodes, set the pools for different RHOSP services, and set the following parameters to integrate with your external Ceph Storage cluster:
CephClientKey
    The Ceph client key of your external Ceph Storage cluster.
CephClusterFSID
    The file system ID of your external Ceph Storage cluster.
CephExternalMonHost
    A comma-delimited list of the IPs of all MON hosts in your external Ceph Storage cluster.
You can modify this configuration to suit your storage configuration.
A Ceph container is required by the ceph-ansible client role to configure the Ceph client, so you must set the ContainerCephDaemonImage parameter. The Ceph container, hosted at registry.redhat.io, requires authentication. For more information about the ContainerImageRegistryLogin parameter, see Transitioning to Containerized Services.
To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.
8.12. Adding custom environment files to the overcloud configuration
Upload a set of custom environment files from a directory to a ConfigMap that you can include as a part of your overcloud deployment.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Create custom environment files for your overcloud deployment.
Procedure
Create the heat-env-config ConfigMap and use the directory that contains the environment files as data:

$ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -
Verification
View the ConfigMap:
$ oc get configmap/heat-env-config -n openstack
8.13. Creating Compute nodes with OpenStackBaremetalSet
Compute nodes provide computing resources to your Red Hat OpenStack Platform environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
The OpenStackBaremetalSet custom resource creates Compute nodes from bare metal machines that OpenShift Container Platform manages.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.
Procedure
Create a file named openstack-compute.yaml on your workstation. Include the resource specification for the Compute nodes. For example, the specification for 1 Compute node is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # If you manually created an OpenStackProvisionServer, you can use it here;
  # otherwise the director Operator creates one for you (with baseImageUrl as
  # the image that it serves) to use with this OpenStackBaremetalSet.
  # provisionServerName: openstack-provision-server
  ctlplaneInterface: enp2s0
  networks:
  - ctlplane
  - internal_api
  - tenant
  - storage
  roleName: Compute
  passwordSecret: userpassword

Set the following values in the resource specification:
metadata.name
    Set to the name of the Compute node bare metal set, which is compute.
metadata.namespace
    Set to the director Operator namespace, which is openstack.
spec
    Set the configuration for the Compute nodes. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstackbaremetalset CRD:

$ oc describe crd openstackbaremetalset
Save the file when you have finished configuring the Compute node specification.
Create the Compute nodes:
$ oc create -f openstack-compute.yaml -n openstack
Verification
View the resource for the Compute nodes:
$ oc get openstackbaremetalset/compute -n openstack
View the bare metal machines that OpenShift manages to verify the creation of the Compute nodes:
$ oc get baremetalhosts -n openshift-machine-api
8.14. Creating Ansible playbooks for overcloud configuration with OpenStackConfigGenerator
After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure the Red Hat OpenStack Platform (RHOSP) software on the overcloud nodes. You create these playbooks with the OpenStackConfigGenerator resource, which uses the config-download feature in RHOSP director to convert heat configuration to playbooks.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- OpenStackControlPlane and OpenStackBaremetalSet resources created as required.
- Configure a git-secret Secret that contains authentication details for your remote Git repository.
- Configure a tripleo-tarball-config ConfigMap that contains your custom heat templates.
- Configure a heat-env-config ConfigMap that contains your custom environment files.
Procedure
Create a file named
openstack-config-generator.yaml on your workstation. Include the resource specification to generate the Ansible playbooks. For example, the specification to generate the playbooks is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  enableFencing: true
  gitSecret: git-secret
  imageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
  heatEnvConfigMap: heat-env-config
  # List of heat environment files to include from tripleo-heat-templates/environments
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
  tarballConfigMap: tripleo-tarball-config
Set the following values in the resource specification:
metadata.name - Set to the name of the config generator, by default default.
metadata.namespace - Set to the director Operator namespace, by default openstack.
spec.enableFencing - Enable the automatic creation of required heat environment files to enable fencing.

Note: Production RHOSP environments must have fencing enabled. Virtual machines running pacemaker require the fence-agents-kubevirt package.

spec.gitSecret - Set to the Secret that contains the Git authentication credentials, by default git-secret.
spec.heatEnvs - A list of default tripleo environment files used to generate the playbooks.
spec.heatEnvConfigMap - Set to the ConfigMap that contains your custom environment files, by default heat-env-config.
spec.tarballConfigMap - Set to the ConfigMap that contains the tarball with your custom heat templates, by default tripleo-tarball-config.
For more descriptions of the values you can use in the
spec section, view the specification schema in the custom resource definition for the openstackconfiggenerator CRD:

$ oc describe crd openstackconfiggenerator
Save the file when you have finished configuring the Ansible config generator specification.
Create the Ansible config generator:
$ oc create -f openstack-config-generator.yaml -n openstack
Verification
View the resource for the config generator:
$ oc get openstackconfiggenerator/default -n openstack
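When generation finishes, the playbooks are identified by a config version hash that the later OpenStackDeploy resource consumes. As a hedged sketch, assuming each generated OpenStackConfigVersion appears as a row in oc get table output with the hash in the NAME column, a helper to pick out the most recent entry could look like this:

```shell
# Sketch: print the NAME column of the last data row of `oc get` table
# output read from stdin. Assumes the newest entry is listed last;
# confirm that ordering on your cluster before relying on this.
latest_config_version() {
  awk 'NR > 1 { last = $1 } END { if (last != "") print last }'
}
```

For example: oc get openstackconfigversions -n openstack | latest_config_version. The resource name and column layout are assumptions; check them with oc get on your cluster.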
8.15. Registering the operating system of your overcloud
Before the director Operator configures the overcloud software on nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes.
As a part of the OpenStackControlPlane resource, the director Operator creates an OpenStackClient pod that you access through a remote shell to run Red Hat OpenStack Platform (RHOSP) commands. This pod also contains an Ansible inventory script named /home/cloud-admin/ctlplane-ansible-inventory.
To register your nodes, you can use the redhat_subscription Ansible module with the inventory script from the OpenStackClient pod.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Use the OpenStackControlPlane resource to create a control plane.
- Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.
Procedure
Access the remote shell for
openstackclient:

$ oc rsh -n openstack openstackclient
Change to the
cloud-admin home directory:

$ cd /home/cloud-admin
Create a playbook that uses the
redhat_subscription module to register your nodes. For example, the following playbook registers Controller nodes:

---
- name: Register Controller nodes
  hosts: Controller
  become: yes
  vars:
    repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
      - ansible-2.9-for-rhel-8-x86_64-rpms
      - openstack-16.2-for-rhel-8-x86_64-rpms
      - fast-datapath-for-rhel-8-x86_64-rpms
  tasks:
    - name: Register system
      redhat_subscription:
        username: myusername
        password: p@55w0rd!
        org_id: 1234567
        release: 8.4
        pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
    - name: Disable all repos
      command: "subscription-manager repos --disable *"
    - name: Enable Controller node repos
      command: "subscription-manager repos --enable {{ item }}"
      with_items: "{{ repos }}"

This play contains the following three tasks:
- Register the node.
- Disable any auto-enabled repositories.
- Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable.
Run the playbook to register the overcloud nodes and enable the required repositories:

$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
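After the playbook completes, you might want to confirm that each node ended up with exactly the expected repositories. The following sketch is not part of the product documentation: it compares the repo IDs reported by subscription-manager on a node, supplied one per line on stdin, against the list from the playbook and prints any that are missing.

```shell
# Expected repo IDs, matching the repos variable in the playbook above.
expected_repos='rhel-8-for-x86_64-baseos-eus-rpms
rhel-8-for-x86_64-appstream-eus-rpms
rhel-8-for-x86_64-highavailability-eus-rpms
ansible-2.9-for-rhel-8-x86_64-rpms
openstack-16.2-for-rhel-8-x86_64-rpms
fast-datapath-for-rhel-8-x86_64-rpms'

# Sketch: print expected repos that are absent from the enabled list
# read from stdin (one repo ID per line).
missing_repos() {
  enabled=$(cat)
  printf '%s\n' "$expected_repos" | while read -r repo; do
    printf '%s\n' "$enabled" | grep -Fqx "$repo" || printf '%s\n' "$repo"
  done
}
```

You could feed it, for example, the output of subscription-manager repos --list-enabled filtered down to repo IDs; an empty result means nothing is missing.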
8.16. Applying overcloud configuration with the director Operator
You can configure the overcloud with the director Operator only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks to configure software on each node. When you create an OpenStackDeploy resource, the director Operator creates a job that runs the Ansible playbooks to configure the overcloud.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- Ensure that you have installed the oc command line tool on your workstation.
- Use the OpenStackControlPlane resource to create a control plane.
- Use the OpenStackBareMetalSet resource to create bare metal Compute nodes.
- Use the OpenStackConfigGenerator resource to create the Ansible playbook configuration for your overcloud.
- Use the OpenStackConfigVersion resource to select the hash/digest of the Ansible playbooks that you want to use to configure the overcloud.
Procedure
Create a file named
openstack-deployment.yaml on your workstation. Include the resource specification for the Ansible playbooks. For example:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
spec:
  configVersion: n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h…
  configGenerator: default
Set the following values in the resource specification:
metadata.name - Set to the name of the deployment, by default default.
metadata.namespace - Set to the director Operator namespace, by default openstack.
spec.configVersion - The config version/Git hash of the playbooks to deploy.
spec.configGenerator - The name of the OpenStackConfigGenerator resource.
For more descriptions of the values you can use in the spec section, view the specification schema in the custom resource definition for the openstackdeploy CRD:

$ oc describe crd openstackdeploy
Save the file when you have finished configuring the OpenStackDeploy specification.
Create the OpenStackDeploy resource:
$ oc create -f openstack-deployment.yaml -n openstack
As the deployment runs, it creates a Kubernetes job to execute the Ansible playbooks. You can tail the logs of the job to watch the playbooks run:
$ oc logs -f jobs/deploy-openstack-default
Additionally, you can manually access the executed Ansible playbooks by logging into the
openstackclient pod. In the /home/cloud-admin/work/ directory, you can find the Ansible playbooks and the ansible.log file for the current deployment.
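If the deploy job fails, the PLAY RECAP at the end of ansible.log is the quickest place to look. The following is a small sketch, not a product tool, that relies on the standard Ansible recap format to list hosts that reported failed or unreachable tasks:

```shell
# Sketch: print hosts whose PLAY RECAP line reports failed or
# unreachable tasks. Feed ansible.log (or the job logs) on stdin.
# Recap lines look like:
#   controller-0 : ok=300 changed=120 unreachable=0 failed=1 ...
failed_hosts() {
  awk '/unreachable=[1-9]/ || /failed=[1-9]/ { print $1 }'
}
```

For example: oc logs jobs/deploy-openstack-default | failed_hosts, or run it against ansible.log inside the openstackclient pod.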