Appendix D. Custom Heat Templates
The complete custom Heat templates used in this reference architecture are included in this appendix and may also be accessed online. See Appendix G, GitHub Repository of Example Files, for more details. The ~/custom-templates/network.yaml file contains the following:
resource_registry:
OS::TripleO::OsdCompute::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/compute-nics.yaml
OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/custom-templates/nic-configs/controller-nics.yaml
parameter_defaults:
NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-tenant'
NeutronNetworkType: 'vxlan'
NeutronTunnelType: 'vxlan'
NeutronExternalNetworkBridge: "''"
# Internal API used for private OpenStack Traffic
InternalApiNetCidr: 192.168.2.0/24
InternalApiAllocationPools: [{'start': '192.168.2.10', 'end': '192.168.2.200'}]
InternalApiNetworkVlanID: 4049
# Tenant Network Traffic - will be used for VXLAN over VLAN
TenantNetCidr: 192.168.3.0/24
TenantAllocationPools: [{'start': '192.168.3.10', 'end': '192.168.3.200'}]
TenantNetworkVlanID: 4050
# Public Storage Access - e.g. Nova/Glance <--> Ceph
StorageNetCidr: 172.16.1.0/24
StorageAllocationPools: [{'start': '172.16.1.10', 'end': '172.16.1.200'}]
StorageNetworkVlanID: 4046
# Private Storage Access - i.e. Ceph background cluster/replication
StorageMgmtNetCidr: 172.16.2.0/24
StorageMgmtAllocationPools: [{'start': '172.16.2.10', 'end': '172.16.2.200'}]
StorageMgmtNetworkVlanID: 4047
# External Networking Access - Public API Access
ExternalNetCidr: 10.19.137.0/21
# Leave room for floating IPs in the External allocation pool (if required)
ExternalAllocationPools: [{'start': '10.19.139.37', 'end': '10.19.139.48'}]
# Set to the router gateway on the external network
ExternalInterfaceDefaultRoute: 10.19.143.254
# Gateway router for the provisioning network (or Undercloud IP)
ControlPlaneDefaultRoute: 192.168.1.1
# The IP address of the EC2 metadata server. Generally the IP of the Undercloud
EC2MetadataIp: 192.168.1.1
# Define the DNS servers (maximum 2) for the overcloud nodes
DnsServers: ["10.19.143.247","10.19.143.248"]
The ~/custom-templates/nic-configs/compute-nics.yaml file contains the following:
heat_template_version: 2015-04-30
description: >
Software Config to drive os-net-config to configure VLANs for the
compute and osd role (assumption is that compute and osd cohabitate)
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal API network
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: 10
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
StorageNetworkVlanID:
default: 30
description: Vlan ID for the storage network traffic.
type: number
StorageMgmtNetworkVlanID:
default: 40
description: Vlan ID for the storage mgmt network traffic.
type: number
TenantNetworkVlanID:
default: 50
description: Vlan ID for the tenant network traffic.
type: number
ManagementNetworkVlanID:
default: 60
description: Vlan ID for the management network traffic.
type: number
ExternalInterfaceDefaultRoute:
default: '10.0.0.1'
description: default route for the external network
type: string
ControlPlaneSubnetCidr: # Override this via parameter_defaults
default: '24'
description: The subnet CIDR of the control plane network.
type: string
ControlPlaneDefaultRoute: # Override this via parameter_defaults
description: The default route of the control plane network.
type: string
DnsServers: # Override this via parameter_defaults
default: []
description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
type: comma_delimited_list
EC2MetadataIp: # Override this via parameter_defaults
description: The IP address of the EC2 metadata server.
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::StructuredConfig
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: interface
name: em3
use_dhcp: false
dns_servers: {get_param: DnsServers}
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
-
default: true
next_hop: {get_param: ControlPlaneDefaultRoute}
-
type: interface
name: em2
use_dhcp: false
mtu: 9000
-
type: vlan
device: em2
mtu: 9000
use_dhcp: false
vlan_id: {get_param: StorageMgmtNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageMgmtIpSubnet}
-
type: vlan
device: em2
mtu: 9000
use_dhcp: false
vlan_id: {get_param: StorageNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageIpSubnet}
-
type: interface
name: em4
use_dhcp: false
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
# VLAN for VXLAN tenant networking
type: ovs_bridge
name: br-tenant
mtu: 1500
use_dhcp: false
members:
-
type: interface
name: em1
mtu: 1500
use_dhcp: false
# force the MAC address of the bridge to this interface
primary: true
-
type: vlan
mtu: 1500
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
# Uncomment when including environments/network-management.yaml
#-
# type: interface
# name: nic7
# use_dhcp: false
# addresses:
# -
# ip_netmask: {get_param: ManagementIpSubnet}
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value: {get_resource: OsNetConfigImpl}
The ~/custom-templates/nic-configs/controller-nics.yaml file contains the following:
heat_template_version: 2015-04-30
description: >
Software Config to drive os-net-config to configure VLANs for the
controller role.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal API network
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: 10
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
StorageNetworkVlanID:
default: 30
description: Vlan ID for the storage network traffic.
type: number
StorageMgmtNetworkVlanID:
default: 40
description: Vlan ID for the storage mgmt network traffic.
type: number
TenantNetworkVlanID:
default: 50
description: Vlan ID for the tenant network traffic.
type: number
ManagementNetworkVlanID:
default: 60
description: Vlan ID for the management network traffic.
type: number
ExternalInterfaceDefaultRoute:
default: '10.0.0.1'
description: default route for the external network
type: string
ControlPlaneSubnetCidr: # Override this via parameter_defaults
default: '24'
description: The subnet CIDR of the control plane network.
type: string
ControlPlaneDefaultRoute: # Override this via parameter_defaults
description: The default route of the control plane network.
type: string
DnsServers: # Override this via parameter_defaults
default: []
description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
type: comma_delimited_list
EC2MetadataIp: # Override this via parameter_defaults
description: The IP address of the EC2 metadata server.
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::StructuredConfig
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: interface
name: p2p1
use_dhcp: false
dns_servers: {get_param: DnsServers}
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
-
type: ovs_bridge
# Assuming you want to keep br-ex as external bridge name
name: {get_input: bridge_name}
use_dhcp: false
addresses:
-
ip_netmask: {get_param: ExternalIpSubnet}
routes:
-
ip_netmask: 0.0.0.0/0
next_hop: {get_param: ExternalInterfaceDefaultRoute}
members:
-
type: interface
name: p2p2
# force the MAC address of the bridge to this interface
primary: true
-
# Unused Interface
type: interface
name: em3
use_dhcp: false
defroute: false
-
# Unused Interface
type: interface
name: em4
use_dhcp: false
defroute: false
-
# Unused Interface
type: interface
name: p2p3
use_dhcp: false
defroute: false
-
type: interface
name: p2p4
use_dhcp: false
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: interface
name: em2
use_dhcp: false
mtu: 9000
-
type: vlan
device: em2
mtu: 9000
use_dhcp: false
vlan_id: {get_param: StorageMgmtNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageMgmtIpSubnet}
-
type: vlan
device: em2
mtu: 9000
use_dhcp: false
vlan_id: {get_param: StorageNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageIpSubnet}
-
# VLAN for VXLAN tenant networking
type: ovs_bridge
name: br-tenant
mtu: 1500
use_dhcp: false
members:
-
type: interface
name: em1
mtu: 1500
use_dhcp: false
# force the MAC address of the bridge to this interface
primary: true
-
type: vlan
mtu: 1500
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
# Uncomment when including environments/network-management.yaml
#-
# type: interface
# name: nic7
# use_dhcp: false
# addresses:
# -
# ip_netmask: {get_param: ManagementIpSubnet}
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value: {get_resource: OsNetConfigImpl}
The ~/custom-templates/ceph.yaml file contains the following:
resource_registry:
OS::TripleO::NodeUserData: /home/stack/custom-templates/first-boot-template.yaml
OS::TripleO::NodeExtraConfigPost: /home/stack/custom-templates/post-deploy-template.yaml
parameter_defaults:
ExtraConfig:
ceph::profile::params::fsid: eb2bb192-b1c9-11e6-9205-525400330666
ceph::profile::params::osd_pool_default_pg_num: 256
ceph::profile::params::osd_pool_default_pgp_num: 256
ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 2
ceph::profile::params::osd_recovery_max_active: 3
ceph::profile::params::osd_max_backfills: 1
ceph::profile::params::osd_recovery_op_priority: 2
OsdComputeExtraConfig:
ceph::profile::params::osd_journal_size: 5120
ceph::profile::params::osds:
'/dev/sda':
journal: '/dev/sdm'
'/dev/sdb':
journal: '/dev/sdm'
'/dev/sdc':
journal: '/dev/sdm'
'/dev/sdd':
journal: '/dev/sdm'
'/dev/sde':
journal: '/dev/sdn'
'/dev/sdf':
journal: '/dev/sdn'
'/dev/sdg':
journal: '/dev/sdn'
'/dev/sdh':
journal: '/dev/sdn'
'/dev/sdi':
journal: '/dev/sdo'
'/dev/sdj':
journal: '/dev/sdo'
'/dev/sdk':
journal: '/dev/sdo'
'/dev/sdl':
journal: '/dev/sdo'
The ~/custom-templates/first-boot-template.yaml file contains the following:
heat_template_version: 2014-10-16
description: >
Wipe and convert all disks to GPT (except the disk containing the root file system)
resources:
userdata:
type: OS::Heat::MultipartMime
properties:
parts:
- config: {get_resource: wipe_disk}
wipe_disk:
type: OS::Heat::SoftwareConfig
properties:
config: {get_file: wipe-disk.sh}
outputs:
OS::stack_id:
value: {get_resource: userdata}
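The template above injects the wipe-disk.sh script (shown next) as first-boot user data via OS::Heat::MultipartMime. As a safe, offline illustration of the root-disk detection that script relies on, the following sketch runs a simplified version of the same filtering against canned lsblk-style output; the device names and the lsblk_fake helper are invented for illustration only:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for "lsblk -no NAME,TYPE,MOUNTPOINT";
# the devices listed here are made up.
lsblk_fake() {
    printf '%s\n' \
        'sda  disk ' \
        'sda1 part /' \
        'sdb  disk ' \
        'sdc  disk '
}
# Keep any disk whose partitions include the root mount point "/";
# everything else would be a candidate for wiping.
for DEVICE in $(lsblk_fake | awk '$2 == "disk" {print $1}'); do
    if lsblk_fake | awk -v d="$DEVICE" '$1 ~ d && $3 == "/"' | grep -q .; then
        echo "/dev/$DEVICE holds the root file system: keep"
    else
        echo "/dev/$DEVICE would be wiped"
    fi
done
```

Running this prints one line per disk, marking only /dev/sda (which carries the root partition in the canned data) as kept.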
The ~/custom-templates/wipe-disk.sh file contains the following:
#!/usr/bin/env bash
if [[ `hostname` = *"ceph"* ]] || [[ `hostname` = *"osd-compute"* ]]
then
echo "Number of disks detected: $(lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}' | wc -l)"
for DEVICE in `lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}'`
do
ROOTFOUND=0
echo "Checking /dev/$DEVICE..."
echo "Number of partitions on /dev/$DEVICE: $(expr $(lsblk -n /dev/$DEVICE | awk '{print $7}' | wc -l) - 1)"
for MOUNTS in `lsblk -n /dev/$DEVICE | awk '{print $7}'`
do
if [ "$MOUNTS" = "/" ]
then
ROOTFOUND=1
fi
done
if [ $ROOTFOUND = 0 ]
then
echo "Root not found in /dev/${DEVICE}"
echo "Wiping disk /dev/${DEVICE}"
sgdisk -Z /dev/${DEVICE}
sgdisk -g /dev/${DEVICE}
else
echo "Root found in /dev/${DEVICE}"
fi
done
fi
The ~/custom-templates/layout.yaml file contains the following:
resource_registry:
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::OsdCompute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::OsdCompute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
OS::TripleO::OsdCompute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::OsdCompute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
parameter_defaults:
NtpServer: 10.5.26.10
ControllerCount: 3
ComputeCount: 0
CephStorageCount: 0
OsdComputeCount: 3
ControllerSchedulerHints:
'capabilities:node': 'controller-%index%'
NovaComputeSchedulerHints:
'capabilities:node': 'compute-%index%'
CephStorageSchedulerHints:
'capabilities:node': 'ceph-storage-%index%'
OsdComputeSchedulerHints:
'capabilities:node': 'osd-compute-%index%'
ControllerIPs:
internal_api:
- 192.168.2.200
- 192.168.2.201
- 192.168.2.202
tenant:
- 192.168.3.200
- 192.168.3.201
- 192.168.3.202
storage:
- 172.16.1.200
- 172.16.1.201
- 172.16.1.202
storage_mgmt:
- 172.16.2.200
- 172.16.2.201
- 172.16.2.202
OsdComputeIPs:
internal_api:
- 192.168.2.203
- 192.168.2.204
- 192.168.2.205
#- 192.168.2.206
tenant:
- 192.168.3.203
- 192.168.3.204
- 192.168.3.205
#- 192.168.3.206
storage:
- 172.16.1.203
- 172.16.1.204
- 172.16.1.205
#- 172.16.1.206
storage_mgmt:
- 172.16.2.203
- 172.16.2.204
- 172.16.2.205
#- 172.16.2.206
The ~/custom-templates/custom-roles.yaml file contains the following:
# Specifies which roles (groups of nodes) will be deployed
# Note this is used as an input to the various *.j2.yaml
# jinja2 templates, so that they are converted into *.yaml
# during the plan creation (via a mistral action/workflow).
#
# The format is a list, with the following format:
#
# * name: (string) mandatory, name of the role, must be unique
#
# CountDefault: (number) optional, default number of nodes, defaults to 0
# sets the default for the {{role.name}}Count parameter in overcloud.yaml
#
# HostnameFormatDefault: (string) optional default format string for hostname
# defaults to '%stackname%-{{role.name.lower()}}-%index%'
# sets the default for {{role.name}}HostnameFormat parameter in overcloud.yaml
#
# ServicesDefault: (list) optional default list of services to be deployed
# on the role, defaults to an empty list. Sets the default for the
# {{role.name}}Services parameter in overcloud.yaml
- name: Controller
CountDefault: 1
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephMon
- OS::TripleO::Services::CephExternal
- OS::TripleO::Services::CephRgw
- OS::TripleO::Services::CinderApi
- OS::TripleO::Services::CinderBackup
- OS::TripleO::Services::CinderScheduler
- OS::TripleO::Services::CinderVolume
- OS::TripleO::Services::Core
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Keystone
- OS::TripleO::Services::GlanceApi
- OS::TripleO::Services::GlanceRegistry
- OS::TripleO::Services::HeatApi
- OS::TripleO::Services::HeatApiCfn
- OS::TripleO::Services::HeatApiCloudwatch
- OS::TripleO::Services::HeatEngine
- OS::TripleO::Services::MySQL
- OS::TripleO::Services::NeutronDhcpAgent
- OS::TripleO::Services::NeutronL3Agent
- OS::TripleO::Services::NeutronMetadataAgent
- OS::TripleO::Services::NeutronApi
- OS::TripleO::Services::NeutronCorePlugin
- OS::TripleO::Services::NeutronOvsAgent
- OS::TripleO::Services::RabbitMQ
- OS::TripleO::Services::HAproxy
- OS::TripleO::Services::Keepalived
- OS::TripleO::Services::Memcached
- OS::TripleO::Services::Pacemaker
- OS::TripleO::Services::Redis
- OS::TripleO::Services::NovaConductor
- OS::TripleO::Services::MongoDb
- OS::TripleO::Services::NovaApi
- OS::TripleO::Services::NovaMetadata
- OS::TripleO::Services::NovaScheduler
- OS::TripleO::Services::NovaConsoleauth
- OS::TripleO::Services::NovaVncProxy
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::SwiftProxy
- OS::TripleO::Services::SwiftStorage
- OS::TripleO::Services::SwiftRingBuilder
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::CeilometerApi
- OS::TripleO::Services::CeilometerCollector
- OS::TripleO::Services::CeilometerExpirer
- OS::TripleO::Services::CeilometerAgentCentral
- OS::TripleO::Services::CeilometerAgentNotification
- OS::TripleO::Services::Horizon
- OS::TripleO::Services::GnocchiApi
- OS::TripleO::Services::GnocchiMetricd
- OS::TripleO::Services::GnocchiStatsd
- OS::TripleO::Services::ManilaApi
- OS::TripleO::Services::ManilaScheduler
- OS::TripleO::Services::ManilaBackendGeneric
- OS::TripleO::Services::ManilaBackendNetapp
- OS::TripleO::Services::ManilaBackendCephFs
- OS::TripleO::Services::ManilaShare
- OS::TripleO::Services::AodhApi
- OS::TripleO::Services::AodhEvaluator
- OS::TripleO::Services::AodhNotifier
- OS::TripleO::Services::AodhListener
- OS::TripleO::Services::SaharaApi
- OS::TripleO::Services::SaharaEngine
- OS::TripleO::Services::IronicApi
- OS::TripleO::Services::IronicConductor
- OS::TripleO::Services::NovaIronic
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::OpenDaylightApi
- OS::TripleO::Services::OpenDaylightOvs
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
- name: Compute
CountDefault: 1
HostnameFormatDefault: '%stackname%-compute-%index%'
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephClient
- OS::TripleO::Services::CephExternal
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::NovaCompute
- OS::TripleO::Services::NovaLibvirt
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::ComputeNeutronCorePlugin
- OS::TripleO::Services::ComputeNeutronOvsAgent
- OS::TripleO::Services::ComputeCeilometerAgent
- OS::TripleO::Services::ComputeNeutronL3Agent
- OS::TripleO::Services::ComputeNeutronMetadataAgent
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::NeutronSriovAgent
- OS::TripleO::Services::OpenDaylightOvs
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
- name: BlockStorage
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::BlockStorageCinderVolume
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
- name: ObjectStorage
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::SwiftStorage
- OS::TripleO::Services::SwiftRingBuilder
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
- name: CephStorage
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephOSD
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
- name: OsdCompute
CountDefault: 0
HostnameFormatDefault: '%stackname%-osd-compute-%index%'
ServicesDefault:
- OS::TripleO::Services::CephOSD
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephClient
- OS::TripleO::Services::CephExternal
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::NovaCompute
- OS::TripleO::Services::NovaLibvirt
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::ComputeNeutronCorePlugin
- OS::TripleO::Services::ComputeNeutronOvsAgent
- OS::TripleO::Services::ComputeCeilometerAgent
- OS::TripleO::Services::ComputeNeutronL3Agent
- OS::TripleO::Services::ComputeNeutronMetadataAgent
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::NeutronSriovAgent
- OS::TripleO::Services::OpenDaylightOvs
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
The ~/custom-templates/compute.yaml file contains the following:
parameter_defaults:
ExtraConfig:
nova::compute::reserved_host_memory: 75000
nova::cpu_allocation_ratio: 8.2
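The overrides above reserve host memory for the hypervisor and Ceph OSDs and raise the vCPU overcommit ratio. A back-of-envelope capacity check can show their combined effect; the node sizes below are assumptions for illustration, not values from the reference architecture:

```shell
#!/usr/bin/env bash
# Assumed node size (illustrative only):
HOST_MEM_MB=262144          # 256 GB compute node
THREADS=56                  # hardware threads per node
# Values from compute.yaml above:
RESERVED_MB=75000           # nova::compute::reserved_host_memory (MB)
RATIO="8.2"                 # nova::cpu_allocation_ratio

# Memory the scheduler may hand to guests, and the schedulable
# vCPU count after overcommit (truncated to a whole number).
GUEST_MEM_MB=$((HOST_MEM_MB - RESERVED_MB))
VCPUS=$(awk -v t="$THREADS" -v r="$RATIO" 'BEGIN { printf "%d", t * r }')
echo "Memory available to guests: ${GUEST_MEM_MB} MB"
echo "Schedulable vCPUs: ${VCPUS}"
```

With these assumed numbers, roughly 187 GB remains for guests and 459 vCPUs are schedulable per node.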
The ~/custom-templates/post-deploy-template.yaml file contains the following:
heat_template_version: 2014-10-16
parameters:
servers:
type: json
resources:
ExtraConfig:
type: OS::Heat::SoftwareConfig
properties:
group: script
inputs:
- name: OSD_NUMA_INTERFACE
config: {get_file: numa-systemd-osd.sh}
ExtraDeployments:
type: OS::Heat::SoftwareDeployments
properties:
servers: {get_param: servers}
config: {get_resource: ExtraConfig}
input_values:
OSD_NUMA_INTERFACE: 'em2'
actions: ['CREATE']
The ~/custom-templates/numa-systemd-osd.sh file contains the following:
#!/usr/bin/env bash
{
if [[ `hostname` = *"ceph"* ]] || [[ `hostname` = *"osd-compute"* ]]; then
# Verify the passed network interface exists
if [[ ! $(ip add show $OSD_NUMA_INTERFACE) ]]; then
exit 1
fi
# If NUMA related packages are missing, then install them
# If packages are baked into image, no install attempted
for PKG in numactl hwloc; do
if [[ ! $(rpm -q $PKG) ]]; then
yum install -y $PKG
if [[ ! $? ]]; then
echo "Unable to install $PKG with yum"
exit 1
fi
fi
done
# Find the NUMA socket of the $OSD_NUMA_INTERFACE
declare -A NUMASOCKET
while read TYPE SOCKET_NUM NIC ; do
if [[ "$TYPE" == "NUMANode" ]]; then
NUMASOCKET=$(echo $SOCKET_NUM | sed s/L//g);
fi
if [[ "$NIC" == "$OSD_NUMA_INTERFACE" ]]; then
# because $NIC is the $OSD_NUMA_INTERFACE,
# the NUMASOCKET has been set correctly above
break # so stop looking
fi
done < <(lstopo-no-graphics | tr -d [:punct:] | egrep "NUMANode|$OSD_NUMA_INTERFACE")
if [[ -z $NUMASOCKET ]]; then
echo "No NUMAnode found for $OSD_NUMA_INTERFACE. Exiting."
exit 1
fi
UNIT='/usr/lib/systemd/system/ceph-osd@.service'
# Preserve the original ceph-osd start command
CMD=$(crudini --get $UNIT Service ExecStart)
if [[ $(echo $CMD | grep numactl) ]]; then
echo "numactl already in $UNIT. No changes required."
exit 0
fi
# NUMA control options to append in front of $CMD
NUMA="/usr/bin/numactl -N $NUMASOCKET --preferred=$NUMASOCKET"
# Update the unit file to start with numactl
# TODO: why doesn't a copy of $UNIT in /etc/systemd/system work with numactl?
crudini --verbose --set $UNIT Service ExecStart "$NUMA $CMD"
# Reload so updated file is used
systemctl daemon-reload
# Restart OSDs with NUMA policy (print results for log)
OSD_IDS=$(ls /var/lib/ceph/osd | awk 'BEGIN { FS = "-" } ; { print $2 }')
for OSD_ID in $OSD_IDS; do
echo -e "\nStatus of OSD $OSD_ID before unit file update\n"
systemctl status ceph-osd@$OSD_ID
echo -e "\nRestarting OSD $OSD_ID..."
systemctl restart ceph-osd@$OSD_ID
echo -e "\nStatus of OSD $OSD_ID after unit file update\n"
systemctl status ceph-osd@$OSD_ID
done
fi
} > /root/post_deploy_heat_output.txt 2>&1
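The socket-detection loop in the script above can be exercised offline by feeding it canned lstopo-style lines instead of running lstopo-no-graphics. The topology below is invented for illustration (two NUMA nodes, with em2 owned by socket 1), and the line format is simplified to the fields the loop actually reads:

```shell
#!/usr/bin/env bash
# Offline sketch of the NUMA socket parsing in numa-systemd-osd.sh.
# The canned lines mimic filtered "lstopo-no-graphics" output after
# punctuation is stripped; only the fields read by the loop matter.
OSD_NUMA_INTERFACE=em2
NUMASOCKET=""
while read -r TYPE SOCKET_NUM NIC; do
    if [[ "$TYPE" == "NUMANode" ]]; then
        # Strip the "L" prefix to get the numeric socket index
        NUMASOCKET=$(echo "$SOCKET_NUM" | sed 's/L//g')
    fi
    if [[ "$NIC" == "$OSD_NUMA_INTERFACE" ]]; then
        break   # NUMASOCKET now holds the socket that owns em2
    fi
done < <(printf '%s\n' \
    'NUMANode L0 P0' \
    'NUMANode L1 P1' \
    'Net card1 em2')

echo "OSDs would be started with: numactl -N $NUMASOCKET --preferred=$NUMASOCKET"
```

Because the interface line appears after the second NUMANode line, the loop stops with NUMASOCKET=1, matching the socket that would be passed to numactl.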
