
8.2. Install a Compute Node

8.2.1. Install the Compute Service Packages

The OpenStack Compute service requires the following packages:
openstack-nova-api
Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
openstack-nova-compute
Provides the OpenStack Compute service.
openstack-nova-conductor
Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
openstack-nova-scheduler
Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
python-cinderclient
Provides client utilities for accessing storage managed by the Block Storage service. This package is not required if you do not intend to attach Block Storage volumes to your instances, or if you intend to manage such volumes using a service other than the Block Storage service.
Install the packages:
# yum install -y openstack-nova-api openstack-nova-compute \
   openstack-nova-conductor openstack-nova-scheduler \
   python-cinderclient

Note

In the example above, all Compute service packages are installed on a single node. In a production deployment, Red Hat recommends that the API, conductor, and scheduler services be installed on a separate controller node or on separate nodes entirely. The Compute service itself must be installed on each node that is expected to host virtual machine instances.

8.2.2. Create the Compute Service Database

Create the database and database user used by the Compute service. All steps in this procedure must be performed on the database server, while logged in as the root user.

Procedure 8.3. Creating the Compute Service Database

  1. Connect to the database service:
    # mysql -u root -p
  2. Create the nova database:
    mysql> CREATE DATABASE nova;
  3. Create a nova database user and grant the user access to the nova database:
    mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
    mysql> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
    Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
  4. Flush the database privileges to ensure that they take effect immediately:
    mysql> FLUSH PRIVILEGES;
  5. Exit the mysql client:
    mysql> quit
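The statements in the procedure above can also be prepared non-interactively. The following sketch writes them to a file (keeping the PASSWORD placeholder, which must be replaced before use) so they can be reviewed and then piped to the mysql client; nothing here connects to a database:

```shell
# Write the statements from Procedure 8.3 to a file for review. The
# PASSWORD placeholder must be replaced before the file is applied.
cat > /tmp/nova_db.sql <<'EOF'
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
FLUSH PRIVILEGES;
EOF
# Apply later with: mysql -u root -p < /tmp/nova_db.sql
wc -l < /tmp/nova_db.sql
```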

8.2.3. Configure the Compute Service Database Connection

The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.
The database connection string only needs to be set on nodes that are hosting the conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure; the conductor orchestrates communication with the database. As a result, individual Compute nodes do not require direct access to the database. There must be at least one instance of the conductor service in any Compute environment.
All steps in this procedure must be performed on the server or servers hosting the Compute conductor service, while logged in as the root user.

Procedure 8.4. Configuring the Compute Service SQL Database Connection

  • Set the value of the sql_connection configuration key:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT sql_connection mysql://USER:PASS@IP/DB
    Replace the following values:
    • Replace USER with the Compute service database user name, usually nova.
    • Replace PASS with the password of the database user.
    • Replace IP with the IP address or host name of the database server.
    • Replace DB with the name of the Compute service database, usually nova.

Important

The IP address or host name specified in the connection configuration key must match the IP address or host name to which the Compute service database user was granted access when creating the Compute service database. Moreover, if the database is hosted locally and you granted permissions to 'localhost' when creating the Compute service database, you must enter 'localhost'.
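As a sketch of how the completed connection string looks, the following assembles one from hypothetical values (192.0.2.10 is a documentation-range address; substitute your own user, password, and host):

```shell
# Hypothetical values for illustration only; the result is the value
# passed to the sql_connection configuration key.
DBUSER=nova; DBPASS=PASSWORD; DBHOST=192.0.2.10; DBNAME=nova
echo "mysql://${DBUSER}:${DBPASS}@${DBHOST}/${DBNAME}"
# → mysql://nova:PASSWORD@192.0.2.10/nova
```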

8.2.4. Create the Compute Service Identity Records

Create and configure Identity service records required by the Compute service. These entries assist other OpenStack services attempting to locate and access the functionality provided by the Compute service.
This procedure assumes that you have already created an administrative user account and a services tenant.
Perform this procedure on the Identity service server, or on any machine onto which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.

Procedure 8.5. Creating Identity Records for the Compute Service

  1. Set up the shell to access keystone as the administrative user:
    # source ~/keystonerc_admin
  2. Create the compute user:
    [(keystone_admin)]# keystone user-create --name compute --pass PASSWORD
    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |  email   |                                  |
    | enabled  |               True               |
    |    id    | 96cd855e5bfe471ce4066794bbafb615 |
    |   name   |             compute              |
    | username |             compute              |
    +----------+----------------------------------+
    
    Replace PASSWORD with a secure password that will be used by the Compute service when authenticating with the Identity service.
  3. Link the compute user and the admin role together within the context of the services tenant:
    [(keystone_admin)]# keystone user-role-add --user compute --role admin --tenant services
  4. Create the compute service entry:
    [(keystone_admin)]# keystone service-create --name compute \
       --type compute \
       --description "OpenStack Compute Service"
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |     OpenStack Compute Service    |
    |   enabled   |               True               |
    |      id     | 8dea97f5ee254b309c1792d2bd821e59 |
    |     name    |              compute             |
    |     type    |              compute             |
    +-------------+----------------------------------+
  5. Create the compute endpoint entry:
    [(keystone_admin)]# keystone endpoint-create \
       --service compute \
       --publicurl "http://IP:8774/v2/%(tenant_id)s" \
       --adminurl "http://IP:8774/v2/%(tenant_id)s" \
       --internalurl "http://IP:8774/v2/%(tenant_id)s" \
       --region 'RegionOne'
    Replace IP with the IP address or host name of the system hosting the Compute API service.
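The %(tenant_id)s token in each endpoint URL is substituted with the requesting tenant's ID at runtime. As an illustration of the resulting URL, using hypothetical values (a documentation-range IP address and a made-up tenant ID):

```shell
# Hypothetical values for illustration only.
IP=192.0.2.10
TENANT_ID=0c1a2b3c4d5e6f708192a3b4c5d6e7f8
printf 'http://%s:8774/v2/%s\n' "$IP" "$TENANT_ID"
# → http://192.0.2.10:8774/v2/0c1a2b3c4d5e6f708192a3b4c5d6e7f8
```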

8.2.5. Configure Compute Service Authentication

Configure the Compute service to use the Identity service for authentication. All steps in this procedure must be performed on each system hosting Compute services, while logged in as the root user.

Procedure 8.6. Configuring the Compute Service to Authenticate Through the Identity Service

  1. Set the authentication strategy to keystone:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT auth_strategy keystone
  2. Set the Identity service host that the Compute service must use:
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken auth_host IP
    Replace IP with the IP address or host name of the server hosting the Identity service.
  3. Set the Compute service to authenticate as the correct tenant:
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_tenant_name services
    Replace services with the name of the tenant created for the use of the Compute service. Examples in this guide use services.
  4. Set the Compute service to authenticate using the compute administrative user account:
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_user compute
  5. Set the Compute service to use the correct compute administrative user account password:
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_password PASSWORD
    Replace PASSWORD with the password set when the compute user was created.
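Assuming all of the preceding commands succeeded, the [filter:authtoken] section of /etc/nova/api-paste.ini should contain entries along these lines (IP and PASSWORD shown here are the placeholders used above, not literal values):

```ini
[filter:authtoken]
auth_host = IP
admin_tenant_name = services
admin_user = compute
admin_password = PASSWORD
```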

8.2.6. Configure the Firewall to Allow Compute Service Traffic

Connections to virtual machine consoles, whether direct or through the proxy, are received on ports 5900 to 5999. Connections to the Compute API service are received on port 8774. The firewall on the service node must be configured to allow network traffic on these ports. All steps in this procedure must be performed on each Compute node, while logged in as the root user.

Procedure 8.7. Configuring the Firewall to Allow Compute Service Traffic

  1. Open the /etc/sysconfig/iptables file in a text editor.
  2. Add an INPUT rule allowing TCP traffic on ports in the range 5900 to 5999. The new rule must appear before any INPUT rules that REJECT traffic:
    -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
  3. Add an INPUT rule allowing TCP traffic on port 8774. The new rule must appear before any INPUT rules that REJECT traffic:
    -A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
  4. Save the changes to the /etc/sysconfig/iptables file.
  5. Restart the iptables service to ensure that the change takes effect:
    # systemctl restart iptables.service
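Because iptables rules are evaluated in order, both ACCEPT rules must precede the first REJECT rule. The following sketch illustrates the required ordering against a minimal, hypothetical copy of the rules rather than the live file; on a real node, inspect /etc/sysconfig/iptables instead:

```shell
# A minimal, hypothetical rules file used only to illustrate ordering.
cat > /tmp/iptables.test <<'EOF'
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF
# The REJECT rule must appear after both ACCEPT lines:
grep -n 'ACCEPT\|REJECT' /tmp/iptables.test
```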

8.2.7. Configure the Compute Service to Use SSL

Use the following options in the nova.conf file to configure SSL.

Table 8.1. SSL Options for Compute

enabled_ssl_apis
A list of APIs for which SSL is enabled.
ssl_ca_file
The CA certificate file to use to verify connecting clients.
ssl_cert_file
The SSL certificate of the API server.
ssl_key_file
The SSL private key of the API server.
tcp_keepidle
Sets the value of TCP_KEEPIDLE in seconds for each server socket. Defaults to 600.
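For example, enabling SSL on the Compute API might produce a nova.conf fragment like the following. The certificate paths are hypothetical, and osapi_compute is the name commonly used for the Compute API; confirm the valid API names for your release:

```ini
[DEFAULT]
enabled_ssl_apis = osapi_compute
ssl_cert_file = /etc/pki/tls/certs/nova.crt
ssl_key_file = /etc/pki/tls/private/nova.key
```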

8.2.8. Configure RabbitMQ Message Broker Settings for the Compute Service

RabbitMQ is the default (and recommended) message broker. The RabbitMQ messaging service is provided by the rabbitmq-server package. All steps in the following procedure must be performed on systems hosting the Compute controller service and Compute nodes, while logged in as the root user.

Procedure 8.8. Configuring the Compute Service to use the RabbitMQ Message Broker

  1. Set RabbitMQ as the RPC back end:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rpc_backend rabbit
  2. Set the Compute service to connect to the RabbitMQ host:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_host RABBITMQ_HOST
    Replace RABBITMQ_HOST with the IP address or host name of the message broker.
  3. Set the message broker port to 5672:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_port 5672
  4. Set the RabbitMQ user name and password created for the Compute service when RabbitMQ was configured:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_userid nova
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_password NOVA_PASS
    Replace nova and NOVA_PASS with the RabbitMQ user name and password created for the Compute service.
  5. When RabbitMQ was configured, the nova user was granted read and write permissions to all resources under the default virtual host, /. Configure the Compute service to connect to this virtual host:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_virtual_host /
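Taken together, the commands in this procedure leave the [DEFAULT] section of /etc/nova/nova.conf with entries along these lines (RABBITMQ_HOST and NOVA_PASS are the placeholders used above):

```ini
[DEFAULT]
rpc_backend = rabbit
rabbit_host = RABBITMQ_HOST
rabbit_port = 5672
rabbit_userid = nova
rabbit_password = NOVA_PASS
rabbit_virtual_host = /
```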

8.2.9. Enable SSL Communication Between the Compute Service and the Message Broker

If you enabled SSL on the message broker, you must configure the Compute service accordingly. This procedure requires the exported client certificates and key file. See Section 2.3.5, “Export an SSL Certificate for Clients” for instructions on how to export these files.

Procedure 8.9. Enabling SSL Communication Between the Compute Service and the RabbitMQ Message Broker

  1. Enable SSL communication with the message broker:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT rabbit_use_ssl True
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT kombu_ssl_certfile /path/to/client.crt
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
    Replace the following values:
    • Replace /path/to/client.crt with the absolute path to the exported client certificate.
    • Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
  2. If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
    Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, “Enable SSL on the RabbitMQ Message Broker” for more information).
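After both steps, the SSL-related keys in /etc/nova/nova.conf should read along these lines. The paths are the placeholders used above; omit kombu_ssl_ca_certs if your certificates were not signed by a third-party CA:

```ini
[DEFAULT]
rabbit_use_ssl = True
kombu_ssl_certfile = /path/to/client.crt
kombu_ssl_keyfile = /path/to/clientkeyfile.key
kombu_ssl_ca_certs = /path/to/ca.crt
```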

8.2.10. Configure Resource Overcommitment

OpenStack supports overcommitting of CPU and memory resources on Compute nodes. Overcommitting is a technique of allocating more virtual CPUs and memory to instances than the physical resources available on a node.

Important

Overcommitting increases the number of instances you are able to run, but reduces instance performance.
CPU and memory overcommit settings are represented as a ratio. OpenStack uses the following ratios by default:
  • The default CPU overcommit ratio is 16. This means that up to 16 virtual cores can be assigned to a node for each physical core.
  • The default memory overcommit ratio is 1.5. This means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
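As a worked example of the default ratios, consider a hypothetical Compute node with 8 physical cores and 64 GB of RAM:

```shell
# Hypothetical node: 8 physical cores, 64 GB RAM, default ratios.
CORES=8; CPU_RATIO=16
echo $((CORES * CPU_RATIO))   # schedulable vCPUs → 128
MEM_GB=64
echo $((MEM_GB * 3 / 2))      # schedulable instance memory in GB at ratio 1.5 → 96
```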

8.2.11. Reserve Host Resources

You can reserve host memory and disk resources so that they are always available to OpenStack. To prevent a given amount of memory and disk resources from being considered as available to be allocated for usage by virtual machines, edit the following directives in /etc/nova/nova.conf:
  • reserved_host_memory_mb. Defaults to 512MB.
  • reserved_host_disk_mb. Defaults to 0MB.
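For example, reserving 2 GB of memory and 10 GB of disk for the host would result in a nova.conf fragment like the following. The values are illustrative; size the reservations to your host's needs:

```ini
[DEFAULT]
reserved_host_memory_mb = 2048
reserved_host_disk_mb = 10240
```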

8.2.12. Configure Compute Networking

8.2.12.1. Compute Networking Overview

When OpenStack Networking is in use, the nova-network service must not run; all network-related decisions are instead delegated to the OpenStack Networking service.
It is therefore important to refer to this guide when configuring networking, rather than relying on Nova networking documentation or on past experience with Nova networking. In particular, using CLI tools such as nova-manage and nova to manage networks or IP addressing, including both fixed and floating IP addresses, is not supported with OpenStack Networking.

Important

It is strongly recommended that you uninstall nova-network and reboot any physical nodes that were running it before using those nodes to run the OpenStack Networking service. Inadvertently running the nova-network process while the OpenStack Networking service is in use can cause problems; for example, a previously running nova-network process could push down stale firewall rules.

8.2.12.2. Update the Compute Configuration

Each time a Compute instance is provisioned or deprovisioned, the service communicates with OpenStack Networking through its API. To facilitate this connection, you must configure each Compute node with the connection and authentication details outlined in this procedure.
All steps in the following procedure must be performed on each Compute node, while logged in as the root user.

Procedure 8.10. Updating the Connection and Authentication Settings of Compute Nodes

  1. Modify the network_api_class configuration key to indicate that OpenStack Networking is in use:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT network_api_class nova.network.neutronv2.api.API
  2. Set the Compute service to use the endpoint of the OpenStack Networking API:
    # openstack-config --set /etc/nova/nova.conf \
       neutron url http://IP:9696/
    Replace IP with the IP address or host name of the server hosting the OpenStack Networking API service.
  3. Set the name of the tenant used by the OpenStack Networking service. Examples in this guide use services:
    # openstack-config --set /etc/nova/nova.conf \
       neutron admin_tenant_name services
  4. Set the name of the OpenStack Networking administrative user:
    # openstack-config --set /etc/nova/nova.conf \
       neutron admin_username neutron
  5. Set the password associated with the OpenStack Networking administrative user:
    # openstack-config --set /etc/nova/nova.conf \
       neutron admin_password PASSWORD
  6. Set the URL associated with the Identity service endpoint:
    # openstack-config --set /etc/nova/nova.conf \
       neutron admin_auth_url http://IP:35357/v2.0
    Replace IP with the IP address or host name of the server hosting the Identity service.
  7. Enable the metadata proxy and configure the metadata proxy secret:
    # openstack-config --set /etc/nova/nova.conf \
       neutron service_metadata_proxy true
    # openstack-config --set /etc/nova/nova.conf \
       neutron metadata_proxy_shared_secret METADATA_SECRET
    Replace METADATA_SECRET with the string that the metadata proxy will use to secure communication.
  8. Enable the use of OpenStack Networking security groups:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT security_group_api neutron
  9. Set the firewall driver to nova.virt.firewall.NoopFirewallDriver:
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    This must be done when OpenStack Networking security groups are in use.
  10. Open the /etc/sysctl.conf file in a text editor, and add or edit the following kernel networking parameters:
    net.ipv4.ip_forward = 1
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
  11. Load the updated kernel parameters:
    # sysctl -p

8.2.12.3. Configure the L2 Agent

Each compute node must run an instance of the Layer 2 (L2) agent appropriate to the networking plug-in that is in use.

8.2.12.4. Configure Virtual Interface Plugging

When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack Networking-controlled virtual switch. Compute must also inform the virtual switch of the OpenStack Networking port identifier associated with each vNIC.
A generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided in Red Hat OpenStack Platform. This driver relies on OpenStack Networking being able to return the type of virtual interface binding required. The following plug-ins support this operation:
  • Linux Bridge
  • Open vSwitch
  • NEC
  • BigSwitch
  • CloudBase Hyper-V
  • Brocade
To use the generic driver, execute the openstack-config command to set the value of the vif_driver configuration key appropriately:
# openstack-config --set /etc/nova/nova.conf \
   libvirt vif_driver \
   nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Important

Considerations for Open vSwitch and Linux Bridge deployments:
  • If running Open vSwitch with security groups enabled, use the Open vSwitch specific driver, nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver, instead of the generic driver.
  • For Linux Bridge environments, you must add the following to the /etc/libvirt/qemu.conf file to ensure that the virtual machine launches properly:
    user = "root"
    group = "root"
    cgroup_device_acl = [
       "/dev/null", "/dev/full", "/dev/zero",
       "/dev/random", "/dev/urandom",
       "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
       "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    ]

8.2.13. Populate the Compute Service Database

Populate the Compute service database after you have successfully configured the Compute service database connection string.

Important

This procedure must be followed only once to initialize and populate the database. You do not need to perform these steps again when adding additional systems hosting Compute services.

Procedure 8.11. Populating the Compute Service Database

  1. Log in to a system hosting an instance of the openstack-nova-conductor service.
  2. Switch to the nova user:
    # su nova -s /bin/sh
  3. Initialize and populate the database identified in /etc/nova/nova.conf:
    $ nova-manage db sync

8.2.14. Launch the Compute Services

Procedure 8.12. Launching Compute Services

  1. Libvirt requires that the messagebus service be enabled and running. Start the service:
    # systemctl start messagebus.service
  2. The Compute service requires that the libvirtd service be enabled and running. Start the service and configure it to start at boot time:
    # systemctl start libvirtd.service
    # systemctl enable libvirtd.service
  3. Start the API service on each system that is hosting an instance of it. Note that each API instance should either have its own endpoint defined in the Identity service database or be pointed to by a load balancer that is acting as the endpoint. Start the service and configure it to start at boot time:
    # systemctl start openstack-nova-api.service
    # systemctl enable openstack-nova-api.service
  4. Start the scheduler on each system that is hosting an instance of it. Start the service and configure it to start at boot time:
    # systemctl start openstack-nova-scheduler.service
    # systemctl enable openstack-nova-scheduler.service
  5. Start the conductor on each system that is hosting an instance of it. Note that it is recommended that this service not be run on every Compute node, as doing so eliminates the security benefit of restricting direct database access from the Compute nodes. Start the service and configure it to start at boot time:
    # systemctl start openstack-nova-conductor.service
    # systemctl enable openstack-nova-conductor.service
  6. Start the Compute service on every system that is intended to host virtual machine instances. Start the service and configure it to start at boot time:
    # systemctl start openstack-nova-compute.service
    # systemctl enable openstack-nova-compute.service
  7. Depending on your environment configuration, you may also need to start the following services:
    openstack-nova-cert
    The X509 certificate service, required if you intend to use the EC2 API with the Compute service.

    Note

    To use the EC2 API with the Compute service, you must set the relevant options in the nova.conf configuration file. For more information, see the Configuring the EC2 API section in the Red Hat OpenStack Platform Configuration Reference Guide.
    openstack-nova-network
    The Nova networking service. Note that you must not start this service if you have installed and configured, or intend to install and configure, OpenStack Networking.
    openstack-nova-objectstore
    The Nova object storage service. For new deployments, it is recommended that the Object Storage service (Swift) be used instead.