8.4. Install a Compute Node

8.4.1. Create the Compute Service Database

The following procedure creates the database and database user used by the Compute service. These steps must be performed while logged in to the database server as the root user.

Procedure 8.5. Creating the Compute Service database

  1. Connect to the database service using the mysql command.
    # mysql -u root -p
  2. Create the nova database.
    mysql> CREATE DATABASE nova;
  3. Create a nova database user and grant it access to the nova database.
    mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
    mysql> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
    Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
  4. Flush the database privileges to ensure that they take effect immediately.
    mysql> FLUSH PRIVILEGES;
  5. Exit the mysql client.
    mysql> quit
The Compute database has been created. The database will be populated during service configuration.
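If you prefer to script this step rather than use the interactive client, the same statements can be run non-interactively. The following is a minimal sketch, assuming PASSWORD is replaced as described above and the database root password is entered at the prompt:
# mysql -u root -p <<EOF
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
FLUSH PRIVILEGES;
EOF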

8.4.2. Configure Compute Service Authentication

This section outlines the steps for creating and configuring Identity service records required by the Compute service.
  1. Create the compute user, who has the admin role in the services tenant.
  2. Create the compute service entry and assign it an endpoint.
These entries will assist other OpenStack services attempting to locate and access the functionality provided by the Compute service. In order to proceed, you should have already performed the following (through the Identity service):
  1. Created an Administrator role named admin (refer to Section 3.8, “Create an Administrator Account” for instructions)
  2. Created the services tenant (refer to Section 3.10, “Create the Services Tenant” for instructions)

Note

The Deploying OpenStack: Learning Environments guide uses one tenant for all service users. For more information, refer to Section 3.10, “Create the Services Tenant”.
You can perform the following procedure from your Identity service host or on any machine where you've copied the keystonerc_admin file (which contains administrator credentials) and the keystone command-line utility is installed.

Procedure 8.6. Configuring the Compute Service to authenticate through the Identity Service

  1. Authenticate as the administrator of the Identity service by running the source command on the keystonerc_admin file containing the required credentials:
    # source ~/keystonerc_admin
  2. Create a user named compute for the OpenStack Compute service to use:
    # keystone user-create --name compute --pass PASSWORD
    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |  email   |                                  |
    | enabled  |               True               |
    |    id    | 96cd855e5bfe471ce4066794bbafb615 |
    |   name   |              compute             |
    | tenantId |                                  |
    +----------+----------------------------------+
    
    Replace PASSWORD with a secure password that will be used by the Compute service when authenticating against the Identity service.
  3. Use the keystone user-role-add command to link the compute user, admin role, and services tenant together:
    # keystone user-role-add --user compute --role admin --tenant services
  4. Create the compute service entry:
    # keystone service-create --name compute \
            --type compute \
            --description "OpenStack Compute Service"
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |     OpenStack Compute Service    |
    |      id     | 8dea97f5ee254b309c1792d2bd821e59 |
    |     name    |              compute             |
    |     type    |              compute             |
    +-------------+----------------------------------+
  5. Create the compute endpoint entry:
    # keystone endpoint-create \
              --service compute \
              --publicurl "http://IP:8774/v2/\$(tenant_id)s" \
              --adminurl "http://IP:8774/v2/\$(tenant_id)s" \
              --internalurl "http://IP:8774/v2/\$(tenant_id)s"
    Replace IP with the IP address or host name of the system that will host the Compute API service (openstack-nova-api).
All supporting Identity service entries required by the OpenStack Compute service have been created.
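To confirm that the records exist, you can list them with the keystone client while the administrator credentials are still sourced. A minimal check might look like this; the id values will differ in your environment:
# keystone service-list | grep compute
# keystone endpoint-list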

8.4.3. Install the Compute Service Packages

The OpenStack Compute services are provided by the following packages:
openstack-nova-api
Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
openstack-nova-compute
Provides the OpenStack Compute service.
openstack-nova-conductor
Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
openstack-nova-scheduler
Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
python-cinderclient
Provides client utilities for accessing storage managed by the OpenStack Block Storage service. This package is not required if you do not intend to attach block storage volumes to your instances or you intend to manage such volumes using a service other than the OpenStack Block Storage service.
To install the above packages, execute the following command while logged in as the root user:
# yum install -y openstack-nova-api openstack-nova-compute \
   openstack-nova-conductor openstack-nova-scheduler \
   python-cinderclient

Note

In the command presented here, all Compute service packages are installed on a single node. In a production deployment, it is recommended that the API, conductor, and scheduler services be installed on a separate controller node or on separate nodes entirely. The Compute service itself must be installed on each node that is expected to host virtual machine instances.
The Compute service packages are now installed.

8.4.4. Configure the Compute Service to Use SSL

Use the following options in the nova.conf file to configure SSL.

Table 8.3. SSL options for Compute

Configuration Option    Description
enabled_ssl_apis        A list of APIs for which SSL is enabled.
ssl_ca_file             CA certificate file used to verify connecting clients.
ssl_cert_file           SSL certificate of the API server.
ssl_key_file            SSL private key of the API server.
tcp_keepidle            Sets the value of TCP_KEEPIDLE in seconds for each server socket. Defaults to 600.
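For example, the following commands show one way to set these options with openstack-config. This is a sketch only: the certificate and key paths are placeholders that must match the files deployed on your API node, and it assumes the options are read from the DEFAULT section of nova.conf:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT enabled_ssl_apis osapi_compute
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT ssl_cert_file /etc/pki/tls/certs/nova.crt
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT ssl_key_file /etc/pki/tls/private/nova.key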

8.4.5. Configure the Compute Service

8.4.5.1. Configure Compute Service Authentication

The Compute service must be explicitly configured to use the Identity service for authentication. Follow the steps listed in this procedure to configure this.
All steps listed in this procedure must be performed on each system hosting Compute services while logged in as the root user.

Procedure 8.7. Configuring the Compute Service to authenticate through the Identity Service

  1. Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT auth_strategy keystone
  2. Set the authentication host (auth_host) configuration key to the IP address or host name of the Identity server.
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken auth_host IP
    Replace IP with the IP address or host name of the Identity server.
  3. Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Compute service. In this guide, examples use services.
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_tenant_name services
  4. Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the Compute service. In this guide, examples use compute.
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_user compute
  5. Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
    # openstack-config --set /etc/nova/api-paste.ini \
       filter:authtoken admin_password PASSWORD
The authentication keys used by the Compute services have been set and will be used when the services are started.
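If you want to confirm the values after setting them, openstack-config can also read a key back with the --get action. For example:
# openstack-config --get /etc/nova/nova.conf DEFAULT auth_strategy
keystone
# openstack-config --get /etc/nova/api-paste.ini filter:authtoken admin_user
compute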

8.4.5.2. Configure the Compute Service Database Connection

The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.
The database connection string only needs to be set on nodes that will host the conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure; the conductor, in turn, orchestrates communication with the database. As a result, individual Compute nodes do not require direct access to the database. This procedure only needs to be followed on nodes that will host the conductor service. There must be at least one instance of the conductor service in any Compute environment.
All commands in this procedure must be run while logged in as the root user on the server hosting the Compute service.

Procedure 8.8. Configuring the Compute Service SQL database connection

  • Use the openstack-config command to set the value of the sql_connection configuration key.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT sql_connection mysql://USER:PASS@IP/DB
    Replace:
    • USER with the database user name the Compute service is to use, usually nova.
    • PASS with the password of the chosen database user.
    • IP with the IP address or host name of the database server.
    • DB with the name of the database that has been created for use by the Compute service, usually nova.

Important

The IP address or host name specified in the connection configuration key must match the IP address or host name to which the nova database user was granted access when creating the nova database. Moreover, if the database is hosted locally and you granted permissions to 'localhost' when creating the nova database, you must enter 'localhost'.
The database connection string has been set and will be used by the Compute service.
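For example, if the database server were reachable at 192.0.2.10 (an illustrative address) and the default nova user and database names were in use, the resulting command would resemble:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT sql_connection mysql://nova:PASSWORD@192.0.2.10/nova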

8.4.5.3. Configure RabbitMQ Message Broker Settings for the Compute Service

As of Red Hat Enterprise Linux OpenStack Platform 5, RabbitMQ replaces Qpid as the default (and recommended) message broker. The RabbitMQ messaging service is provided by the rabbitmq-server package.
This section assumes that you have already configured a RabbitMQ message broker. For more information, refer to Section 2.4.2, “Install and Configure the RabbitMQ Message Broker”.

Procedure 8.9. Configuring the Compute service to use the RabbitMQ message broker

  1. Log in as root to the Compute controller node.
  2. In /etc/nova/nova.conf of that system, set RabbitMQ as the RPC back end.
    # openstack-config --set /etc/nova/nova.conf \
     DEFAULT rpc_backend rabbit
  3. Set the Compute service to connect to the RabbitMQ host:
    # openstack-config --set /etc/nova/nova.conf \
     DEFAULT rabbit_host RABBITMQ_HOST
    Replace RABBITMQ_HOST with the IP address or host name of the message broker.
  4. Set the message broker port to 5672:
    # openstack-config --set /etc/nova/nova.conf \
     DEFAULT rabbit_port 5672
  5. Set the RabbitMQ username and password created for the Compute service:
    # openstack-config --set /etc/nova/nova.conf \
     DEFAULT rabbit_userid nova
    # openstack-config --set /etc/nova/nova.conf \
     DEFAULT rabbit_password NOVA_PASS
    Where nova and NOVA_PASS are the RabbitMQ username and password created for Compute (in Section 2.4.2, “Install and Configure the RabbitMQ Message Broker”).
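If you want to confirm that the nova user exists on the broker, you can list the RabbitMQ users on the message broker host. This is a sketch, assuming the rabbitmq-server package is installed there:
# rabbitmqctl list_users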

8.4.5.4. Configure Resource Overcommitment

OpenStack supports overcommitting of CPU and memory resources on compute nodes. Overcommitting is a technique of allocating more virtualized CPUs and/or memory than there are physical resources.

Important

Overcommitting increases the amount of instances you are able to run, but reduces instance performance.
CPU and memory overcommit settings are represented as a ratio. OpenStack uses the following ratios by default:
  • Default CPU overcommit ratio - 16
  • Default memory overcommit ratio - 1.5
These default settings have the following implications:
  • The default CPU overcommit ratio of 16 means that up to 16 virtual cores can be assigned to a node for each physical core.
  • The default memory overcommit ratio of 1.5 means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
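For example, to halve the default CPU overcommit ratio and disable memory overcommitting entirely, you might run the following commands; the values shown are illustrative only:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT cpu_allocation_ratio 8.0
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT ram_allocation_ratio 1.0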

8.4.5.5. Reserve Host Resources

You can reserve host memory and disk resources so that they remain available to the host itself. To prevent a given amount of memory and disk from being considered available for allocation to virtual machines, edit the following directives in /etc/nova/nova.conf (an example follows the list):
  • reserved_host_memory_mb - Defaults to 512MB.
  • reserved_host_disk_mb - Defaults to 0MB.
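For example, to reserve 1024 MB of memory and 10240 MB of disk for the host itself (illustrative values only):
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT reserved_host_memory_mb 1024
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT reserved_host_disk_mb 10240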

8.4.5.6. Configure Compute Networking

8.4.5.6.1. Compute Networking Overview
Unlike in Nova-only deployments, the nova-network service must not run when OpenStack Networking is in use. Instead, all network-related decisions are delegated to the OpenStack Networking service.
Therefore, it is very important that you refer to this guide when configuring networking, rather than relying on Nova networking documentation or past experience with Nova networking. In particular, using CLI tools like nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, is not supported with OpenStack Networking.

Important

It is strongly recommended that you uninstall nova-network and reboot any physical nodes that were running nova-network before using these nodes to run OpenStack Networking. Problems can arise from inadvertently running the nova-network process while using the OpenStack Networking service; for example, a previously running nova-network process could push down stale firewall rules.
8.4.5.6.2. Update the Compute Configuration
Each time a Compute instance is provisioned or deprovisioned, the service communicates with OpenStack Networking through its API. To facilitate this connection, it is necessary to configure each Compute node with the connection and authentication details outlined in this procedure.
These steps must be performed on each Compute node while logged in as the root user.

Procedure 8.10. Updating the connection and authentication settings of Compute nodes

  1. Modify the network_api_class configuration key to indicate that the OpenStack Networking service is in use.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT network_api_class nova.network.neutronv2.api.API
  2. Set the value of the neutron_url configuration key to point to the endpoint of the networking API.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT neutron_url http://IP:9696/
    Replace IP with the IP address or host name of the server hosting the API of the OpenStack Networking service.
  3. Set the value of the neutron_admin_tenant_name configuration key to the name of the tenant used by the OpenStack Networking service. Examples in this guide use services.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT neutron_admin_tenant_name services
  4. Set the value of the neutron_admin_username configuration key to the name of the administrative user for the OpenStack Networking service. Examples in this guide use neutron.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT neutron_admin_username neutron
  5. Set the value of the neutron_admin_password configuration key to the password associated with the administrative user for the networking service.
    # openstack-config --set /etc/nova/nova.conf \
       DEFAULT neutron_admin_password PASSWORD
  6. Set the value of the neutron_admin_auth_url configuration key to the URL associated with the Identity service endpoint.
    # openstack-config --set /etc/nova/nova.conf \
      DEFAULT neutron_admin_auth_url http://IP:35357/v2.0
    Replace IP with the IP address or host name of the Identity service endpoint.
  7. Set the value of the security_group_api configuration key to neutron.
    # openstack-config --set /etc/nova/nova.conf \
      DEFAULT security_group_api neutron
    This enables the use of OpenStack Networking security groups.
  8. Set the value of the firewall_driver configuration key to nova.virt.firewall.NoopFirewallDriver.
    # openstack-config --set /etc/nova/nova.conf \
      DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    This must be done when OpenStack Networking security groups are in use.
The configuration has been updated and the Compute service will use OpenStack Networking when it is next started.
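A quick way to review the networking-related keys set by this procedure, before restarting the service, is to search for them in the configuration file. For example:
# grep -E '^(network_api_class|neutron_|security_group_api|firewall_driver)' /etc/nova/nova.conf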
8.4.5.6.3. Configure the L2 Agent
Each compute node must run an instance of the Layer 2 (L2) agent appropriate to the networking plug-in that is in use.
8.4.5.6.4. Configure Virtual Interface Plugging
When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack Networking-controlled virtual switch. Compute must also inform the virtual switch of the OpenStack Networking port identifier associated with each vNIC.
A generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided in Red Hat Enterprise Linux OpenStack Platform. This driver relies on OpenStack Networking being able to return the type of virtual interface binding required. The following plug-ins support this operation:
  • Linux Bridge
  • Open vSwitch
  • NEC
  • BigSwitch
  • CloudBase Hyper-V
  • Brocade
To use the generic driver, execute the openstack-config command to set the value of the vif_driver configuration key appropriately:
# openstack-config --set /etc/nova/nova.conf \
   libvirt vif_driver \
   nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Important

If using:
  • Open vSwitch with security groups enabled, use the Open vSwitch specific driver, nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver, instead of the generic driver.
  • Linux Bridge, you must add the following to the /etc/libvirt/qemu.conf file to ensure that the virtual machine launches properly:
    user = "root"
    group = "root"
    cgroup_device_acl = [
       "/dev/null", "/dev/full", "/dev/zero",
       "/dev/random", "/dev/urandom",
       "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
       "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    ]

8.4.5.7. Configure the Firewall to Allow Compute Service Traffic

Connections to virtual machine consoles, whether direct or through the proxy, are received on ports 5900 to 5999.
To allow this, the firewall on the service node must be configured to allow network traffic on these ports. Log in as the root user to the server hosting the Compute service and perform the following procedure:

Procedure 8.11. Configuring the firewall to allow Compute Service traffic (for Red Hat Enterprise Linux 6-based systems)

  1. Open the /etc/sysconfig/iptables file in a text editor.
  2. Add an INPUT rule allowing TCP traffic on ports in the range 5900 to 5999 by adding the following line to the file.
    -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
    The new rule must appear before any INPUT rules that REJECT traffic.
  3. Save the changes to the /etc/sysconfig/iptables file.
  4. Restart the iptables service to ensure that the change takes effect.
    # service iptables restart

Procedure 8.12. Configuring the firewall to allow Compute Service traffic (for Red Hat Enterprise Linux 7-based systems)

  1. Add a rule allowing TCP traffic on ports in the range 5900 to 5999:
    # firewall-cmd --permanent --add-port=5900-5999/tcp
  2. For the change to take immediate effect, add the rule to the runtime mode:
    # firewall-cmd --add-port=5900-5999/tcp
The firewall is now configured to allow incoming connections to the Compute services. Repeat this process for each compute node.
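To verify that the rule is active, you can list the open ports. For example, on a Red Hat Enterprise Linux 7-based system:
# firewall-cmd --list-ports
On a Red Hat Enterprise Linux 6-based system, the equivalent check is:
# iptables -L INPUT -n | grep 5900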

8.4.6. Populate the Compute Service Database

You can populate the Compute Service database after you have successfully configured the Compute Service database connection string (refer to Section 8.4.5.2, “Configure the Compute Service Database Connection”).

Important

This procedure only needs to be followed once to initialize and populate the database. You do not need to perform these steps again when adding additional systems hosting Compute services.

Procedure 8.13. Populating the Compute Service database

  1. Log in to a system hosting an instance of the openstack-nova-conductor service.
  2. Use the su command to switch to the nova user.
    # su nova -s /bin/sh
  3. Run the nova-manage db sync command to initialize and populate the database identified in /etc/nova/nova.conf.
    $ nova-manage db sync
The Compute service database has been initialized and populated.
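If you want to confirm that the schema was created, you can list the tables in the nova database from the database server. This is a sketch, using the nova database credentials created earlier:
# mysql -u nova -p nova -e "SHOW TABLES;"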

8.4.7. Launch the Compute Services

Procedure 8.14. Launching Compute services

  1. Starting the Message Bus Service

    Libvirt requires that the messagebus service be enabled and running.
    1. Use the service command to start the messagebus service.
      # service messagebus start
    2. Use the chkconfig command to enable the messagebus service permanently.
      # chkconfig messagebus on
  2. Starting the Libvirtd Service

    The Compute service requires that the libvirtd service be enabled and running.
    1. Use the service command to start the libvirtd service.
      # service libvirtd start
    2. Use the chkconfig command to enable the libvirtd service permanently.
      # chkconfig libvirtd on
  3. Starting the API Service

    Start the API service on each system that will be hosting an instance of it. Note that each API instance should either have its own endpoint defined in the Identity service database or be pointed to by a load balancer that is acting as the endpoint.
    1. Use the service command to start the openstack-nova-api service.
      # service openstack-nova-api start
    2. Use the chkconfig command to enable the openstack-nova-api service permanently.
      # chkconfig openstack-nova-api on
  4. Starting the Scheduler

    Start the scheduler on each system that will be hosting an instance of it.
    1. Use the service command to start the openstack-nova-scheduler service.
      # service openstack-nova-scheduler start
    2. Use the chkconfig command to enable the openstack-nova-scheduler service permanently.
      # chkconfig openstack-nova-scheduler on
  5. Starting the Conductor

    The conductor is intended to minimize or eliminate the need for Compute nodes to access the database directly. Compute nodes instead communicate with the conductor through a message broker and the conductor handles database access.
    Start the conductor on each system that is intended to host an instance of it. Note that it is recommended that this service not be run on every Compute node, as doing so eliminates the security benefits of restricting direct database access from the Compute nodes.
    1. Use the service command to start the openstack-nova-conductor service.
      # service openstack-nova-conductor start
    2. Use the chkconfig command to enable the openstack-nova-conductor service permanently.
      # chkconfig openstack-nova-conductor on
  6. Starting the Compute Service

    Start the Compute service on every system that is intended to host virtual machine instances.
    1. Use the service command to start the openstack-nova-compute service.
      # service openstack-nova-compute start
    2. Use the chkconfig command to enable the openstack-nova-compute service permanently.
      # chkconfig openstack-nova-compute on
  7. Starting Optional Services

    Depending on your environment configuration, you may also need to start these services:
    openstack-nova-cert
    The X509 certificate service, required if you intend to use the EC2 API with the Compute service.

    Note

    If you intend to use the EC2 API with the Compute service, you must also set the relevant options in the nova.conf configuration file. For more information, see the Configuring the EC2 API section in the Red Hat Enterprise Linux OpenStack Platform Configuration Reference Guide.
    openstack-nova-network
    The Nova networking service. Note that you must not start this service if you have installed and configured, or intend to install and configure, OpenStack Networking.
    openstack-nova-objectstore
    The Nova object storage service. It is recommended that the OpenStack Object Storage service (Swift) be used for new deployments.
The Compute services have been started and are ready to accept virtual machine instance requests.
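Once the services are running, you can confirm that each host has registered correctly by listing the known Compute services. One way to do this, run on a node with database access such as the conductor host, is:
# nova-manage service list
A :-) in the State column indicates that a service is up; XXX indicates a problem.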