Chapter 3. Virtual Machine Instances
The RHEL OpenStack Platform allows you to easily manage virtual machine instances in the cloud. OpenStack Compute is the central component that creates, schedules, and manages instances, and exposes this functionality to other OpenStack components.
Note
The term 'instance' is used by OpenStack to mean a virtual machine instance.
3.1. Manage Instances
3.1.1. Create an Instance
Prerequisites: Ensure that a network, key pair, and a boot source are available:
- In the dashboard, select Project.
- Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Section 5.1.1, “Add a Network”).
- Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 3.2.1, “Manage Key Pairs”).
- Ensure that you have either an image or a volume that can be used as a boot source:
- To view boot-source images, select the Images tab (to create an image, see Section 4.1.1, “Create an Image”).
- To view boot-source volumes, select the Volumes tab (to create a volume, see Section 4.2.1.1, “Create a Volume”).
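You can also verify these prerequisites from the command line; the following is a sketch that assumes the standard neutron, nova, glance, and cinder clients are installed and your credentials are loaded:
$ neutron net-list
$ nova keypair-list
$ glance image-list
$ cinder list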
Procedure 3.1. Create an Instance
- In the dashboard, select Project > Compute > Instances.
- Click Launch Instance.
- Fill out the instance fields (those marked with '*' are required), and click Launch when finished.
| Tab | Field | Notes |
|---|---|---|
| Details | Availability Zone | Zones are logical groupings of cloud resources in which your instance can be placed. If you are unsure, use the default zone (for more information, see Section 3.4, “Manage Host Aggregates”). |
| Details | Instance Name | The name must be unique within the project. |
| Details | Flavor | The flavor determines what resources the instance is given (for example, memory). For default flavor allocations and information on creating new flavors, see Section 3.3, “Manage Flavors”. |
| Details | Instance Boot Source | Depending on the item selected, new fields are displayed allowing you to select the source. Image sources must be compatible with OpenStack (see Section 4.1, “Manage Images”). If a volume or volume source is selected, the source must be formatted using an image (see Section 4.2, “Manage Volumes”). |
| Access and Security | Key Pair | The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if neither direct login information nor a static key pair is provided). Usually one key pair per project is created. |
| Access and Security | Security Groups | Security groups contain firewall rules which filter the type and direction of the instance's network traffic (for more information on configuring groups, see Section 2.1.5, “Manage Project Security”). |
| Networking | Selected Networks | You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access. |
| Post-Creation | Customization Script Source | You can provide either a set of commands or a script file, which will run after the instance is booted (for example, to set the instance hostname or a user password). If 'Direct Input' is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: Any script that starts with '#cloud-config' is interpreted as using the cloud-config syntax (for information on the syntax, see http://cloudinit.readthedocs.org/en/latest/topics/examples.html; a minimal example follows this table). |
| Advanced Options | Disk Partition | By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself. |
| Advanced Options | Configuration Drive | If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute's metadata service). After the instance has booted, you can mount this drive to view its contents (enables you to provide files to the instance). |
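For reference, a minimal customization script using the cloud-config syntax might look like the following sketch; the host name, package list, and log file are illustrative values only:
#cloud-config
hostname: my-instance
packages:
  - vim
runcmd:
  - echo 'customization complete' >> /root/firstboot.log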
3.1.2. Update an Instance (Actions menu)
You can update an instance by selecting Project > Compute > Instances, and selecting an action for that instance in the Actions column. Actions allow you to manipulate the instance in a number of ways:
| Action | Description |
|---|---|
| Create Snapshot | Snapshots preserve the disk state of a running instance. You can create a snapshot to migrate the instance, as well as to preserve backup copies. |
| Associate/Disassociate Floating IP | You must associate an instance with a floating IP (external) address before it can communicate with external networks, or be reached by external users. Because there are a limited number of external addresses in your external subnets, it is recommended that you disassociate any unused addresses. |
| Edit Instance | Update the instance's name and associated security groups. |
| Edit Security Groups | Add and remove security groups to or from this instance using the list of available security groups (for more information on configuring groups, see Section 2.1.5, “Manage Project Security”). |
| Console | View the instance's console in the browser (allows easy access to the instance). |
| View Log | View the most recent section of the instance's console log. Once opened, you can view the full log by clicking View Full Log. |
| Pause/Resume Instance | Immediately pause the instance (you are not asked for confirmation); the state of the instance is stored in memory (RAM). |
| Suspend/Resume Instance | Immediately suspend the instance (you are not asked for confirmation); like hibernation, the state of the instance is kept on disk. |
| Resize Instance | Bring up the Resize Instance window (see Section 3.1.3, “Resize an Instance”). |
| Soft Reboot | Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance. |
| Hard Reboot | Stop and restart the instance. A hard reboot effectively just shuts down the instance's 'power' and then turns it back on. |
| Shut Off Instance | Gracefully stop the instance. |
| Rebuild Instance | Use new image and disk-partition options to rebuild the image (shut down, re-image, and reboot the instance). If you encounter operating system issues, this option is easier to try than terminating the instance and starting over. |
| Terminate Instance | Permanently destroy the instance (you are asked for confirmation). |
For example, you can create and allocate an external address by using the 'Associate Floating IP' action.
Procedure 3.2. Update Example - Assign a Floating IP
- In the dashboard, select Project > Compute > Instances.
- Select the Associate Floating IP action for the instance.
  Note: A floating IP address can only be selected from an already created floating IP pool (see Section 5.2.1, “Create Floating IP Pools”).
- Click '+', allocate a new IP address from the pool (Allocate IP), and then click Associate.
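The same assignment can also be made with the nova client; the pool name 'public' and the address shown below are examples only, not defaults:
$ nova floating-ip-create public
$ nova add-floating-ip INSTANCE_NAME 192.0.2.25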
Note
If you know only the instance's IP address and not its name (and do not want to browse through the details of all your instances), you can look it up on the command line:
$ nova list --ip IPAddress
Where IPAddress is the IP address you are looking up. For example:
$ nova list --ip 192.0.2.0
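Once you have identified the instance, you can retrieve its full details (including the instance ID used by commands later in this chapter) with:
$ nova show INSTANCE_NAME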
3.1.3. Resize an Instance
To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the right capacity. If you are increasing the size, remember to first ensure that the host has enough space.
- If you are resizing an instance in a distributed deployment, you must ensure communication between hosts. Set up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, compute nodes can share the same SSH key). For more information about setting up SSH key authentication, see Section 3.1.4, “Configure SSH Tunneling between Nodes”.
- Enable resizing on the original host by setting the following parameter in the /etc/nova/nova.conf file:
  [DEFAULT]
  allow_resize_to_same_host = True
- In the dashboard, select Project > Compute > Instances.
- Click the instance's Actions arrow, and select Resize Instance.
- Select a new flavor in the New Flavor field.
- If you want to manually partition the instance when it launches (results in a faster build time):
- Select Advanced Options.
- In the Disk Partition field, select 'Manual'.
- Click Resize.
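A resize can also be driven from the command line; the flavor name 'm1.medium' is only an example, and openstack-config is the same helper used later in this chapter:
$ openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True
$ nova resize INSTANCE_NAME m1.medium
$ nova resize-confirm INSTANCE_NAME
If the resized instance does not behave as expected, nova resize-revert INSTANCE_NAME rolls it back to the original flavor.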
3.1.4. Configure SSH Tunneling between Nodes
Warning
Red Hat does not recommend any particular libvirt security strategy; the SSH-tunneling steps are provided for user reference only. Only users with root access can set up SSH tunneling.
To migrate instances between nodes using SSH tunneling, or to resize an instance in a distributed environment, each node must be set up with SSH key authentication so that the Compute service can use SSH to move disks to other nodes. For example, compute nodes could use the same SSH key to ensure communication.
Note
If the Compute service cannot migrate the instance to a different node, it will attempt to migrate the instance back to its original host. To avoid migration failure in this case, ensure that 'allow_migrate_to_same_host=True' is set in the /etc/nova/nova.conf file.
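If you manage nova.conf with openstack-config (as in the serial-console procedure later in this chapter), one way to set this option is:
$ openstack-config --set /etc/nova/nova.conf DEFAULT allow_migrate_to_same_host True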
To share a key pair between compute nodes:
- As root on both nodes, make nova a login user:
  # usermod -s /bin/bash nova
- On the first compute node, generate a key pair for the nova user:
  # su nova
  # ssh-keygen
  # echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
  # cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
  The key pair, id_rsa and id_rsa.pub, is generated in /var/lib/nova/.ssh.
- As root, copy the created key pair to the second compute node:
  # scp /var/lib/nova/.ssh/id_rsa root@computeNodeAddress:~/
  # scp /var/lib/nova/.ssh/id_rsa.pub root@computeNodeAddress:~/
- As root on the second compute node, change the copied key pair's ownership back to 'nova', and then add the key pair into SSH:
  # chown nova:nova id_rsa
  # chown nova:nova id_rsa.pub
  # su nova
  # mkdir -p /var/lib/nova/.ssh
  # cp id_rsa /var/lib/nova/.ssh/
  # cat id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
  # echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
- Ensure that the nova user can now log into each node without using a password:
  # su nova
  # ssh nova@computeNodeAddress
- As root on both nodes, restart both libvirt and the Compute services:
  # systemctl restart libvirtd.service
  # systemctl restart openstack-nova-compute.service
3.1.5. Connect to an Instance
3.1.5.1. Access using the Dashboard Console
The console provides a way to directly access your instance from within the dashboard.
- In the dashboard, select Project > Compute > Instances.
- In the instance's Actions menu, select Console.
Figure 3.1. Console Access

- Log in using the image's user name and password (for example, a CirrOS image uses 'cirros'/'cubswin:)').
  Note: Red Hat Enterprise Linux guest images typically do not allow direct console access; you must SSH into the instance (see Section 3.1.5.4, “SSH into an Instance”).
3.1.5.2. Directly Connect to a VNC Console
You can directly access an instance's VNC console using the URL returned by the nova get-vnc-console command.
- Browser
- To obtain a browser URL, use:
  $ nova get-vnc-console INSTANCE_ID novnc
- Java Client
- To obtain a Java-client URL, use:
  $ nova get-vnc-console INSTANCE_ID xvpvnc
  Note: nova-xvpvncviewer provides a simple example of a Java client. To download the client, use:
  # git clone https://github.com/cloudbuilders/nova-xvpvncviewer
  # cd nova-xvpvncviewer/viewer
  # make
  Run the viewer with the instance's Java-client URL:
  # java -jar VncViewer.jar URL
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
3.1.5.3. Directly Connect to a Serial Console
You can directly access an instance's serial port using a websocket client. Serial connections are typically used as a debugging tool (for example, instances can be accessed even if the network configuration fails). To obtain a serial URL for a running instance, use:
$ nova get-serial-console INSTANCE_ID
Note
novaconsole provides a simple example of a websocket client. To download the client, use:
# git clone https://github.com/larsks/novaconsole/
# cd novaconsole
Run the client with the instance's serial URL:
# python console-client-poll.py URL
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
However, depending on your installation, the administrator may need to first set up the nova-serialproxy service. The proxy service is a websocket proxy that allows connections to OpenStack Compute serial ports.
Procedure 3.3. Install and Configure nova-serialproxy
- Install the nova-serialproxy service:
  # yum install openstack-nova-serialproxy
- Update the serial_console section in /etc/nova/nova.conf:
  - Enable the nova-serialproxy service:
    $ openstack-config --set /etc/nova/nova.conf serial_console enabled true
  - Specify the string used to generate the URLs provided by the nova get-serial-console command:
    $ openstack-config --set /etc/nova/nova.conf serial_console base_url ws://PUBLIC_IP:6083/
    Where PUBLIC_IP is the public IP address of the host running the nova-serialproxy service.
  - Specify the IP address on which the instance serial console should listen (string):
    $ openstack-config --set /etc/nova/nova.conf serial_console listen 0.0.0.0
  - Specify the address to which proxy clients should connect (string):
    $ openstack-config --set /etc/nova/nova.conf serial_console proxyclient_address HOST_IP
    Where HOST_IP is the IP address of your Compute host.
Example 3.1. Enabled nova-serialproxy
[serial_console]
enabled=true
base_url=ws://192.0.2.0:6083/
listen=0.0.0.0
proxyclient_address=192.0.2.3
- Restart Compute services:
  # openstack-service restart nova
- Start the nova-serialproxy service:
  # systemctl enable openstack-nova-serialproxy
  # systemctl start openstack-nova-serialproxy
- Restart any running instances to ensure that they are now listening on the right sockets.
- Open the firewall for serial-console port connections. Serial ports are set using [serial_console] port_range in /etc/nova/nova.conf; by default, the range is 10000:20000. Update iptables with:
  # iptables -I INPUT 1 -p tcp --dport 10000:20000 -j ACCEPT
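To confirm that the proxy is listening after the restart, you can check the socket on the Compute host; this sketch assumes the ss utility (iproute package) and the default proxy port of 6083:
# ss -tlnp | grep 6083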
3.1.5.4. SSH into an Instance
- Ensure that the instance's security group has an SSH rule (see Section 2.1.5, “Manage Project Security”).
- Ensure the instance has a floating IP address (external address) assigned to it (see Section 3.2.2, “Create, Assign, and Release Floating IP Addresses”).
- Obtain the instance's key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 3.2.1, “Manage Key Pairs”).
- On your local machine, load the key-pair certificate into SSH. For example:
  $ ssh-add ~/.ssh/os-key.pem
- You can now SSH into the instance with the user name supplied by the image. The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user 'cloud-user':
  $ ssh cloud-user@192.0.2.24
  Note: You can also use the certificate directly. For example:
  $ ssh -i /myDir/os-key.pem cloud-user@192.0.2.24
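If you connect to the same instance frequently, you can optionally add a host entry to your local SSH configuration; the alias, address, user, and key path below reuse the illustrative values from the example above:
Host my-instance
    HostName 192.0.2.24
    User cloud-user
    IdentityFile ~/.ssh/os-key.pem
You can then connect with 'ssh my-instance'.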
3.1.6. View Instance Usage
The following usage statistics are available:
- Per Project
  To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances.
  You can also view statistics for a specific period of time by specifying the date range and clicking Submit.
- Per Hypervisor
  If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You can also click Hypervisors to view your current vCPU, memory, and disk statistics.
  Note: The 'vCPU Usage' value ('x of y') reflects the total number of vCPUs used by all virtual machines (x) and the total number of hypervisor cores (y).
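Similar statistics are available from the command line with the nova client; the date range shown is only an example, and the exact output columns may vary between releases:
$ nova usage-list
$ nova usage --start 2015-01-01 --end 2015-01-31 --tenant TENANT_ID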
3.1.7. Delete an Instance
- In the dashboard, select Project > Compute > Instances, and select your instance.
- Click Terminate Instances.
Note
Deleting an instance does not delete its attached volumes; you must do this separately (see Section 4.2.1.4, “Delete a Volume”).
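The same cleanup can be performed from the command line; the instance and volume names below are placeholders:
$ nova delete INSTANCE_NAME
$ cinder delete VOLUME_NAME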