Chapter 3. Virtual machine instances
OpenStack Compute is the central component that provides virtual machines on demand. Compute interacts with the Identity service for authentication, Image service for images (used to launch instances), and the dashboard service for the user and administrative interface.
Red Hat OpenStack Platform allows you to easily manage virtual machine instances in the cloud. The Compute service creates, schedules, and manages instances, and exposes this functionality to other OpenStack components. This chapter discusses these procedures along with procedures to add components like key pairs, security groups, host aggregates and flavors. The term instance is used by OpenStack to mean a virtual machine instance.
3.1. Managing instances
Before you can create an instance, you must ensure that certain other OpenStack components (for example, a network, a key pair, and an image or a volume as the boot source) are available for the instance.
This section discusses the procedures to add these components, and to create and manage an instance. Managing an instance includes updating it, logging in to it, viewing how instances are being used, and resizing or deleting them.
3.1.1. Adding components
Use the following sections to create a network and a key pair, and to upload an image or volume source. These components are used in the creation of an instance and are not available by default. You also need to create a new security group to allow SSH access for users.
- In the dashboard, select Project.
- Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Create a Network section in the Networking Guide).
- Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 3.3.1.1, “Creating a key pair”).
Ensure that you have either an image or a volume that can be used as a boot source:
- To view boot-source images, select the Images tab (to create an image, see Section 1.2.1, “Creating an image”).
- To view boot-source volumes, select the Volumes tab (to create a volume, see Create a Volume in the Storage Guide).
- Select Compute > Access & Security > Security Groups, and ensure you have created a security group rule (to create a security group, see Project Security Management in the Users and Identity Management Guide).
3.1.2. Creating an instance
- In the dashboard, select Project > Compute > Instances.
- Click Launch Instance.
- Fill out instance fields (those marked with '* ' are required), and click Launch when finished.
Table 3.1. Instance Options
Tab | Field | Notes |
---|---|---|
Project and User | Project | Select the project from the dropdown list. |
Project and User | User | Select the user from the dropdown list. |
Details | Availability Zone | Zones are logical groupings of cloud resources in which your instance can be placed. If you are unsure, use the default zone (for more information, see Section 3.5, “Managing host aggregates”). |
Details | Instance Name | A name to identify your instance. |
Details | Flavor | The flavor determines what resources the instance is given (for example, memory). For default flavor allocations and information on creating new flavors, see Section 3.4, “Managing flavors”. |
Details | Instance Count | The number of instances to create with these parameters. "1" is preselected. |
Details | Instance Boot Source | Depending on the item selected, new fields are displayed, allowing you to select the source. |
Access and Security | Key Pair | The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if no direct login information or static key pair is provided). Usually one key pair per project is created. |
Access and Security | Security Groups | Security groups contain firewall rules that filter the type and direction of the instance’s network traffic (for more information on configuring groups, see Project Security Management in the Users and Identity Management Guide). |
Networking | Selected Networks | You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access. |
Post-Creation | Customization Script Source | You can provide either a set of commands or a script file, which will run after the instance is booted (for example, to set the instance host name or a user password). If 'Direct Input' is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: Any script that starts with '#cloud-config' is interpreted as using the cloud-config syntax (for information on the syntax, see http://cloudinit.readthedocs.org/en/latest/topics/examples.html). |
Advanced Options | Disk Partition | By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself. |
Advanced Options | Configuration Drive | If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute’s metadata service). After the instance has booted, you can mount this drive to view its contents (this enables you to provide files to the instance). |
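If you prefer the command line, the same launch can be sketched with the nova CLI. The image, key pair, network UUID, and instance name below are illustrative and must already exist in your project:

```shell
# Launch a single instance from the CLI (all names are illustrative).
# NETWORK_UUID is the UUID of an existing private network.
nova boot --image cirros \
  --flavor m1.small \
  --key-name my-key \
  --nic net-id=NETWORK_UUID \
  --security-groups default \
  "my-instance"
```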
3.1.3. Updating an instance
You can update an instance by selecting Project > Compute > Instances, and selecting an action for that instance in the Actions column. Actions allow you to manipulate the instance in a number of ways:
Table 3.2. Update Instance Options
Action | Description |
---|---|
Create Snapshot | Snapshots preserve the disk state of a running instance. You can create a snapshot to migrate the instance, as well as to preserve backup copies. |
Associate/Disassociate Floating IP | You must associate an instance with a floating IP (external) address before it can communicate with external networks, or be reached by external users. Because there are a limited number of external addresses in your external subnets, it is recommended that you disassociate any unused addresses. |
Edit Instance | Update the instance’s name and associated security groups. |
Edit Security Groups | Add and remove security groups to or from this instance using the list of available security groups (for more information on configuring groups, see Project Security Management in the Users and Identity Management Guide). |
Console | View the instance’s console in the browser (allows easy access to the instance). |
View Log | View the most recent section of the instance’s console log. Once opened, you can view the full log by clicking View Full Log. |
Pause/Resume Instance | Immediately pause the instance (you are not asked for confirmation); the state of the instance is stored in memory (RAM). |
Suspend/Resume Instance | Immediately suspend the instance (you are not asked for confirmation); like hibernation, the state of the instance is kept on disk. |
Resize Instance | Bring up the Resize Instance window (see Section 3.1.4, “Resizing an instance”). |
Soft Reboot | Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance. |
Hard Reboot | Stop and restart the instance. A hard reboot effectively just shuts down the instance’s power and then turns it back on. |
Shut Off Instance | Gracefully stop the instance. |
Rebuild Instance | Use new image and disk-partition options to rebuild the image (shut down, re-image, and re-boot the instance). If encountering operating system issues, this option is easier to try than terminating the instance and starting over. |
Terminate Instance | Permanently destroy the instance (you are asked for confirmation). |
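Most of these actions are also exposed by the nova CLI. A few equivalents, sketched with an illustrative instance name:

```shell
# CLI equivalents of common dashboard actions ("my-instance" is illustrative).
nova pause my-instance          # pause; state is kept in RAM
nova unpause my-instance
nova suspend my-instance        # suspend; state is written to disk
nova resume my-instance
nova reboot my-instance         # soft reboot
nova reboot --hard my-instance  # hard reboot
nova stop my-instance           # shut off
nova delete my-instance         # terminate
```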
To create and allocate an external IP address, see Section 3.3.3, “Creating, assigning, and releasing floating IP addresses”.
3.1.4. Resizing an instance
To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the correct capacity. Before you increase the size of an instance, ensure that there is at least one compute node with the requested capacity (based on the new flavor).
Ensure communication between hosts by setting up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, compute nodes can share the same SSH key).
For more information about setting up SSH key authentication, see Configure SSH Tunneling Between Nodes in the Migrating Instances guide.
Enable resizing on the original host by setting the following parameter in the /etc/nova/nova.conf file:

[DEFAULT]
allow_resize_to_same_host = True

Note: The allow_resize_to_same_host parameter does not resize the instance on the same host. Even if the parameter is set to true on all compute nodes, the scheduler does not force the instance to resize on the same host. This is the expected behavior.
- In the dashboard, select Project > Compute > Instances.
- Click the instance’s Actions arrow, and select Resize Instance.
- Select a new flavor in the New Flavor field.
If you want to manually partition the instance when it launches (results in a faster build time):
- Select Advanced Options.
- In the Disk Partition field, select Manual.
- Click Resize.
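The same resize can be sketched with the nova CLI (instance and flavor names are illustrative). A resize must be confirmed or reverted after the instance reaches the VERIFY_RESIZE state:

```shell
# Resize an instance to a new flavor, then confirm (or revert).
nova resize my-instance m1.large
# Once "nova show my-instance" reports status VERIFY_RESIZE:
nova resize-confirm my-instance
# ...or roll back instead:
# nova resize-revert my-instance
```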
3.1.5. Connecting to an instance
This section discusses the different methods you can use to access an instance console using the dashboard or the command-line interface. You can also directly connect to an instance’s serial port allowing you to debug even if the network connection fails.
3.1.5.1. Accessing an instance console using the dashboard
The console provides a way to directly access your instance from within the dashboard.
- In the dashboard, select Compute > Instances.
- Click the instance’s More button and select Console.
- Log in using the image’s user name and password (for example, a CirrOS image uses cirros/cubswin:)).
3.1.5.2. Connecting directly to a VNC console
You can directly access an instance’s VNC console using a URL returned by the nova get-vnc-console command.
- Browser
To obtain a browser URL, use:
$ nova get-vnc-console INSTANCE_ID novnc
- Java Client
To obtain a Java-client URL, use:
$ nova get-vnc-console INSTANCE_ID xvpvnc
nova-xvpvncviewer provides a simple example of a Java client. To download the client, use:
# git clone https://github.com/cloudbuilders/nova-xvpvncviewer
# cd nova-xvpvncviewer/viewer
# make
Run the viewer with the instance’s Java-client URL:
# java -jar VncViewer.jar URL
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
3.1.5.3. Connecting directly to a serial console
You can directly access an instance’s serial port using a websocket client. Serial connections are typically used as a debugging tool (for example, instances can be accessed even if the network configuration fails). To obtain a serial URL for a running instance, use:
$ nova get-serial-console INSTANCE_ID
novaconsole provides a simple example of a websocket client. To download the client, use:
# git clone https://github.com/larsks/novaconsole/
# cd novaconsole
Run the client with the instance’s serial URL:
# python console-client-poll.py
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
However, depending on your installation, the administrator may need to first set up the nova-serialproxy service. The proxy service is a websocket proxy that allows connections to OpenStack Compute serial ports.
3.1.5.3.1. Installing and configuring nova-serialproxy
- Install the nova-serialproxy service:

  # yum install openstack-nova-serialproxy

- Update the serial_console section in /etc/nova/nova.conf:

  - Enable the nova-serialproxy service:

    $ openstack-config --set /etc/nova/nova.conf serial_console enabled true

  - Specify the string used to generate the URLs provided by the nova get-serial-console command:

    $ openstack-config --set /etc/nova/nova.conf serial_console base_url ws://PUBLIC_IP:6083/

    Where PUBLIC_IP is the public IP address of the host running the nova-serialproxy service.

  - Specify the IP address on which the instance serial console should listen (string):

    $ openstack-config --set /etc/nova/nova.conf serial_console listen 0.0.0.0

  - Specify the address to which proxy clients should connect (string):

    $ openstack-config --set /etc/nova/nova.conf serial_console proxyclient_address HOST_IP

    Where HOST_IP is the IP address of your Compute host. For example, the configuration of an enabled nova-serialproxy service is as follows:

    [serial_console]
    enabled=true
    base_url=ws://192.0.2.0:6083/
    listen=0.0.0.0
    proxyclient_address=192.0.2.3

- Restart Compute services:

  # openstack-service restart nova

- Start the nova-serialproxy service:

  # systemctl enable openstack-nova-serialproxy
  # systemctl start openstack-nova-serialproxy

- Restart any running instances, to ensure that they are now listening on the right sockets.
- Open the firewall for serial-console port connections. Serial ports are set using [serial_console] port_range in /etc/nova/nova.conf; by default, the range is 10000:20000. Update iptables with:

  # iptables -I INPUT 1 -p tcp --dport 10000:20000 -j ACCEPT
3.1.6. Viewing instance usage
The following usage statistics are available:
Per Project
To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances.
You can also view statistics for a specific period of time by specifying the date range and clicking Submit.
Per Hypervisor
If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You might also click Hypervisors to view your current vCPU, memory, or disk statistics.
Note: The vCPU Usage value (x of y) reflects the number of total vCPUs of all virtual machines (x) and the total number of hypervisor cores (y).
3.1.7. Deleting an instance
- In the dashboard, select Project > Compute > Instances, and select your instance.
- Click Terminate Instance.
Deleting an instance does not delete its attached volumes; you must do this separately (see Delete a Volume in the Storage Guide).
3.1.8. Managing multiple instances simultaneously
If you need to start multiple instances at the same time (for example, those that were down for compute or controller maintenance) you can do so easily at Project > Compute > Instances:
- Click the check boxes in the first column for the instances that you want to start. If you want to select all of the instances, click the check box in the first row in the table.
- Click More Actions above the table and select Start Instances.
Similarly, you can shut off or soft reboot multiple instances by selecting the respective actions.
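From the command line, a shell loop gives the same effect. A minimal sketch with illustrative instance names:

```shell
# Start several instances in one pass (instance names are illustrative).
for vm in web01 web02 db01; do
  nova start "$vm"
done
```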
3.2. Customizing an instance using cloud-init
The cloud-init package uses the user data file from the metadata service to perform an action based on the metadata. For example, you can configure the virtual machine image to create instances on boot. The cloud-init package also provides other functionality, such as copying a public key to an account. The process changes based on the format of the information in the user data file.
cloud-init supports different input formats for the user data file. The most commonly used are shell scripts beginning with #! and cloud configuration files beginning with #cloud-config. The cloud-config files use special scripts that cloud-init runs at the first boot of a server.

You can pass information in a local file or a shell script using the user-data configuration file.

The following section describes the procedure to create a virtual machine in Red Hat OpenStack Platform using a script at first boot with the user data file. This allows you to automate the configuration of virtual machine instances, for example, installing packages, starting certain services, or managing an instance using the puppet service.
Make sure that the cloud-init package is installed and that the service is running on the image.
This section includes an example script that you can use with cloud-init to create instances using the command line.
Example
The following example uses the contents of the user_data.file to launch an instance using a cirros image:
# nova boot --image cirros --key-name key-osp10 --flavor m1.small \
  --availability-zone internal --user-data user_data.file "Test instance"
The contents of a sample user_data.file are as follows:
#!/bin/bash
# Example script to run at first boot via OpenStack
# using the user_data and cloud-init.
KEY="key-osp10.pem"
BOOTIMG="d79afca0-ecae-4015-9255-8606bf86a3ec"
ZONE="internal"
FLAVOR="m1.small"
source ~/keystonerc_admin
for RUN in {1..2}; do
    echo "Creating VM ${RUN}"
    VMUUID=$(nova boot \
        --image "${BOOTIMG}" \
        --flavor "${FLAVOR}" \
        --availability-zone "${ZONE}" \
        --nic net-id=00000000-0000-0000-0000-000000000000 \
        --key-name "${KEY}" \
        --user-data user_data.file \
        "VPS-${RUN}-${ZONE}" | awk '/id/ {print $4}' | head -n 1)
    until [[ "$(nova show ${VMUUID} | awk '/status/ {print $4}')" == "ACTIVE" ]]; do
        :
    done
    echo "VM ${RUN} (${VMUUID}) is active."
done
3.2.1. Example user-data files
The following section provides examples of common user-data files that you can use with the cloud-init package.
3.2.1.1. Example user-data file to manage users and groups
To define new users on a server, use the users option as follows:
#cloud-config
users:
  - name: first_user_name
    groups: first_user_group
    passwd: first_user_password
    ...
  - name: second_user_name
    groups: second_user_group
    passwd: second_user_password
    ...
Each new user entry begins with a dash, and each user’s parameters are defined as key-value pairs.
To define groups, use the groups directive:

#cloud-config
groups:
  - group1
  - group2: [user1, user2]
This directive lists a group of users you can create. You can add a sub-list of the users you want to add to a group.
Example
The following example creates two groups, rhel and cloud-users. The rhel group includes the root and sys users.

#cloud-config
groups:
  - rhel: [root,sys]
  - cloud-users
3.2.1.2. Example user-data file to configure SSH keys for user accounts
To configure SSH keys, use the users directive or specify the keys in the ssh_authorized_keys section of your cloud config file, as shown in the following example. The general format is as follows:

#cloud-config
ssh_authorized_keys:
  - ssh_key_1
  - ssh_key_2
You can also generate SSH keys by using the ssh_keys parameter and place them on the file system. Using the ssh_keys parameter allows the client machine to trust a server as soon as it comes online. The ssh_keys parameter accepts key pairs for Rivest–Shamir–Adleman (RSA), Digital Signature Algorithm (DSA), or Elliptic Curve Digital Signature Algorithm (ECDSA) keys using the rsa_private, rsa_public, dsa_private, dsa_public, ecdsa_private, and ecdsa_public sub-items. For example:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa your_example_key1
  - ssh-rsa your_example_key2
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    your_rsa_private_key
    -----END RSA PRIVATE KEY-----
  rsa_public: your_rsa_public_key
  dsa_private: |
    -----BEGIN DSA PRIVATE KEY-----
    your_dsa_private_key
    -----END DSA PRIVATE KEY-----
  dsa_public: your_dsa_public_key
Formatting and line breaks are important in a private key. Make sure to use a block literal with the pipe character (|) when specifying the values. You must include the BEGIN and END lines for the keys to be valid.
For more examples on using Cloud config, see Cloud config examples.
3.3. Managing instance security
You can manage access to an instance by assigning it the correct security group (a set of firewall rules) and key pair (which enables SSH user access). Further, you can assign a floating IP address to an instance to enable external network access. The sections below outline how to create and manage key pairs, security groups, and floating IP addresses, and how to log in to an instance using SSH. There is also a procedure for injecting an admin password into an instance.
For information on managing security groups, see Project Security Management in the Users and Identity Management Guide.
3.3.1. Managing key pairs
Key pairs provide SSH access to the instances. Each time a key pair is generated, its certificate is downloaded to the local machine and can be distributed to users. Typically, one key pair is created for each project (and used for multiple instances).
You can also import an existing key pair into OpenStack.
3.3.1.1. Creating a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Create Key Pair.
- Specify a name in the Key Pair Name field, and click Create Key Pair.
When the key pair is created, a key pair file is automatically downloaded through the browser. Save this file for later connections from external machines. For command-line SSH connections, you can load this file into SSH by executing:
# ssh-add ~/.ssh/os-key.pem
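A key pair can also be created from the command line; a sketch, with an illustrative key name and file:

```shell
# Create a key pair and save the private key with safe permissions.
nova keypair-add my-keypair > my-keypair.pem
chmod 600 my-keypair.pem
```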
3.3.1.2. Importing a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Import Key Pair.
- Specify a name in the Key Pair Name field, and copy and paste the contents of your public key into the Public Key field.
- Click Import Key Pair.
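Importing an existing public key is likewise possible from the command line; a sketch, assuming a public key at ~/.ssh/id_rsa.pub and an illustrative key-pair name:

```shell
# Import an existing public key as an OpenStack key pair.
nova keypair-add --pub-key ~/.ssh/id_rsa.pub my-imported-key
```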
3.3.1.3. Deleting a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click the key’s Delete Key Pair button.
3.3.2. Creating a security group
Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project-specific; project members can edit the default rules for their security group and add new rule sets.
- In the dashboard, select the Project tab, and click Compute > Access & Security.
- On the Security Groups tab, click + Create Security Group.
- Provide a name and description for the group, and click Create Security Group.
For more information on managing project security, see Project Security Management in the Users and Identity Management Guide.
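The same group, including a rule that permits SSH, can be sketched with the nova CLI (the group name is illustrative):

```shell
# Create a security group and add a rule allowing SSH from anywhere.
nova secgroup-create my-secgroup "Allows SSH access"
nova secgroup-add-rule my-secgroup tcp 22 22 0.0.0.0/0
```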
3.3.3. Creating, assigning, and releasing floating IP addresses
By default, an instance is given an internal IP address when it is first created. However, you can enable access through the public network by creating and assigning a floating IP address (external address). You can change an instance’s associated IP address regardless of the instance’s state.
Projects have a limited range of floating IP addresses that can be used (by default, the limit is 50), so you should release these addresses for reuse when they are no longer needed. Floating IP addresses can only be allocated from an existing floating IP pool; see Create Floating IP Pools in the Networking Guide.
3.3.3.1. Allocating a floating IP to the project
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click Allocate IP to Project.
- Select a network from which to allocate the IP address in the Pool field.
- Click Allocate IP.
3.3.3.2. Assigning a floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' Associate button.
- Select the address to be assigned in the IP address field.
  Note: If no addresses are available, you can click the + button to create a new address.
- Select the instance to be associated in the Port to be Associated field. An instance can only be associated with one floating IP address.
- Click Associate.
3.3.3.3. Releasing a floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' menu arrow (next to the Associate/Disassociate button).
- Select Release Floating IP.
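The whole floating IP lifecycle can be sketched from the command line as well. The pool name, address, and instance name below are illustrative:

```shell
# Floating IP lifecycle from the CLI (names and address are illustrative).
nova floating-ip-create public                      # allocate from pool "public"
nova floating-ip-associate my-instance 192.0.2.50   # assign to an instance
nova floating-ip-disassociate my-instance 192.0.2.50
nova floating-ip-delete 192.0.2.50                  # release back to the pool
```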
3.3.4. Logging in to an instance
Prerequisites:
- Ensure that the instance’s security group has an SSH rule (see Project Security Management in the Users and Identity Management Guide).
- Ensure the instance has a floating IP address (external address) assigned to it (see Section 3.3.3, “Creating, assigning, and releasing floating IP addresses”).
- Obtain the instance’s key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 3.3.1, “Managing key pairs”).
To load the key pair file into SSH first, so that you can then use ssh without naming the key file:

- Change the permissions of the generated key-pair certificate:

  $ chmod 600 os-key.pem

- Check whether ssh-agent is already running:

  # ps -ef | grep ssh-agent

- If it is not already running, start it with:

  # eval `ssh-agent`

- On your local machine, load the key-pair certificate into SSH. For example:

  $ ssh-add ~/.ssh/os-key.pem
- You can now SSH into the instance with the user supplied by the image.
The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user cloud-user:
$ ssh cloud-user@192.0.2.24
You can also use the certificate directly. For example:
$ ssh -i /myDir/os-key.pem cloud-user@192.0.2.24
3.3.5. Injecting an admin password into an instance
You can inject an admin (root) password into an instance using the following procedure.

- In the /etc/openstack-dashboard/local_settings file, set the can_set_password parameter value to True:

  can_set_password: True

- In the /etc/nova/nova.conf file, set the inject_password parameter to True:

  inject_password=true

- Restart the Compute service:

  # service nova-compute restart
When you use the nova boot command to launch a new instance, the output of the command displays an adminPass parameter. You can use this password to log in to the instance as the root user.
The Compute service overwrites the password value in the /etc/shadow file for the root user. This procedure can also be used to activate the root account for the KVM guest images. For more information on how to use KVM guest images, see Section 1.2.1.1, “Using a KVM guest image with Red Hat OpenStack Platform”.
You can also set a custom password from the dashboard. To enable this, run the following command after you have set the can_set_password parameter to true:

# systemctl restart httpd.service

The newly added admin password fields can be used when you launch or rebuild an instance.
3.4. Managing flavors
Each created instance is given a flavor (resource template), which determines the instance’s size and capacity. Flavors can also specify secondary ephemeral storage, swap disk, metadata to restrict usage, or special project access (none of the default flavors have these additional attributes defined).
Table 3.3. Default Flavors
Name | vCPUs | RAM | Root Disk Size |
---|---|---|---|
m1.tiny | 1 | 512 MB | 1 GB |
m1.small | 1 | 2048 MB | 20 GB |
m1.medium | 2 | 4096 MB | 40 GB |
m1.large | 4 | 8192 MB | 80 GB |
m1.xlarge | 8 | 16384 MB | 160 GB |
The majority of end users will be able to use the default flavors. However, you can create and manage specialized flavors. For example, you can:
- Change default memory and capacity to suit the underlying hardware needs.
- Add metadata to force a specific I/O rate for the instance or to match a host aggregate.
Behavior set using image properties overrides behavior set using flavors (for more information, see Section 1.2, “Managing images”).
3.4.1. Updating configuration permissions
By default, only administrators can create flavors or view the complete flavor list (select Admin > System > Flavors). To allow all users to configure flavors, specify the following in the /etc/nova/policy.json
file (nova-api server):
"compute_extension:flavormanage": "",
3.4.2. Creating a flavor
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click Create Flavor, and specify the following fields:

Table 3.4. Flavor Options

Tab | Field | Description |
---|---|---|
Flavor Information | Name | Unique name. |
Flavor Information | ID | Unique ID. The default value, auto, generates a UUID4 value, but you can also manually specify an integer or UUID4 value. |
Flavor Information | VCPUs | Number of virtual CPUs. |
Flavor Information | RAM (MB) | Memory (in megabytes). |
Flavor Information | Root Disk (GB) | Ephemeral disk size (in gigabytes); to use the native image size, specify 0. This disk is not used if Instance Boot Source=Boot from Volume. |
Flavor Information | Ephemeral Disk (GB) | Secondary ephemeral disk size (in gigabytes) available to an instance. This disk is destroyed when an instance is deleted. The default value is 0, which implies that no ephemeral disk is created. |
Flavor Information | Swap Disk (MB) | Swap disk size (in megabytes). |
Flavor Access | Selected Projects | Projects which can use the flavor. If no projects are selected, all projects have access (Public=Yes). |

- Click Create Flavor.
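A flavor can also be created from the command line with nova flavor-create, whose positional arguments are name, ID (auto generates one), RAM in MB, root disk in GB, and vCPU count. The flavor name and sizes below are illustrative:

```shell
# Create a flavor: name, ID, RAM (MB), root disk (GB), vCPUs.
nova flavor-create m1.custom auto 4096 40 2
```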
3.4.3. Updating general attributes
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Edit Flavor button.
- Update the values, and click Save.
3.4.4. Updating flavor metadata
In addition to editing general attributes, you can add metadata to a flavor (extra_specs), which can help fine-tune instance usage. For example, you might want to set the maximum-allowed bandwidth or disk writes.
- Pre-defined keys determine hardware support or quotas. Pre-defined keys are limited by the hypervisor you are using (for libvirt, see Table 3.5, “Libvirt Metadata”).
- Both pre-defined and user-defined keys can determine instance scheduling. For example, you might specify SpecialComp=True; any instance with this flavor can then only run in a host aggregate with the same key-value combination in its metadata (see Section 3.5, “Managing host aggregates”).
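Flavor metadata can also be set from the command line. A sketch using the SpecialComp example above (the flavor name is illustrative):

```shell
# Attach a scheduling key-value pair to a flavor's extra_specs.
nova flavor-key m1.custom set SpecialComp=True
```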
3.4.4.1. Viewing metadata
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
3.4.4.2. Adding metadata
You specify a flavor’s metadata using key/value pairs.
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add (see Table 3.5, “Libvirt Metadata”).
- Click the + button; you can now view the new key under Existing Metadata.
- Fill in the key’s value in its right-hand field.
- When finished with adding key-value pairs, click Save.
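The same keys can be set from the command line with nova flavor-key. For example, a sketch that caps disk write IOPS through the libvirt quota namespace (the flavor name and value are illustrative):

```shell
# Cap disk write IOPS for instances using this flavor.
nova flavor-key m1.small set quota:disk_write_iops_sec=500
```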
Table 3.5. Libvirt Metadata
Key | Description |
---|---|
hw:action | Action that configures support limits per instance. Valid actions are:
Example: |
hw:NUMA_def | Definition of NUMA topology for the instance. For flavors whose RAM and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, defining NUMA topology enables hosts to better utilize NUMA and improve performance of the guest OS. NUMA definitions defined through the flavor override image definitions. Valid definitions are:
Example when the instance has 8 vCPUs and 4 GB RAM:
The scheduler looks for a host with 2 NUMA nodes with the ability to run 6 CPUs + 3072 MB, or 3 GB, of RAM on one node, and 2 CPUs + 1024 MB, or 1 GB, of RAM on another node. If a host has a single NUMA node with capability to run 8 CPUs and 4 GB of RAM, it will not be considered a valid match. The same logic is applied in the scheduler regardless of the |
hw:watchdog_action | An instance watchdog device can be used to trigger an action if the instance somehow fails (or hangs). Valid actions are:
Example: |
hw_rng:action |
A random-number generator device can be added to an instance using its image properties. If the device has been added, valid actions are:
Example: |
hw_video:ram_max_mb | Maximum permitted RAM allowed for video devices (in MB).
Example: |
quota:option | Enforcing limit for the instance. Valid options are:
Example: In addition, the VMware driver supports the following quota options, which control upper and lower limits for CPUs, RAM, disks, and networks, as well as shares, which can be used to control relative allocation of available resources among tenants:
|
3.5. Managing host aggregates
A single Compute deployment can be partitioned into logical groups for performance or administrative purposes. OpenStack uses the following terms:
Host aggregates - A host aggregate creates logical units in an OpenStack deployment by grouping together hosts. Aggregates are assigned Compute hosts and associated metadata; a host can be in more than one host aggregate. Only administrators can see or create host aggregates.
An aggregate’s metadata is commonly used to provide information for use with the Compute scheduler (for example, limiting specific flavors or images to a subset of hosts). Metadata specified in a host aggregate limits the aggregate’s hosts to instances whose flavor specifies the same metadata.
Administrators can use host aggregates to handle load balancing, enforce physical isolation (or redundancy), group servers with common attributes, or separate out classes of hardware. When you create an aggregate, a zone name must be specified, and it is this name which is presented to the end user.
Availability zones - An availability zone is the end-user view of a host aggregate. An end user cannot view which hosts make up the zone, nor see the zone’s metadata; the user can only see the zone’s name.
End users can be directed to use specific zones which have been configured with certain capabilities or within certain areas.
3.5.1. Enabling host aggregate scheduling
By default, host-aggregate metadata is not used to filter instance usage. You must update the Compute scheduler’s configuration to enable metadata usage:
- Edit the /etc/nova/nova.conf file (you must have either root or nova user permissions). Ensure that the scheduler_default_filters parameter contains:
  - AggregateInstanceExtraSpecsFilter for host aggregate metadata. For example:
    scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
    Note: Scoped specifications must be used for setting flavor extra_specs when specifying both AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter as values of the same scheduler_default_filters parameter; otherwise, the ComputeCapabilitiesFilter will fail to select a suitable host. See Table 3.7, “Scheduling Filters” for further details.
  - AvailabilityZoneFilter for availability zone host specification when launching an instance. For example:
    scheduler_default_filters=AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
- Save the configuration file.
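As a sketch of the edit above, the filter can also be prepended non-interactively. This example runs against a scratch copy, not a live nova.conf, and the starting filter list is illustrative:

```shell
# Scratch copy only; a real change edits /etc/nova/nova.conf as root.
conf=/tmp/nova.conf.example
printf '%s\n' '[DEFAULT]' \
  'scheduler_default_filters=RetryFilter,RamFilter,ComputeFilter' > "$conf"
# Prepend the filter so host-aggregate metadata is honoured by the scheduler
sed -i 's/^scheduler_default_filters=/scheduler_default_filters=AggregateInstanceExtraSpecsFilter,/' "$conf"
grep '^scheduler_default_filters=' "$conf"
```

After editing the real file, restart the scheduler service so the change takes effect.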
3.5.2. Viewing availability zones or host aggregates
As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section; all zones are in the Availability Zones section.
3.5.3. Adding a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
- Click Create Host Aggregate.
- Add a name for the aggregate in the Name field, and a name by which the end user should see it in the Availability Zone field.
- Click Manage Hosts within Aggregate.
- Select a host for use by clicking its + icon.
- Click Create Host Aggregate.
3.5.4. Updating a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
To update the aggregate’s Name or Availability zone:
- Click the aggregate’s Edit Host Aggregate button.
- Update the Name or Availability Zone field, and click Save.
To update the aggregate’s Assigned hosts:
- Click the aggregate’s arrow icon under Actions.
- Click Manage Hosts.
- Change a host’s assignment by clicking its + or - icon.
- When finished, click Save.
To update the aggregate’s Metadata:
- Click the aggregate’s arrow icon under Actions.
- Click the Update Metadata button. All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add. Use predefined keys (see Table 3.6, “Host Aggregate Metadata”) or add your own (which will only be valid if exactly the same key is set in an instance’s flavor).
- Click the + button; you can now view the new key under Existing Metadata.
  Note: Remove a key by clicking its - icon.
- Click Save.
Table 3.6. Host Aggregate Metadata
Key | Description |
---|---|
cpu_allocation_ratio | Sets allocation ratio of virtual CPU to physical CPU. Depends on the AggregateCoreFilter filter being set for the Compute scheduler. |
disk_allocation_ratio | Sets allocation ratio of virtual disk to physical disk. Depends on the AggregateDiskFilter filter being set for the Compute scheduler. |
filter_tenant_id | If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler. |
ram_allocation_ratio | Sets allocation ratio of virtual RAM to physical RAM. Depends on the AggregateRamFilter filter being set for the Compute scheduler. |
3.5.5. Deleting a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
Remove all assigned hosts from the aggregate:
- Click the aggregate’s arrow icon under Actions.
- Click Manage Hosts.
- Remove all hosts by clicking their - icon.
- When finished, click Save.
- Click the aggregate’s arrow icon under Actions.
- Click Delete Host Aggregate in this and the next dialog screen.
3.6. Scheduling hosts and cells
The Compute scheduling service determines on which cell or host (or host aggregate) an instance will be placed. As an administrator, you can influence where the scheduler places an instance. For example, you might want to limit scheduling to hosts in a certain group or to hosts with sufficient RAM.
You can configure the following components:
- Filters - Determine the initial set of hosts on which an instance might be placed (see Section 3.6.1, “Configuring scheduling filters”).
- Weights - When filtering is complete, the resulting set of hosts are prioritized using the weighting system. The highest weight has the highest priority (see Section 3.6.2, “Configuring scheduling weights”).
- Scheduler service - There are a number of configuration options in the /etc/nova/nova.conf file (on the scheduler host) which determine how the scheduler executes its tasks and handles weights and filters. There is both a host and a cell scheduler. For a list of these options, see the Configuration Reference guide.
In the following diagram, both hosts 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling.
3.6.1. Configuring scheduling filters
You define which filters you would like the scheduler to use in the scheduler_default_filters option in the /etc/nova/nova.conf file (you must have either root or nova user permissions). Filters can be added or removed.
By default, the following filters are configured to run in the scheduler:
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
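When auditing a deployment, it can help to print this comma-separated list one filter per line, for example to diff it against the defaults. A minimal sketch, reading from a sample line rather than a live nova.conf:

```shell
# Sample line only; in practice read it from /etc/nova/nova.conf.
line='scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter'
filters=$(printf '%s' "${line#*=}" | tr ',' '\n')   # strip the key, split on commas
printf '%s\n' "$filters"
```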
Some filters use information in parameters passed to the instance in:
- The nova boot command; see the Command-Line Interface Reference guide.
- The instance’s flavor (see Section 3.4.4, “Updating flavor metadata”)
- The instance’s image (see Appendix A, Image Configuration Parameters).
The following table lists all the available filters.
Table 3.7. Scheduling Filters
Filter | Description |
---|---|
AggregateCoreFilter | Uses the host-aggregate metadata key cpu_allocation_ratio to filter out hosts exceeding the over-commit ratio; only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the cpu_allocation_ratio value in the /etc/nova/nova.conf file. |
AggregateDiskFilter | Uses the host-aggregate metadata key disk_allocation_ratio to filter out hosts exceeding the over-commit ratio; only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the disk_allocation_ratio value in the /etc/nova/nova.conf file. |
AggregateImagePropertiesIsolation | Only passes hosts in host aggregates whose metadata matches the instance’s image metadata; only valid if a host aggregate is specified for the instance. For more information, see Section 1.2.1, “Creating an image”. |
AggregateInstanceExtraSpecsFilter | Metadata in the host aggregate must match the host’s flavor metadata. For more information, see Section 3.4.4, “Updating flavor metadata”.
This filter can only be specified in the same scheduler_default_filters parameter as ComputeCapabilitiesFilter if scoped flavor extra_specs are used (see Section 3.5.1, “Enabling host aggregate scheduling”). |
AggregateMultiTenancyIsolation | A host with the specified filter_tenant_id metadata can only contain instances from that tenant (project). Note: The tenant can still place instances on other hosts. |
AggregateRamFilter | Uses the host-aggregate metadata key ram_allocation_ratio to filter out hosts exceeding the over-commit ratio; only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the ram_allocation_ratio value in the /etc/nova/nova.conf file. |
AllHostsFilter | Passes all available hosts (however, does not disable other filters). |
AvailabilityZoneFilter | Filters using the instance’s specified availability zone. |
ComputeCapabilitiesFilter | Ensures Compute metadata is read correctly. Anything before the : is treated as a namespace; for example, quota:cpu_period uses quota as the namespace and cpu_period as the key. |
ComputeFilter | Passes only hosts that are operational and enabled. |
CoreFilter | Uses the cpu_allocation_ratio value in the /etc/nova/nova.conf file to filter out hosts exceeding the vCPU over-commit ratio. |
DifferentHostFilter | Enables an instance to build on a host that is different from one or more specified hosts. Specify the other hosts using the --hint different_host=UUID option of the nova boot command. |
DiskFilter | Uses the disk_allocation_ratio value in the /etc/nova/nova.conf file to filter out hosts exceeding the disk over-commit ratio (the default ratio is 1.0). |
ImagePropertiesFilter | Only passes hosts that match the instance’s image properties. For more information, see Section 1.2.1, “Creating an image”. |
IsolatedHostsFilter | Passes only isolated hosts running isolated images that are specified in the isolated_hosts and isolated_images options of the /etc/nova/nova.conf file (comma-separated values). |
JsonFilter | Recognises and uses an instance’s custom JSON filters:
- Valid operators are: =, <, >, in, <=, >=, not, or, and.
- Recognised variables are: $free_ram_mb, $free_disk_mb, $total_usable_ram_mb, $vcpus_total, $vcpus_used.
The filter is specified as a query hint in the nova boot command. For example:
--hint query='['>=','$free_disk_mb',200]' |
MetricFilter | Filters out hosts with unavailable metrics. |
NUMATopologyFilter | Filters out hosts based on its NUMA topology; if the instance has no topology defined, any host can be used. The filter tries to match the exact NUMA topology of the instance to those of the host (it does not attempt to pack the instance onto the host). The filter also looks at the standard over-subscription limits for each NUMA node, and provides limits to the compute host accordingly. |
RamFilter | Uses the ram_allocation_ratio value in the /etc/nova/nova.conf file to filter out hosts exceeding the RAM over-commit ratio (the default ratio is 1.5). |
RetryFilter | Filters out hosts that have failed a scheduling attempt; valid if scheduler_max_attempts is greater than zero (by default, scheduler_max_attempts=3). |
SameHostFilter | Passes one or more specified hosts; specify hosts for the instance using the --hint same_host=UUID option of the nova boot command. |
ServerGroupAffinityFilter | Only passes hosts for a specific server group:
- Give the server group the affinity policy (nova server-group-create --policy affinity groupName).
- Give the instance the group (the --hint group=UUID option of nova boot). |
ServerGroupAntiAffinityFilter | Only passes hosts in a server group that do not already host an instance:
- Give the server group the anti-affinity policy (nova server-group-create --policy anti-affinity groupName).
- Give the instance the group (the --hint group=UUID option of nova boot). |
SimpleCIDRAffinityFilter | Only passes hosts on the IP subnet range specified by the instance’s cidr and build_near_host_ip scheduler hints. For example:
--hint build_near_host_ip=192.0.2.0 --hint cidr=/24 |
3.6.2. Configuring scheduling weights
Both cells and hosts can be weighted for scheduling; the host or cell with the largest weight (after filtering) is selected. All weighers are given a multiplier that is applied after normalising the node’s weight. A node’s weight is calculated as:
w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...
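A toy illustration of this formula for a single weigher; the free-RAM figures and the multiplier are invented, and norm() is modelled as simple min-max scaling into [0,1], which is how the scheduler normalises raw values:

```shell
# Toy numbers only: free RAM per host and a multiplier of 1.0.
weights=$(awk 'BEGIN {
  ram["host1"]=2048; ram["host2"]=8192; ram["host3"]=5120  # free RAM in MB
  mult=1.0                           # stand-in for the weigher multiplier
  min=2048; max=8192                 # lowest and highest raw values
  for (h in ram)                     # min-max scaling models norm()
    printf "%s %.2f\n", h, mult * (ram[h] - min) / (max - min)
}' | sort)
echo "$weights"
```

Here the host with the most free RAM ends up with the largest weight, so the scheduler would prefer it.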
You can configure weight options in the scheduler host’s /etc/nova/nova.conf
file (must have either root or nova user permissions).
3.6.2.1. Configure Weight Options for Hosts
You can define the host weighers you would like the scheduler to use in the [DEFAULT] scheduler_weight_classes option. Valid weighers are:
- nova.scheduler.weights.ram - Weighs the host’s available RAM.
- nova.scheduler.weights.metrics - Weighs the host’s metrics.
- nova.scheduler.weights.affinity - Weighs the host’s proximity to other hosts in the given server group.
- nova.scheduler.weights.all_weighers - Uses all host weighers (default).
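For example, to weigh hosts by available RAM only, the option might be set as follows (a sketch; the default all_weighers suits most deployments):

```ini
[DEFAULT]
scheduler_weight_classes=nova.scheduler.weights.ram
```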
Table 3.8. Host Weight Options
Weigher | Option | Description |
---|---|---|
All | [DEFAULT] scheduler_host_subset_size | Defines the subset size from which a host is selected (integer); must be at least 1. A value of 1 selects the first host returned by the weighing functions; any value less than 1 is ignored and 1 is used instead. |
affinity | [default] soft_affinity_weight_multiplier | Used for weighing hosts for group soft-affinity. Should be a positive floating-point number, because a negative value results in the opposite behavior, which is normally controlled by soft_anti_affinity_weight_multiplier. |
affinity | [default] soft_anti_affinity_weight_multiplier | Used for weighing hosts for group soft-anti-affinity. Should be a positive floating-point number, because a negative value results in the opposite behavior, which is normally controlled by soft_affinity_weight_multiplier. |
metrics | [metrics] required | Specifies how to handle metrics in [metrics] weight_setting that are unavailable:
- True - Metrics are required; if a metric is unavailable, an exception is raised. To avoid the exception, use the MetricFilter filter in the scheduler_default_filters option.
- False - An unavailable metric is treated as a negative factor in the weighing process; the returned value is set by weight_of_unavailable. |
metrics | [metrics] weight_of_unavailable | Used as the weight if any metric in [metrics] weight_setting is unavailable; valid if required=False. |
metrics | [metrics] weight_multiplier | Multiplier used for weighing metrics. By default, weight_multiplier=1.0. |
metrics | [metrics] weight_setting | Specifies metrics and the ratio with which they are weighed; use a comma-separated list of metric=ratio pairs.
Example: weight_setting=cpu.user.percent=1.0 weighs cpu.user.percent at 100%. |
ram | [DEFAULT] ram_weight_multiplier | Multiplier for RAM (floating point). By default, ram_weight_multiplier=1.0, which spreads instances across hosts with the most available RAM; use a negative value to stack instances on hosts with the least available RAM. |
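The weight_setting value above is a plain comma-separated list, so it can be inspected with standard tools. A sketch (the metric names here are examples only, not a recommendation):

```shell
# Example metric names; weight_setting takes metric=ratio pairs.
setting='cpu.percent=1.0,ram.usage=0.5'
pairs=$(printf '%s' "$setting" | tr ',' '\n' | awk -F= '{printf "%s -> %s\n", $1, $2}')
printf '%s\n' "$pairs"
```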
3.6.2.2. Configure Weight Options for Cells
You define which cell weighers you would like the scheduler to use in the [cells] scheduler_weight_classes option (/etc/nova/nova.conf
file; you must have either root
or nova
user permissions).
The use of cells is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Valid weighers are:
- nova.cells.weights.all_weighers - Uses all cell weighers (default).
- nova.cells.weights.mute_child - Weighs whether a child cell has not sent capacity or capability updates for some time.
- nova.cells.weights.ram_by_instance_type - Weighs the cell’s available RAM.
- nova.cells.weights.weight_offset - Evaluates a cell’s weight offset.
  Note: A cell’s weight offset is specified using the --woffset option of the nova-manage cell create command.
Table 3.9. Cell Weight Options
Weighers | Option | Description |
---|---|---|
mute_child | [cells] mute_weight_multiplier | Multiplier for hosts which have been silent for some time (negative floating point). By default, this value is -10.0. |
mute_child | [cells] mute_weight_value | Weight value given to silent hosts (positive floating point). By default, this value is 1000.0. |
ram_by_instance_type | [cells] ram_weight_multiplier | Multiplier for weighing RAM (floating point). By default, this value is 10.0. |
weight_offset | [cells] offset_weight_multiplier | Multiplier for weighing cells (floating point). Enables the instance to specify a preferred cell by setting its weight offset to 999999999999999 (highest weight wins). By default, this value is 1.0. |
3.7. Evacuating instances
If you want to move an instance from a dead or shut-down compute node to a new host server in the same environment (for example, because the server needs to be swapped out), you can evacuate it using nova evacuate
.
- An evacuation is only useful if the instance disks are on shared storage or if the instance disks are Block Storage volumes. Otherwise, the disks will not be accessible from the new compute node.
- An instance can only be evacuated from a server if the server is shut down; if the server is not shut down, the evacuate command will fail.
If you have a functioning compute node, and you want to:
- Make a static copy (not running) of an instance for backup purposes or to copy the instance to a different environment, make a snapshot using nova image-create (see Chapter 2, How to Migrate a Static Instance). Images created using nova image-create are only usable by nova (and not glance).
- Move an instance in a static state (not running) to a host in the same environment (shared storage not needed), migrate it using nova migrate (see Migrate a Static Instance).
- Move an instance in a live state (running) to a host in the same environment, migrate it using nova live-migration (see Migrate a Live (running) Instance).
3.7.1. Evacuating one instance
Evacuate an instance using:
# nova evacuate [--password pass] instance_name [target_host]
Where:
- --password - Admin password to set for the evacuated instance. If a password is not specified, a random password is generated and output when evacuation is complete.
- instance_name - Name of the instance to be evacuated.
- target_host - Host to which the instance is evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:
# nova host-list | grep compute
For example:
# nova evacuate myDemoInstance Compute2_OnEL7.myDomain
3.7.2. Evacuating all instances
Evacuate all instances on a specified host using:
# nova host-evacuate [--target_host <target_host>] [--force] <host>
Where:
- <target_host> - The host the instance is evacuated to. If you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using the following command:
# nova host-list | grep compute
- <host> - Name of the host to be evacuated.
For example:
# nova host-evacuate --target_host Compute2_OnEL7.localdomain myDemoHost.localdomain
3.7.3. Configuring shared storage
If you are using shared storage, this procedure exports the instances directory for the Compute service to the two nodes, and ensures the nodes have access. The directory path is set in the state_path
and instances_path
parameters in the /etc/nova/nova.conf
file. This procedure uses the default value, which is /var/lib/nova/instances
. Only users with root access can set up shared storage.
On the controller host:
Ensure the /var/lib/nova/instances directory has read-write access by the Compute service user (this user must be the same across controller and nodes). For example:
drwxr-xr-x. 9 nova nova 4096 Nov 5 20:37 instances
Add the following lines to the /etc/exports file; switch out node1_IP and node2_IP for the IP addresses of the two compute nodes:
/var/lib/nova/instances node1_IP(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances node2_IP(rw,sync,fsid=0,no_root_squash)
Export the /var/lib/nova/instances directory to the compute nodes:
# exportfs -avr
Restart the NFS server:
# systemctl restart nfs-server
On each compute node:
- Ensure the /var/lib/nova/instances directory exists locally.
- Add the following line to the /etc/fstab file:
:/var/lib/nova/instances /var/lib/nova/instances nfs4 defaults 0 0
Mount the controller’s instance directory (all devices listed in /etc/fstab):
# mount -a -v
Ensure qemu can access the directory’s images:
# ls -ld /var/lib/nova/instances drwxr-xr-x. 9 nova nova 4096 Nov 5 20:37 /var/lib/nova/instances
Ensure that the node can see the instances directory, for example:
# df -k /var/lib/nova/instances
You can also run the following to view all mounted devices:
# df -k
3.8. Managing instance snapshots
An instance snapshot allows you to create a new image from an instance. This is very convenient for upgrading base images or for taking a published image and customizing it for local use.
The difference between an image that you upload directly to the Image Service and an image that you create by snapshot is that an image created by snapshot has additional properties in the Image Service database. These properties are found in the image_properties
table and include the following parameters:
Table 3.10. Snapshot Options
Name | Value |
---|---|
image_type | snapshot |
instance_uuid | <uuid of instance that was snapshotted> |
base_image_ref | <uuid of original image of instance that was snapshotted> |
image_location | snapshot |
Snapshots allow you to create new instances based on the snapshot, and potentially restore an instance to that state. Moreover, this can be performed while the instance is running.
By default, a snapshot is accessible to the users and projects that were selected while launching an instance that the snapshot is based on.
3.8.1. Creating an instance snapshot
If you intend to use an instance snapshot as a template to create new instances, you must ensure that the disk state is consistent. Before you create a snapshot, set the snapshot image metadata property os_require_quiesce=yes
. For example,
$ glance image-update IMAGE_ID --property os_require_quiesce=yes
For this to work, the guest should have the qemu-guest-agent
package installed, and the image should be created with the metadata property parameter hw_qemu_guest_agent=yes
set. For example,
$ glance image-create --name NAME \
  --disk-format raw \
  --container-format bare \
  --file FILE_NAME \
  --is-public True \
  --property hw_qemu_guest_agent=yes \
  --progress
If you unconditionally enable the hw_qemu_guest_agent=yes
parameter, then you are adding another device to the guest. This consumes a PCI slot, and will limit the number of other devices you can allocate to the guest. It also causes Windows guests to display a warning message about an unknown hardware device.
For these reasons, setting the hw_qemu_guest_agent=yes
parameter is optional, and the parameter should be used for only those images that require the QEMU guest agent.
- In the dashboard, select Project > Compute > Instances.
- Select the instance from which you want to create a snapshot.
- In the Actions column, click Create Snapshot.
In the Create Snapshot dialog, enter a name for the snapshot and click Create Snapshot.
The Images category now shows the instance snapshot.
To launch an instance from a snapshot, select the snapshot and click Launch.
3.8.2. Managing a snapshot
- In the dashboard, select Project > Images.
- All snapshots you created appear under the Project option.
For every snapshot you create, you can perform the following functions, using the dropdown list:
- Use the Create Volume option to create a volume, entering the values for volume name, description, image source, volume type, size, and availability zone. For more information, see Create a Volume in the Storage Guide.
- Use the Edit Image option to update the snapshot image by updating the values for name, description, Kernel ID, Ramdisk ID, Architecture, Format, Minimum Disk (GB), Minimum RAM (MB), public or private. For more information, see Section 1.2.3, “Updating an image”.
- Use the Delete Image option to delete the snapshot.
3.8.3. Rebuilding an instance to a state in a snapshot
In the event that you delete an instance on which a snapshot is based, the snapshot still stores the instance ID. You can check this information using the nova image-list command, and use the snapshot to restore the instance.
- In the dashboard, select Project > Compute > Images.
- Select the snapshot from which you want to restore the instance.
- In the Actions column, click Launch Instance.
- In the Launch Instance dialog, enter a name and the other details for the instance and click Launch.
For more information on launching an instance, see Section 3.1.2, “Creating an instance”.
3.8.4. Consistent Snapshots
Previously, file systems had to be quiesced manually (fsfreeze) before taking a snapshot of active instances for consistent backups.
Compute’s libvirt
driver automatically requests the QEMU Guest Agent to freeze the file systems (and applications if fsfreeze-hook
is installed) during an image snapshot. Support for quiescing file systems enables scheduled, automatic snapshots at the block device level.
This feature is only valid if the QEMU Guest Agent is installed (qemu-ga) and the image metadata enables the agent (hw_qemu_guest_agent=yes).
Snapshots should not be considered a substitute for an actual system backup.
3.9. Using rescue mode for instances
Compute has a method to reboot a virtual machine in rescue mode. Rescue mode provides a mechanism for access when the virtual machine image renders the instance inaccessible. A rescue virtual machine allows a user to fix their virtual machine by accessing the instance with a new root password. This feature is useful if an instance’s filesystem is corrupted. By default, rescue mode starts an instance from the initial image attaching the current boot disk as a secondary one.
3.9.1. Preparing an image for a rescue mode instance
Because both the boot disk and the rescue mode disk have the same UUID, the virtual machine can sometimes boot from the boot disk instead of the rescue mode disk.
To avoid this issue, create a new image to use as a rescue image, based on the procedure in Section 1.2.1, “Creating an image”:
The rescue image is stored in glance and configured in nova.conf as a default, or you can select it when you perform the rescue.
3.9.1.1. Rescuing image if using ext4 filesystem
When the base image uses ext4
filesystem, you can create a rescue image from it using the following procedure:
Change the UUID to a random value using the
tune2fs
command:# tune2fs -U random /dev/DEVICE_NODE
Here DEVICE_NODE is the root device node (for example,
sda
,vda
, and so on).Verify the details of the filesystem, including the new UUID:
# tune2fs -l
- Update the /etc/fstab to use the new UUID. You may need to repeat this for any additional partitions you have that are mounted in the fstab by UUID.
- Update the /boot/grub2/grub.conf file and update the UUID parameter with the new UUID of the root disk.
- Shut down and use this image as your rescue image. This will cause the rescue image to have a new random UUID that will not conflict with the instance that you are rescuing.
You cannot change the UUID of the root device on a running virtual machine when it uses the XFS filesystem. If the virtual machine boots from the boot disk instead of the rescue mode disk, reboot it until it launches from the rescue mode disk.
3.9.2. Adding the rescue image to the OpenStack Image Service
When you have completed modifying the UUID of your image, use the following commands to add the generated rescue image to the OpenStack Image service:
Add the rescue image to the Image service:
# glance image-create --name IMAGE_NAME --disk-format qcow2 \
  --container-format bare --is-public True --file IMAGE_PATH
Here IMAGE_NAME is the name of the image, and IMAGE_PATH is the location of the image.
Use the image-list command to obtain the IMAGE_ID required for launching an instance in rescue mode:
# glance image-list
You can also upload an image using the OpenStack Dashboard, see Section 1.2.2, “Uploading an image”.
3.9.3. Launching an instance in rescue mode
Since you need to rescue an instance with a specific image, rather than the default one, use the --image parameter:
# nova rescue --image IMAGE_ID VIRTUAL_MACHINE_ID
Here IMAGE_ID is the ID of the image you want to use, and VIRTUAL_MACHINE_ID is the ID of the virtual machine that you want to rescue.
Note: The nova rescue command allows an instance to perform a soft shutdown. This allows the guest operating system to perform a controlled shutdown before the instance is powered off. The shutdown behavior is configured by the shutdown_timeout parameter that can be set in the nova.conf file. The value stands for the overall period (in seconds) a guest operating system is allowed to complete the shutdown. The default timeout is 60 seconds.
The timeout value can be overridden on a per-image basis by means of os_shutdown_timeout, which is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly.
- Reboot the virtual machine.
- Confirm the status of the virtual machine is RESCUE on the controller node by using the nova list command or by using the dashboard.
- Log in to the new virtual machine dashboard by using the password for rescue mode.
You can now make the necessary changes to your instance to fix any issues.
3.9.4. Unrescuing an instance
You can unrescue
the fixed instance to restart it from the boot disk.
Execute the following commands on the controller node.
# nova unrescue VIRTUAL_MACHINE_ID
Here VIRTUAL_MACHINE_ID is the ID of the virtual machine that you want to unrescue.
The status of your instance returns to ACTIVE once the unrescue operation has completed successfully.
3.10. Setting a configuration drive for instances
You can use the config-drive
parameter to present a read-only drive to your instances. This drive can contain selected files that are then accessible to the instance. The configuration drive is attached to the instance at boot, and is presented to the instance as a partition. Configuration drives are useful when combined with cloud-init (for server bootstrapping), and when you want to pass large files to your instances.
3.10.1. Configuration drive options
Set the initial configuration drive options under [DEFAULT]
in nova.conf:
- config_drive_format - Sets the format of the drive; accepts the options iso9660 and vfat. By default, it uses iso9660.
- force_config_drive=true - Forces the configuration drive to be presented to all instances.
- mkisofs_cmd=genisoimage - Specifies the command to use for ISO file creation. This value must not be changed, as only genisoimage is supported.
3.10.2. Using a configuration drive
An instance attaches its configuration drive at boot time. This is enabled by the --config-drive
option. For example, this command creates a new instance named test-instance01 and attaches a drive containing a file named /root/user-data.txt:
# nova boot --flavor m1.tiny --config-drive true --file /root/user-data.txt=/root/user-data.txt --image cirros test-instance01
Once the instance has booted, you can log in to it and see a file named /root/user-data.txt.
You can use the configuration drive as a source for cloud-init information. During the initial instance boot, cloud-init can automatically mount the configuration drive and run the setup scripts.