8. Manage instances and hosts

Instances are virtual machines that run inside the cloud on physical compute nodes. The Compute service manages instances. A host is the node on which a group of instances resides.

This section describes how to perform the different tasks involved in instance management, such as adding floating IP addresses, stopping and starting instances, and terminating instances. This section also discusses node management tasks.

8.1. Manage IP addresses

Each instance has a private, fixed IP address and can also have a public, or floating, address. Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet.

When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.

A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project. After you allocate a floating IP address to a project, you can:

  • Associate the floating IP address with an instance of the project. Only one floating IP address can be associated with an instance at any given time.

  • Disassociate a floating IP address from an instance in the project.

  • Delete a floating IP from the project; deleting a floating IP automatically deletes that IP's associations.

Use the nova floating-ip-* commands to manage floating IP addresses.

8.1.1. List floating IP address information

  • To list all pools that provide floating IP addresses, run:

    $ nova floating-ip-pool-list
    +--------+
    | name   |
    +--------+
    | public |
    | test   |
    +--------+
    Note

    If this list is empty, the cloud administrator must configure a pool of floating IP addresses.

  • To list all floating IP addresses that are allocated to the current project, run:

    $ nova floating-ip-list
    +--------------+--------------------------------------+----------+--------+
    | Ip           | Instance Id                          | Fixed Ip | Pool   |
    +--------------+--------------------------------------+----------+--------+
    | 172.24.4.225 | 4a60ff6a-7a3c-49d7-9515-86ae501044c6 | 10.0.0.2 | public |
    | 172.24.4.226 | None                                 | None     | public |
    +--------------+--------------------------------------+----------+--------+

    For each floating IP address that is allocated to the current project, the command outputs the floating IP address, the ID for the instance to which the floating IP address is assigned, the associated fixed IP address, and the pool from which the floating IP address was allocated.
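The tabular output lends itself to scripting. The following sketch pulls the floating IP addresses that are not yet associated with any instance, using `awk` on sample output shaped like the listing above (a here-doc stands in for the live `nova floating-ip-list` call):

```shell
# Sample output standing in for `nova floating-ip-list` (from the listing above).
sample_output=$(cat <<'EOF'
+--------------+--------------------------------------+----------+--------+
| Ip           | Instance Id                          | Fixed Ip | Pool   |
+--------------+--------------------------------------+----------+--------+
| 172.24.4.225 | 4a60ff6a-7a3c-49d7-9515-86ae501044c6 | 10.0.0.2 | public |
| 172.24.4.226 | None                                 | None     | public |
+--------------+--------------------------------------+----------+--------+
EOF
)
# Field 3 is "Instance Id"; rows where it is None are unassociated addresses.
# On a live cloud, pipe `nova floating-ip-list` in place of the sample.
echo "$sample_output" | awk -F'|' '$3 ~ /None/ {gsub(/ /, "", $2); print $2}'
```

Running this against the sample prints `172.24.4.226`, the one address with no instance attached.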

8.1.2. Associate floating IP addresses

You can assign a floating IP address to a project and to an instance.

  1. Run the following command to allocate a floating IP address to the current project. By default, the floating IP address is allocated from the public pool. The command outputs the allocated IP address.

    $ nova floating-ip-create
    +--------------+-------------+----------+--------+
    | Ip           | Instance Id | Fixed Ip | Pool   |
    +--------------+-------------+----------+--------+
    | 172.24.4.225 | None        | None     | public |
    +--------------+-------------+----------+--------+
    Note

    If more than one IP address pool is available, you can specify the pool from which to allocate the IP address, using the pool's name. For example, to allocate a floating IP address from the test pool, run:

    $ nova floating-ip-create test
  2. List all project instances with which a floating IP address could be associated:

    $ nova list
    +--------------------------------------+------+---------+------------+-------------+------------------+
    | ID                                   | Name | Status  | Task State | Power State | Networks         |
    +--------------------------------------+------+---------+------------+-------------+------------------+
    | d5c854f9-d3e5-4fce-94d9-3d9f9f8f2987 | VM1  | ACTIVE  | -          | Running     | private=10.0.0.3 |
    | 42290b01-0968-4313-831f-3b489a63433f | VM2  | SHUTOFF | -          | Shutdown    | private=10.0.0.4 |
    +--------------------------------------+------+---------+------------+-------------+------------------+
  3. Associate an IP address with an instance in the project, as follows:

    $ nova floating-ip-associate INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS

    For example:

    $ nova floating-ip-associate VM1 172.24.4.225

    Notice that the instance is now associated with two IP addresses:

    $ nova list
    +--------------------------------------+------+---------+------------+-------------+--------------------------------+
    | ID                                   | Name | Status  | Task State | Power State | Networks                       |
    +--------------------------------------+------+---------+------------+-------------+--------------------------------+
    | d5c854f9-d3e5-4fce-94d9-3d9f9f8f2987 | VM1  | ACTIVE  | -          | Running     | private=10.0.0.3, 172.24.4.225 |
    | 42290b01-0968-4313-831f-3b489a63433f | VM2  | SHUTOFF | -          | Shutdown    | private=10.0.0.4               |
    +--------------------------------------+------+---------+------------+-------------+--------------------------------+

    After you associate the IP address and configure security group rules for the instance, the instance is publicly available at the floating IP address.

    Note

    If an instance is connected to multiple networks, you can associate a floating IP address with a specific fixed IP address using the optional --fixed-address parameter:

    $ nova floating-ip-associate --fixed-address=FIXED_IP_ADDRESS INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
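The allocate-then-associate steps above can be chained in a script by capturing the address that `nova floating-ip-create` prints. This sketch extracts the IP from sample output shaped like the table in step 1 (the here-doc is a stand-in for the live command, and VM1 is the example instance from step 2):

```shell
# Sample output standing in for `nova floating-ip-create` (from step 1 above).
create_output=$(cat <<'EOF'
+--------------+-------------+----------+--------+
| Ip           | Instance Id | Fixed Ip | Pool   |
+--------------+-------------+----------+--------+
| 172.24.4.225 | None        | None     | public |
+--------------+-------------+----------+--------+
EOF
)
# Grab the first dotted-quad address that appears in the table body.
floating_ip=$(echo "$create_output" | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
echo "$floating_ip"
# On a live cloud:  nova floating-ip-associate VM1 "$floating_ip"
```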

8.1.3. Disassociate floating IP addresses

  1. Release a floating IP address from an instance, as follows:

    $ nova floating-ip-disassociate INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
  2. Release the floating IP address from the current project, as follows:

    $ nova floating-ip-delete FLOATING_IP_ADDRESS

    The IP address is returned to the pool of IP addresses that is available for all projects. If the IP address is still associated with a running instance, it is automatically disassociated from that instance.

8.2. Change the size of your server

You change the size of a server by changing its flavor.

Prerequisite: If you are resizing an instance in a distributed deployment, ensure one of the following:

  • Communication between hosts. Set up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, the compute nodes might share the same SSH key).

  • Resizing on the original host. Enable resizing on the original host by:

    • Setting the following parameter in the /etc/nova/nova.conf file:

      [DEFAULT]
      allow_resize_to_same_host = True
    • Ensuring that the host has enough space available for the new size.

Procedure 2.1. Resize a server

  1. Show information about your server, including its size, which is shown as the value of the flavor property.

    $ nova show myCirrosServer
    +--------------------------------------+----------------------------------------------------------+
    | Property                             | Value                                                    |
    +--------------------------------------+----------------------------------------------------------+
    | OS-DCF:diskConfig                    | AUTO                                                     |
    | OS-EXT-AZ:availability_zone          | nova                                                     |
    | OS-EXT-SRV-ATTR:host                 | devstack                                                 |
    | OS-EXT-SRV-ATTR:hypervisor_hostname  | devstack                                                 |
    | OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                        |
    | OS-EXT-STS:power_state               | 1                                                        |
    | OS-EXT-STS:task_state                | -                                                        |
    | OS-EXT-STS:vm_state                  | active                                                   |
    | OS-SRV-USG:launched_at               | 2014-08-14T06:10:21.000000                               |
    | OS-SRV-USG:terminated_at             | -                                                        |
    | accessIPv4                           |                                                          |
    | accessIPv6                           |                                                          |
    | config_drive                         |                                                          |
    | created                              | 2014-08-14T06:09:47Z                                     |
    | flavor                               | m1.small (2)                                             |
    | hostId                               | 6e1e69b71ac9b1e6871f91e2dfc9a9b9ceca0f052a81d45385       |
    | id                                   | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5                     |
    | image                                | cirros-uec (397e713c-b95b-4186-ad46-6126863ea0a9)        |
    | key_name                             | None                                                     |
    | metadata                             | {}                                                       |
    | name                                 | myCirrosServer                                           |
    | os-extended-volumes:volumes_attached | []                                                       |
    | progress                             | 0                                                        |
    | private network                      | 10.0.0.3                                                 |
    | security_groups                      | default                                                  |
    | status                               | ACTIVE                                                   |
    | tenant_id                            | 66265572db174a7aa66eba661f58eb9e                         |
    | updated                              | 2014-08-14T22:54:56Z                                     |
    | user_id                              | 376744b5910b4b4da7d8e6cb483b06a8                         |
    +--------------------------------------+----------------------------------------------------------+

    The size (flavor) of the server is m1.small (2).

  2. List the available flavors with the following command:

    $ nova flavor-list
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    | 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      |
    | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
    | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
    | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
    | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  3. To resize the server, pass the server ID or name and the new flavor to the nova resize command. Include the --poll parameter to report the resize progress.

    $ nova resize myCirrosServer 4 --poll
    Instance resizing... 100% complete
    Finished
  4. Show the status for your server:

    $ nova list
    +--------------------------------------+----------------+--------+------------+-------------+------------------+
    | ID                                   | Name           | Status | Task State | Power State | Networks         |
    +--------------------------------------+----------------+--------+------------+-------------+------------------+
    | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer | RESIZE | -          | Running     | private=10.0.0.3 |
    +--------------------------------------+----------------+--------+------------+-------------+------------------+

    When the resize completes, the status becomes VERIFY_RESIZE.

  5. Confirm the resize:

    $ nova resize-confirm 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5

    The server status becomes ACTIVE.

Note

If the resize fails or does not work as expected, you can revert the resize:

$ nova resize-revert 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5

The server status becomes ACTIVE.
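The confirm step only succeeds once the server has reached VERIFY_RESIZE, so scripted resizes usually poll the status first. This sketch wraps the polling loop in a function that takes the status-reporting command as an argument (a hypothetical helper; injecting the command keeps the loop testable without a live cloud):

```shell
# Poll until the given command reports the target status. Bounded retries
# instead of an endless loop; use a real delay (e.g. sleep 5) in practice.
wait_for_status() {
  target=$1; shift
  for _ in 1 2 3 4 5; do
    status=$("$@")
    [ "$status" = "$target" ] && return 0
    sleep 0
  done
  return 1
}
# Live usage (not run here): extract the status field from `nova show`, then
#   wait_for_status VERIFY_RESIZE sh -c \
#     "nova show myCirrosServer | awk -F'|' '/ status /{gsub(/ /,\"\",\$3); print \$3}'" \
#   && nova resize-confirm myCirrosServer
```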

8.3. Search for an instance using IP address

You can search for an instance using the IP address parameter, --ip, with the nova list command.

$ nova list --ip IP_ADDRESS 

The following example shows the results of a search on 10.0.0.4.

$ nova list --ip 10.0.0.4
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+

8.4. Stop and start an instance

Use one of the following methods to stop and start an instance.

8.4.1. Pause and unpause an instance

  • To pause an instance, run the following command:

    $ nova pause INSTANCE_NAME

    This command stores the state of the VM in RAM. A paused instance continues to exist in a frozen state but does not execute.

  • To unpause the instance, run the following command:

    $ nova unpause INSTANCE_NAME

8.4.2. Suspend and resume an instance

Administrative users might want to suspend an instance if it is infrequently used or to perform system maintenance. When you suspend an instance, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending an instance is similar to placing a device in hibernation; memory and vCPUs become available to create other instances.

  • To initiate a hypervisor-level suspend operation, run the following command:

    $ nova suspend INSTANCE_NAME
  • To resume a suspended instance, run the following command:

    $ nova resume INSTANCE_NAME

8.4.3. Shelve and unshelve an instance

Shelving is useful if you have an instance that you are not using, but would like to retain in your list of servers. For example, you can shelve an instance at the end of a work week and unshelve it again on Monday. All associated data and resources are kept; however, anything still in memory is not retained. If a shelved instance is no longer needed, it can also be entirely removed.

You can complete the following shelving tasks:

Shelve an instance

Shuts down the instance and stores it together with associated data and resources (a snapshot is taken unless the instance is volume-backed). Anything in memory is lost. Use the following command:

$ nova shelve SERVERNAME
Unshelve an instance

Restores the instance:

$ nova unshelve SERVERNAME
Remove a shelved instance

Removes a shelved instance from the compute host so that it no longer consumes hypervisor resources; the instance's data and resource associations on the host are deleted. Use this to move an instance that is no longer needed on the hypervisor off the host in order to minimize resource usage:

$ nova shelve-offload SERVERNAME

8.5. Reboot an instance

You can soft or hard reboot a running instance. A soft reboot attempts a graceful shut down and restart of the instance. A hard reboot power cycles the instance.

  • By default, when you reboot a server, it is a soft reboot.

    $ nova reboot SERVER

  • To perform a hard reboot, pass the --hard parameter, as follows:

    $ nova reboot --hard SERVER

8.6. Delete an instance

When you no longer need an instance, you can delete it.

  1. List all instances:

    $ nova list
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+
    | ID                                   | Name                 | Status | Task State | Power State | Networks         |
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+
    | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
    | 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
    | d7efd3e4-d375-46d1-9d57-372b6e4bdb7f | newServer            | ERROR  | None       | NOSTATE     |                  |
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+
  2. Run the nova delete command to delete the instance. The following example shows deletion of the newServer instance, which is in ERROR state:

    $ nova delete newServer

    The command produces no output when the deletion succeeds.

  3. To verify that the server was deleted, run the nova list command:

    $ nova list
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+
    | ID                                   | Name                 | Status | Task State | Power State | Networks         |
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+
    | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
    | 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
    +--------------------------------------+----------------------+--------+------------+-------------+------------------+

    The deleted instance does not appear in the list.

8.7. Access an instance through a console

To access an instance through a VNC console, run the following command:

$ nova get-vnc-console INSTANCE_NAME xvpvnc

The command returns a URL from which you can access your instance:

+--------+------------------------------------------------------------------------------+
| Type   | Url                                                                          |
+--------+------------------------------------------------------------------------------+
| xvpvnc | http://166.78.190.96:6081/console?token=c83ae3a3-15c4-4890-8d45-aefb494a8d6c |
+--------+------------------------------------------------------------------------------+
Note

To access an instance through a browser-based noVNC console, specify the novnc parameter instead of the xvpvnc parameter.

8.8. Manage bare-metal nodes

The bare-metal driver for OpenStack Compute manages provisioning of physical hardware through the common cloud APIs and tools such as Orchestration (heat). Typical use cases are single-tenant clouds, such as a high-performance computing cluster, and deploying OpenStack itself.

If you use the bare-metal driver, you must create a network interface and add it to a bare-metal node. Then, you can launch an instance from a bare-metal image.

You can list and delete bare-metal nodes. When you delete a node, any associated network interfaces are removed. You can list and remove network interfaces that are associated with a bare-metal node.

Commands

The following commands can be used to manage bare-metal nodes.

  • baremetal-interface-add. Adds a network interface to a bare-metal node.

  • baremetal-interface-list. Lists network interfaces associated with a bare-metal node.

  • baremetal-interface-remove. Removes a network interface from a bare-metal node.

  • baremetal-node-create. Creates a bare-metal node.

  • baremetal-node-delete. Removes a bare-metal node and any associated interfaces.

  • baremetal-node-list. Lists available bare-metal nodes.

  • baremetal-node-show. Shows information about a bare-metal node.

8.8.1. Create a bare-metal node

When you create a bare-metal node, the power management (PM) address, username, and password must match those configured in the hardware's BIOS/IPMI settings. The positional arguments specify the service host, the number of CPUs, the memory in MB, the local disk in GB, and the MAC address of the provisioning interface.

$ nova baremetal-node-create --pm_address=PM_ADDRESS --pm_user=PM_USERNAME \
 --pm_password=PM_PASSWORD $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff

The following example shows the command and results for creating a node with the PM address 1.2.3.4, the PM username ipmi, and the PM password ipmi.

$ nova baremetal-node-create --pm_address=1.2.3.4 --pm_user=ipmi \
 --pm_password=ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff
+------------------+-------------------+
| Property         | Value             |
+------------------+-------------------+
| instance_uuid    | None              |
| pm_address       | 1.2.3.4           |
| interfaces       | []                |
| prov_vlan_id     | None              |
| cpus             | 1                 |
| memory_mb        | 512               |
| prov_mac_address | aa:bb:cc:dd:ee:ff |
| service_host     | rhel              |
| local_gb         | 10                |
| id               | 1                 |
| pm_user          | ipmi              |
| terminal_port    | None              |
+------------------+-------------------+

8.8.2. Add a network interface to the node

For each NIC on the node, you must create an interface, specifying the interface's MAC address.

$ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff
+-------------+-------------------+
| Property    | Value             |
+-------------+-------------------+
| datapath_id | 0                 |
| id          | 1                 |
| port_no     | 0                 |
| address     | aa:bb:cc:dd:ee:ff |
+-------------+-------------------+

8.8.3. Launch an instance from a bare-metal image

A bare-metal instance is an instance created directly on a physical machine, without any virtualization layer running underneath it. Compute retains power control via IPMI. In some situations, Compute may also retain network control via OpenStack Networking (neutron) and OpenFlow.

$ nova boot --image my-baremetal-image --flavor my-baremetal-flavor --nic net-id=myNetID test
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | BUILD                                |
| id                          | cc302a8f-cd81-484b-89a8-b75eb3911b1b |

... wait for instance to become active ...
Note

Set the --availability_zone parameter to specify which zone or node to use to start the server. Separate the zone from the host name with a comma. For example:

$ nova boot --availability_zone=zone:HOST,NODE

HOST is optional for the --availability_zone parameter; you can specify simply zone:,NODE, but you must still include the comma.

8.8.4. List bare-metal nodes and interfaces

Use the nova baremetal-node-list command to view all bare-metal nodes and interfaces. When a node is in use, its status includes the UUID of the instance that runs on it:

$ nova baremetal-node-list
+----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+
| ID | Host   | CPUs | Memory_MB | Disk_GB | MAC Address       | VLAN | PM Address | PM Username | PM Password | Terminal Port |
+----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+
| 1  | rhel   | 1    | 512       | 10      | aa:bb:cc:dd:ee:ff | None | 1.2.3.4    | ipmi        |             | None          |
+----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+

8.8.5. Show details for a bare-metal node

Use the nova baremetal-node-show command with the node ID to view the details for a bare-metal node.

$ nova baremetal-node-show 1
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| instance_uuid    | cc302a8f-cd81-484b-89a8-b75eb3911b1b |
| pm_address       | 1.2.3.4                              |
| interfaces       | [{u'datapath_id': u'0', u'id': 1, u'port_no': 0, u'address': u'aa:bb:cc:dd:ee:ff'}] |
| prov_vlan_id     | None                                 |
| cpus             | 1                                    |
| memory_mb        | 512                                  |
| prov_mac_address | aa:bb:cc:dd:ee:ff                    |
| service_host     | rhel                                 |
| local_gb         | 10                                   |
| id               | 1                                    |
| pm_user          | ipmi                                 |
| terminal_port    | None                                 |
+------------------+--------------------------------------+

8.9. Show usage statistics for hosts and instances

You can show basic statistics on resource usage for hosts and instances.

Note

For more sophisticated monitoring, see the Telemetry (ceilometer) project. You can also use tools such as Ganglia or Graphite to gather more detailed data.

8.9.1. Show host usage statistics

The following examples show the host usage statistics for a host called devstack.

  • List the hosts and the nova-related services that run on them:

    $ nova host-list
    +-----------+-------------+----------+
    | host_name | service     | zone     |
    +-----------+-------------+----------+
    | devstack  | conductor   | internal |
    | devstack  | compute     | nova     |
    | devstack  | cert        | internal |
    | devstack  | network     | internal |
    | devstack  | scheduler   | internal |
    | devstack  | consoleauth | internal |
    +-----------+-------------+----------+
  • Get a summary of resource usage of all of the instances running on the host:

    $ nova host-describe devstack
    +----------+----------------------------------+-----+-----------+---------+
    | HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
    +----------+----------------------------------+-----+-----------+---------+
    | devstack | (total)                          | 2   | 4003      | 157     |
    | devstack | (used_now)                       | 3   | 5120      | 40      |
    | devstack | (used_max)                       | 3   | 4608      | 40      |
    | devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
    | devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
    +----------+----------------------------------+-----+-----------+---------+

    The cpu column shows the sum of the virtual CPUs for instances running on the host.

    The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the host.

    The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the host.

    The row that has the value used_now in the PROJECT column shows the sum of the resources allocated to the instances that run on the host, plus the resources allocated to the host itself.

    The row that has the value used_max in the PROJECT column shows the sum of the resources allocated only to the instances that run on the host.

Note

These values are computed by using information about the flavors of the instances that run on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host.
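Because the values come from flavor definitions, the used_max row can be reproduced by summing the per-project rows in the sample `nova host-describe` output above:

```shell
# Sum the per-project rows from the sample table: project b70d90d6... has
# 1 vCPU / 512 MB / 0 GB, project 66265572... has 2 vCPUs / 4096 MB / 40 GB.
cpu_used_max=$((1 + 2))
mem_used_max=$((512 + 4096))
disk_used_max=$((0 + 40))
echo "$cpu_used_max $mem_used_max $disk_used_max"    # prints: 3 4608 40
```

The result matches the used_max row (3 CPUs, 4608 MB, 40 GB) in the table.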

8.9.2. Show instance usage statistics

  • Get CPU, memory, I/O, and network statistics for an instance.

    1. List instances:

      $ nova list
      +--------------------------------------+----------------------+--------+------------+-------------+------------------+
      | ID                                   | Name                 | Status | Task State | Power State | Networks         |
      +--------------------------------------+----------------------+--------+------------+-------------+------------------+
      | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
      | 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
      +--------------------------------------+----------------------+--------+------------+-------------+------------------+
    2. Get diagnostic statistics:

      $ nova diagnostics myCirrosServer
      +------------------+----------------+
      | Property         | Value          |
      +------------------+----------------+
      | vnet1_rx         | 1210744        |
      | cpu0_time        | 19624610000000 |
      | vda_read         | 0              |
      | vda_write        | 0              |
      | vda_write_req    | 0              |
      | vnet1_tx         | 863734         |
      | vnet1_tx_errors  | 0              |
      | vnet1_rx_drop    | 0              |
      | vnet1_tx_packets | 3855           |
      | vnet1_tx_drop    | 0              |
      | vnet1_rx_errors  | 0              |
      | memory           | 2097152        |
      | vnet1_rx_packets | 5485           |
      | vda_read_req     | 0              |
      | vda_errors       | -1             |
      +------------------+----------------+
  • Get summary statistics for each tenant:

    $ nova usage-list
    Usage from 2013-06-25 to 2013-07-24:
    +----------------------------------+-----------+--------------+-----------+---------------+
    | Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
    +----------------------------------+-----------+--------------+-----------+---------------+
    | b70d90d65e464582b6b2161cf3603ced | 1         | 344064.44    | 672.00    | 0.00          |
    | 66265572db174a7aa66eba661f58eb9e | 3         | 671626.76    | 327.94    | 6558.86       |
    +----------------------------------+-----------+--------------+-----------+---------------+
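RAM MB-Hours is the flavor memory multiplied by the hours of uptime. For the first tenant above, a single 512 MB instance (an assumption; 512 MB matches the m1.tiny flavor listed earlier) running for the full 672 CPU-hour period gives:

```shell
# 512 MB flavor x 672 hours of uptime (assumed from the sample usage row).
ram_mb_hours=$((512 * 672))
echo "$ram_mb_hours"    # prints: 344064
```

This is close to the 344064.44 reported, with the fraction coming from partial-hour accounting.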