Chapter 4. Creating and managing the cell within the Compute service

After you have deployed the overcloud with your cell stacks, you must create each cell within the Compute service. To create a cell, you create entries for the cell and message queue mappings in the global API database. You can then add Compute nodes to the cells by running the cell host discovery on one of the Controller nodes.

To create your cells, you must perform the following tasks:

  1. Use the nova-manage utility to create the cell and message queue mapping records in the global API database.
  2. Add Compute nodes to each cell.
  3. Create an availability zone for each cell.
  4. Add all the Compute nodes in each cell to the availability zone for the cell.

4.1. Prerequisites

  • You have configured and deployed your overcloud with multiple cells.

4.2. Creating the cell within the Compute service

After you deploy the overcloud with a new cell stack, you must create the cell within the Compute service by creating entries for the cell and message queue mappings in the global API database.

Note

You must repeat this process for each cell that you create and launch. You can automate the steps in an Ansible playbook. For an example of an Ansible playbook, see the Create the cell and discover Compute nodes section of the OpenStack community documentation. Community documentation is provided as-is and is not officially supported.

Procedure

  1. Get the IP addresses of the control plane and cell controller:

    $ CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
    $ CELL_CTRL_IP=$(openstack server list -f value -c Networks --name cell1-cellcontroller-0 | sed 's/ctlplane=//')
  2. Add the cell information to all Controller nodes. This information is used to connect to the cell endpoint from the undercloud. The following example uses the prefix cell1 to specify only the cell systems and exclude the controller systems:

    (undercloud)$ CELL_INTERNALAPI_INFO=$(ssh heat-admin@${CELL_CTRL_IP} \
     egrep "cell1.*\.internalapi" /etc/hosts)
    (undercloud)$ ansible -i /usr/bin/tripleo-ansible-inventory \
     Controller -b -m lineinfile -a "dest=/etc/hosts line=\"$CELL_INTERNALAPI_INFO\""
  3. Get the message queue endpoint for the cell controller from the transport_url parameter, and the database connection for the cell controller from the database.connection parameter:

    (undercloud)$ CELL_TRANSPORT_URL=$(ssh heat-admin@${CELL_CTRL_IP} \
     sudo crudini --get /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf \
     DEFAULT transport_url)
    (undercloud)$ CELL_MYSQL_VIP=$(ssh heat-admin@${CELL_CTRL_IP} \
     sudo crudini --get /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf \
     database connection | awk -F'[@/]' '{print $4}')
  4. Log in to one of the global Controller nodes and create the cell:

    $ ssh heat-admin@${CTRL_IP} sudo podman \
     exec -i -u root nova_api \
     nova-manage cell_v2 create_cell --name cell1 \
     --database_connection "{scheme}://{username}:{password}@$CELL_MYSQL_VIP/nova?{query}" \
     --transport-url "$CELL_TRANSPORT_URL"
  5. Check that the cell is created and appears in the cell list:

    $ ssh heat-admin@${CTRL_IP} sudo podman \
     exec -i -u root nova_api \
     nova-manage cell_v2 list_cells --verbose
  6. Restart the Compute services on the Controller nodes:

    $ ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
    "systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"
  7. Check that the cell controller services are provisioned:

    (overcloud)$ openstack compute service list -c Binary -c Host -c Status -c State
    +----------------+-------------------------+---------+-------+
    | Binary         | Host                    | Status  | State |
    +----------------+-------------------------+---------+-------+
    | nova-conductor | controller-0.ostest     | enabled | up    |
    | nova-scheduler | controller-0.ostest     | enabled | up    |
    | nova-conductor | cellcontroller-0.ostest | enabled | up    |
    | nova-compute   | compute-0.ostest        | enabled | up    |
    | nova-compute   | compute-1.ostest        | enabled | up    |
    +----------------+-------------------------+---------+-------+
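
If you create several cells, you can automate this procedure instead of repeating the steps by hand, as mentioned in the note at the beginning of this section. The following shell script is a minimal, unsupported sketch of that automation, built only from the commands shown in this procedure. It assumes the naming conventions used in this chapter, such as a <cell_name>-cellcontroller-0 node name and the heat-admin SSH user, and it must run on the undercloud with the stackrc credentials sourced. Adapt it to your environment, or use the community Ansible playbook referenced in the note.

    #!/bin/bash
    # Minimal sketch: register one cell in the global API database.
    # Assumes the node naming and SSH user conventions used in this chapter.
    set -euo pipefail

    CELL_NAME=${1:-cell1}    # cell name to register, for example cell1

    # IP addresses of a global Controller node and the cell controller
    CTRL_IP=$(openstack server list -f value -c Networks \
      --name overcloud-controller-0 | sed 's/ctlplane=//')
    CELL_CTRL_IP=$(openstack server list -f value -c Networks \
      --name ${CELL_NAME}-cellcontroller-0 | sed 's/ctlplane=//')

    # Make the cell internal API host names resolvable on the global Controller nodes
    CELL_INTERNALAPI_INFO=$(ssh heat-admin@${CELL_CTRL_IP} \
      egrep "${CELL_NAME}.*\.internalapi" /etc/hosts)
    ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -m lineinfile \
      -a "dest=/etc/hosts line=\"$CELL_INTERNALAPI_INFO\""

    # Message queue endpoint and database VIP of the cell controller
    NOVA_CONF=/var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
    CELL_TRANSPORT_URL=$(ssh heat-admin@${CELL_CTRL_IP} \
      sudo crudini --get $NOVA_CONF DEFAULT transport_url)
    CELL_MYSQL_VIP=$(ssh heat-admin@${CELL_CTRL_IP} \
      sudo crudini --get $NOVA_CONF database connection | awk -F'[@/]' '{print $4}')

    # Create the cell mapping and restart the Compute services on the global Controllers
    ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 create_cell --name ${CELL_NAME} \
      --database_connection "{scheme}://{username}:{password}@${CELL_MYSQL_VIP}/nova?{query}" \
      --transport-url "${CELL_TRANSPORT_URL}"
    ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
      "systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"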

4.3. Adding Compute nodes to a cell

Run the cell host discovery on one of the Controller nodes to discover the Compute nodes and update the API database with the node-to-cell mappings.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Get the control plane IP address of one of the global Controller nodes and enter the host discovery command to expose and assign Compute hosts to the cell:

    $ CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
    
    $ ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 discover_hosts --by-service --verbose
  3. Verify that the Compute hosts were assigned to the cell:

    $ ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 list_hosts
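
By default, the discovery command considers every cell that is mapped in the API database. To limit discovery to a single cell, for example when you bring cells online one at a time, you can pass a cell UUID to nova-manage. The following is a hedged variant of the discovery command, where <cell_uuid> is the UUID of the cell from the list_cells output in the previous section:

    $ ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 discover_hosts --cell_uuid <cell_uuid> --by-service --verbose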

4.4. Creating a cell availability zone

You must create an availability zone (AZ) for each cell to ensure that instances created on the Compute nodes in that cell are migrated only to other Compute nodes in the same cell. Migrating instances between cells is not supported.

After you create the cell AZ, you must add all the Compute nodes in the cell to the cell AZ. The default cell must be in a different availability zone from the Compute cells.

Procedure

  1. Source the overcloudrc file:

    (undercloud)$ source ~/overcloudrc
  2. Create the AZ for the cell:

    (overcloud)# openstack aggregate create \
     --zone <availability_zone> \
     <aggregate_name>
    • Replace <availability_zone> with the name you want to assign to the availability zone.
    • Replace <aggregate_name> with the name you want to assign to the host aggregate.
  3. Optional: Add metadata to the availability zone:

    (overcloud)# openstack aggregate set --property <key=value> \
      <aggregate_name>
    • Replace <key=value> with your metadata key-value pair. You can add as many key-value properties as required.
    • Replace <aggregate_name> with the name of the availability zone host aggregate.
  4. Retrieve a list of the Compute nodes assigned to the cell:

    $ ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 list_hosts
  5. Add the Compute nodes assigned to the cell to the cell availability zone:

    (overcloud)# openstack aggregate add host <aggregate_name> \
      <host_name>
    • Replace <aggregate_name> with the name of the availability zone host aggregate to add the Compute node to.
    • Replace <host_name> with the name of the Compute node to add to the availability zone.

Note
  • You cannot use the OS::TripleO::Services::NovaAZConfig parameter to automatically create the AZ during deployment, because the cell is not created at this stage.
  • Migrating instances between cells is not supported. To move an instance to a different cell, you must delete it from the old cell and re-create it in the new cell.
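
Optionally, verify the assignment. The following is a minimal check, where <aggregate_name> is the host aggregate that you created in this procedure; the Compute nodes that you added appear in the hosts field of the aggregate, and the new availability zone appears in the zone list:

    (overcloud)$ openstack aggregate show <aggregate_name>
    (overcloud)$ openstack availability zone list --compute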

For more information on host aggregates and availability zones, see Creating and managing host aggregates.

4.5. Deleting a Compute node from a cell

To delete a Compute node from a cell, you must delete all instances from the Compute node, remove the node from the cell mapping, and delete the corresponding resource provider from the Placement service.

Procedure

  1. Delete all instances from the Compute nodes in the cell. For one way to script this step, see the sketch after this procedure.

    Note

    Migrating instances between cells is not supported. You must delete the instances and re-create them in another cell.

  2. On one of the global Controller nodes, delete all Compute nodes from the cell:

    $ CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
    
    $ ssh heat-admin@${CTRL_IP} sudo podman  \
     exec -i -u root nova_api \
     nova-manage cell_v2 list_hosts
    
    $ ssh heat-admin@${CTRL_IP} sudo podman  \
     exec -i -u root nova_api \
     nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
  3. Delete the resource providers for the cell from the Placement service, to ensure that the host name is available in case you want to add Compute nodes with the same host name to another cell later:

    (undercloud)$ source ~/overcloudrc
    
    (overcloud)$ openstack resource provider list
    +--------------------------------------+---------------------------------------+------------+
    | uuid                                 | name                                  | generation |
    +--------------------------------------+---------------------------------------+------------+
    | 9cd04a8b-5e6c-428e-a643-397c9bebcc16 | computecell1-novacompute-0.site1.test |         11 |
    +--------------------------------------+---------------------------------------+------------+
    
    (overcloud)$ openstack resource provider  \
     delete 9cd04a8b-5e6c-428e-a643-397c9bebcc16
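
The following is a hedged sketch of one way to script step 1 for a single Compute node. <host_name> is a host name from the list_hosts output in step 2, and the commands require administrative credentials because they list and delete instances across all projects:

    (overcloud)$ openstack server list --all-projects --host <host_name> \
      -f value -c ID | xargs -r -n1 openstack server delete --wait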

4.6. Deleting a cell

To delete a cell, you must first delete all instances and Compute nodes from the cell, as described in Deleting a Compute node from a cell. Then, you delete the cell itself and the cell stack.

Procedure

  1. On one of the global Controller nodes, delete the cell:

    $ CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
    
    $ ssh heat-admin@${CTRL_IP} sudo podman  \
     exec -i -u root nova_api \
     nova-manage cell_v2 list_cells
    
    $ ssh heat-admin@${CTRL_IP} sudo podman  \
     exec -i -u root nova_api \
     nova-manage cell_v2 delete_cell --cell_uuid <uuid>
  2. Delete the cell stack and the deployment plan:

    $ openstack stack delete <stack_name> --wait --yes && \
     openstack overcloud plan delete <stack_name>
    Note

    If you deployed separate cell stacks for a Controller and Compute cell, delete the Compute cell stack first and then the Controller cell stack.
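
Optionally, confirm the removal. The following is a minimal check that the cell no longer appears in the cell list on the global Controller node and that the cell stack no longer appears in the stack list on the undercloud; CTRL_IP is the variable that you set in step 1 of this procedure:

    $ ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
      nova-manage cell_v2 list_cells
    $ openstack stack list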