Appendix A. Deployment migration options

This section includes topics related to the validation of DCN storage, as well as migrating or changing architectures.

A.1. Validating edge storage

Ensure that the deployment of the central and edge sites is working by testing glance multi-store operations and instance creation.

You can import images into glance from the local file system or from a web server.

Note

Always store an image copy in the central site, even if there are no instances using the image at the central location.

Prerequisites

  1. Check the stores that are available through the Image service by using the glance stores-info command. In the following example, three stores are available: central, dcn0, and dcn1. These correspond to glance stores at the central location and edge sites, respectively:

      $ glance stores-info
      +----------+----------------------------------------------------------------------------------+
      | Property | Value                                                                            |
      +----------+----------------------------------------------------------------------------------+
      | stores   | [{"default": "true", "id": "central", "description": "central rbd glance         |
      |          | store"}, {"id": "dcn0", "description": "dcn0 rbd glance store"},                 |
      |          | {"id": "dcn1", "description": "dcn1 rbd glance store"}]                          |
      +----------+----------------------------------------------------------------------------------+

A.1.1. Importing from a local file

You must upload the image to the central location’s store first, then copy the image to remote sites.

  1. Ensure that your image file is in RAW format. If the image is not in RAW format, you must convert the image before importing it into the Image service:

    file cirros-0.5.1-x86_64-disk.img
    cirros-0.5.1-x86_64-disk.img: QEMU QCOW2 Image (v3), 117440512 bytes

    qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw
  2. Import the image into the default back end at the central site (an optional verification example follows this procedure):

    glance image-create \
    --disk-format raw --container-format bare \
    --name cirros --file cirros-0.5.1-x86_64-disk.raw \
    --store central
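
As an optional check that is not part of the original procedure, you can confirm that the image is active and that the central store appears in its stores property; the image name cirros matches the example above:

    openstack image show cirros | grep -E 'status|stores'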

A.1.2. Importing an image from a web server

If the image is hosted on a web server, you can use the GlanceImageImportPlugins parameter to upload the image to multiple stores.

This procedure assumes that the default image conversion plugin is enabled in glance. This feature automatically converts QCOW2 file formats into RAW images, which are optimal for Ceph RBD. You can confirm that a glance image is in RAW format by running the glance image-show ID | grep disk_format command.
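
For example, a quick check using the cirros image from the other examples in this appendix (the image name is an assumption):

    ID=$(openstack image show cirros -c id -f value)
    glance image-show $ID | grep disk_format

If the conversion plugin ran, the disk_format field reports raw.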

Procedure

  1. Use the glance image-create-via-import command to import an image from a web server. Use the --stores parameter to specify the target stores.

    # glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name cirros \
    --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \
    --import-method web-download \
    --stores central,dcn1

    In this example, the qcow2 cirros image is downloaded from the official Cirros site, converted to RAW by glance, and imported into the central site and edge site 1 as specified by the --stores parameter.

Alternatively, you can replace --stores with --all-stores True to upload the image to all of the stores.
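
For example, the same import with --all-stores instead of an explicit store list; this is a sketch of the alternative described above:

    # glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name cirros \
    --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \
    --import-method web-download \
    --all-stores True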

A.1.3. Copying an image to a new site

You can copy existing images from the central location to edge sites, which gives you access to previously created images at newly established locations.

  1. Use the UUID of the glance image for the copy operation:

    ID=$(openstack image show cirros -c id -f value)
    
    glance image-import $ID --stores dcn0,dcn1 --import-method copy-image
    Note

    In this example, the --stores option specifies that the cirros image is copied from the central site to the dcn0 and dcn1 edge sites. Alternatively, you can use the --all-stores True option, which uploads the image to all of the stores that do not currently have the image.

  2. Confirm that a copy of the image is in each store. Note that the stores key, which is the last item in the properties map, is set to central,dcn0,dcn1:

      $ openstack image show $ID | grep properties
      | properties       | direct_url=rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap, locations=[{u'url: u'rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'central'}}, {u'url': u'rbd://0c10d6b5-a455-4c4d-bd53-8f2b9357c3c7/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn0'}}, {u'url': u'rbd://8649d6c3-dcb3-4aae-8c19-8c2fe5a853ac/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn1'}}], os_glance_failed_import=', os_glance_importing_to_stores=', os_hash_algo='sha512, os_hash_value=b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e, os_hidden=False, stores=central,dcn0,dcn1 |
Note

Always store an image copy in the central site even if there is no VM using it on that site.

A.1.4. Confirming that an instance at an edge site can boot with image-based volumes

You can use an image at the edge site to create a persistent root volume.

Procedure

  1. Identify the ID of the image to use as the source for the volume, and pass that ID to the openstack volume create command:

    IMG_ID=$(openstack image show cirros -c id -f value)
    openstack volume create --size 8 --availability-zone dcn0 pet-volume-dcn0 --image $IMG_ID
  2. Identify the volume ID of the newly created volume and pass it to the openstack server create command:

    VOL_ID=$(openstack volume show -f value -c id pet-volume-dcn0)
    openstack server create --flavor tiny --key-name dcn0-key --network dcn0-network --security-group basic --availability-zone dcn0 --volume $VOL_ID pet-server-dcn0
  3. You can verify that the volume is based on the image by running the rbd command within a ceph-mon container at the dcn0 edge site to list the volumes pool.

    $ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
    NAME                                      SIZE  PARENT                                           FMT PROT LOCK
    volume-28c6fc32-047b-4306-ad2d-de2be02716b7 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap   2      excl
    $
  4. Confirm that you can create a cinder snapshot of the root volume of the instance. Ensure that the server is stopped to quiesce data to create a clean snapshot. Use the --force option, because the volume status remains in-use when the instance is off.

    openstack server stop pet-server-dcn0
    openstack volume snapshot create pet-volume-dcn0-snap --volume $VOL_ID --force
    openstack server start pet-server-dcn0
  5. List the contents of the volumes pool on the dcn0 Ceph cluster to show the newly created snapshot.

    $ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
    NAME                                                                                      SIZE  PARENT                                           FMT PROT LOCK
    volume-28c6fc32-047b-4306-ad2d-de2be02716b7                                               8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap   2      excl
    volume-28c6fc32-047b-4306-ad2d-de2be02716b7@snapshot-a1ca8602-6819-45b4-a228-b4cd3e5adf60 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap   2 yes
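
As an optional cross-check that is not part of the original procedure, you can also confirm the snapshot through the Block Storage service; the snapshot name matches the one created in step 4:

    openstack volume snapshot show pet-volume-dcn0-snap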

A.1.5. Confirming image snapshots can be created and copied between sites

  1. Verify that you can create a new image at the dcn0 site. Ensure that the server is stopped to quiesce data to create a clean snapshot:

    NOVA_ID=$(openstack server show pet-server-dcn0 -f value -c id)
    openstack server stop $NOVA_ID
    openstack server image create --name cirros-snapshot $NOVA_ID
    openstack server start $NOVA_ID
  2. Copy the image from the dcn0 edge site back to the hub location, which is the default back end for glance:

    IMAGE_ID=$(openstack image show cirros-snapshot -f value -c id)
    glance image-import $IMAGE_ID --stores central --import-method copy-image
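
As an optional check, mirroring the verification in Section A.1.3, you can confirm that the central store now appears in the stores property of the snapshot image:

    openstack image show $IMAGE_ID | grep stores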

For more information on glance multistore operations, see Image service with multiple stores.

A.2. Migrating to a spine and leaf deployment

You can migrate an existing cloud with a pre-existing network configuration to a spine-leaf architecture if the following conditions are met:

  • All bare metal ports must have their physical-network property value set to ctlplane.
  • The enable_routed_networks parameter is added to undercloud.conf and set to true, and the undercloud installation command, openstack undercloud install, is rerun (see the snippet below).
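
A minimal illustration of the undercloud.conf change described above; the rest of your undercloud.conf remains unchanged:

    [DEFAULT]
    enable_routed_networks = true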

After the undercloud is redeployed, the overcloud is considered a spine-leaf deployment with a single leaf, leaf0. You can add additional provisioning leaves to the deployment through the following steps.

  1. Add the desired subnets to undercloud.conf as shown in Configuring routed spine-leaf in the undercloud.
  2. Re-run the undercloud installation command, openstack undercloud install.
  3. Add the desired additional networks and roles to the overcloud templates, network_data.yaml and roles_data.yaml respectively.

    Note

    If you use the {{network.name}}InterfaceRoutes parameter in the network configuration file, ensure that the NetworkDeploymentActions parameter includes the value UPDATE:

      NetworkDeploymentActions: ['CREATE','UPDATE']
  4. Finally, re-run the overcloud installation command with all heat templates that are relevant to your cloud deployment, as shown in the example that follows.
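
A generic sketch of such a re-run; the file names and environment files shown are placeholders for the templates and environment files that your deployment actually uses:

    openstack overcloud deploy --templates \
    -n <network_data.yaml> \
    -r <roles_data.yaml> \
    -e <existing_environment_files>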

A.3. Migrating to a multistack deployment

You can migrate from a single stack deployment to a multistack deployment by treating the existing deployment as the central site, and adding additional edge sites.

The ability to migrate from single to multistack in this release is a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You cannot split the existing stack. You can scale down the existing stack to remove compute nodes if needed. These compute nodes can then be added to edge sites.
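
If you scale down the existing stack to free compute nodes for an edge site, the removal is a standard node scale-down. The following is a minimal sketch in which the stack name central and the node UUID are placeholders:

    source /home/stack/stackrc
    openstack overcloud node delete --stack central <compute_node_uuid>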

Note

This action interrupts any workloads that are running on the compute nodes that you remove.

A.4. Backing up and restoring across edge sites

You can back up and restore Block Storage service (cinder) volumes across edge sites and availability zones in a distributed compute node (DCN) architecture. The cinder-backup service runs in the central availability zone (AZ), and backups are stored in the central AZ. The Block Storage service does not store backups at DCN sites.
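
As an optional check that is not part of the original procedure, you can confirm where the cinder-backup service is running before you create a backup:

    $ openstack volume service list --service cinder-backup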

Procedure

  1. Create a backup of a volume in the first DCN site:

    $ cinder --os-volume-api-version 3.51 backup-create --name <volume_backup> --availability-zone <az_central> <edge_volume>
    • Replace <volume_backup> with a name for the volume backup.
    • Replace <az_central> with the name of the central availability zone that hosts the cinder-backup service.
    • Replace <edge_volume> with the name of the volume that you want to back up.

      Note

      If you experience issues with Ceph keyrings, you might need to restart the cinder-backup container so that the keyrings copy from the host to the container successfully.

  2. Restore the backup to a new volume in the second DCN site:

    $ cinder --os-volume-api-version 3.51 create --availability-zone <az_2> --name <new_volume> --backup-id <volume_backup> <volume_size>
    • Replace <az_2> with the name of the availability zone where you want to restore the backup.
    • Replace <new_volume> with a name for the new volume.
    • Replace <volume_backup> with the name of the volume backup that you created in the previous step.
    • Replace <volume_size> with a value in GB equal to or greater than the size of the original volume.
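
As an optional check, you can confirm that the restored volume is available in the target availability zone; the placeholder is the volume name that you used in the previous step:

    $ openstack volume show <new_volume> -c availability_zone -c status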

A.5. Removing a DCN site

To remove an edge site, remove the stack, storage, and associated services from your Red Hat OpenStack deployment. In the following example, dcn2 is being decommissioned and removed from the DCN (distributed compute node) deployment. Adjust the commands to suit your environment.

Prerequisites

  • You have a site available to be decommissioned that is part of a fully deployed, DCN-architected Red Hat OpenStack Platform cluster.
  • All instances active on the edge site to be deleted must be removed.
  • All volumes active on the edge site to be deleted must be removed.

Procedure

  1. If this site is provisioned with Red Hat Ceph Storage, you must first remove the glance store, Ceph configuration, and permissions associated with the edge location that you are removing. If you have not deployed storage at the edge, proceed to step 2:

    1. Log in to Red Hat OpenStack Platform director and source the RC credentials file for the central site:

      source /home/stack/centralrc
    2. Delete all images associated with the stack:

      glance stores-delete --store dcn2 <IMAGE>

      Replace <IMAGE> with the name of the image that you want to delete.

    3. Remove the stanza associated with the stack from the dcn_ceph_external.yaml file. In this example, remove the dcn2 configuration.

      ...
          - cluster: "dcn1"
            fsid: "e70c3c80-4eca-4f1a-8cbb-3e9753b401c9"
            external_cluster_mon_ips: "172.23.2.58, 172.23.2.153, 172.23.2.32"
            keys:
              - name: "client.external"
                caps:
                  mgr: "allow *"
                  mon: "profile rbd"
                  osd: "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"
                key: "AQD4Ae9gAAAAABAAeeG5N0E3Jka7bXRDhB8CtQ=="
                mode: "0600"
            dashboard_enabled: false
            ceph_conf_overrides:
              client:
                keyring: /etc/ceph/dcn1.client.external.keyring
      -   - cluster: "dcn2"
      -     fsid: "c699978c-876d-4c65-984f-2179d22244ea"
      -     external_cluster_mon_ips: "172.23.3.239, 172.23.3.99, 172.23.3.67"
      -     keys:
      -       - name: "client.external"
      -         caps:
      -           mgr: "allow *"
      -           mon: "profile rbd"
      -           osd: "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"
      -         key: "AQA1Du9gAAAAABAAMOBAczLIw1SzBR7vxwYqJg=="
      -         mode: "0600"
      -     dashboard_enabled: false
      -     ceph_conf_overrides:
      -       client:
      -         keyring: /etc/ceph/dcn2.client.external.keyring
    4. Remove the stanza associated with the stack from the glance_update.yaml file. In this example, remove the dcn2 configuration.

      parameter_defaults:
        GlanceMultistoreConfig:
          dcn0:
            GlanceBackend: rbd
            GlanceStoreDescription: 'dcn0 rbd glance store'
            CephClientUserName: 'openstack'
            CephClusterName: dcn0
          dcn1:
            GlanceBackend: rbd
            GlanceStoreDescription: 'dcn1 rbd glance store'
            CephClientUserName: 'openstack'
            CephClusterName: dcn1
      -   dcn2:
      -     GlanceBackend: rbd
      -     GlanceStoreDescription: 'dcn2 rbd glance store'
      -     CephClientUserName: 'openstack'
      -     CephClusterName: dcn2
    5. Delete files in /etc/ceph that relate to the site you are removing. Perform this step on each Controller node.

      source /home/stack/stackrc
      for i in $(openstack server list | awk '/controller/ {print $8}' | cut -d= -f2); do
           ssh heat-admin@${i} sudo -E \
           rm -f /etc/ceph/dcn2.conf /etc/ceph/dcn2.client.openstack.keyring
      done
    6. Disable the cinder-volume service for the stack:

      1. Identify the cinder-volume service for the stack that you want to delete:

        source /home/stack/centralrc
        VOLUME_HOST=$(openstack volume service list --service cinder-volume -c Host -f value | grep dcn2)
      2. Disable the cinder-volume service:

        openstack volume service set --disable ${VOLUME_HOST} cinder-volume
      3. Remove the cinder-volume service for the stack that you are removing by running cinder-manage in the cinder_api container on one of the Controller nodes at the central location:

        ssh heat-admin@CONTROLLER_IP sudo podman exec cinder_api cinder-manage service remove cinder-volume ${VOLUME_HOST}
  2. If you use OVS, clean up the neutron agents for the site that you are removing:

    1. Identify the UUIDs of the network agents:

      source /home/stack/centralrc
      openstack network agent list
    2. Use the UUIDs from the previous step to identify the network agents to delete. Run the following command for each UUID:

      openstack network agent delete $UUID
      Note

      You cannot delete OVN agents.

  3. Delete the compute services associated with the stack:

    1. Find the IDs of the compute services associated with dcn2. You can use the unique hostnames for this step. If the hostnames include the stack names, you can use grep to list only the compute services that you want to delete:

      source /home/stack/centralrc
      openstack compute service list | grep dcn2
    2. Use the IDs shown in the first field to delete the compute services. Run the following command for each ID:

      openstack compute service delete <ID>
  4. Delete the host aggregate associated with the site:

    source /home/stack/centralrc
    openstack aggregate delete dcn2
  5. Remove the subnets for the stack that you want to delete from the /home/stack/central/network/network_data.yaml file:

      - name: Tenant
        name_lower: tenant
        vip: false  # Tenant network does not use VIPs
        vlan: 1189
        ip_subnet: '172.19.1.0/24'
        allocation_pools: [{'start': '172.19.1.4', 'end': '172.19.1.250'}]
        gateway_ip: '172.19.1.254'
        subnets:
          tenant_leaf1:
            vlan: 1179
            ip_subnet: '172.19.2.0/24'
            allocation_pools: [{'start': '172.19.2.4', 'end': '172.19.2.250'}]
            gateway_ip: '172.19.2.254'
    -     tenant_leaf2:
    -       vlan: 1169
    -       ip_subnet: '172.19.3.0/24'
    -       allocation_pools: [{'start': '172.19.3.4', 'end': '172.19.3.250'}]
    -       gateway_ip: '172.19.3.254'
  6. Delete the stack:

    source /home/stack/stackrc
    openstack overcloud delete -y dcn2
  7. Optional: Delete the bare metal nodes from the decommissioned site:

    1. List the nodes and identify those associated with the deleted stack, dcn2:

      source /home/stack/stackrc
      openstack baremetal node list
    2. Use the node UUIDs from the previous step to remove each node from ironic:

      openstack baremetal node delete $UUID