Chapter 3. Documenting your RHOSP environment

Documenting the system components, networks, services, and software is important in identifying security concerns, attack vectors, and possible security zone bridging points. The documentation for your Red Hat OpenStack Platform (RHOSP) deployment should include the following information:

  • A description of the system components, networks, services, and software in your RHOSP production, development, and test environments.
  • An inventory of any ephemeral resources, such as virtual machines or virtual disk volumes.
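
For example, as a minimal sketch of how you might capture a point-in-time inventory of ephemeral resources, you can list all servers and volumes in the overcloud. This assumes that you have admin credentials sourced, for example from an overcloudrc file; redirecting the output to files is optional and the file names are illustrative only:

    $ source ~/overcloudrc
    $ openstack server list --all-projects --long > inventory-servers.txt
    $ openstack volume list --all-projects > inventory-volumes.txt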

3.1. Documenting the system roles

Each node in your Red Hat OpenStack Platform (RHOSP) deployment serves a specific role, either contributing to the infrastructure of the cloud or providing cloud resources.

Nodes that contribute to the infrastructure run the cloud-related services, such as the message queuing service, storage management, monitoring, networking, and other services required to support the operation and provisioning of the cloud. Examples of infrastructure roles include the following:

  • Controller
  • Networker
  • Database
  • Telemetry

Nodes that provide cloud resources offer compute or storage capacity for instances running on your cloud. Examples of resource roles include the following:

  • CephStorage
  • Compute
  • ComputeOvsDpdk
  • ObjectStorage

Document the system roles that are used in your environment. You can identify these roles in the templates that you use to deploy RHOSP. For example, there is a NIC configuration file for each role in use in your environment.

Procedure

  1. Check the existing templates for your deployment for files that specify the roles currently in use. In the following example, the RHOSP environment includes the ComputeHCI, Compute, and Controller roles:

    $ cd ~/templates
    $ tree
    .
    ├── environments
    │   └── network-environment.yaml
    ├── hci.yaml
    ├── network
    │   └── config
    │       └── multiple-nics
    │           ├── computehci.yaml
    │           ├── compute.yaml
    │           └── controller.yaml
    ├── network_data.yaml
    ├── plan-environment.yaml
    └── roles_data_hci.yaml
  2. Each role in your RHOSP environment runs many interrelated services. You can document the services that each role uses by inspecting a roles file, as shown in the example after this procedure.

    1. If a roles file was generated for your templates, you can find it in the ~/templates directory:

      $ cd ~/templates
      $ find . -name '*role*'
      ./roles_data_hci.yaml
    2. If a roles file was not generated for your templates, you can generate one for the roles that you currently use, so that you can inspect it for documentation purposes:

      $ openstack overcloud roles generate \
      > --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
      > -o roles_data.yaml Controller Compute
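
In a roles file, each role entry begins with a "- name:" line and lists its composable services under ServicesDefault. As a minimal sketch, assuming the roles file from the previous example, you can produce a quick per-role service listing with grep:

    $ grep -E '^- name:|OS::TripleO::Services::' roles_data_hci.yaml

The first pattern matches the role name lines and the second matches the service entries, so the output groups each role with the services that it runs.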

3.2. Creating a hardware inventory

You can retrieve hardware information about your Red Hat OpenStack Platform deployment by viewing data that is collected during introspection. Introspection gathers hardware information from the nodes about the CPU, memory, disks, and so on.

Prerequisites

  • You have an installed Red Hat OpenStack Platform director environment.
  • You have introspected nodes for your Red Hat OpenStack Platform deployment.
  • You are logged into the director as stack.

Procedure

  1. From the undercloud, source the stackrc file:

    $ source ~/stackrc
  2. List the nodes in your environment:

    $ openstack baremetal node list -c Name
    +--------------+
    | Name         |
    +--------------+
    | controller-0 |
    | controller-1 |
    | controller-2 |
    | compute-0    |
    | compute-1    |
    | compute-2    |
    +--------------+
  3. For each bare metal node from which you want to gather information, run the following command to retrieve the introspection data:

    $ openstack baremetal introspection data save <node> | jq

    Replace <node> with the name of the node from the list that you retrieved in step 2.

  4. Optional: To limit the output to a specific type of hardware, you can retrieve a list of the inventory keys and view introspection data for a specific key:

    1. Run the following command to get a list of top level keys from introspection data:

      $ openstack baremetal introspection data save controller-0 | jq '.inventory | keys'
      
      [
        "bmc_address",
        "bmc_v6address",
        "boot",
        "cpu",
        "disks",
        "hostname",
        "interfaces",
        "memory",
        "system_vendor"
      ]
    2. Select a key, for example disks, and run the following to get more information:

      $ openstack baremetal introspection data save controller-1 | jq '.inventory.disks'
      [
        {
          "name": "/dev/sda",
          "model": "QEMU HARDDISK",
          "size": 85899345920,
          "rotational": true,
          "wwn": null,
          "serial": "QM00001",
          "vendor": "ATA",
          "wwn_with_extension": null,
          "wwn_vendor_extension": null,
          "hctl": "0:0:0:0",
          "by_path": "/dev/disk/by-path/pci-0000:00:01.1-ata-1"
        }
      ]
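
To capture a hardware inventory for every node in one pass, you can loop over the node list and save the introspection inventory of each node to a file. The following is a minimal sketch; the output file names are illustrative only:

    $ for node in $(openstack baremetal node list -f value -c Name); do
    >   openstack baremetal introspection data save "$node" | jq '.inventory' > "inventory-${node}.json"
    > done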

3.3. Creating a software inventory

Document the software components in use on nodes deployed in your Red Hat OpenStack Platform (RHOSP) infrastructure. System databases, RHOSP software services, and supporting components such as load balancers, DNS, or DHCP services are critical when you assess the impact of a compromise or vulnerability in a library, application, or class of software.

Prerequisites

  • You have an installed Red Hat OpenStack Platform environment.
  • You are logged into the director as stack.

Procedure

  1. Ensure that you know the entry points for systems and services that can be subject to malicious activity. Run the following commands on the undercloud:

    $ cat /etc/hosts
    $ source stackrc ; openstack endpoint list
    $ source overcloudrc ; openstack endpoint list
  2. RHOSP services are deployed in containers, so you can view the software components on an overcloud node by checking the running containers on that node. Use ssh to connect to an overcloud node and list the running containers. For example, to view the overcloud services on compute-0, run a command similar to the following:

    $ ssh tripleo-admin@compute-0 podman ps
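
For a more detailed software inventory, you can also record the container image that each service runs from, and the packages that are installed on the host. The following is a minimal sketch that assumes the same tripleo-admin access as the previous example; the podman format fields are standard Go-template fields and the output file name is illustrative only:

    $ ssh tripleo-admin@compute-0 "podman ps --format '{{.Names}} {{.Image}}'"
    $ ssh tripleo-admin@compute-0 "rpm -qa | sort" > compute-0-packages.txt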