Appendix B. Booting Fibre Channel from SAN

The following procedure describes how to configure booting from a Fibre Channel SAN before you configure the overcloud and deploy your Red Hat OpenStack Platform cloud.

Note

This procedure lists only the steps that differ from the standard steps described in the Director Installation and Configuration guide.

Make sure you have installed the undercloud, including installing the necessary packages and subscribing to the required channels, as described in Installing the Undercloud.

After you have finished installing the undercloud, download the images and upload them to the Image service (glance) as described in Obtaining Images for Overcloud Nodes.

Next, you need to set the nameserver on the undercloud’s OpenStack Networking subnet:

  1. View the current ctlplane-subnet and note that no dns_nameservers value is currently set:

    # openstack subnet list
    # openstack subnet show <subnet-uuid>
  2. Update this ctlplane-subnet to have a DNS nameserver:

    # openstack subnet set --dns-nameserver <dns-ip-address> ctlplane-subnet
  3. Verify that the dns_nameservers value is present and that the server is pingable:

    # openstack subnet show $(openstack subnet list | awk '/ctlplane/ {print $2}') | grep dns_nameservers
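The awk pipeline above pulls the subnet ID out of the table that openstack subnet list prints. As a rough illustration of that selection logic, here is a minimal Python sketch against a sample table; the UUID below is a placeholder, not output from a real deployment.

```python
# Minimal sketch of the awk pipeline: find the row whose Name column is
# "ctlplane-subnet" and return its ID column. The sample table and UUID
# are placeholders, not output from a real deployment.

def find_subnet_id(table, name="ctlplane-subnet"):
    """Return the ID cell of the row whose Name cell matches `name`."""
    for line in table.splitlines():
        # Data rows look like: | <id> | <name> |
        if line.startswith("|") and name in line:
            cells = [c.strip() for c in line.strip("|").split("|")]
            return cells[0]
    return None

sample = """\
+--------------------------------------+-----------------+
| ID                                   | Name            |
+--------------------------------------+-----------------+
| f8dcb897-0001-0002-0003-000000000001 | ctlplane-subnet |
+--------------------------------------+-----------------+
"""

print(find_subnet_id(sample))
```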
  4. Register the controller and compute nodes for the overcloud as described in Registering Nodes for the Overcloud.

    For example, a template for registering two nodes might look like this:

    {
        "nodes":[
        {
                "name": "cougar12",
                "pm_type":"pxe_ipmitool",
                "mac":[
                    "bb:bb:bb:bb:bb:bb"
                ],
                "cpu":"1",
                "memory":"8192",
                "disk":"40",
                "arch":"x86_64",
                "pm_user":"root",
                "pm_password":"password",
            "pm_addr":"192.168.24.205"
            },
            {
                "name": "cougar09",
                "pm_type":"pxe_ipmitool",
                "mac":[
                    "cc:cc:cc:cc:cc:cc"
                ],
                "cpu":"1",
                "memory":"8192",
                "disk":"40",
                "arch":"x86_64",
                "pm_user":"root",
                "pm_password":"password",
            "pm_addr":"192.168.24.205"
            }
        ]
    }

    Here, cougar12 and cougar09 are the two example nodes.
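Before importing the template, a quick sanity check of the JSON can save a failed import. The sketch below is a hypothetical validator, not part of director: the required-key list and the MAC pattern are assumptions drawn from the example template above, not an exhaustive Ironic schema.

```python
# Hypothetical pre-import sanity check for an instackenv.json-style file.
# REQUIRED_KEYS and the MAC pattern are assumptions drawn from the example
# template above, not an exhaustive Ironic schema.
import json
import re

REQUIRED_KEYS = {"name", "pm_type", "mac", "cpu", "memory", "disk",
                 "arch", "pm_user", "pm_password", "pm_addr"}
MAC_RE = re.compile(r"([0-9a-f]{2}:){5}[0-9a-f]{2}")

def validate_nodes(raw):
    """Return a list of problems found in the node template (empty if OK)."""
    problems = []
    for node in json.loads(raw).get("nodes", []):
        name = node.get("name", "<unnamed>")
        for key in sorted(REQUIRED_KEYS - node.keys()):
            problems.append(f"{name}: missing key {key}")
        for mac in node.get("mac", []):
            # fullmatch rejects stray whitespace or line breaks in the MAC
            if not MAC_RE.fullmatch(mac.lower()):
                problems.append(f"{name}: malformed MAC {mac!r}")
    return problems

sample = json.dumps({"nodes": [{
    "name": "cougar12", "pm_type": "pxe_ipmitool",
    "mac": ["bb:bb:bb:bb:bb:bb"], "cpu": "1", "memory": "8192",
    "disk": "40", "arch": "x86_64", "pm_user": "root",
    "pm_password": "password", "pm_addr": "192.168.24.205"}]})
print(validate_nodes(sample))  # []
```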

  5. After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director using the following command:

    # openstack overcloud node import ~/instackenv.json
  6. Verify the nodes are registered:

    # openstack baremetal node list
  7. (Optional) To monitor the logs:

    # sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
  8. When the command completes, the nodes should show power off for the power state, manageable for the provisioning state, and False for maintenance:

    # openstack baremetal node list
  9. Introspect all the manageable nodes, then make them available:

    # openstack overcloud node introspect --all-manageable
    # openstack overcloud node provide --all-manageable
  10. Find the serial number of the disk each node should boot from, and set it as the node's root device, as shown in the following example:

    # openstack baremetal introspection data save cougar12 | jq '.inventory.disks'
    {
        "size": 64424509440,
        "serial": "514f0c5a51600d7b",
        "rotational": false,
        "vendor": "XtremIO",
        "name": "/dev/sdb",
        "wwn_vendor_extension": null,
        "hctl": "7:0:0:1",
        "wwn_with_extension": "0x514f0c5a51600d7b",
        "model": "XtremApp",
        "wwn": "0x514f0c5a51600d7b"
    }

    # openstack baremetal node set 3a5c5a91-334b-4baa-8347-6cc79dba75b7 --property root_device='{"serial": "514f0c5a51600d7b"}'
    
    +------------------------+--------------------------------------------------------------------------+
    | Property               | Value                                                                    |
    +------------------------+--------------------------------------------------------------------------+
    | boot_interface         |                                                                          |
    | chassis_uuid           | None                                                                     |
    | clean_step             | {}                                                                       |
    | console_enabled        | False                                                                    |
    | console_interface      |                                                                          |
    | created_at             | 2017-08-02T15:14:44+00:00                                                |
    | deploy_interface       |                                                                          |
    | driver                 | pxe_ipmitool                                                             |
    | driver_info            | {u'deploy_kernel': u'50ce10e3-5ffc-4ccd-ac29-93aac89d01f5',              |
    |                        | u'ipmi_address': u'10.35.160.114', u'deploy_ramdisk': u'b1e4bd0e-        |
    |                        | 17f8-4ebe-9854-ccd9ce0bbec2', u'ipmi_password': u'******',               |
    |                        | u'ipmi_username': u'root'}                                               |
    | driver_internal_info   | {}                                                                       |
    | extra                  | {u'hardware_swift_object': u'extra_hardware-3a5c5a91-334b-               |
    |                        | 4baa-8347-6cc79dba75b7'}                                                 |
    | inspect_interface      |                                                                          |
    | inspection_finished_at | None                                                                     |
    | inspection_started_at  | None                                                                     |
    | instance_info          | {}                                                                       |
    | instance_uuid          | None                                                                     |
    | last_error             | None                                                                     |
    | maintenance            | False                                                                    |
    | maintenance_reason     | None                                                                     |
    | management_interface   |                                                                          |
    | name                   | cougar12                                                                 |
    | network_interface      | flat                                                                     |
    | power_interface        |                                                                          |
    | power_state            | power off                                                                |
    | properties             | {u'cpu_arch': u'x86_64', u'root_device': {u'serial':                     |
    |                        | u'514f0c5a51600d7b'}, u'cpus': u'12', u'capabilities': u'cpu_aes:true,cp |
    |                        | u_hugepages:true,boot_option:local,cpu_vt:true,cpu_hugepages_1g:true,boo |
    |                        | t_mode:bios', u'memory_mb': u'65536', u'local_gb': u'59'}                |
    | provision_state        | available                                                                |
    | provision_updated_at   | 2017-08-02T16:52:24+00:00                                                |
    | raid_config            | {}                                                                       |
    | raid_interface         |                                                                          |
    | reservation            | localhost.localdomain                                                    |
    | resource_class         | None                                                                     |
    | storage_interface      |                                                                          |
    | target_power_state     | None                                                                     |
    | target_provision_state | None                                                                     |
    | target_raid_config     | {}                                                                       |
    | updated_at             | 2017-08-02T17:09:43+00:00                                                |
    | uuid                   | 3a5c5a91-334b-4baa-8347-6cc79dba75b7                                     |
    | vendor_interface       |                                                                          |
    +------------------------+--------------------------------------------------------------------------+
    
    # openstack baremetal introspection data save cougar09 | jq '.inventory.disks'
    {
        "size": 64424509440,
        "serial": "514f0c5a51600d79",
        "rotational": false,
        "vendor": "XtremIO",
        "name": "/dev/sdc",
        "wwn_vendor_extension": null,
        "hctl": "6:0:0:1",
        "wwn_with_extension": "0x514f0c5a51600d79",
        "model": "XtremApp",
        "wwn": "0x514f0c5a51600d79"
    }
    
    # openstack baremetal node set e64dbe76-de91-4a59-96fa-bd7b6080bcea     --property root_device='{"serial": "514f0c5a51600d79"}'
    +------------------------+--------------------------------------------------------------------------+
    | Property               | Value                                                                    |
    +------------------------+--------------------------------------------------------------------------+
    | boot_interface         |                                                                          |
    | chassis_uuid           | None                                                                     |
    | clean_step             | {}                                                                       |
    | console_enabled        | False                                                                    |
    | console_interface      |                                                                          |
    | created_at             | 2017-08-02T15:14:49+00:00                                                |
    | deploy_interface       |                                                                          |
    | driver                 | pxe_ipmitool                                                             |
    | driver_info            | {u'deploy_kernel': u'50ce10e3-5ffc-4ccd-ac29-93aac89d01f5',              |
    |                        | u'ipmi_address': u'10.35.160.140', u'deploy_ramdisk': u'b1e4bd0e-        |
    |                        | 17f8-4ebe-9854-ccd9ce0bbec2', u'ipmi_password': u'******',               |
    |                        | u'ipmi_username': u'root'}                                               |
    | driver_internal_info   | {}                                                                       |
    | extra                  | {u'hardware_swift_object': u'extra_hardware-e64dbe76-de91-4a59-96fa-     |
    |                        | bd7b6080bcea'}                                                           |
    | inspect_interface      |                                                                          |
    | inspection_finished_at | None                                                                     |
    | inspection_started_at  | None                                                                     |
    | instance_info          | {}                                                                       |
    | instance_uuid          | None                                                                     |
    | last_error             | None                                                                     |
    | maintenance            | False                                                                    |
    | maintenance_reason     | None                                                                     |
    | management_interface   |                                                                          |
    | name                   | cougar09                                                                 |
    | network_interface      | flat                                                                     |
    | power_interface        |                                                                          |
    | power_state            | power off                                                                |
    | properties             | {u'cpu_arch': u'x86_64', u'root_device': {u'serial':                     |
    |                        | u'514f0c5a51600d79'}, u'cpus': u'12', u'capabilities': u'cpu_aes:true,cp |
    |                        | u_hugepages:true,boot_option:local,cpu_vt:true,cpu_hugepages_1g:true,boo |
    |                        | t_mode:bios', u'memory_mb': u'65536', u'local_gb': u'59'}                |
    | provision_state        | available                                                                |
    | provision_updated_at   | 2017-08-02T16:52:24+00:00                                                |
    | raid_config            | {}                                                                       |
    | raid_interface         |                                                                          |
    | reservation            | localhost.localdomain                                                    |
    | resource_class         | None                                                                     |
    | storage_interface      |                                                                          |
    | target_power_state     | None                                                                     |
    | target_provision_state | None                                                                     |
    | target_raid_config     | {}                                                                       |
    | updated_at             | 2017-08-02T17:10:30+00:00                                                |
    | uuid                   | e64dbe76-de91-4a59-96fa-bd7b6080bcea                                     |
    | vendor_interface       |                                                                          |
    +------------------------+--------------------------------------------------------------------------+
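In the step above the right disk is picked out of the jq output by eye. As a sketch of the same selection, the hypothetical helper below returns the serial of the first disk reported by the SAN array vendor (XtremIO in this example environment); the trimmed-down disk records are illustrative.

```python
# Hypothetical helper: pick the serial of the first disk reported by the
# SAN array vendor, mirroring what the jq step above does by eye.
# Filtering on vendor "XtremIO" is an assumption for this environment.

def san_disk_serial(disks, vendor="XtremIO"):
    """Return the serial of the first disk whose vendor matches."""
    for disk in disks:
        if disk.get("vendor") == vendor:
            return disk["serial"]
    return None

# Trimmed-down introspection data in the shape of `.inventory.disks`:
disks = [
    {"name": "/dev/sda", "vendor": "ATA", "serial": "WD-WMAYP2759782"},
    {"name": "/dev/sdb", "vendor": "XtremIO", "serial": "514f0c5a51600d7b"},
]
print(san_disk_serial(disks))  # 514f0c5a51600d7b
```

The returned serial is the value passed to `--property root_device=` in the command above.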
  11. Tag the nodes with the control and compute profiles respectively. Run the openstack overcloud profiles list command before and after tagging to verify the profile changes:

    # openstack baremetal node set --property capabilities='profile:control,boot_option:local' cougar09
    
    # openstack baremetal node set --property capabilities='profile:compute,boot_option:local'  cougar12
    
    # openstack overcloud profiles list
  12. Check the openstack flavor list output and update the templates/nodes_data.yaml file to match it. Ensure your DNS servers are correct and pingable:

    # openstack flavor list
    +--------------------------------------+---------------+------+------+-----------+-------+-----------+
    | ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
    +--------------------------------------+---------------+------+------+-----------+-------+-----------+
    | 17606b91-5e0f-4b2b-90f9-b00ba711612b | baremetal     | 4096 |   40 |         0 |     1 | True      |
    | 25e764bc-7f0d-45ed-9025-22e05570d4a8 | block-storage | 4096 |   40 |         0 |     1 | True      |
    | a6892705-65c4-44b3-872f-784455a14290 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
    | afc46d67-074b-42c8-9c19-5a586103b868 | control       | 4096 |   40 |         0 |     1 | True      |
    | ba5f0485-b41a-4057-b0fe-7dd946c7d4fb | compute       | 4096 |   40 |         0 |     1 | True      |
    | c508df47-9f59-4d69-ac8f-0b0f62cbfe73 | swift-storage | 4096 |   40 |         0 |     1 | True      |
    +--------------------------------------+---------------+------+------+-----------+-------+-----------+
    Note

    Make sure that the control and compute names match the names in the nodes_data.yaml file:

    # cat templates/nodes_data.yaml
    parameter_defaults:
        ControllerCount: '1'
        OvercloudControlFlavor: 'control'
        ComputeCount: '1'
        OvercloudComputeFlavor: 'compute'
        NtpServer: ["clock.redhat.com","clock2.redhat.com"]
        DnsServers: ["192.168.24.205","192.168.22.202"]
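As a rough cross-check of this step, the sketch below compares the flavor names referenced in nodes_data.yaml against the names shown by openstack flavor list. The tiny parser is an assumption that handles only the flat key: 'value' lines shown above; it is not a general YAML parser.

```python
# Rough cross-check sketch: do the flavor names referenced in
# nodes_data.yaml exist in `openstack flavor list`? The parser below
# handles only flat "key: 'value'" lines, not general YAML.

def referenced_flavors(yaml_text):
    """Collect the values of keys ending in 'Flavor'."""
    flavors = set()
    for line in yaml_text.splitlines():
        key, _, value = line.strip().partition(":")
        if key.endswith("Flavor"):
            flavors.add(value.strip().strip("'\""))
    return flavors

nodes_data = """\
parameter_defaults:
    ControllerCount: '1'
    OvercloudControlFlavor: 'control'
    ComputeCount: '1'
    OvercloudComputeFlavor: 'compute'
"""

# Names from the flavor list output above:
existing = {"baremetal", "block-storage", "ceph-storage",
            "control", "compute", "swift-storage"}
missing = referenced_flavors(nodes_data) - existing
print(missing)  # set() means every referenced flavor exists
```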
  13. Set up the registry:

    For remote registry:

    Discover tag:

    # sudo openstack overcloud container image tag discover \
    --image registry.access.redhat.com/rhosp12/openstack-base:latest \
    --tag-from-label <12-RELEASE>

    Create the rhos12.yaml file:

    # openstack overcloud container image prepare \
    --namespace=registry.access.redhat.com/rhosp12 \
    --env-file=/home/stack/rhos12.yaml --prefix=openstack- --suffix=-docker --tag=$TAG

    For local registry:

    Discover tag:

    # sudo openstack overcloud container image tag discover \
    --image registry.access.redhat.com/rhosp12/openstack-base:latest \
    --tag-from-label <12-RELEASE>

    Create the container_images.yaml file:

    # openstack overcloud container image prepare \
    --namespace=192.168.24.1:8787/rhosp12 \
    --env-file=/home/stack/container_images.yaml --prefix=openstack- \
    --suffix=-docker --tag=$TAG

    Upload the container image:

    # sudo openstack overcloud container image upload --verbose \
    --config-file /home/stack/container_images.yaml

    Create the rhos12.yaml file:

    # openstack overcloud container image prepare \
    --namespace=192.168.24.1:8787/rhosp12 \
    --env-file=/home/stack/rhos12.yaml --prefix=openstack- \
    --suffix=-docker --tag=$TAG

    Add the following lines to the rhos12.yaml file:

    parameter_defaults:
       DockerInsecureRegistryAddress: 192.168.24.1:8787
  14. Restart the docker service on the undercloud and deploy the overcloud:

    # systemctl restart docker
    # openstack overcloud deploy --templates \
    --libvirt-type kvm \
    -e /home/stack/templates/nodes_data.yaml \
    -e /home/stack/rhos12.yaml

B.1. Enabling Multipath in the Overcloud Nodes

By default, multipath is disabled on the overcloud nodes. To enable multipath support on the overcloud nodes, complete the following steps:

  1. Add the following configuration options to the /etc/multipath.conf file:

    multipaths {
           multipath {
                   wwid                    3514f0c5a51600d7b
                   alias                   elmertlee
           }
    }
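If you manage several LUNs, the stanza above can be generated rather than hand-edited. The following is a hypothetical helper, not a standard tool, that renders a multipaths section from a WWID-to-alias mapping; the WWID and alias are the example's own.

```python
# Hypothetical helper: render a multipath.conf "multipaths" section from a
# {wwid: alias} mapping, matching the stanza shown above. The alias name
# is just the example's; use whatever naming scheme suits your site.

def multipaths_stanza(aliases):
    """Return a multipaths { ... } block for the given {wwid: alias} map."""
    lines = ["multipaths {"]
    for wwid, alias in aliases.items():
        lines += ["       multipath {",
                  f"               wwid                    {wwid}",
                  f"               alias                   {alias}",
                  "       }"]
    lines.append("}")
    return "\n".join(lines)

print(multipaths_stanza({"3514f0c5a51600d7b": "elmertlee"}))
```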
  2. Start and enable the multipath daemon (multipathd):

    # systemctl restart multipathd
    # systemctl status multipathd
    # systemctl is-enabled multipathd
  3. Add the WWID of each XtremIO device to the wwids file (/etc/multipath/wwids):

    for i in `lsblk --list --paths --nodeps --noheadings --output NAME,MODEL,VENDOR | awk '/Xtrem/ {print $1}'`; do multipath -a $i; done

    Here, $i iterates over every disk that is a path to the same LUN; in this example there are four paths to the same LUN.
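For illustration, the shell loop's device selection can be mirrored in Python. The sketch below applies the same Xtrem match to sample lsblk output; the device names are illustrative, not from a real host.

```python
# Illustration of the shell loop's device selection, applied to sample
# `lsblk --list --paths --nodeps --noheadings --output NAME,MODEL,VENDOR`
# output. The device names below are illustrative.

def matching_paths(lsblk_output, pattern="Xtrem"):
    """Return device paths from lines whose MODEL or VENDOR matches."""
    return [line.split()[0]
            for line in lsblk_output.splitlines()
            if pattern in line]

sample = """\
/dev/sda WDC WD5003ABYX-1 ATA
/dev/sdb XtremApp         XtremIO
/dev/sdc XtremApp         XtremIO
/dev/sdd XtremApp         XtremIO
/dev/sde XtremApp         XtremIO
"""

# Each returned path is what the loop passes to `multipath -a`:
print(matching_paths(sample))
```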
  4. Rebuild the initramfs so that it includes multipath support. dracut is a low-level tool for generating an initramfs image:

    /sbin/dracut --force -H --add multipath
  5. Reboot the node so that OpenStack can use the multipath devices, and verify the multipath topology after the reboot:

    # multipath -ll
    WDC_WD5003ABYX-18WERA0_WD-WMAYP2759782 dm-3 ATA,WDC WD5003ABYX-1
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=1 status=active
      `- 2:0:0:0 sda 8:0  active ready running
    elmertlee (3514f0c5a51600d7b) dm-0 XtremIO ,XtremApp
    size=60G features='0' hwhandler='0' wp=rw
    `-+- policy='queue-length 0' prio=1 status=active
      |- 6:0:0:1 sdb 8:16 active ready running
      |- 6:0:1:1 sdc 8:32 active ready running
      |- 7:0:0:1 sdd 8:48 active ready running
      `- 7:0:1:1 sde 8:64 active ready running
    
    # multipath -ll
    ST1000NM0011_Z1N4784E dm-4 ATA,ST1000NM0011
    size=932G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=1 status=active
      `- 4:0:0:0 sdb 8:16 active ready running
    WDC_WD5003ABYX-18WERA0_WD-WMAYP2908342 dm-3 ATA     ,WDC WD5003ABYX-1
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=1 status=active
      `- 2:0:0:0 sda 8:0  active ready running
    3514f0c5a51600d79 dm-0 XtremIO ,XtremApp
    size=60G features='0' hwhandler='0' wp=rw
    `-+- policy='queue-length 0' prio=1 status=active
      |- 6:0:0:1 sdc 8:32 active ready running
      |- 6:0:1:1 sdd 8:48 active ready running
      |- 7:0:0:1 sde 8:64 active ready running
      `- 7:0:1:1 sdf 8:80 active ready running