Chapter 9. Replacing a Gluster Storage Node

If a Red Hat Gluster Storage node needs to be replaced, there are two options for the replacement node:

  1. Replace the node with a new node that has a different fully-qualified domain name by following the instructions in Section 9.1, “Replacing a Gluster Storage Node (Different FQDN)”.
  2. Replace the node with a new node that has the same fully-qualified domain name by following the instructions in Section 9.2, “Replacing a Gluster Storage Node (Same FQDN)”.

Follow the instructions in whichever section is appropriate for your deployment.

9.1. Replacing a Gluster Storage Node (Different FQDN)

Important

When self-signed encryption is enabled, replacing a node is a disruptive process that requires virtual machines and the Hosted Engine to be shut down.

  1. Install the replacement node

    Follow the instructions in Deploying Red Hat Hyperconverged Infrastructure to install the physical machine.

  2. Stop any existing geo-replication sessions

    # gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> stop

    For further information, see the Red Hat Gluster Storage Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/sect-starting_geo-replication#Stopping_a_Geo-replication_Session.
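    If you are not sure which sessions exist, you can list them all first; this is a read-only check that is safe to run on any node:

```shell
# List all geo-replication sessions and their current state
# before stopping the one that involves the node being replaced.
gluster volume geo-replication status
```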

  3. Move the node to be replaced into Maintenance mode

    Perform the following steps in Red Hat Virtualization Manager:

    1. Click the Hosts tab and select the Red Hat Gluster Storage node in the results list.
    2. Click Maintenance to open the Maintenance Host(s) confirmation window.
    3. Click OK to move the host to Maintenance mode.
  4. Prepare the replacement node

    1. Configure key-based SSH authentication

      Configure key-based SSH authentication from a physical machine still in the cluster to the replacement node. For details, see https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/deploying_red_hat_hyperconverged_infrastructure/task-configure-key-based-ssh-auth.
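      As a sketch, key-based authentication amounts to generating a key pair and installing the public key on the replacement node. The key path below is a temporary stand-in, and newhost.example.com is a placeholder for your node's FQDN; the remote steps are shown commented out because they require the live host:

```shell
# Generate a key pair on a node that remains in the cluster.
# -N '' creates it without a passphrase; use one if policy requires.
keyfile=$(mktemp -u)
ssh-keygen -q -t rsa -b 4096 -N '' -f "$keyfile"

# Install the public key on the replacement node and confirm
# password-less login (requires the live host, shown for reference):
#   ssh-copy-id -i "${keyfile}.pub" root@newhost.example.com
#   ssh -i "$keyfile" root@newhost.example.com hostname
```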

    2. Run gdeploy to prepare the node

      Create a file called replace_node_prep.conf based on the template provided in Section B.2, “Example gdeploy configuration file for preparing to replace a node”.

      From a node with gdeploy installed (usually the node that hosts the Hosted Engine), run gdeploy using the new configuration file:

      # gdeploy -c replace_node_prep.conf
  5. Create replacement brick directories

    Ensure the new directories are owned by the vdsm user and the kvm group.

    # mkdir /gluster_bricks/engine/engine
    # chown vdsm:kvm /gluster_bricks/engine/engine
    # mkdir /gluster_bricks/data/data
    # chown vdsm:kvm /gluster_bricks/data/data
    # mkdir /gluster_bricks/vmstore/vmstore
    # chown vdsm:kvm /gluster_bricks/vmstore/vmstore
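    A quick read-only check confirms the ownership before moving on; the expected output is vdsm:kvm for each directory:

```shell
# Verify owner and group of each replacement brick directory.
stat -c '%U:%G' /gluster_bricks/engine/engine \
                /gluster_bricks/data/data \
                /gluster_bricks/vmstore/vmstore
```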
  6. (Optional) If encryption is enabled

    1. Generate the private key and self-signed certificate on the new server using the steps in the Red Hat Gluster Storage Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/chap-network_encryption#chap-Network_Encryption-Prereqs.

      If encryption using a Certificate Authority is enabled, follow the steps at the following link before continuing: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/ch22s04.

    2. Add the new node’s certificate to existing certificates.

      1. On one of the healthy nodes, make a backup copy of the /etc/ssl/glusterfs.ca file.
      2. Add the new node’s certificate to the /etc/ssl/glusterfs.ca file on the healthy node.
      3. Distribute the updated /etc/ssl/glusterfs.ca file to all other nodes, including the new node.
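      The three substeps above are plain file operations. The sketch below uses stand-in temporary files; on real nodes the bundle is /etc/ssl/glusterfs.ca and the new node's certificate is its /etc/ssl/glusterfs.pem:

```shell
# Stand-in files for the CA bundle and the new node's certificate.
workdir=$(mktemp -d)
printf -- '-----OLD NODE CERT-----\n' > "$workdir/glusterfs.ca"
printf -- '-----NEW NODE CERT-----\n' > "$workdir/glusterfs.pem"

# 1. Back up the existing bundle before touching it.
cp "$workdir/glusterfs.ca" "$workdir/glusterfs.ca.bk"

# 2. Append the new node's certificate to the bundle.
cat "$workdir/glusterfs.pem" >> "$workdir/glusterfs.ca"

# 3. On real nodes, distribute the updated bundle to every node, e.g.:
#   scp /etc/ssl/glusterfs.ca root@<node>:/etc/ssl/glusterfs.ca
```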
    3. Enable management encryption

      Run the following command on the new node to enable management encryption:

      # touch /var/lib/glusterd/secure-access
    4. Include the new server in the value of the auth.ssl-allow volume option by running the following command for each volume.

      # gluster volume set <volname> auth.ssl-allow "<old_node1>,<old_node2>,<new_node>"
    5. Restart the glusterd service on all nodes

      # systemctl restart glusterd
    6. If encryption uses self-signed certificates, follow the steps in Section 4.1, “Configuring TLS/SSL using self-signed certificates” to restart gluster services and remount gluster storage with the new certificates.
  7. Add the new host to the existing cluster

    1. Run the following command from one of the healthy cluster members:

      # gluster peer probe <new_node>
    2. Add the new host in Red Hat Virtualization Manager

      1. Click the Hosts tab and then click New to open the New Host dialog.
      2. Provide a Name, Address, and Password for the new host.
      3. Uncheck the Automatically configure host firewall checkbox, as firewall rules are already configured by gdeploy.
      4. In the Hosted Engine tab of the New Host dialog, set the value of Choose hosted engine deployment action to Deploy.
      5. Click Deploy.
      6. When the host is available, click the Network Interfaces subtab and then click Setup Host Networks.
      7. Drag and drop the network you created for gluster to the IP associated with this host, and click OK.

        See the Red Hat Virtualization 4.1 Self-Hosted Engine Guide for further details: https://access.redhat.com/documentation/en/red-hat-virtualization/4.1/paged/self-hosted-engine-guide/chapter-7-installing-additional-hosts-to-a-self-hosted-environment.

  8. Configure and mount shared storage on the new host

    # cp /etc/fstab /etc/fstab.bk
    # echo "<new_host>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
    # mount /var/run/gluster/shared_storage
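    A read-only check confirms that the volume mounted where the fstab entry says it should:

```shell
# Show the mounted shared storage volume; no output means it did not mount.
grep shared_storage /proc/mounts
```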
  9. Replace the old brick with the brick on the new host

    1. In Red Hat Virtualization Manager, click the Volumes tab and select the volume.
    2. Click the Bricks sub-tab.
    3. Click Replace Brick beside the old brick and specify the replacement brick.
    4. Verify that brick heal completes successfully.
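    Heal progress can be watched from any cluster node; replace <volname> with each volume in turn (engine, data, and vmstore in a default deployment):

```shell
# List entries still pending heal; an empty list under each brick
# means the heal for that volume has completed.
gluster volume heal <volname> info
```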
  10. In the Hosts tab, right-click on the old host and click Remove.

    Use gluster peer status to verify that the old host no longer appears. If the old host is still present in the status output, run the following command to forcibly remove it:

    # gluster peer detach <old_node> force
  11. Clean old host metadata

    # hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean
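    If you do not know the old host's id, it appears in the Hosted Engine status output; this is a read-only check:

```shell
# Each host is listed with its id ("Host ID") in the status output.
hosted-engine --vm-status
```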
  12. Set up new SSH keys for geo-replication of the new brick

    # gluster system:: execute gsec_create
  13. Recreate the geo-replication session and distribute the new SSH keys.

    # gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> create push-pem force
  14. Start the geo-replication session.

    # gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> start
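    Once started, confirm that the session is healthy; bricks should report Active or Passive status rather than Faulty:

```shell
# Check the health of the recreated session.
gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> status
```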

9.2. Replacing a Gluster Storage Node (Same FQDN)

Important

When self-signed encryption is enabled, replacing a node is a disruptive process that requires virtual machines and the Hosted Engine to be shut down.

  1. (Optional) If encryption using a Certificate Authority is enabled, follow the steps at the following link before continuing: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/ch22s04.
  2. Move the node to be replaced into Maintenance mode

    1. In Red Hat Virtualization Manager, click the Hosts tab and select the Red Hat Gluster Storage node in the results list.
    2. Click Maintenance to open the Maintenance Host(s) confirmation window.
    3. Click OK to move the host to Maintenance mode.
  3. Install the replacement node

    Follow the instructions in Deploying Red Hat Hyperconverged Infrastructure to install the physical machine and configure storage on the new node.

  4. Prepare the replacement node

    1. Create a file called replace_node_prep.conf based on the template provided in Section B.2, “Example gdeploy configuration file for preparing to replace a node”.
    2. From a node with gdeploy installed (usually the node that hosts the Hosted Engine), run gdeploy using the new configuration file:

      # gdeploy -c replace_node_prep.conf
  5. (Optional) If encryption with self-signed certificates is enabled

    1. Generate the private key and self-signed certificate on the replacement node. See the Red Hat Gluster Storage Administration Guide for details: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/chap-network_encryption#chap-Network_Encryption-Prereqs.
    2. On a healthy node, make a backup copy of the /etc/ssl/glusterfs.ca file:

      # cp /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.ca.bk
    3. Append the new node’s certificate to the content of the /etc/ssl/glusterfs.ca file.
    4. Distribute the /etc/ssl/glusterfs.ca file to all nodes in the cluster, including the new node.
    5. Run the following command on the replacement node to enable management encryption:

      # touch /var/lib/glusterd/secure-access
  6. Replace the host machine

    Follow the instructions in the Red Hat Gluster Storage Administration Guide to replace the host: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/sect-replacing_hosts#Replacing_a_Host_Machine_with_the_Same_Hostname.

  7. Restart the glusterd service on all nodes

    # systemctl restart glusterd
  8. Verify that all nodes reconnect

    # gluster peer status
  9. (Optional) If encryption uses self-signed certificates, follow the steps in Section 4.1, “Configuring TLS/SSL using self-signed certificates” to restart gluster services and remount gluster storage with the new certificates.
  10. Verify that all nodes reconnect and that brick heal completes successfully

    # gluster peer status
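    To check heal completion for every volume at once, a short loop works; the volume names engine, data, and vmstore assume a default deployment:

```shell
# Report pending heal entries per volume; all lists should be empty
# before the node is returned to service.
for vol in engine data vmstore; do
    gluster volume heal "$vol" info
done
```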
  11. Refresh fingerprint

    1. In Red Hat Virtualization Manager, click the Hosts tab and select the new host.
    2. Click Edit Host.
    3. Click Advanced on the details screen.
    4. Click Fetch fingerprint.
  12. Click Reinstall and provide the root password when prompted.
  13. Click the Hosted Engine tab and click Deploy.
  14. Attach the gluster network to the host

    1. Click the Hosts tab and select the host.
    2. Click the Network Interfaces subtab and then click Setup Host Networks.
    3. Drag and drop the newly created network to the correct interface.
    4. Ensure that the Verify connectivity checkbox is checked.
    5. Ensure that the Save network configuration checkbox is checked.
    6. Click OK to save.
  15. Verify the health of the network

    Click the Hosts tab and select the host. Click the Networks subtab and check the state of the host’s network.

    If the network interface enters an "Out of sync" state or does not have an IPv4 Address, click the Management tab that corresponds to the host and click Refresh Capabilities.