5.5.6. Adding or Deleting a GULM Lock Server Member

The procedure for adding or deleting a GULM cluster member depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to add or delete a member that functions as a GULM lock server. To add a member that functions only as a GULM client, refer to Section 5.5.4, “Adding a GULM Client-only Member”; to delete a member that functions only as a GULM client, refer to Section 5.5.5, “Deleting a GULM Client-only Member”.

Important

The number of nodes that can be configured as GULM lock servers is limited to either one, three, or five.

To add or delete a GULM member that functions as a GULM lock server in an existing cluster that is currently in operation, follow these steps:
  1. At one of the running members (running on a node that is not to be deleted), start system-config-cluster (refer to Section 5.2, “Starting the Cluster Configuration Tool”). At the Cluster Status Tool tab, disable each service listed under Services.
  2. Stop the cluster software on each running node by running the following commands at each node in this order:
    1. service rgmanager stop, if the cluster is running high-availability services (rgmanager)
    2. service gfs stop, if you are using Red Hat GFS
    3. service clvmd stop, if CLVM has been used to create clustered volumes
    4. service lock_gulmd stop
    5. service ccsd stop
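     Run as root on each node, the shutdown sequence above looks like the following; the first three commands apply only if the corresponding service is in use on this cluster:

     ```shell
     # Stop the cluster software in order; skip the optional
     # services that are not in use on this cluster.
     service rgmanager stop    # only if running high-availability services
     service gfs stop          # only if using Red Hat GFS
     service clvmd stop        # only if CLVM manages clustered volumes
     service lock_gulmd stop
     service ccsd stop
     ```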
  3. To add a GULM lock server member, at system-config-cluster, in the Cluster Configuration Tool tab, add each node and configure fencing for it as in Section 5.5.1, “Adding a Member to a New Cluster”. Make sure to select GULM Lockserver in the Node Properties dialog box (refer to Figure 5.6, “Adding a Member to a New GULM Cluster”).
  4. To delete a GULM lock server member, at system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete each member as follows:
    1. If necessary, click the triangle icon to expand the Cluster Nodes property.
    2. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
    3. Clicking the Delete Node button displays a warning dialog box requesting confirmation of the deletion (Figure 5.9, “Confirm Deleting a Member”).
      Figure 5.9. Confirm Deleting a Member

    4. At that dialog box, click Yes to confirm deletion.
  5. Propagate the configuration file to the cluster nodes as follows:
    1. Log in to the node where you created the configuration file (the same node used for running system-config-cluster).
    2. Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster.

      Note

      Propagating the cluster configuration file this way is necessary under these circumstances because the cluster software is not running, and therefore not capable of propagating the configuration. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 6.3, “Modifying the Cluster Configuration”.
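      For example, the file can be copied in one pass with a shell loop run as root on the node holding the configuration file; the hostnames node2 and node3 here are placeholders for your actual member names:

      ```shell
      # Copy the configuration file to each of the other cluster
      # members (replace the hostnames with your actual node names).
      for node in node2 node3; do
          scp /etc/cluster/cluster.conf root@${node}:/etc/cluster/
      done
      ```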
    3. After you have propagated the cluster configuration to the cluster nodes, you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order:
      1. service ccsd start
      2. service lock_gulmd start
      3. service clvmd start, if CLVM has been used to create clustered volumes
      4. service gfs start, if you are using Red Hat GFS
      5. service rgmanager start, if the node is also functioning as a GULM client and the cluster is running cluster services (rgmanager)
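       Run as root on each node, the startup sequence above is the reverse of the shutdown order; the last three commands apply only if the corresponding service is in use:

       ```shell
       # Start the cluster software in order; skip the optional
       # services that are not in use on this cluster.
       service ccsd start
       service lock_gulmd start
       service clvmd start       # only if CLVM manages clustered volumes
       service gfs start         # only if using Red Hat GFS
       service rgmanager start   # only if the node is also a GULM client
                                 # and the cluster runs cluster services
       ```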
    4. At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.

Note

Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, “Configuration Tasks”.