9.4.2. Updating a Configuration Using scp

To update the configuration using the scp command, perform the following steps:
  1. At any node in the cluster, edit the /etc/cluster/cluster.conf file.
  2. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3").
  3. Save /etc/cluster/cluster.conf.
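Steps 1 through 3 can also be scripted. The following is a minimal sketch that reads the current config_version and increments it; the temporary path and file contents are illustrative, and on a real node you would edit /etc/cluster/cluster.conf itself.

```shell
# Create a throwaway copy to demonstrate on (illustrative path and content).
cat > /tmp/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster name="mycluster" config_version="2">
</cluster>
EOF

# Step 2: read the current config_version, increment it, and rewrite it.
cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' /tmp/cluster.conf)
new=$((cur + 1))
sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" /tmp/cluster.conf
```

After this runs, grep config_version /tmp/cluster.conf shows the incremented attribute.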
  4. Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
    [root@example-01 ~]# ccs_config_validate 
    Configuration validates
    
  5. If the updated file is valid, use the scp command to propagate it to /etc/cluster/ in each cluster node.
  6. Verify that the updated configuration file has been propagated.
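Steps 5 and 6 can be sketched as a small loop. The node names below are illustrative, and the COPY and RUN variables are a convenience added here (not part of the documented procedure) so the commands can be previewed, for example with COPY="echo scp", before touching a real cluster.

```shell
# Sketch of steps 5 and 6: propagate cluster.conf with scp, then compare
# checksums on each node to confirm the copy landed. Override COPY or RUN
# (for example, COPY="echo scp") to preview what would run.
COPY=${COPY:-scp}
RUN=${RUN:-ssh}

push_and_verify() {
    conf=$1; shift
    for node in "$@"; do
        $COPY "$conf" root@"$node":/etc/cluster/            # step 5: propagate
    done
    for node in "$@"; do
        $RUN root@"$node" cksum /etc/cluster/cluster.conf   # step 6: verify
    done
}

# Usage (hypothetical node names):
# push_and_verify /etc/cluster/cluster.conf node-02.example.com node-03.example.com
```

Identical cksum output from every node indicates the updated file has propagated.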
  7. To reload the new configuration, execute the following command on one of the cluster nodes:
    cman_tool version -r -S
    
  8. You may skip this step (restarting cluster software) if you have made only the following configuration changes:
    • Deleting a node from the cluster configuration—except where the node count changes from greater than two nodes to two nodes. For information about deleting a node from a cluster and transitioning from greater than two nodes to two nodes, see Section 9.2, “Deleting or Adding a Node”.
    • Adding a node to the cluster configuration—except where the node count changes from two nodes to greater than two nodes. For information about adding a node to a cluster and transitioning from two nodes to greater than two nodes, see Section 9.2.2, “Adding a Node to a Cluster”.
    • Changes to how daemons log information.
    • HA service/VM maintenance (adding, editing, or deleting).
    • Resource maintenance (adding, editing, or deleting).
    • Failover domain maintenance (adding, editing, or deleting).
    Otherwise, you must restart the cluster software as follows:
    1. At each node, stop the cluster software according to Section 9.1.2, “Stopping Cluster Software”.
    2. At each node, start the cluster software according to Section 9.1.1, “Starting Cluster Software”.
    Stopping and starting the cluster software ensures that any configuration changes that are checked only at startup time are included in the running configuration.
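The per-node restart sequence can be sketched as follows, assuming the service names and stop/start ordering documented in Section 9.1.1 and Section 9.1.2 (clvmd and gfs2 apply only if the cluster uses them). SVC is a preview convenience added here: it defaults to service, and setting SVC=echo prints the commands instead of running them.

```shell
# Sketch of restarting the cluster software on one node, assuming the
# documented ordering: stop rgmanager, gfs2, clvmd, cman; start in reverse.
# Set SVC=echo to preview the commands instead of running them.
restart_cluster_sw() {
    for svc in rgmanager gfs2 clvmd cman; do
        ${SVC:-service} "$svc" stop
    done
    for svc in cman clvmd gfs2 rgmanager; do
        ${SVC:-service} "$svc" start
    done
}
```

As in the procedure above, run the stop sequence on each node before starting the software back up on any of them.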
  9. Verify that the nodes are functioning as members in the cluster and that the HA services are running as expected.
    1. At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example:
      [root@example-01 ~]# cman_tool nodes
      Node  Sts   Inc   Joined               Name
         1   M    548   2010-09-28 10:52:21  node-01.example.com
         2   M    548   2010-09-28 10:52:21  node-02.example.com
         3   M    544   2010-09-28 10:52:21  node-03.example.com
      
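The membership check can be automated with a small filter over the cman_tool nodes output shown above. check_members is a hypothetical helper, not a cluster utility; its column positions match the sample output.

```shell
# Sketch: report any node whose "Sts" column is not "M" (member) and exit
# non-zero if one is found. Field positions match "cman_tool nodes" output,
# where the Joined timestamp occupies two fields and the name is field 6.
check_members() {
    awk 'NR > 1 && $2 != "M" { print $6 " is not a member (status " $2 ")"; bad = 1 }
         END { exit bad }'
}

# Usage on a cluster node:
# cman_tool nodes | check_members
```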
    2. At any node, verify with the clustat utility that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
      [root@example-01 ~]# clustat
      Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
      Member Status: Quorate
      
       Member Name                             ID   Status
       ------ ----                             ---- ------
       node-03.example.com                         3 Online, rgmanager
       node-02.example.com                         2 Online, rgmanager
       node-01.example.com                         1 Online, Local, rgmanager
      
       Service Name                   Owner (Last)                   State         
       ------- ----                   ----- ------                   -----           
       service:example_apache         node-01.example.com            started       
       service:example_apache2        (none)                         disabled
      
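The service check can likewise be automated with a filter over the clustat output shown above. check_services is a hypothetical helper; it treats "started" and "disabled" as expected states and flags anything else.

```shell
# Sketch: report any service whose State column is neither "started" nor
# "disabled" and exit non-zero if one is found. Field positions match the
# service table in clustat output.
check_services() {
    awk '$1 ~ /^service:/ && $3 != "started" && $3 != "disabled" {
             print $1 " is " $3; bad = 1
         }
         END { exit bad }'
}

# Usage on a cluster node:
# clustat | check_services
```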
    If the cluster is running as expected, you are done updating the configuration.