Chapter 18. Fencing the Controller Nodes

Fencing is the process of isolating a failed node to protect a cluster and its resources. Without fencing, a failed node can cause data corruption in the cluster.

The director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker uses a process called STONITH (Shoot-The-Other-Node-In-The-Head) to fence failed nodes. STONITH is disabled by default and requires manual configuration so that Pacemaker can control the power management of each node in the cluster.

18.1. Review the Prerequisites

To configure fencing in the overcloud, your overcloud must already have been deployed and be in a working state. The following steps review the state of Pacemaker and STONITH in your deployment:

  1. Log in to each node as the heat-admin user from the stack user on the director. The overcloud creation automatically copies the stack user’s SSH key to the heat-admin user on each node.
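
    For example, from the stack user on the director (the same login sequence is used in later steps; substitute the address reported for each controller):

    $ source stackrc
    $ nova list | grep controller
    $ ssh heat-admin@<controller-x_ip>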
  2. Verify you have a running cluster:

    $ sudo pcs status
    Cluster name: openstackHA
    Last updated: Wed Jun 24 12:40:27 2015
    Last change: Wed Jun 24 11:36:18 2015
    Stack: corosync
    Current DC: lb-c1a2 (2) - partition with quorum
    Version: 1.1.12-a14efad
    3 Nodes configured
    141 Resources configured
  3. Verify STONITH is disabled:

    $ sudo pcs property show
    Cluster Properties:
    cluster-infrastructure: corosync
    cluster-name: openstackHA
    dc-version: 1.1.12-a14efad
    have-watchdog: false
    stonith-enabled: false

18.2. Enable Fencing

After you confirm that your overcloud is deployed and working, you can configure fencing:

  1. Generate the fencing.yaml file:

    $ openstack overcloud generate fencing --ipmi-lanplus --ipmi-level administrator --output fencing.yaml instackenv.json
    • Sample fencing.yaml file (the generated file contains one devices entry per node listed in instackenv.json; a single entry is shown here):

      parameter_defaults:
        EnableFencing: true
        FencingConfig:
          devices:
          - agent: fence_ipmilan
            host_mac: 11:11:11:11:11:11
            params:
              action: reboot
              ipaddr: 10.0.0.101
              lanplus: true
              login: admin
              passwd: InsertComplexPasswordHere
              pcmk_host_list: host04
              privlvl: administrator
  2. Pass the resulting fencing.yaml file to the deploy command you previously used to deploy the overcloud. This will re-run the deployment procedure and configure fencing on the hosts:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e ~/templates/network-environment.yaml \
      -e ~/templates/storage-environment.yaml \
      --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
      --control-flavor control --compute-flavor compute \
      --ceph-storage-flavor ceph-storage \
      --ntp-server pool.ntp.org \
      --neutron-network-type vxlan --neutron-tunnel-types vxlan \
      -e fencing.yaml

    The deployment command should complete without any errors or exceptions.
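
    For example, you can confirm from the undercloud that the stack update finished; assuming the default stack name, the overcloud stack should report UPDATE_COMPLETE:

    $ source stackrc
    $ heat stack-list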

  3. Log in to the overcloud and verify fencing was configured for each of the controllers:

    1. Check the fencing resources are managed by Pacemaker:

      $ source stackrc
      $ nova list | grep controller
      $ ssh heat-admin@<controller-x_ip>
      $ sudo pcs status | grep fence
      stonith-overcloud-controller-x (stonith:fence_ipmilan): Started overcloud-controller-y

      You should see that Pacemaker is configured to use a STONITH resource for each of the controllers specified in fencing.yaml. A fence resource should not run on the same host that it controls.

    2. Use pcs to verify the fence resource attributes:

      $ sudo pcs stonith show <stonith-resource-controller-x>

      The values used by STONITH should match those defined in the fencing.yaml file.
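
      For example, the output should resemble the following; the values shown are illustrative and should mirror your fencing.yaml:

      Resource: stonith-overcloud-controller-x (class=stonith type=fence_ipmilan)
       Attributes: action=reboot ipaddr=10.0.0.101 lanplus=true login=admin passwd=InsertComplexPasswordHere pcmk_host_list=host04 privlvl=administrator
       Operations: monitor interval=60s (stonith-overcloud-controller-x-monitor-interval-60s)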

18.3. Test Fencing

This procedure tests whether fencing is working as expected.

  1. Trigger a fencing action for each controller in the deployment:

    1. Log in to a controller:

      $ source stackrc
      $ nova list | grep controller
      $ ssh heat-admin@<controller-x_ip>
    2. As root, trigger fencing by using iptables to block all traffic except SSH and port 5016, which cuts the node off from the rest of the cluster:

      $ sudo -i
      iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT &&
      iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT &&
      iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j ACCEPT &&
      iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j ACCEPT &&
      iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited &&
      iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT &&
      iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT &&
      iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT &&
      iptables -A OUTPUT ! -o lo -j REJECT --reject-with icmp-host-prohibited

      As a result, the node loses its cluster connections, the remaining controllers fence it, and the server reboots.
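
      Alternatively, you can trigger a fence action directly from another controller. This is a quicker test of the fence device itself, although it does not exercise the cluster's failure detection; substitute the node name shown in pcs status:

      $ sudo pcs stonith fence overcloud-controller-x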

    3. From another controller, locate the fencing event in the Pacemaker log file:

      $ ssh heat-admin@<controller-x_ip>
      $ less /var/log/cluster/corosync.log
      (less): /fenc*

      You should see that STONITH has issued a fence action against the controller, and that Pacemaker has raised an event in the log.
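
      Alternatively, a non-interactive search of the same log should show the fencing operation; the pattern below is only a starting point:

      $ sudo grep -i stonith /var/log/cluster/corosync.log | tail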

    4. Verify the rebooted controller has returned to the cluster:

      1. From the second controller, wait a few minutes and run pcs status to see if the fenced controller has returned to the cluster. The duration can vary depending on your configuration.
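
        For example, a minimal check; the node names in the output are illustrative:

        $ sudo pcs status | grep Online
        Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]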