16.10. Restoring a Snapshot

Before restoring a snapshot, ensure that the following prerequisites are met (a sample verification session is shown after the list):
  • The specified snapshot must exist.
  • The original (parent) volume of the snapshot must be in a stopped state.
  • The Red Hat Gluster Storage nodes must be in quorum.
  • If you have a Hadoop-enabled Red Hat Gluster Storage volume, ensure that all Hadoop services are stopped in Ambari.
  • No volume operation (for example, add-brick or rebalance) should be running on the original or parent volume of the snapshot.
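    A minimal pre-restore check, assuming a parent volume named vol0 and a snapshot named snap1 (both placeholder names), could look like this:
    # gluster snapshot list vol0
    # gluster volume info vol0
    # gluster peer status
    gluster snapshot list shows whether the snapshot exists, gluster volume info shows whether the volume is in the Stopped state, and gluster peer status shows whether all peers are connected.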
    To restore a snapshot, run the following command:
    # gluster snapshot restore <snapname>
    where,
    • snapname - The name of the snapshot to be restored.
    For example:
    # gluster snapshot restore snap1
    Snapshot restore: snap1: Snap restored successfully
    After the snapshot is restored and the volume is started, trigger a self-heal by running the following command:
    # gluster volume heal VOLNAME full
    If you have a Hadoop-enabled Red Hat Gluster Storage volume, you must start all Hadoop services in Ambari.
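    Putting these steps together, a complete restore of a hypothetical volume vol0 from a snapshot snap1 (placeholder names) might look like the following session:
    # gluster volume stop vol0
    # gluster snapshot restore snap1
    Snapshot restore: snap1: Snap restored successfully
    # gluster volume start vol0
    # gluster volume heal vol0 full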

    Note

  • In the cluster, identify the nodes participating in the snapshot with the snapshot status command. For example:
     # gluster snapshot status snapname

        Snap Name : snapname
        Snap UUID : bded7c02-8119-491b-a7e1-cc8177a5a1cd

         Brick Path        :   10.70.43.46:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick2/brick2
         Volume Group      :   snap_lvgrp
         Brick Running     :   Yes
         Brick PID         :   8303
         Data Percentage   :   0.43
         LV Size           :   2.60g

         Brick Path        :   10.70.42.33:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick3/brick3
         Volume Group      :   snap_lvgrp
         Brick Running     :   Yes
         Brick PID         :   4594
         Data Percentage   :   42.63
         LV Size           :   2.60g

         Brick Path        :   10.70.42.34:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick4/brick4
         Volume Group      :   snap_lvgrp
         Brick Running     :   Yes
         Brick PID         :   23557
         Data Percentage   :   12.41
         LV Size           :   2.60g
    
    • On the nodes identified above, check whether the geo-replication repository is present in /var/lib/glusterd/snaps/snapname. If the repository is present on any of the nodes, ensure that it is present in /var/lib/glusterd/snaps/snapname on every node in the cluster. If the geo-replication repository is missing on any node, copy it to /var/lib/glusterd/snaps/snapname on that node (a sketch of this check is shown after this list).
    • Restore the snapshot of the volume using the following command:
      # gluster snapshot restore snapname
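    As a sketch of the repository check described above, assuming snapname is the snapshot name and 10.70.42.33 (taken from the example status output) is a node where the repository is missing; the repository directory placeholder and the use of scp are illustrative:
      # ls /var/lib/glusterd/snaps/snapname
      # scp -r /var/lib/glusterd/snaps/snapname/<geo-rep-repository> root@10.70.42.33:/var/lib/glusterd/snaps/snapname/
    Run the ls command on every participating node, and copy the repository from a node where it is present to any node where it is missing.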
Restoring a Snapshot of a Geo-replicated Volume

If you have a geo-replication setup, perform the following steps to restore a snapshot:

  1. Stop the geo-replication session.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
  2. Stop the slave volume and then the master volume.
    # gluster volume stop VOLNAME
  3. Restore the snapshot of the slave volume and then of the master volume.
    # gluster snapshot restore snapname
  4. Start the slave volume first and then the master volume.
    # gluster volume start VOLNAME
  5. Start the geo-replication session.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
    
  6. Resume the geo-replication session.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
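    For illustration, with placeholder names mastervol (master volume), slavehost (slave host), and slavevol (slave volume), and placeholder snapshot names slave_snap and master_snap, the full sequence might look like this (commands for the slave volume are run on the slave cluster):
    # gluster volume geo-replication mastervol slavehost::slavevol stop
    # gluster volume stop slavevol
    # gluster volume stop mastervol
    # gluster snapshot restore slave_snap
    # gluster snapshot restore master_snap
    # gluster volume start slavevol
    # gluster volume start mastervol
    # gluster volume geo-replication mastervol slavehost::slavevol start
    # gluster volume geo-replication mastervol slavehost::slavevol resume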