
Chapter 12. Managing Snapshots

The Red Hat Storage Snapshot feature enables you to create point-in-time copies of Red Hat Storage volumes, which you can use to protect data. Users can directly access read-only Snapshot copies to recover from accidental deletion, corruption, or modification of their data.

Figure 12.1. Snapshot Architecture

As shown in the Snapshot Architecture diagram, a Red Hat Storage volume consists of multiple bricks (Brick1, Brick2, and so on) spread across one or more nodes, and each brick is an independent thin logical volume (LV). When a snapshot of the volume is taken, a snapshot of each brick's LV is taken, creating another brick. Brick1_s1 is an identical image of Brick1. Similarly, an identical image of each brick is created, and these newly created bricks combine to form the snapshot volume.
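Once the prerequisites described in Section 12.1 are met, a snapshot volume is created with a single command. The following is an illustrative sketch; the snapshot name snap1 and the volume name testvol are placeholder values, not names taken from this guide:

  # gluster snapshot create snap1 testvol

This takes a snapshot of the thin LV backing each brick of testvol and combines the resulting bricks into a read-only snapshot volume.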
Some features of snapshot are:
  • Crash Consistency

    A crash consistent snapshot is captured at a particular point in time. When a crash consistent snapshot is restored, the data is identical to what it was at the time the snapshot was taken.

    Note

    Currently, application level consistency is not supported.
  • Online Snapshot

    Snapshots in Red Hat Storage are online snapshots; the file system and its associated data continue to be available to clients even while the snapshot is being taken.

  • Quorum Based

    The quorum feature ensures that the volume is in a good condition even while some bricks are down. In an n-way replicated volume where n <= 2, quorum is not met if any brick is down. In an n-way replicated volume where n >= 3, quorum is met when at least m bricks are up: m >= (n/2 + 1) when n is odd, and m >= n/2 with the first brick up when n is even. For example, in a 3-way replicated volume, at least 2 bricks of each replica set must be up. If quorum is not met, snapshot creation fails.

    Note

    The quorum check feature in snapshot is in technology preview. The snapshot delete and restore operations check node-level quorum instead of brick-level quorum. Snapshot delete and restore succeed only when at least m nodes of an n-node cluster are up, where m >= (n/2 + 1).
  • Barrier

    To guarantee crash consistency, some of the file operations (fops) are blocked during a snapshot operation.

    These fops are blocked until the snapshot is complete; all other fops are passed through. There is a default time-out of 2 minutes, and if the snapshot is not complete within that time, the blocked fops are unbarriered. If the barrier is lifted before the snapshot is complete, the snapshot operation fails. This ensures that the snapshot is in a consistent state.

Note

Taking a snapshot of a Red Hat Storage volume that is hosting virtual machine images is not recommended. Taking a hypervisor-assisted snapshot of the virtual machine is more suitable in this use case.

12.1. Prerequisites

Before using this feature, ensure that the following prerequisites are met:
  • Snapshot is supported in Red Hat Storage 3.0 and above. If you have a previous version of Red Hat Storage, you must upgrade to Red Hat Storage 3.0. For more information, see Chapter 8, Setting Up Software Updates, in the Red Hat Storage 3 Installation Guide.
  • Snapshot is based on thinly provisioned LVM. Ensure the volume is based on LVM2. Red Hat Storage 3.0 is supported on Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 6.6, both of which are based on LVM2 by default. For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html
  • Each brick must be an independent, thinly provisioned logical volume (LV).
  • The logical volume which contains the brick must not contain any data other than the brick.
  • Each snapshot creates as many bricks as the original Red Hat Storage volume contains. By default, bricks use privileged ports to communicate, and the total number of privileged ports in a system is restricted to 1024. Hence, to support 256 snapshots per volume, the following options must be set on the Gluster volume. These changes allow bricks and glusterd to communicate using non-privileged ports.
    1. Run the following command to permit insecure ports:
      # gluster volume set VOLNAME server.allow-insecure on
    2. Edit the /etc/glusterfs/glusterd.vol file on each Red Hat Storage node, and add the following setting:
      option rpc-auth-allow-insecure on
    3. Restart the glusterd service on each Red Hat Storage node using the following command:
      # service glusterd restart
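    To verify that the options took effect, you can check the volume information; VOLNAME is a placeholder, as above:
      # gluster volume info VOLNAME
    In the output, server.allow-insecure: on should be listed under Options Reconfigured.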
Recommended Setup

The recommended setup for using Snapshot is described below. In addition, make sure to read Chapter 9, Configuring Red Hat Storage for Enhancing Performance, to enhance snapshot performance:
  • For each volume brick, create a dedicated thin pool that contains the brick of the volume and its (thin) brick snapshots. With the current thin-p design, avoid placing the bricks of different Red Hat Storage volumes in the same thin pool, as this reduces the performance of snapshot operations, such as snapshot delete, on other unrelated volumes.
  • The recommended thin pool chunk size is 256 KB. There might be exceptions to this in cases where detailed information about the customer's workload is available.
  • The recommended pool metadata size is 0.1% of the thin pool size for a chunk size of 256 KB or larger. In special cases, where a chunk size of less than 256 KB is recommended, use a pool metadata size of 0.5% of the thin pool size.
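As a worked example of this sizing: for a 1 TB thin pool with a 256 KB chunk size, 0.1% comes to roughly 1 GB of pool metadata. The example procedure below instead allocates the maximum pool metadata size of 16 GB, trading some space for ample metadata headroom.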
For Example

To create a brick from the device /dev/sda1:
  1. Create a physical volume (PV) by using the pvcreate command.
    pvcreate /dev/sda1
    Use the correct --dataalignment option based on your device. For more information, see Section 9.2, “Brick Configuration”.
  2. Create a Volume Group (VG) from the PV using the following command:
    vgcreate dummyvg /dev/sda1
  3. Create a thin-pool using the following command:
    lvcreate -L 1T -T dummyvg/dummypool -c 256k --poolmetadatasize 16G
    A thin pool of size 1 TB is created, using a chunk size of 256 KB. The maximum pool metadata size of 16 GB is used.
  4. Create a thinly provisioned volume from the previously created pool using the following command:
    lvcreate -V 1G -T dummyvg/dummypool -n dummylv
  5. Create an XFS file system on the thin LV, using the recommended options.
    For example,
    mkfs.xfs -f -i size=512 -n size=8192 /dev/dummyvg/dummylv
  6. Mount this logical volume and use the mount path as the brick.
    mount /dev/dummyvg/dummylv /mnt/brick1
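To verify the layout, the thin pool and thin LV can be inspected with standard LVM tools; this check is an illustrative addition using the names from the example above:

    lvs dummyvg
    df -h /mnt/brick1

The lvs output should show dummypool as a thin pool and dummylv as a thin volume within it, and df confirms that /mnt/brick1 is served by the thin LV. To make the mount persistent across reboots, add a corresponding entry for /dev/dummyvg/dummylv and /mnt/brick1 to /etc/fstab.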