Chapter 5. Shared Storage

The Red Hat Update Appliance (RHUA) and content delivery servers (CDSs) need a shared storage volume that is accessible by both. Red Hat Gluster Storage is provided with Red Hat Enterprise Linux (RHEL) 6 and 7, but you can use any Network File System (NFS) solution.

5.1. Gluster Storage

5.1.1. Create Shared Storage

Note

glusterfs-server is available only with the appropriate subscription.

See the Red Hat Gluster Storage documentation for installation and administration details. In particular, see Section 11.15 of the Red Hat Gluster Storage 3.4 Administration Guide for split-brain management.

Warning

As of Red Hat Gluster Storage 3.4, two-way replication without arbiter bricks is considered deprecated. Existing volumes that use two-way replication without arbiter bricks remain supported for this release. New volumes with this configuration are not supported. Red Hat no longer recommends the use of two-way replication without arbiter bricks and plans to remove support entirely in future versions of Red Hat Gluster Storage. This change affects both replicated and distributed-replicated volumes that do not use arbiter bricks.

Two-way replication without arbiter bricks is being deprecated because it does not provide adequate protection from split-brain conditions. Even in distributed-replicated configurations, two-way replication cannot ensure that the correct copy of a conflicting file is selected without the use of a tie-breaking node.

Red Hat strongly recommends using three-node Gluster Storage volumes.

Information about three-way replication is available in Section 5.6.2, Creating Three-way Replicated Volumes and Section 5.7.2, Creating Three-way Distributed Replicated Volumes of the Red Hat Gluster Storage 3.4 Administration Guide.

One concern with shared storage is that a raw block device cannot be expanded when disk usage approaches 100%. Gluster Storage usually runs on a physical server, and its bricks reside on internal storage. On a physical server, the disks backing the bricks cannot be extended if a brick is assigned to an entire physical internal disk. Following general storage practice, place each brick on a Logical Volume Manager (LVM) logical volume.

The following steps describe how to install the required packages and create a shared volume on LVM using three Gluster Storage nodes. Refer to the product documentation if you are using a different storage solution.

  1. Run the following steps on all CDS nodes. The example shows cds1.

    1. For Red Hat Enterprise Linux 7, run the following command.

      [root@cds1 ~]# yum install glusterfs-server glusterfs-cli rh-rhua-selinux-policy
    2. For Red Hat Enterprise Linux 6, run the following command.

      [root@cds1 ~]# yum install xfsprogs glusterfs-server glusterfs-cli rh-rhua-selinux-policy
  2. Initialize the physical volume on the new disk.

    # pvcreate /dev/vdb
  3. Create a Volume Group on /dev/vdb.

    # vgcreate vg_gluster /dev/vdb
  4. Create a logical volume that uses all of the free space in the volume group.

    # lvcreate -n lv_brick1 -l 100%FREE vg_gluster
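    You can optionally confirm that the logical volume was created and consumed all of the free space in the volume group, for example:

    # lvs vg_gluster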
  5. Format the device.

    # mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_brick1
  6. Create the mount and brick directories, mount the logical volume, and enable and start the glusterd service.

    # mkdir -p /export/xvdb
    # mount /dev/mapper/vg_gluster-lv_brick1 /export/xvdb
    # mkdir -p /export/xvdb/brick
    # systemctl enable glusterd.service
    # systemctl start glusterd.service
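    On Red Hat Enterprise Linux 6, which does not use systemd, the equivalent commands are expected to use the SysV init script shipped with glusterfs-server, for example (a sketch; verify the service name on your system):

    # chkconfig glusterd on
    # service glusterd start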
  7. Add the following entry to /etc/fstab on each CDS node.

    /dev/mapper/vg_gluster-lv_brick1 /export/xvdb xfs defaults 0 0
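    To confirm that the /etc/fstab entry is correct, you can optionally remount all file systems listed there and check the mount point, for example:

    # mount -a
    # df -h /export/xvdb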
  8. Run the following steps on only one CDS node, for example, cds1.

    [root@cds1 ~]# gluster peer probe cds2.example.com
    peer probe: success.
    [root@cds1 ~]# gluster peer probe cds3.example.com
    peer probe: success.
    Note

    Make sure DNS resolution is working. The following is an example of the error returned when name resolution fails.

    [root@cds1 ~]# gluster peer probe <cds[23].example.com hostnames>
     peer probe: failed: Probe returned with Transport endpoint is not connected
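    To check name resolution before probing, you can run a command such as the following on each node (the hostnames are the examples used in this procedure):

    # getent hosts cds2.example.com cds3.example.com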
    Important

    The Gluster peer probe might also fail with peer probe: failed: Probe returned with Transport endpoint is not connected when there is a communication or port issue. A workaround to this failure is to disable the firewalld service. If you prefer not to disable the firewall, you can allow the correct ports as described in Section 3.1, Verifying Port Access of the Red Hat Gluster Storage 3.4 Administration Guide.
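    If you prefer to keep firewalld enabled, the following is a minimal sketch of opening the default Gluster management and brick port ranges on each node. The port numbers are assumptions based on common Gluster Storage defaults; verify them against the guide referenced above for your version.

    # firewall-cmd --permanent --add-port=24007-24008/tcp
    # firewall-cmd --permanent --add-port=49152-49664/tcp
    # firewall-cmd --reload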

  9. Before proceeding, verify that the peer connections were successful. You should see output similar to the following. Then create a three-way replicated volume named rhui_content_0 from the bricks on the three CDS nodes and start it.

    [root@cds1 ~]# gluster peer status
    Number of Peers: 2
    Hostname: cds2.example.com
    Uuid: 6cb9fdf9-1486-4db5-a438-24c64f47e63e
    State: Peer in Cluster (Connected)
    Hostname: cds3.example.com
    Uuid: 5e0eea6c-933d-48ff-8c2f-0228effa6b82
    State: Peer in Cluster (Connected)
    [root@cds1 ~]# gluster volume create rhui_content_0 replica 3 \
    cds1.example.com:/export/xvdb/brick cds2.example.com:/export/xvdb/brick \
    cds3.example.com:/export/xvdb/brick
    volume create: rhui_content_0: success: please start the volume to access data
    [root@cds1 ~]# gluster volume start rhui_content_0
    volume start: rhui_content_0: success
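    Optionally, confirm the volume configuration and its bricks before using the volume, for example:

    [root@cds1 ~]# gluster volume info rhui_content_0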

5.1.2. Extend the Storage Volume

You can extend the shared volume when it is approaching its capacity by adding a new disk of the same size to each CDS node and running the following commands on each node. The name of the device file representing the disk depends on the technology you use; for example, if the first disk was /dev/vdb, the second can be /dev/vdc. Replace the device file name in the following procedure with the actual name.

  1. Initialize the physical volume on the new disk.

    # pvcreate /dev/vdc
  2. Extend the volume group with the new physical volume.

    # vgextend vg_gluster /dev/vdc
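    You can optionally confirm that the new physical volume was added to the volume group and that free space is now available, for example:

    # pvs
    # vgs vg_gluster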
  3. Extend the logical volume itself by the amount of free disk space on the new physical volume.

    # lvextend -l +100%FREE vg_gluster/lv_brick1 /dev/vdc
  4. Expand the file system.

    # xfs_growfs /dev/mapper/vg_gluster-lv_brick1
  5. Run df on the RHUA node to confirm that the mounted Gluster Storage volume has the expected new size.
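    For example, assuming the shared Gluster Storage volume is mounted at /var/lib/rhui/remote_share on the RHUA node (the mount point can differ in your deployment):

    [root@rhua ~]# df -h /var/lib/rhui/remote_share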

5.2. Create NFS Storage

You can set up an NFS server for the content managed by RHUI on the RHUA node or on a dedicated machine. The following procedure describes how to set up storage using NFS.

Important

Using a dedicated machine allows the CDS nodes and, most importantly, your RHUI clients to continue working if something happens to the RHUA node. Red Hat recommends that you set up the NFS server on a dedicated machine.

  1. Install the nfs-utils package on the node hosting the NFS server, on the RHUA node (if it is a different machine), and on all your CDS nodes.

    # yum install nfs-utils
  2. Edit the /etc/exports file on the NFS server. Choose a suitable directory to hold the RHUI content and allow the RHUA node and all your CDS nodes to access it. For example, to use the /export directory and make it available to all systems in the example.com domain, add the following line to /etc/exports.

    /export *.example.com(rw,no_root_squash)
  3. Create the directory for the RHUI content as defined in /etc/exports.

    # mkdir /export
  4. Start and enable the NFS service.

    1. On RHEL 7, run the following commands.

      # systemctl start nfs
      # systemctl start rpcbind
      # systemctl enable nfs-server
      # systemctl enable rpcbind
    2. On RHEL 6, run the following commands.

      # service nfs start
      # service rpcbind start
      # chkconfig nfs on
      # chkconfig rpcbind on
      Note

      If you are using an existing NFS server and the NFS service is already running, use restart instead of start.
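      Alternatively, if you only changed /etc/exports on a running NFS server, you can usually apply the change without restarting the service by re-exporting all directories:

      # exportfs -ra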

  5. Test your setup. On a CDS node, run the following commands, which assume that the NFS server has been set up on a machine named filer.example.com.

    # mkdir /mnt/nfstest
    # mount filer.example.com:/export /mnt/nfstest
    # touch /mnt/nfstest/test

    You should not get any error messages.
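    You can also verify from the CDS node that the export is visible on the NFS server, for example:

    # showmount -e filer.example.com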

  6. To clean up after this test, remove the test file, unmount the remote share, and remove the test directory.

    # rm /mnt/nfstest/test
    # umount /mnt/nfstest
    # rmdir /mnt/nfstest

    Your NFS server is now set up. For more information on NFS server configuration, see Section 8.7, NFS Server Configuration, for RHEL 7, or Chapter 9, Network File System (NFS), for RHEL 6.
