Chapter 5. Shared Storage

The Red Hat Update Appliance (RHUA) and content delivery servers (CDSs) need a shared storage volume that is accessible by both. Red Hat Gluster Storage is available for Red Hat Enterprise Linux (RHEL) 6 and 7, but you can use any Network File System (NFS) solution.

5.1. Gluster Storage

5.1.1. Create Shared Storage

Note

glusterfs-server is available only with the appropriate subscription.

See the Red Hat Gluster Storage documentation for installation and administration details. In particular, see Section 11.11 of the Red Hat Gluster Storage Administration Guide for server-side and client-side quorum configuration and split-brain management.

Important

For a replicated volume with two nodes and one brick on each machine, if the server-side quorum is enabled and one of the nodes goes offline, the other node is also taken offline due to the quorum configuration.

If the client-side quorum is configured and is not met, files in that replica group become read-only.
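
For reference, both quorum types are configured per volume with the gluster volume set command. The following is a minimal sketch that uses the rhui_content_0 volume created later in this chapter; see the Gluster guide for the values appropriate to your deployment.

    # gluster volume set rhui_content_0 cluster.server-quorum-type server
    # gluster volume set rhui_content_0 cluster.quorum-type auto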

One concern with shared storage is that block storage cannot be expanded when disk usage approaches 100% if the bricks are placed on raw disks. Gluster Storage usually runs on a physical server, and its bricks are internal storage. On a physical server, the disks backing the bricks cannot be extended if a brick is assigned an entire physical internal disk. Following general storage practice, place each brick on a Logical Volume Manager (LVM) volume so that it can be extended later.

The following steps describe how to install the required packages and create a shared volume on LVM using Gluster Storage across three nodes. Refer to the product documentation if you are using a different storage solution.

  1. Run the following steps on all CDS nodes. The example shows cds1.

    1. For Red Hat Enterprise Linux 7, run the following command.

      [root@cds1 ~]# yum install glusterfs-server glusterfs-cli rh-rhua-selinux-policy
    2. For Red Hat Enterprise Linux 6, run the following command.

      [root@cds1 ~]# yum install xfsprogs glusterfs-server glusterfs-cli rh-rhua-selinux-policy
  2. Initialize the physical volume on the new disk.

    # pvcreate /dev/vdb
  3. Create a volume group on /dev/vdb.

    # vgcreate vg_gluster /dev/vdb
  4. Create a logical volume that uses all the free space in the volume group.

    # lvcreate -n lv_brick1 -l 100%FREE vg_gluster
  5. Format the logical volume with XFS.

    # mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_brick1
  6. Create a mount directory, mount the disk, create the brick directory, and then enable and start the glusterd service. On Red Hat Enterprise Linux 6, use the equivalent service and chkconfig commands instead of systemctl.

    # mkdir -p /export/xvdb
    # mount /dev/mapper/vg_gluster-lv_brick1 /export/xvdb
    # mkdir -p /export/xvdb/brick
    # systemctl enable glusterd.service
    # systemctl start glusterd.service
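
    Optionally, confirm that the brick file system is mounted with the expected size; this is a quick check and not part of the original steps.

    # df -h /export/xvdb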
  7. Add the following entry to /etc/fstab on each CDS node so that the brick is mounted automatically at boot.

    /dev/mapper/vg_gluster-lv_brick1 /export/xvdb xfs defaults 0 0
  8. Run the following steps on only one CDS node, for example, cds1.

    [root@cds1 ~]# gluster peer probe cds2.example.com
    peer probe: success.
    [root@cds1 ~]# gluster peer probe cds3.example.com
    peer probe: success.
    Note

    Make sure that DNS resolution is working. The following output shows the error you get when name resolution fails.

    [root@cds1 ~]# gluster peer probe <cds[23].example.com hostnames>
     peer probe: failed: Probe returned with Transport endpoint is not connected
    Important

    The Gluster peer probe might also fail with "peer probe: failed: Probe returned with Transport endpoint is not connected" when there is a communication or port issue. A workaround to this failure is to disable the firewalld service. If you prefer not to disable the firewall, you can allow the correct ports as described in Chapter 3. Verifying Port Access of the Red Hat Gluster Storage Administration Guide 3.3.
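
    For example, on a RHEL 7 node running firewalld, you can open the Gluster management ports and one brick port per brick instead of disabling the firewall. This is a minimal sketch for a single brick per node; adjust the brick port range to your layout and run the commands on every node.

    # firewall-cmd --permanent --add-port=24007-24008/tcp
    # firewall-cmd --permanent --add-port=49152/tcp
    # firewall-cmd --reload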

  9. Before proceeding, verify that the peer connections were successful. You should see output similar to the following.

    [root@cds1 ~]# gluster peer status
    Number of Peers: 2
    Hostname: cds2.example.com
    Uuid: 6cb9fdf9-1486-4db5-a438-24c64f47e63e
    State: Peer in Cluster (Connected)
    Hostname: cds3.example.com
    Uuid: 5e0eea6c-933d-48ff-8c2f-0228effa6b82
    State: Peer in Cluster (Connected)

  10. Create a replicated volume named rhui_content_0 with one brick on each node, and then start it.

    [root@cds1 ~]# gluster volume create rhui_content_0 replica 3 \
    cds1.example.com:/export/xvdb/brick cds2.example.com:/export/xvdb/brick \
    cds3.example.com:/export/xvdb/brick
    volume create: rhui_content_0: success: please start the volume to access data
    [root@cds1 ~]# gluster volume start rhui_content_0
    volume start: rhui_content_0: success
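
    Optionally, verify the new volume before configuring the RHUA; this is an extra check rather than part of the procedure. The gluster volume info command shows the replica configuration, and gluster volume status shows whether each brick is online.

    [root@cds1 ~]# gluster volume info rhui_content_0
    [root@cds1 ~]# gluster volume status rhui_content_0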

5.1.2. Extend the Storage Volume

If the volume is approaching its capacity, you can extend it by adding a new disk of the same size to each CDS node and running the following commands on each CDS node. The name of the device file that represents the disk depends on the technology you use, but if the first disk was /dev/vdb, the second can be /dev/vdc. Replace the device file name in the following procedure with the actual name.

  1. Initialize the physical volume on the new disk.

    # pvcreate /dev/vdc
  2. Extend the volume group with the new physical volume.

    # vgextend vg_gluster /dev/vdc
  3. Extend the logical volume itself by the amount of free disk space on the new physical volume.

    # lvextend -l +100%FREE vg_gluster/lv_brick1 /dev/vdc
  4. Expand the file system.

    # xfs_growfs /dev/mapper/vg_gluster-lv_brick1
  5. Run df on the RHUA node to confirm that the mounted Gluster Storage volume has the expected new size.
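
    For example, assuming the shared volume is mounted at the default RHUI mount point, shown here as /var/lib/rhui/remote_share (substitute your actual mount point if it differs):

    # df -h /var/lib/rhui/remote_share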

5.2. Create NFS Storage

You can set up an NFS server for the content managed by RHUI on the RHUA node or on a dedicated machine. The following procedure describes how to set up storage using NFS.

Important

Using a dedicated machine allows the CDS nodes, and more importantly your RHUI clients, to continue working if something happens to the RHUA node. Red Hat recommends that you set up the NFS server on a dedicated machine.

  1. Install the nfs-utils package on the node hosting the NFS server, on the RHUA node (if it differs), and also on all your CDS nodes.

    # yum install nfs-utils
  2. Edit the /etc/exports file on the NFS server. Choose a suitable directory to hold the RHUI content and allow the RHUA node and all your CDS nodes to access it. For example, to use the /export directory and make it available to all systems in the example.com domain, add the following line to /etc/exports.

    /export *.example.com(rw,no_root_squash)
  3. Create the directory for the RHUI content as defined in /etc/exports.

    # mkdir /export
  4. Start and enable the NFS service.

    1. On RHEL 7, run the following commands.

      # systemctl start nfs
      # systemctl start rpcbind
      # systemctl enable nfs-server
      # systemctl enable rpcbind
    2. On RHEL 6, run the following commands.

      # service nfs start
      # service rpcbind start
      # chkconfig nfs on
      # chkconfig rpcbind on
      Note

      If you are using an existing NFS server and the NFS service is already running, use restart instead of start.
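
      Alternatively, if you only changed /etc/exports on a server that is already running, you can re-export the shares without restarting the service:

      # exportfs -r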

  5. Test your setup. On a CDS node, run the following commands, which assume that the NFS server has been set up on a machine named filer.example.com.

    # mkdir /mnt/nfstest
    # mount filer.example.com:/export /mnt/nfstest
    # touch /mnt/nfstest/test

    You should not get any error messages.

  6. To clean up after this test, remove the test file, unmount the remote share, and remove the test directory.

    # rm /mnt/nfstest/test
    # umount /mnt/nfstest
    # rmdir /mnt/nfstest

    Your NFS server is now set up. For more information about NFS server configuration, see Section 8.7. NFS Server Configuration for RHEL 7 or Chapter 9. Network File System (NFS) for RHEL 6.
