23.4. Expanding Volumes

Before expanding volumes in a network-encrypted Red Hat Gluster Storage trusted storage pool, you must ensure that you meet the prerequisites listed in Section 23.1, “Prerequisites”.

23.4.1. Certificate Signed with a Common Certificate Authority

Adding a server to a storage pool is simple if the servers all use a common Certificate Authority.
  1. Copy the /etc/ssl/glusterfs.ca file from one of the existing servers to the /etc/ssl/ directory on the new server.
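    For example, assuming the existing server is server1 and the new server is servernew (both hypothetical hostnames), you could run the following from server1:
    # scp /etc/ssl/glusterfs.ca servernew:/etc/ssl/glusterfs.ca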
  2. If you are using management encryption, create the /var/lib/glusterd/secure-access file.
    # touch /var/lib/glusterd/secure-access
  3. Start glusterd on the new peer.
    # service glusterd start
  4. Add the common name of the new server to the auth.ssl-allow list for all volumes which have encryption enabled.
    # gluster volume set VOLNAME auth.ssl-allow servernew

    Note

    The gluster volume set command does not append to the existing value of an option. To append the new name to the list, retrieve the existing list using the gluster volume info command, append the new name to it, and set the option again using the gluster volume set command.
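    For example, assuming the existing list contains the hypothetical names server1 and server2:
    # gluster volume info VOLNAME | grep auth.ssl-allow
    auth.ssl-allow: server1,server2
    # gluster volume set VOLNAME auth.ssl-allow 'server1,server2,servernew'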
  5. Run gluster peer probe [server] to add additional servers to the trusted storage pool. For more information on adding servers to the trusted storage pool, see Chapter 4, Adding Servers to the Trusted Storage Pool.

23.4.2. Self-signed Certificates

Using self-signed certificates requires server downtime when adding a new server to the trusted storage pool, because the CA list cannot be reloaded dynamically. To add a new server:
  1. Generate the private key and self-signed certificate on the new server using the steps listed in Section 23.1, “Prerequisites”.
  2. Copy the following files:
    1. On an existing server, copy the /etc/ssl/glusterfs.ca file, append the content of the new server's certificate to it, and distribute it to all servers, including the new server, as shown in the example after these sub-steps.
    2. On an existing client, copy the /etc/ssl/glusterfs.ca file, append the content of the new server's certificate to it, and distribute it to all clients.
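    For example, on one existing server, assuming the new server's certificate has been copied over as newserver.pem and the other hosts are server2 and servernew (all hypothetical names):
      # cat newserver.pem >> /etc/ssl/glusterfs.ca
      # scp /etc/ssl/glusterfs.ca server2:/etc/ssl/
      # scp /etc/ssl/glusterfs.ca servernew:/etc/ssl/
    Distribute the same updated glusterfs.ca file to all clients in the same way.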
  3. Stop all gluster-related processes on all servers.
    # pkill glusterfs
  4. Create the /var/lib/glusterd/secure-access file on the new server if management encryption is enabled in the trusted storage pool.
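    As in Section 23.4.1, “Certificate Signed with a Common Certificate Authority”:
    # touch /var/lib/glusterd/secure-access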
  5. Start glusterd on the new peer.
    # service glusterd start
  6. Add the common name of the new server to the auth.ssl-allow list for all volumes which have encryption enabled.

    Note

    If you set the auth.ssl-allow option to *, any TLS-authenticated client can mount and access the volume from the application side. Hence, either set the option's value to * or provide the common names of the clients as well as the nodes in the trusted storage pool.
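    For example, assuming hypothetical common names server1, server2, and servernew for the pool nodes, and client1 for a client:
    # gluster volume set VOLNAME auth.ssl-allow 'server1,server2,servernew,client1'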
  7. Restart all the glusterfs processes on existing servers and clients by performing the following steps.
    1. Unmount the volume on all the clients.
      # umount mount-point
    2. Stop all volumes.
      # gluster volume stop VOLNAME
    3. Restart glusterd on all the servers.
      # service glusterd restart
    4. Start the volumes.
      # gluster volume start VOLNAME
    5. Mount the volume on all the clients. For example, to manually mount a volume and access data using Native client, use the following command:
      # mount -t glusterfs server1:/test-volume /mnt/glusterfs
  8. Peer probe the new server to add it to the trusted storage pool. For more information on peer probe, see Chapter 4, Adding Servers to the Trusted Storage Pool.
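    For example, assuming the new server's hostname is servernew:
    # gluster peer probe servernew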