23.3. Configuring Network Encryption for an existing Trusted Storage Pool

You can configure network encryption for an existing Red Hat Gluster Storage Trusted Storage Pool for both I/O encryption and management encryption.

23.3.1. Enabling I/O encryption for a Volume

Enable I/O encryption between the servers and clients:
  1. Unmount the volume on all the clients.
    # umount mount-point
  2. Stop the volume.
    # gluster volume stop VOLNAME
  3. Set the list of common names for clients allowed to access the volume. Be sure to include the common names of all the servers.
    # gluster volume set VOLNAME auth.ssl-allow 'server1,server2,server3,client1,client2,client3'

    Note

    If the auth.ssl-allow option is set to *, any TLS-authenticated client can mount and access the volume from the application side. Hence, either set the option's value to * or provide the common names of the clients as well as the nodes in the trusted storage pool.
  4. Enable Transport Layer Security on the volume by setting the client.ssl and server.ssl options to on.
    # gluster volume set VOLNAME client.ssl on
    # gluster volume set VOLNAME server.ssl on
  5. Start the volume.
    # gluster volume start VOLNAME
  6. Mount the volume from the new clients. For example, to manually mount a volume and access data using Native client, use the following command:
    # mount -t glusterfs server1:/test-volume /mnt/glusterfs
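The steps above can be sketched as a single shell script. This is a dry run, assuming a hypothetical volume name, mount point, and common-name list: each command is printed rather than executed, so replace the placeholders and remove the wrapper before using it on a real cluster.

```shell
#!/bin/sh
# Dry-run sketch of enabling I/O encryption on an existing volume.
# VOLNAME, the mount point, and the common-name list are placeholders.
VOLNAME=test-volume
SSL_ALLOW='server1,server2,server3,client1,client2,client3'

run() { echo "# $*"; }   # dry run: print each command instead of executing it

run umount /mnt/glusterfs                                # on all clients
run gluster volume stop "$VOLNAME"
run gluster volume set "$VOLNAME" auth.ssl-allow "$SSL_ALLOW"
run gluster volume set "$VOLNAME" client.ssl on
run gluster volume set "$VOLNAME" server.ssl on
run gluster volume start "$VOLNAME"
run mount -t glusterfs server1:/"$VOLNAME" /mnt/glusterfs # on each client
```

Note that auth.ssl-allow lists the servers' common names alongside the clients', as required by step 3.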

23.3.2. Enabling Management Encryption

Although Red Hat Gluster Storage can be configured for I/O encryption alone, management encryption is recommended. To enable management encryption on an existing installation with running servers and clients, schedule downtime for the volumes, applications, clients, and other end users.
You cannot currently change between unencrypted and encrypted connections dynamically. Bricks and other local services on the servers and clients do not receive notifications from glusterd if they are running when the switch to management encryption is made.
  1. Unmount all the volumes on all the clients.
    # umount mount-point
  2. If you are using the NFS Ganesha or Samba service, stop the service. For more information regarding NFS Ganesha, see Section 6.2.3, “NFS Ganesha”. For more information regarding Samba, see Section 6.3, “SMB”.
  3. If shared storage is being used, unmount the shared storage on all nodes.
    # umount /var/run/gluster/shared_storage

    Note

    Services that depend on shared storage, such as snapshots and geo-replication, may not work until it is remounted.
  4. Stop all the volumes including the shared storage.
    # gluster volume stop VOLNAME
  5. Stop glusterd on all servers.
    # service glusterd stop
  6. Stop all gluster-related processes on all servers.
    # pkill glusterfs
  7. Create the /var/lib/glusterd/secure-access file on all servers and clients.
    # touch /var/lib/glusterd/secure-access
  8. Start glusterd on all the servers.
    # service glusterd start
  9. Start all the volumes including shared storage.
    # gluster volume start VOLNAME
  10. Mount the shared storage if it was used earlier.
    # mount -t glusterfs <hostname>:/gluster_shared_storage /run/gluster/shared_storage
  11. If you are using the NFS Ganesha or Samba service, start the service. For more information regarding NFS Ganesha, see Section 6.2.3, “NFS Ganesha”. For more information regarding Samba, see Section 6.3, “SMB”.
  12. Mount the volume on all the clients. For example, to manually mount a volume and access data using Native client, use the following command:
    # mount -t glusterfs server1:/test-volume /mnt/glusterfs
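The management-encryption steps above can likewise be sketched as one script. Again this is a dry run under assumed placeholders (hostname server1, volume test-volume, Native-client mount point /mnt/glusterfs): each command is printed, not executed, and the comments mark which hosts each step runs on.

```shell
#!/bin/sh
# Dry-run sketch of enabling management encryption on an existing pool.
# Hostnames, VOLNAME, and mount points are placeholders for your environment.
VOLNAME=test-volume
SHARED=/run/gluster/shared_storage

run() { echo "# $*"; }   # dry run: print each command instead of executing it

run umount /mnt/glusterfs                       # on all clients, every volume
run umount /var/run/gluster/shared_storage      # on all nodes, if used
run gluster volume stop "$VOLNAME"              # repeat for every volume
run service glusterd stop                       # on all servers
run pkill glusterfs                             # on all servers
run touch /var/lib/glusterd/secure-access       # on all servers AND clients
run service glusterd start                      # on all servers
run gluster volume start "$VOLNAME"             # repeat for every volume
run mount -t glusterfs server1:/gluster_shared_storage "$SHARED"
run mount -t glusterfs server1:/"$VOLNAME" /mnt/glusterfs
```

The presence of the empty /var/lib/glusterd/secure-access file is what switches glusterd and the clients to encrypted management connections, which is why every server and client must be restarted after it is created.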