Chapter 9. Configuring shared storage
The RHUA and CDS nodes require a shared storage volume, which can be accessed by both, to store content managed by RHUI.
Currently, RHUI supports the following storage solutions:
- Network File System (NFS)
- Ceph File System (CephFS)
9.1. Configuring shared storage using NFS
When using Network File System (NFS) as your shared storage, you must set up an NFS server either on the RHUA node or on a dedicated machine.
The following instructions explain how to create, configure, and verify NFS to work with RHUI.
Setting up your NFS server on a dedicated machine allows the CDS nodes and your RHUI clients to continue working even if something happens to the RHUA node.
Prerequisites
- Ensure you have root access to the NFS server
- Ensure you have root access to the RHUA node
- Ensure you have root access to all the CDS nodes you plan to use.
Procedure
Install the nfs-utils package on the node hosting the NFS server, the RHUA node (if it differs from the NFS node), and all the CDS nodes.
# dnf install nfs-utils
Create a suitable directory to hold all the RHUI content.
# mkdir /export
Allow your RHUA and CDS nodes access to the directory by editing the /etc/exports file and adding the following line:
/export rhua.example.com(rw,no_root_squash) cds01.example.com(rw,no_root_squash) cds02.example.com(rw,no_root_squash)
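In the export entry for /export, rw grants the listed clients read-write access, and no_root_squash stops the server from mapping the clients' root user to an unprivileged user, so root on the RHUA and CDS nodes can manage content on the share. A commented sketch of the entry:

```
# /etc/exports
# <export path>  <client>(<options>) ...
#   rw             - allow the client to read and write
#   no_root_squash - do not map the client's root user to an unprivileged user
/export rhua.example.com(rw,no_root_squash) cds01.example.com(rw,no_root_squash) cds02.example.com(rw,no_root_squash)
```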
Start and enable the NFS service.
# systemctl start nfs-server
# systemctl start rpcbind
# systemctl enable nfs-server
# systemctl enable rpcbind
Note: If the NFS service is already running, use the restart command instead of the start command.
Verification
To test whether an NFS server is set up on a machine named filer.example.com, run the following commands on a CDS node:
# mkdir /mnt/nfstest
# mount filer.example.com:/export /mnt/nfstest
# touch /mnt/nfstest/test
Your setup is working properly if you do not get any error messages.
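To make the mount persist across reboots on the RHUA and CDS nodes, you can also add an entry to /etc/fstab. The mount point below is illustrative only; use the mount point your RHUI installation expects. The _netdev option makes the system wait for the network before attempting the mount.

```
# /etc/fstab - example entry (server name and mount point are illustrative)
filer.example.com:/export  /mnt/nfstest  nfs  defaults,_netdev  0 0
```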
9.2. Configuring shared storage using CephFS
When using Ceph File System (CephFS) as your shared storage, you must set up a file system and share it over the network. RHUI treats the shared file system as a simple mount point, which you can mount on the file systems of the RHUA and CDS nodes.
Do not set up the Ceph shared file storage on the RHUI nodes. You must configure CephFS on independent dedicated machines.
The following instructions explain how to verify whether an existing Ceph file system can work with RHUI.
This document does not provide instructions to set up Ceph shared file storage. For instructions on how to do so, consult your system administrator.
Prerequisites
Ensure you have the following identification information:
- The IP address and port of the host where the cluster monitor daemon for the Ceph distributed file system is running.
- As a CephFS system administrator, run the command ceph mon dump on the Ceph master node. You can find the IP address and port listed as <ceph_monip>:<ceph_port>.
- The Ceph username, usually admin.
- The Ceph file system name.
- As a CephFS system administrator, run the command ceph fs ls on the Ceph master node. You can find the file system name listed as <cephfs_name>.
- The Ceph secret key.
- As a CephFS system administrator, run the command ceph auth get client.admin on the Ceph master node. You can find the secret key listed as <ceph_secretkey>.
- Ensure you have root access to the RHUA node and all the CDS nodes you plan to use.
- Enable the Ceph Tools repository on the RHUA and CDS nodes.
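The mount command used in the verification step combines the monitor address, username, secret key, and file system name gathered above. As a minimal sketch, the following assembles the mount source and option string from those values; every concrete value here is a placeholder, not a real address or credential:

```shell
#!/bin/sh
# Placeholder values standing in for the identifiers from the prerequisites:
CEPH_MONIP=192.0.2.10                # from: ceph mon dump
CEPH_PORT=6789                       # from: ceph mon dump
CEPH_USER=admin                      # the Ceph username
CEPH_SECRETKEY=AQDexampleKeyOnly==   # from: ceph auth get client.admin
CEPHFS_NAME=cephfs                   # from: ceph fs ls

# The mount source is <ceph_monip>:<ceph_port> followed by the root path "/".
MOUNT_SOURCE="${CEPH_MONIP}:${CEPH_PORT}:/"
# The -o option string carries the username, secret key, and file system name.
MOUNT_OPTS="name=${CEPH_USER},secret=${CEPH_SECRETKEY},fs=${CEPHFS_NAME}"

echo "mount -t ceph ${MOUNT_SOURCE} /mnt/mycephfs_test -o ${MOUNT_OPTS}"
```

Substituting your real values into the printed command gives the mount invocation used in the verification below.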
Procedure
On the RHUA and CDS nodes, install the ceph-common package:
# dnf install ceph-common
Verification
To test whether a Ceph File Share is available and whether RHUI can use it, run the following commands on the RHUA node or on one of the CDS nodes:
# mkdir /mnt/mycephfs_test
# mount -t ceph <ceph_monip>:<ceph_port>:/ /mnt/mycephfs_test -o name=admin,secret=<ceph_secretkey>,fs=<cephfs_name>
# touch /mnt/mycephfs_test/testfile
# ls /mnt/mycephfs_test
Your setup is working properly if you do not get any error messages.
Clean up the test mount point.
# rm /mnt/mycephfs_test/testfile
# umount /mnt/mycephfs_test
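If the verification succeeds and you want the CephFS share mounted persistently, an /etc/fstab entry along these lines could be used; the mount point is an example, and the <...> values come from the prerequisites. Because /etc/fstab is often world-readable, prefer the secretfile= option pointing at a root-only file over embedding the secret key directly:

```
# /etc/fstab - example entry (mount point is illustrative)
<ceph_monip>:<ceph_port>:/  /mnt/mycephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,fs=<cephfs_name>,_netdev  0 0
```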