Chapter 12. Management of NFS-Ganesha gateway using the Ceph Orchestrator

As a storage administrator, you can use the Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway. Cephadm deploys NFS-Ganesha using a predefined RADOS pool and an optional namespace.

Note

Red Hat supports CephFS exports only over the NFS v4.0+ protocol.

This section covers the following administrative tasks:

12.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.

12.2. Creating the NFS-Ganesha cluster using the Ceph Orchestrator

You can create an NFS-Ganesha cluster using the mgr/nfs module of the Ceph Orchestrator. This module deploys the NFS cluster using Cephadm in the backend.

This creates a common recovery pool for all NFS-Ganesha daemons, a new user based on the cluster ID, and a common NFS-Ganesha configuration RADOS object.

For each daemon, a new user and a common configuration are created in the pool. Although all the clusters have different namespaces with respect to the cluster names, they use the same recovery pool.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Enable the mgr/nfs module:

    Example

    [ceph: root@host01 /]# ceph mgr module enable nfs

  3. Create the cluster:

    Syntax

    ceph nfs cluster create CLUSTER_NAME ["HOST_NAME_1,HOST_NAME_2,HOST_NAME_3"]

    The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1 is an optional string signifying the hosts on which to deploy the NFS-Ganesha daemons.

    Example

    [ceph: root@host01 /]# ceph nfs cluster create nfsganesha "host01,host02"

    This creates an NFS-Ganesha cluster nfsganesha with one daemon each on host01 and host02.

Verification

  • List the cluster details:

    Example

    [ceph: root@host01 /]# ceph nfs cluster ls

  • Show NFS-Ganesha cluster information:

    Syntax

    ceph nfs cluster info CLUSTER_NAME

    Example

    [ceph: root@host01 /]# ceph nfs cluster info nfsganesha

Additional Resources

12.3. Deploying the NFS-Ganesha gateway using the command line interface

You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the placement specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.

Note

Red Hat supports CephFS exports only over the NFS v4.0+ protocol.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Create the RADOS pool and namespace, and enable the application:

    Syntax

    ceph osd pool create POOL_NAME
    ceph osd pool application enable POOL_NAME freeform/rgw/rbd/cephfs/nfs
    rbd pool init -p POOL_NAME

    Example

    [ceph: root@host01 /]# ceph osd pool create nfs-ganesha
    [ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha nfs
    [ceph: root@host01 /]# rbd pool init -p nfs-ganesha

  3. Deploy NFS-Ganesha gateway using placement specification in the command line interface:

    Syntax

    ceph orch apply nfs SERVICE_ID --pool POOL_NAME --namespace NAMESPACE --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"

    Example

    [ceph: root@host01 /]# ceph orch apply nfs foo --pool nfs-ganesha --namespace nfs-ns --placement="2 host01 host02"

    This deploys an NFS-Ganesha service foo with one daemon each on host01 and host02.
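The placement specification passed to --placement is simply the number of daemons followed by the host names. As an illustration, the invocation in this step can be assembled programmatically; this is a minimal sketch, and the helper name build_nfs_apply_command is hypothetical, not part of any Ceph tooling.

```python
def build_nfs_apply_command(service_id, pool, namespace, daemons, hosts):
    """Assemble a 'ceph orch apply nfs' invocation from its parts.

    The placement specification is the daemon count followed by the
    host names, separated by spaces.
    """
    placement = " ".join([str(daemons)] + list(hosts))
    return (
        f"ceph orch apply nfs {service_id} --pool {pool} "
        f'--namespace {namespace} --placement="{placement}"'
    )

cmd = build_nfs_apply_command("foo", "nfs-ganesha", "nfs-ns", 2, ["host01", "host02"])
print(cmd)
```

The resulting string matches the example command shown above.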

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=nfs

Additional Resources

12.4. Deploying the NFS-Ganesha gateway using the service specification

You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the service specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Create the RADOS pool, namespace, and enable RBD:

    Syntax

    ceph osd pool create POOL_NAME
    ceph osd pool application enable POOL_NAME rbd
    rbd pool init -p POOL_NAME

    Example

    [ceph: root@host01 /]# ceph osd pool create nfs-ganesha
    [ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha rbd
    [ceph: root@host01 /]# rbd pool init -p nfs-ganesha

  3. Navigate to the following directory:

    Syntax

    cd /var/lib/ceph/DAEMON_PATH/

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/nfs/

    If the nfs directory does not exist, create it in that path.

  4. Create the nfs.yml file:

    Example

    [ceph: root@host01 nfs]# touch nfs.yml

  5. Edit the nfs.yml file to include the following details:

    Syntax

    service_type: nfs
    service_id: SERVICE_ID
    placement:
      hosts:
        - HOST_NAME_1
        - HOST_NAME_2
    spec:
      pool: POOL_NAME
      namespace: NAMESPACE

    Example

    service_type: nfs
    service_id: foo
    placement:
      hosts:
        - host01
        - host02
    spec:
      pool: nfs-ganesha
      namespace: nfs-ns

  6. Deploy NFS-Ganesha gateway using service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yml

    Example

    [ceph: root@host01 nfs]# ceph orch apply -i nfs.yml
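If you generate nfs.yml from automation, the service specification is plain YAML with a fixed shape. The following minimal sketch renders the specification shown above as a string; render_nfs_spec is a hypothetical helper used only to illustrate the layout.

```python
def render_nfs_spec(service_id, hosts, pool, namespace):
    """Render an NFS service specification in the layout shown above."""
    host_lines = "\n".join(f"    - {host}" for host in hosts)
    return (
        "service_type: nfs\n"
        f"service_id: {service_id}\n"
        "placement:\n"
        "  hosts:\n"
        f"{host_lines}\n"
        "spec:\n"
        f"  pool: {pool}\n"
        f"  namespace: {namespace}\n"
    )

spec = render_nfs_spec("foo", ["host01", "host02"], "nfs-ganesha", "nfs-ns")
print(spec)
```

Write the result to the nfs.yml file and apply it with ceph orch apply -i as in the step above.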

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=nfs

Additional Resources

12.5. Updating the NFS-Ganesha cluster using the Ceph Orchestrator

You can update the NFS-Ganesha cluster by changing the placement of the daemons on the hosts using the Ceph Orchestrator with Cephadm in the backend.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.
  • NFS-Ganesha cluster created using the mgr/nfs module.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Update the cluster:

    Syntax

    ceph orch apply nfs CLUSTER_NAME ["HOST_NAME_1,HOST_NAME_2,HOST_NAME_3"]

    The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1 is an optional string signifying the hosts on which to place the updated NFS-Ganesha daemons.

    Example

    [ceph: root@host01 /]# ceph orch apply nfs nfsganesha "host02"

    This updates the nfsganesha cluster on host02.

Verification

  • List the cluster details:

    Example

    [ceph: root@host01 /]# ceph nfs cluster ls

  • Show NFS-Ganesha cluster information:

    Syntax

    ceph nfs cluster info CLUSTER_NAME

    Example

    [ceph: root@host01 /]# ceph nfs cluster info nfsganesha

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=nfs

Additional Resources

12.6. Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator

You can view the information of the NFS-Ganesha cluster using the Ceph Orchestrator. You can get information about all the NFS-Ganesha clusters, or about a specific cluster, including its port, IP address, and the names of the hosts on which the cluster is created.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.
  • NFS-Ganesha cluster created using the mgr/nfs module.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. View the NFS-Ganesha cluster information:

    Syntax

    ceph nfs cluster info CLUSTER_NAME

    Example

    [ceph: root@host01 /]# ceph nfs cluster info nfsganesha
    
    {
        "nfsganesha": [
            {
                "hostname": "host02",
                "ip": [
                    "10.74.251.164"
                ],
                "port": 2049
            }
        ]
    }
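The output of ceph nfs cluster info is JSON, so it can be consumed by scripts. The following sketch parses the sample output above to extract the (IP address, port) endpoints that NFS clients would mount; the parsing logic is illustrative and assumes the output shape shown.

```python
import json

# Sample output of 'ceph nfs cluster info nfsganesha', as shown above.
raw = """
{
    "nfsganesha": [
        {
            "hostname": "host02",
            "ip": [
                "10.74.251.164"
            ],
            "port": 2049
        }
    ]
}
"""

info = json.loads(raw)
# One (ip, port) endpoint per backend daemon of the cluster.
endpoints = [(daemon["ip"][0], daemon["port"]) for daemon in info["nfsganesha"]]
print(endpoints)  # [('10.74.251.164', 2049)]
```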

Additional Resources

12.7. Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator

With the Ceph Orchestrator, you can fetch the NFS-Ganesha cluster logs. You need to be on the node where the service is deployed.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Cephadm installed on the nodes where NFS is deployed.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • NFS-Ganesha cluster created using the mgr/nfs module.

Procedure

  1. As a root user, fetch the FSID of the storage cluster:

    Example

    [root@host03 ~]# cephadm ls

    Copy the FSID and the name of the service.

  2. Fetch the logs:

    Syntax

    cephadm logs --fsid FSID --name SERVICE_NAME

    Example

    [root@host03 ~]# cephadm logs --fsid 499829b4-832f-11eb-8d6d-001a4a000635 --name nfs.foo.host03

Additional Resources

12.8. Setting custom NFS-Ganesha configuration using the Ceph Orchestrator

The NFS-Ganesha cluster is defined in default configuration blocks. Using the Ceph Orchestrator, you can customize the configuration, which takes precedence over the default configuration blocks.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.
  • NFS-Ganesha cluster created using the mgr/nfs module.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. The following is an example of the default configuration of NFS-Ganesha cluster:

    Example

    # {{ cephadm_managed }}
    NFS_CORE_PARAM {
            Enable_NLM = false;
            Enable_RQUOTA = false;
            Protocols = 4;
    }
    
    MDCACHE {
            Dir_Chunk = 0;
    }
    
    EXPORT_DEFAULTS {
            Attr_Expiration_Time = 0;
    }
    
    NFSv4 {
            Delegations = false;
            RecoveryBackend = 'rados_cluster';
            Minor_Versions = 1, 2;
    }
    
    RADOS_KV {
            UserId = "{{ user }}";
            nodeid = "{{ nodeid }}";
            pool = "{{ pool }}";
            namespace = "{{ namespace }}";
    }
    
    RADOS_URLS {
            UserId = "{{ user }}";
            watch_url = "{{ url }}";
    }
    
    RGW {
            cluster = "ceph";
            name = "client.{{ rgw_user }}";
    }
    
    %url    {{ url }}

  3. Customize the NFS-Ganesha cluster configuration. The following are two examples for customizing the configuration:

    • Change the log level:

      Example

      LOG {
       COMPONENTS {
           ALL = FULL_DEBUG;
       }
      }

    • Add custom export block:

      1. Create the user.

        Note

        The user specified in the FSAL blocks must have the proper capabilities for the NFS-Ganesha daemons to access the Ceph cluster.

        Syntax

        ceph auth get-or-create client.USER_NAME mon 'allow r' osd 'allow rw pool=POOL_NAME namespace=NFS_CLUSTER_NAME, allow rw tag cephfs data=FILE_SYSTEM_NAME' mds 'allow rw path=EXPORT_PATH'

        Example

        [ceph: root@host01 /]# ceph auth get-or-create client.nfstest1 mon 'allow r' osd 'allow rw pool=nfsganesha namespace=nfs_cluster_name, allow rw tag cephfs data=filesystem_name' mds 'allow rw path=export_path'

      2. Navigate to the following directory:

        Syntax

        cd /var/lib/ceph/DAEMON_PATH/

        Example

        [ceph: root@host01 /]# cd /var/lib/ceph/nfs/

        If the nfs directory does not exist, create it in that path.

      3. Create a new configuration file:

        Syntax

        touch PATH_TO_CONFIG_FILE

        Example

        [ceph: root@host01 nfs]#  touch nfs-ganesha.conf

      4. Edit the configuration file by adding the custom export block. This creates a single export that is managed by the Ceph NFS export interface.

        Syntax

        EXPORT {
          Export_Id = NUMERICAL_ID;
          Transports = TCP;
          Path = PATH_WITHIN_CEPHFS;
          Pseudo = BINDING;
          Protocols = 4;
          Access_Type = PERMISSIONS;
          Attr_Expiration_Time = 0;
          Squash = None;
          FSAL {
            Name = CEPH;
            Filesystem = "FILE_SYSTEM_NAME";
            User_Id = "USER_NAME";
            Secret_Access_Key = "USER_SECRET_KEY";
          }
        }

        Example

        EXPORT {
          Export_Id = 100;
          Transports = TCP;
          Path = /;
          Pseudo = /ceph/;
          Protocols = 4;
          Access_Type = RW;
          Attr_Expiration_Time = 0;
          Squash = None;
          FSAL {
            Name = CEPH;
            Filesystem = "filesystem name";
            User_Id = "user id";
            Secret_Access_Key = "secret key";
          }
        }

  4. Apply the new configuration to the cluster:

    Syntax

    ceph nfs cluster config set CLUSTER_NAME -i PATH_TO_CONFIG_FILE

    Example

    [ceph: root@host01 nfs]# ceph nfs cluster config set nfsganesha -i /root/nfs-ganesha.conf

    This also restarts the service for the custom configuration.
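When several exports follow the same pattern, the EXPORT block can be templated rather than written by hand. The sketch below renders a block in the format shown in this step; render_export_block is a hypothetical helper, and the argument values are placeholders, not real credentials.

```python
def render_export_block(export_id, path, pseudo, fs_name, user_id, secret_key,
                        access_type="RW"):
    """Render a custom EXPORT block in the format shown above."""
    return (
        "EXPORT {\n"
        f"  Export_Id = {export_id};\n"
        "  Transports = TCP;\n"
        f"  Path = {path};\n"
        f"  Pseudo = {pseudo};\n"
        "  Protocols = 4;\n"
        f"  Access_Type = {access_type};\n"
        "  Attr_Expiration_Time = 0;\n"
        "  Squash = None;\n"
        "  FSAL {\n"
        "    Name = CEPH;\n"
        f'    Filesystem = "{fs_name}";\n'
        f'    User_Id = "{user_id}";\n'
        f'    Secret_Access_Key = "{secret_key}";\n'
        "  }\n"
        "}\n"
    )

block = render_export_block(100, "/", "/ceph/", "cephfs", "nfstest1", "secret-key")
print(block)
```

The rendered text can be written to the configuration file created in the previous steps and applied with ceph nfs cluster config set.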

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=nfs

  • Verify the custom configuration:

    Syntax

    rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -

    Example

    [ceph: root@host01 /]# rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -

Additional Resources

12.9. Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator

Using the Ceph Orchestrator, you can reset the user-defined configuration to the default configuration.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.
  • NFS-Ganesha deployed using the mgr/nfs module.
  • Custom NFS-Ganesha cluster configuration is set up.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Reset the NFS-Ganesha configuration:

    Syntax

    ceph nfs cluster config reset CLUSTER_NAME

    Example

    [ceph: root@host01 /]# ceph nfs cluster config reset nfsganesha

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=nfs

  • Verify the custom configuration is deleted:

    Syntax

    rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -

    Example

    [ceph: root@host01 /]# rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -

Additional Resources

12.10. Deleting the NFS-Ganesha cluster using the Ceph Orchestrator

You can use the Ceph Orchestrator with Cephadm in the backend to delete the NFS-Ganesha cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor and OSD daemons are deployed.
  • NFS-Ganesha cluster created using the mgr/nfs module.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Delete the cluster:

    Syntax

    ceph nfs cluster delete CLUSTER_NAME

    The CLUSTER_NAME is an arbitrary string.

    Example

    [ceph: root@host01 /]# ceph nfs cluster delete nfsganesha
    NFS Cluster Deleted Successfully

Verification

  • List the cluster details:

    Example

    [ceph: root@host01 /]# ceph nfs cluster ls

Additional Resources

12.11. Removing the NFS-Ganesha gateway using the Ceph Orchestrator

You can remove the NFS-Ganesha gateway using the ceph orch rm command.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • At least one NFS-Ganesha gateway deployed on the hosts.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  3. Remove the service:

    Syntax

    ceph orch rm SERVICE_NAME

    Example

    [ceph: root@host01 /]# ceph orch rm nfs.foo

Verification

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps

    Example

    [ceph: root@host01 /]# ceph orch ps

Additional Resources