Chapter 12. Native CephFS post-deployment configuration and verification

You must complete some post-deployment configuration tasks before you create CephFS shares, grant user access, and mount CephFS shares.

  • Map the Networking service (neutron) storage network to the isolated data center storage network.
  • Make the storage provider network available only to trusted tenants through custom role-based access control (RBAC) rules. Do not share the storage provider network globally.
  • Create a private share type.
  • Grant access to specific trusted tenants.

After you complete these steps, the tenant compute instances can create, allow access to, and mount native CephFS shares.

Deploying native CephFS as a back end of the Shared File Systems service (manila) adds the following new elements to the overcloud environment:

  • Storage provider network
  • Ceph MDS service on the Controller nodes

The cloud administrator must verify the stability of the native CephFS environment before making it available to service users.

For more information about using the Shared File Systems service with native CephFS, see Configuring the Shared File Systems service (manila) in the Storage Guide.

12.1. Creating the storage provider network

You must map the new isolated storage network to a Networking service (neutron) provider network. Compute VMs attach to this provider network to access native CephFS share export locations.

For information about network security with the Shared File Systems service (manila), see Hardening the Shared File Systems Service in Hardening Red Hat OpenStack Platform.

Procedure

The openstack network create command defines the configuration for the storage neutron network.

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. On an undercloud node, create the storage network:

    (overcloud) [stack@undercloud-0 ~]$ openstack network create Storage --provider-network-type vlan --provider-physical-network datacentre --provider-segment 30

    You can enter this command with the following options:

    • For the --provider-physical-network option, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
    • For the --provider-segment option, use the VLAN ID that is set for the Storage isolated network in your network environment file. If you did not customize the isolated network definitions, the default environment file is /usr/share/openstack-tripleo-heat-templates/network_data.yaml and the default VLAN ID associated with the Storage network is 30.
    • For the --provider-network-type option, use the value vlan.
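
    After you create the network, you can optionally confirm that it carries the expected provider attributes. This is a minimal check, not part of the official procedure, and it assumes the network is named Storage as in the previous step:

    (overcloud) [stack@undercloud-0 ~]$ openstack network show Storage \
    -c provider:network_type -c provider:physical_network -c provider:segmentation_id
    +---------------------------+------------+
    | Field                     | Value      |
    +---------------------------+------------+
    | provider:network_type     | vlan       |
    | provider:physical_network | datacentre |
    | provider:segmentation_id  | 30         |
    +---------------------------+------------+

    The values should match the options that you supplied to the openstack network create command.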

12.2. Configuring the storage provider network

Create a corresponding StorageSubnet on the Networking service (neutron) provider network. Ensure that the subnet range matches the storage_subnet that is defined in the undercloud, and that the allocation range for the storage subnet does not overlap with the allocation range of the corresponding undercloud subnet.

Requirements

  • The starting and ending IP range for the allocation pool
  • The subnet IP range

Procedure

  1. From an undercloud node, enter the following command:

    [stack@undercloud ~]$ source ~/overcloudrc
  2. Use the sample command to provision the subnet. Update the values to suit your environment.

    (overcloud) [stack@undercloud-0 ~]$ openstack subnet create \
    --allocation-pool start=172.17.3.10,end=172.17.3.149 \
    --dhcp \
    --network Storage \
    --subnet-range 172.17.3.0/24 \
    --gateway none StorageSubnet
    • For the --allocation-pool option, replace the start=172.17.3.10,end=172.17.3.149 IP values with the IP values for your network.
    • For the --subnet-range option, replace the 172.17.3.0/24 subnet range with the subnet range for your network.
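
    Before you continue, you can optionally confirm the subnet configuration. This is a minimal check that assumes the subnet is named StorageSubnet, as in the sample command:

    (overcloud) [stack@undercloud-0 ~]$ openstack subnet show StorageSubnet \
    -c allocation_pools -c cidr -c gateway_ip

    Verify that the allocation pool does not overlap with the allocation range of the corresponding undercloud storage subnet.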

12.3. Configuring role-based access control for the storage provider network

After you identify the trusted tenants or projects that can use the storage network, configure role-based access control (RBAC) rules for them through the Networking service (neutron).

Requirements

  • Names of the projects that need access to the storage network

Procedure

  1. From an undercloud node, enter the following command:

    [stack@undercloud ~]$ source ~/overcloudrc
  2. Identify the projects that require access:

    (overcloud) [stack@undercloud-0 ~]$ openstack project list
    +----------------------------------+---------+
    | ID                               | Name    |
    +----------------------------------+---------+
    | 06f1068f79d2400b88d1c2c33eacea87 | demo    |
    | 5038dde12dfb44fdaa0b3ee4bfe487ce | service |
    | 820e2d9c956644c2b1530b514127fd0d | admin   |
    +----------------------------------+---------+
  3. Create network RBAC rules with the desired projects:

    (overcloud) [stack@undercloud-0 ~]$ openstack network rbac create \
    --action access_as_shared Storage \
    --type network \
    --target-project demo

    Repeat this step for all of the projects that require access to the storage network.
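
    To review which projects currently have access to the network, you can list the RBAC rules and inspect an individual rule. This is a minimal sketch that assumes the network is named Storage:

    (overcloud) [stack@undercloud-0 ~]$ openstack network rbac list --type network
    (overcloud) [stack@undercloud-0 ~]$ openstack network rbac show <rbac_rule_id>
    • Replace <rbac_rule_id> with the ID of a rule from the list output. The rule details include the target project and the access_as_shared action.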

12.4. Configuring a default share type

You can use the Shared File Systems service (manila) to define share types for the creation of shares with specific settings. Share types work like Block Storage volume types. Each type has associated settings, for example, extra specifications. When you invoke the type during share creation, the settings apply to the shared file system.

To secure the native CephFS back end against untrusted users, do not create a default share type. When a default share type does not exist, users are forced to specify a share type, and trusted users can use a custom private share type to which they have exclusive access rights.

If you must create a default share type, you can set an extra specification that steers provisioning away from the native CephFS back end, as shown in step 2 of the following procedure.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Set an extra specification on the share type:

    (overcloud) [stack@undercloud-0 ~]$ manila type-create default false
    (overcloud) [stack@undercloud-0 ~]$ manila type-key default set share_backend_name='s!= cephfs'
  3. Create a private share type and provide trusted tenants with access to this share type:

    (overcloud) [stack@undercloud-0 ~]$ manila type-create --is-public false nativecephfstype false
    (overcloud) [stack@undercloud-0 ~]$ manila type-key nativecephfstype set share_backend_name='cephfs'
    (overcloud) [stack@undercloud-0 ~]$ manila type-access-add nativecephfstype <trusted_tenant_project_id>
    • Replace <trusted_tenant_project_id> with the ID of the trusted tenant.
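
    You can optionally confirm that the private share type is visible only to the trusted projects and that its extra specifications are set. This is a minimal check that assumes the type names used in this procedure:

    (overcloud) [stack@undercloud-0 ~]$ manila type-access-list nativecephfstype
    (overcloud) [stack@undercloud-0 ~]$ manila extra-specs-list

    The type-access-list output lists the project IDs that you granted access to, and extra-specs-list shows the share_backend_name setting for each share type.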

For more information about share types, see Creating share types in the Storage Guide.

12.5. Verifying creation of isolated storage network

The network_data.yaml file used to deploy native CephFS as a Shared File Systems service back end creates the storage VLAN. Use this procedure to confirm that the storage VLAN exists.

Procedure

  1. Log in to one of the Controller nodes in the overcloud.
  2. Check the connected networks and verify the existence of the VLAN as set in the network_data.yaml file:

    $ ip a
    8: vlan30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
        link/ether 52:9c:82:7a:d4:75 brd ff:ff:ff:ff:ff:ff
        inet 172.17.3.144/24 brd 172.17.3.255 scope global vlan30
           valid_lft forever preferred_lft forever
        inet6 fe80::509c:82ff:fe7a:d475/64 scope link
           valid_lft forever preferred_lft forever
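
    The interface name implies the VLAN ID, but you can also display the VLAN tag explicitly. This is a minimal check that assumes the interface is named vlan30, as in the example output:

    $ ip -d link show vlan30

    If the isolated network uses the default definitions, the detailed output includes vlan protocol 802.1Q id 30.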

12.6. Verifying Ceph MDS service

Use the systemctl status command to verify the Ceph MDS service status.

Procedure

  • Enter the following command on all Controller nodes to check the status of the MDS container:

    $ systemctl status ceph-mds@<controller_host>

    Example:

    $ systemctl status ceph-mds@controller-0.service

    ceph-mds@controller-0.service - Ceph MDS
       Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
     Main PID: 65066 (conmon)
        Tasks: 16 (limit: 204320)
       Memory: 38.2M
       CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
             └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>
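
    In addition to checking the systemd unit, you can query the Ceph cluster for the MDS state. This is a minimal sketch that assumes the Ceph Monitor container is named ceph-mon-controller-0, as in the next section:

    $ sudo podman exec ceph-mon-controller-0 ceph mds stat
    cephfs:1 {0=controller-2=up:active} 2 up:standby

    One MDS rank must be active; the remaining MDS daemons report as standby.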

12.7. Verifying Ceph cluster status

Verify the Ceph cluster status to confirm that the cluster is active.

Procedure

  1. Log in to any Controller node.
  2. Enter the following command to query the Ceph Monitor daemon for the cluster status:

    $ sudo podman exec ceph-mon-controller-0 ceph -s
      cluster:
        id:     670dc288-cd36-4772-a4fc-47287f8e2ebf
        health: HEALTH_OK
    
      services:
        mon: 3 daemons, quorum controller-1,controller-2,controller-0 (age 14h)
        mgr: controller-1(active, since 8w), standbys: controller-0, controller-2
        mds: cephfs:1 {0=controller-2=up:active} 2 up:standby
        osd: 15 osds: 15 up (since 8w), 15 in (since 8w)
    
      task status:
        scrub status:
            mds.controller-2: idle
    
      data:
        pools: 6 pools, 192 pgs
        objects: 309 objects, 1.6 GiB
        usage: 21 GiB used, 144 GiB / 165 GiB avail
        pgs: 192 active+clean
    Note

    There is one active MDS and two MDSs on standby.

  3. To see a detailed status of the Ceph File System, enter the following command:

    $ sudo ceph fs ls
    
    name: cephfs metadata pool: manila_metadata, data pools: [manila_data]
    Note

    In this example output, cephfs is the name of the Ceph File System that director creates to host CephFS shares that users create through the Shared File Systems service.
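
    For a per-rank view of the MDS daemons and the pools that back the file system, you can also run ceph fs status. This is a minimal sketch that assumes the same Ceph Monitor container name as in step 2:

    $ sudo podman exec ceph-mon-controller-0 ceph fs status cephfs

    The output lists the active and standby MDS daemons and the manila_metadata and manila_data pools.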

12.8. Verifying manila-share service status

Verify the status of the manila-share service.

Procedure

  1. From one of the Controller nodes, confirm that the openstack-manila-share service started:

    $ sudo pcs status resources | grep manila
    
    * Container bundle: openstack-manila-share [cluster.common.tag/rhosp16-openstack-manila-share:pcmklatest]:
    * openstack-manila-share-podman-0	(ocf::heartbeat:podman):	Started controller-0
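
    If the resource is not reported as Started, you can check the container directly on the same node. This is a minimal sketch; the container name filter is an assumption based on the resource name shown above:

    $ sudo podman ps --filter name=manila-share

    A running openstack-manila-share container confirms that the service started on that Controller node.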

12.9. Verifying that the manila-api service acknowledges scheduler and share services

Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.

Procedure

  1. Log in to the undercloud.
  2. Enter the following command:

    $ source /home/stack/overcloudrc
  3. Enter the following command to confirm that manila-scheduler and manila-share are enabled:

    $ manila service-list
    
    +----+------------------+------------------+------+---------+-------+----------------------------+
    | Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
    +----+------------------+------------------+------+---------+-------+----------------------------+
    | 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
    | 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
    +----+------------------+------------------+------+---------+-------+----------------------------+
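
    As a final check that the manila-api service responds to user requests, you can issue a simple read operation. This is a minimal sketch; if no shares exist yet, the command returns an empty list:

    $ manila list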