Configure OpenStack to Use an NFS Back End

This article describes how to configure the OpenStack volume service to use an existing NFS server as an additional back end. It also describes how to create a volume type that you can invoke to create volumes backed by the NFS share.


1. Introduction

This article describes how to manually configure the OpenStack Block Storage service (openstack-cinder-volume) to use an existing NFS server as an additional back end. It also describes how to create a volume type that you can invoke to create volumes backed by the NFS share.


Director vs. Manual Configuration

This article involves manually configuring the Block Storage service. When OpenStack is deployed through the Director, manual configurations are overwritten on subsequent cloud updates.
For enterprise deployments of Red Hat OpenStack Platform 7, we strongly recommend that you deploy OpenStack through the Director. The Director orchestrates all the settings required for a properly functioning deployment, especially when enabling high availability. Consequently, all service configurations (including Block Storage back ends) must be made through the Director to ensure they persist through future cloud updates.
For instructions on how to deploy OpenStack through the Director, see Director Installation and Usage. This guide also includes a section on Configuring NFS Storage through the Director.


Prerequisites

  • The NFS share that you will be using as a back end should already be properly configured. For instructions on how to deploy an NFS share, refer to the Administration Guide.
  • The node hosting the OpenStack volume service should have read/write access to the NFS share.
  • You have root access to the node hosting the OpenStack volume service.


Assumptions

  • Your OpenStack deployment was not provisioned through the Red Hat Enterprise Linux OpenStack Platform Installer.
  • Your OpenStack Block Storage service uses the default back end (which uses the back end name lvm, as deployed by Packstack).


2. Configure SELinux

On any client with SELinux enabled that requires access to NFS volumes on an instance, you must also enable the virt_use_nfs Boolean. To enable this Boolean (and make it persist through reboots), run the following command as root:
# setsebool -P virt_use_nfs on
Run this command on all client hosts that require access to NFS volumes on an instance. This includes all Compute nodes.


3. Configure the share

The first step in adding an NFS back end is defining the NFS share that the OpenStack volume service should use. To do so:
  1. Log in as root to the node hosting the OpenStack volume service.
  2. Create a new text file named nfs_share in the /etc/cinder/ directory:
    /etc/cinder/nfs_share
  3. Define the NFS share in /etc/cinder/nfs_share using the following format:
    HOST:SHARE
    Where:
    • HOST is the IP address or hostname of the NFS server.
    • SHARE is the absolute path of the NFS share exported on HOST.
  4. Set the root user and cinder group as the owner of /etc/cinder/nfs_share:
    # chown root:cinder /etc/cinder/nfs_share
  5. Finally, configure /etc/cinder/nfs_share to be readable by members of the cinder group:
    # chmod 0640 /etc/cinder/nfs_share
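The steps above can be sketched as a short script. The server address 192.168.1.200 and export path /exports are hypothetical stand-ins for your NFS server's details, and a temporary file stands in for /etc/cinder/nfs_share so the sketch can run without root; on the real node, also run the chown step shown in the comment.

```shell
# Sketch: create the share-definition file and restrict its mode.
# 192.168.1.200:/exports is a hypothetical HOST:SHARE entry.
share_file=$(mktemp)

# One HOST:SHARE entry per line.
echo '192.168.1.200:/exports' > "$share_file"

# On the real file, also set ownership: chown root:cinder /etc/cinder/nfs_share
chmod 0640 "$share_file"

# Sanity-check the HOST:SHARE format (host, colon, absolute path).
grep -Eq '^[^:]+:/.+' "$share_file" && echo "format OK"
```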

Tip

In a Red Hat Enterprise Linux 7.1 environment, you may experience a delay in the time it takes for the NFS share to mount. For details on the issue (along with a workaround), see https://access.redhat.com/solutions/1397363.


4. Create a new back end definition

By default, Packstack creates a back end definition for LVM in /etc/cinder/cinder.conf:
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=YOUR_IP
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=lvm
After defining the NFS share in /etc/cinder/nfs_share, you can configure an additional back end definition for it in /etc/cinder/cinder.conf. To do so:
  1. Log in as root to the node hosting the OpenStack volume service.
  2. Create a new definition for the NFS back end and set the volume service to use the file that defines the NFS share (namely, /etc/cinder/nfs_share):
    # openstack-config --set /etc/cinder/cinder.conf nfs nfs_shares_config /etc/cinder/nfs_share
    Here, we use nfs as the name of the back end definition (the section name in /etc/cinder/cinder.conf).
  3. Configure the volume service to use the NFS volume driver, namely cinder.volume.drivers.nfs.NfsDriver:
    # openstack-config --set /etc/cinder/cinder.conf nfs volume_driver cinder.volume.drivers.nfs.NfsDriver
  4. Define a volume back end name for the NFS back end (the following command uses the name nfsbackend):
    # openstack-config --set /etc/cinder/cinder.conf nfs volume_backend_name nfsbackend
  5. Add any mount options (MOUNTOPTIONS) you need to the nfs_mount_options configuration key:
    # openstack-config --set /etc/cinder/cinder.conf nfs nfs_mount_options MOUNTOPTIONS
At this point, the following section should now appear in /etc/cinder/cinder.conf:
[nfs]
nfs_shares_config = /etc/cinder/nfs_share
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfsbackend
nfs_mount_options = MOUNTOPTIONS
You can now enable the NFS back end. Back ends are enabled through the enabled_backends configuration key of /etc/cinder/cinder.conf. The default back end created by Packstack should already be listed there:
enabled_backends=lvm
Add the new NFS back end definition to this list, as in:
enabled_backends=lvm,nfs
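The enabled_backends change can also be scripted. The following is a minimal sketch that edits a temporary copy of the file so it can be run safely; on a real node you would target /etc/cinder/cinder.conf (or set the key with openstack-config), and the starting value enabled_backends=lvm is an assumption based on the default Packstack deployment.

```shell
# Sketch: append the nfs back end to the enabled_backends list in a
# stand-in copy of cinder.conf (real path: /etc/cinder/cinder.conf).
conf=$(mktemp)
printf '[DEFAULT]\nenabled_backends=lvm\n' > "$conf"

# Append ",nfs" to the existing enabled_backends line.
sed -i 's/^enabled_backends=lvm$/enabled_backends=lvm,nfs/' "$conf"

# Show the result: enabled_backends=lvm,nfs
grep '^enabled_backends=' "$conf"
```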
Once the NFS back end is enabled, restart the OpenStack volume service:
# openstack-service restart cinder-volume


5. Create a volume type for the NFS back end

The new NFS back end is now available, but cannot yet be used to create new volumes. To allow new volumes to use this NFS back end, you must first create a volume type for it.
  1. View the existing volume types. By default, a volume type should already exist for the lvm back end (namely, iscsi):
    # cinder type-list
    +--------------------------------------+-------+
    |                  ID                  |  Name |
    +--------------------------------------+-------+
    | f8d31dc8-a20e-410c-81bf-6b0a971c61a0 | iscsi |
    +--------------------------------------+-------+
  2. Create a new volume type named nfstype for the NFS back end:
    # cinder type-create nfstype 
  3. Configure the nfstype volume type to use the NFS back end through the back end’s name (namely, nfsbackend):
    # cinder type-key nfstype set volume_backend_name=nfsbackend
  4. Verify that the new type was created and configured correctly:
    # cinder type-list
    +--------------------------------------+---------+
    |                  ID                  |   Name  |
    +--------------------------------------+---------+
    | bbff44b5-52b1-43d6-beb4-83aa2d20bc59 | nfstype |
    | f8d31dc8-a20e-410c-81bf-6b0a971c61a0 |  iscsi  |
    +--------------------------------------+---------+
    # cinder extra-specs-list
    +--------------------------------------+---------+-----------------------------------------+
    |                  ID                  |   Name  |               extra_specs               |
    +--------------------------------------+---------+-----------------------------------------+
    | bbff44b5-52b1-43d6-beb4-83aa2d20bc59 | nfstype | {u'volume_backend_name': u'nfsbackend'} |
    | f8d31dc8-a20e-410c-81bf-6b0a971c61a0 |  iscsi  |     {u'volume_backend_name': u'lvm'}    |
    +--------------------------------------+---------+-----------------------------------------+

Note

You can also create and configure volume types through the dashboard. For more information, refer to the Administration Guide.


6. Test the new NFS back end

To test the new NFS back end, create a new volume named nfsvolume while invoking the volume type nfstype:
# cinder create --volume_type nfstype --display_name nfsvolume 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-01-06T05:14:09.271114      |
| display_description |                 None                 |
|     display_name    |              nfsvolume               |
|      encrypted      |                False                 |
|          id         | 0cd7ac45-622a-47b0-9503-7025bbedc8ed |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |               nfstype                |
+---------------------+--------------------------------------+
Once the volume is successfully created, check the NFS share (on the NFS server). A corresponding volume (whose name contains the ID of the newly-created volume) should appear there:
# ls -lah /exports
drwxrwxrwx.  2 root      root      4.0K Jan  6 15:14 .
drwxr-xr-x. 18 root      root      4.0K Jan  5 04:03 ..
-rw-rw-rw-.  1 nfsnobody nfsnobody 1.0G Jan  6 15:14 volume-0cd7ac45-622a-47b0-9503-7025bbedc8ed
