Chapter 4. Exporting NFS shares
As a system administrator, you can use the NFS server to share a directory on your system over a network.
4.1. Introduction to NFS
This section explains the basic concepts of the NFS service.
A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables you to consolidate resources onto centralized servers on the network.
The NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.
4.2. Supported NFS versions
This section lists versions of NFS supported in Red Hat Enterprise Linux and their features.
Currently, Red Hat Enterprise Linux 8 supports the following major versions of NFS:
- NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data.
- NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires the rpcbind service, supports Access Control Lists (ACLs), and utilizes stateful operations.
NFS version 2 (NFSv2) is no longer supported by Red Hat.
Default NFS version
The default NFS version in Red Hat Enterprise Linux 8 is 4.2. NFS clients attempt to mount using NFSv4.2 by default, and fall back to NFSv4.1 when the server does not support NFSv4.2. The mount later falls back to NFSv4.0 and then to NFSv3.
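A client can also pin a specific NFS version instead of relying on this negotiation. As a sketch, an /etc/fstab entry might look like the following; the server name and paths are placeholders:

```
# /etc/fstab fragment (sketch; server.example.com and both paths are placeholders)
server.example.com:/export  /mnt/nfs  nfs  vers=4.2  0 0
```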
Features of minor NFS versions
Following are the features of NFSv4.2 in Red Hat Enterprise Linux 8:
- Server-side copy
Enables the NFS client to efficiently copy data without wasting network resources using the copy_file_range() system call.
- Sparse files
Enables files to have one or more holes, which are unallocated or uninitialized data blocks consisting only of zeroes. The lseek() operation in NFSv4.2 supports seek_hole() and seek_data(), which enables applications to map out the location of holes in the sparse file.
- Space reservation
Permits storage servers to reserve free space, which prevents servers from running out of space. NFSv4.2 supports the allocate() operation to reserve space, the deallocate() operation to unreserve space, and the fallocate() operation to preallocate or deallocate space in a file.
- Labeled NFS
- Enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system.
- Layout enhancements
Adds the layoutstats() operation, which enables some Parallel NFS (pNFS) servers to collect better performance statistics.
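The sparse-file behavior described above can be observed on a local file system with standard tools; the following is a minimal sketch, assuming a Linux system with GNU coreutils:

```shell
# Create a file that is one large 1 MiB "hole": the apparent size is 1 MiB,
# but (almost) no blocks are allocated on disk.
f=$(mktemp)
truncate -s 1M "$f"
stat -c 'apparent: %s bytes, allocated: %b blocks' "$f"
rm -f "$f"
```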
Following are the features of NFSv4.1:
- Enhances performance and security of the network, and also includes client-side support for pNFS.
- No longer requires a separate TCP connection for callbacks, which allows an NFS server to grant delegations even when it cannot contact the client: for example, when NAT or a firewall interferes.
- Provides exactly once semantics (except for reboot operations), preventing a previous issue whereby certain operations sometimes returned an inaccurate result if a reply was lost and the operation was sent twice.
4.3. The TCP and UDP protocols in NFSv3 and NFSv4
NFSv4 requires the Transmission Control Protocol (TCP) running over an IP network.
NFSv3 could also use the User Datagram Protocol (UDP) in earlier Red Hat Enterprise Linux versions. In Red Hat Enterprise Linux 8, NFS over UDP is no longer supported. By default, UDP is disabled in the NFS server.
4.4. Services required by NFS
This section lists system services that are required for running an NFS server or mounting NFS shares. Red Hat Enterprise Linux starts these services automatically.
Red Hat Enterprise Linux uses a combination of kernel-level support and service processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers. To share or mount NFS file systems, the following services work together depending on which version of NFS is implemented:
- nfsd
The NFS server kernel module that services requests for shared NFS file systems.
- rpcbind
Accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. The rpcbind service responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.
- rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the nfs-mountd service replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
- rpc.nfsd
This process enables explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs-server service.
- lockd
This is a kernel thread that runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which enables NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.
- rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. The rpc-statd service is started automatically by the nfs-server service, and does not require user configuration. This is not used with NFSv4.
- rpc.rquotad
This process provides user quota information for remote users. The rpc-rquotad service is started automatically by the nfs-server service and does not require user configuration.
- rpc.idmapd
This process provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the Domain parameter should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the same as the DNS domain name, this parameter can be skipped. The client and server must agree on the NFSv4 mapping domain for ID mapping to function properly.
Only the NFSv4 server uses rpc.idmapd, which is started by the nfs-idmapd service. The NFSv4 client uses the keyring-based nfsidmap utility, which is called by the kernel on demand to perform ID mapping. If there is a problem with nfsidmap, the client falls back to using rpc.idmapd.
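As an illustration of the Domain parameter described above, a minimal /etc/idmapd.conf sketch follows; example.com is a placeholder value, and the client and server must use the same domain:

```
[General]
# NFSv4 mapping domain -- must match on both client and server (placeholder)
Domain = example.com
```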
The RPC services with NFSv4
The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with the rpcbind, lockd, and rpc-statd services. The nfs-mountd service is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.
4.5. NFS host name formats
This section describes different formats that you can use to specify a host when mounting or exporting an NFS share.
You can specify the host in the following formats:
- Single machine
Either of the following:
- A fully-qualified domain name (that can be resolved by the server)
- A host name (that can be resolved by the server)
- An IP address.
- IP networks
Either of the following formats is valid:
- The a.b.c.d/z format, where a.b.c.d is the network and z is the number of bits in the netmask; for example, 192.168.0.0/24.
- The a.b.c.d/netmask format, where a.b.c.d is the network and netmask is the netmask; for example, 192.168.100.8/255.255.255.0.
- Netgroups
Use the @group-name format, where group-name is the NIS netgroup name.
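The two IP-network formats above are equivalent ways of writing the same mask. The following bash sketch (a helper written for this document, not part of the NFS tool set) converts a /z prefix length to its dotted netmask:

```shell
# Convert a CIDR prefix length (0-32) to the equivalent dotted netmask.
prefix_to_netmask() {
  local mask=$(( 0xffffffff ^ ((1 << (32 - $1)) - 1) ))
  echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_netmask 24   # -> 255.255.255.0, so a.b.c.d/24 equals a.b.c.d/255.255.255.0
prefix_to_netmask 16   # -> 255.255.0.0
```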
4.6. NFS server configuration
This section describes the syntax and options of two ways to configure exports on an NFS server:
- Manually editing the /etc/exports configuration file
- Using the exportfs utility on the command line
4.6.1. The /etc/exports configuration file
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. The file follows these syntax rules:
- Blank lines are ignored.
- To add a comment, start a line with the hash mark (#).
- You can wrap long lines with a backslash (\).
- Each exported file system should be on its own individual line.
- Any lists of authorized hosts placed after an exported file system must be separated by space characters.
- Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:
export host(options)
It is also possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each host name followed by its respective options (in parentheses), as in:
export host1(options1) host2(options2) host3(options3)
In this structure:
- export: The directory being exported
- host: The host or network to which the export is being shared
- options: The options to be used for host
Example 4.1. A simple /etc/exports file
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it:
/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS uses default options.
The format of the /etc/exports file is very precise, particularly in regards to use of the space character. Remember to always separate exported file systems from hosts and hosts from one another with a space character. However, there should be no other space characters in the file except on comment lines.
For example, the following two lines do not mean the same thing:
/home bob.example.com(rw)
/home bob.example.com (rw)
The first line allows only users from
bob.example.com read and write access to the
/home directory. The second line allows users from
bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.
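The difference can be made mechanical. The following bash sketch (an illustrative helper, not part of the NFS tool set) tokenizes an exports line the way the server does: options attached to a host apply to that host, while a free-standing (rw) token applies to the anonymous wildcard:

```shell
# Print how each token of an /etc/exports line is interpreted.
explain_exports_line() {
  local line=$1 dir rest tok host opts
  read -r dir rest <<<"$line"
  echo "export: $dir"
  for tok in $rest; do
    case $tok in
      \(*\))                # bare "(options)": applies to everyone
        echo "  host: <world>  options: ${tok//[()]/}" ;;
      *\(*\))               # "host(options)": options bound to that host
        host=${tok%%(*}; opts=${tok#*(}
        echo "  host: $host  options: ${opts%)}" ;;
      *)                    # bare host: default options apply
        echo "  host: $tok  options: <defaults>" ;;
    esac
  done
}

explain_exports_line '/home bob.example.com(rw)'    # rw bound to the host
explain_exports_line '/home bob.example.com (rw)'   # rw granted to everyone
```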
The default options for an export entry are:
- ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read and write), specify the rw option.
- sync
The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the async option.
- wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option, which is available only if the default sync option is also specified.
- root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server assigns them the user ID nobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify the no_root_squash option.
To squash every remote user (including root), use the all_squash option. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:
export host(anonuid=uid,anongid=gid)
Here, uid and gid are the user ID number and group ID number, respectively. The anonuid and anongid options enable you to create a special user and group account for remote NFS users to share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the
no_acl option when exporting the file system.
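Putting several of these options together, a hypothetical /etc/exports entry that maps every remote user to one dedicated local account might look as follows; the share path, client pattern, and ID numbers are placeholders:

```
/srv/share  *.example.com(rw,all_squash,anonuid=5001,anongid=5001)
```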
Default and overridden options
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example,
192.168.0.3 can mount
/another/exported/directory/ read and write, and all writes to disk are asynchronous.
4.6.2. The exportfs utility
The exportfs utility enables the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the
exportfs utility writes the exported file systems to
/var/lib/nfs/xtab. Because the
nfs-mountd service refers to the
xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.
Common exportfs options
The following is a list of commonly-used options available for exportfs:
- -r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/etab. This option effectively refreshes the export list with any changes made to /etc/exports.
- -a
Causes all directories to be exported or unexported, depending on what other options are passed to exportfs. If no other options are specified, exportfs exports all file systems specified in /etc/exports.
- -o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of exported file systems.
- -i
Ignores /etc/exports; only options given from the command line are used to define exported file systems.
- -u
Unexports all shared directories. The command exportfs -ua suspends NFS file sharing while keeping all NFS services up. To re-enable NFS sharing, use exportfs -r.
- -v
Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.
If no options are passed to the
exportfs utility, it displays a list of currently exported file systems.
- For information on different methods for specifying host names, see Section 4.5, “NFS host name formats”.
- For a complete list of export options, see the exports(5) man page.
- For more information about the exportfs utility, see the exportfs(8) man page.
4.7. NFS and rpcbind
This section explains the purpose of the
rpcbind service, which is required by NFSv3.
The rpcbind service maps Remote Procedure Call (RPC) services to the ports on which they listen. RPC processes notify
rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts
rpcbind on the server with a particular RPC program number. The
rpcbind service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on
rpcbind to make all connections with incoming client requests,
rpcbind must be available before any of these services start.
Access control rules for
rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons.
For the precise syntax of access control rules, see the rpcbind(8) man page.
4.8. Installing NFS
This procedure installs all packages necessary to mount or export NFS shares.
# yum install nfs-utils
4.9. Starting the NFS server
This procedure describes how to start the NFS server, which is required to export NFS shares.
For servers that support NFSv2 or NFSv3 connections, the rpcbind service must be running. To verify that rpcbind is active, use the following command:
$ systemctl status rpcbind
If the service is stopped, start and enable it:
# systemctl enable --now rpcbind
To start the NFS server and enable it to start automatically at boot, use the following command:
# systemctl enable --now nfs-server
4.10. Troubleshooting NFS and rpcbind
Because the rpcbind service provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The
rpcinfo utility shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for
rpcbind, use the following command:
# rpcinfo -p
Example 4.2. rpcinfo -p command output
The following is sample output from this command:
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100024 1 udp 37769 status
100024 1 tcp 49349 status
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100021 1 udp 56691 nlockmgr
100021 3 udp 56691 nlockmgr
100021 4 udp 56691 nlockmgr
100021 1 tcp 46193 nlockmgr
100021 3 tcp 46193 nlockmgr
100021 4 tcp 46193 nlockmgr
If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port.
In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working:
# systemctl restart nfs-server
For more information and a list of rpcinfo options, see the rpcinfo(8) man page.
To configure an NFSv4-only server, which does not require rpcbind, see Section 4.14, “Configuring an NFSv4-only server”.
4.11. Configuring the NFS server to run behind a firewall
NFS requires the rpcbind service, which dynamically assigns ports for RPC services and can cause issues for configuring firewall rules. This procedure describes how to configure the NFS server to work behind a firewall.
To allow clients to access NFS shares behind a firewall, set which ports the RPC services run on in the [mountd] section of the /etc/nfs.conf file:
[mountd]
port=port-number
This adds the -p port-number option to the rpc.mount command line: rpc.mount -p port-number.
To allow clients to access NFS shares behind a firewall, configure the firewall by running the following commands on the NFS server:
firewall-cmd --permanent --add-service mountd
firewall-cmd --permanent --add-service rpc-bind
firewall-cmd --permanent --add-service nfs
firewall-cmd --permanent --add-port=<mountd-port>/tcp
firewall-cmd --permanent --add-port=<mountd-port>/udp
firewall-cmd --reload
In the commands, replace <mountd-port> with the intended port or a port range. When specifying a port range, use the --add-port=<mountd-port>-<mountd-port>/udp syntax.
To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
This step is not needed for NFSv4.1 or higher, and the other ports for mountd, statd, and lockd are not required in a pure NFSv4 environment.
To specify the ports to be used by the RPC service nlockmgr, set the port number for the nlm_tcpport and nlm_udpport options in the /etc/modprobe.d/lockd.conf file.
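For example, a /etc/modprobe.d/lockd.conf sketch that pins the lock manager to fixed ports; the port numbers are arbitrary placeholders, so pick ports that are unused on your system:

```
options lockd nlm_tcpport=32803 nlm_udpport=32769
```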
Restart the NFS server:
# systemctl restart nfs-server
If NFS fails to start, check
/var/log/messages. Commonly, NFS fails to start if you specify a port number that is already in use.
Confirm the changes have taken effect:
# rpcinfo -p
4.12. Exporting RPC quota through a firewall
If you export a file system that uses disk quotas, you can use the quota Remote Procedure Call (RPC) service to provide disk quota data to NFS clients.
Enable and start the rpc-rquotad service:
# systemctl enable --now rpc-rquotad
Note: The rpc-rquotad service, if enabled, is started automatically after starting the nfs-server service.
To make the quota RPC service accessible behind a firewall, the TCP (or UDP, if UDP is enabled) port 875 needs to be open. The default port number is defined in the /etc/services file.
You can override the default port number by appending -p port-number to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
By default, remote hosts can only read quotas. If you want to allow clients to set quotas, append the -S option to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
Restart rpc-rquotad for the changes in the /etc/sysconfig/rpc-rquotad file to take effect:
# systemctl restart rpc-rquotad
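Combining the two settings above, a /etc/sysconfig/rpc-rquotad sketch that moves the service to a fixed port and permits clients to set quotas; the port number is a placeholder:

```
RPCRQUOTADOPTS="-p 30001 -S"
```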
4.13. Enabling NFS over RDMA (NFSoRDMA)
The remote direct memory access (RDMA) service works automatically in Red Hat Enterprise Linux 8 if there is RDMA-capable hardware present.
Install the rdma-core package:
# yum install rdma-core
To enable automatic loading of NFSoRDMA server modules, add the SVCRDMA_LOAD=yes option on a new line in the /etc/rdma/rdma.conf configuration file.
The rdma=20049 option in the [nfsd] section of the /etc/nfs.conf file specifies the port number on which the NFSoRDMA service listens for clients. The RFC 5667 standard specifies that servers must listen on port 20049 when providing NFSv4 services over RDMA.
The /etc/rdma/rdma.conf file contains a line that sets the XPRTRDMA_LOAD=yes option by default, which requests the rdma service to load the NFSoRDMA client module.
Restart the nfs-server service:
# systemctl restart nfs-server
- The RFC 5667 standard: https://tools.ietf.org/html/rfc5667.
4.14. Configuring an NFSv4-only server
As an NFS server administrator, you can configure the NFS server to support only NFSv4, which minimizes the number of open ports and running services on the system.
4.14.1. Benefits and drawbacks of an NFSv4-only server
This section explains the benefits and drawbacks of configuring the NFS server to only support NFSv4.
By default, the NFS server supports NFSv3 and NFSv4 connections in Red Hat Enterprise Linux 8. However, you can also configure NFS to support only NFS version 4.0 and later. This minimizes the number of open ports and running services on the system, because NFSv4 does not require the
rpcbind service to listen on the network.
When your NFS server is configured as NFSv4-only, clients attempting to mount shares using NFSv3 fail with an error like the following:
Requested NFS version or transport protocol is not supported.
Optionally, you can also disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not necessary in the NFSv4-only case.
The effects of disabling these additional options are:
- Clients that attempt to mount shares from your server using NFSv3 become unresponsive.
- The NFS server itself is unable to mount NFSv3 file systems.
4.14.2. Configuring the NFS server to support only NFSv4
This procedure describes how to configure your NFS server to support only NFS version 4.0 and later.
Disable NFSv3 by adding the following lines to the [nfsd] section of the /etc/nfs.conf configuration file:
[nfsd]
vers3=no
Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not necessary in the NFSv4-only case, by disabling the related services:
# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
Restart the NFS server:
# systemctl restart nfs-server
The changes take effect as soon as you start or restart the NFS server.
4.14.3. Verifying the NFSv4-only configuration
This procedure describes how to verify that your NFS server is configured in the NFSv4-only mode by using the netstat utility to list services listening on the TCP and UDP protocols:
# netstat --listening --tcp --udp
Example 4.3. Output on an NFSv4-only server
The following is an example
netstat output on an NFSv4-only server; listening for NSM is also disabled. Here, nfs is the only listening NFS service:
# netstat --listening --tcp --udp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
udp        0      0 localhost.locald:bootpc 0.0.0.0:*
Example 4.4. Output before configuring an NFSv4-only server
In comparison, the netstat output before configuring an NFSv4-only server includes the sunrpc and mountd services:
# netstat --listening --tcp --udp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:40189           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:46813           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:sunrpc          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:mountd          0.0.0.0:*               LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
tcp6       0      0 [::]:51227              [::]:*                  LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
tcp6       0      0 [::]:sunrpc             [::]:*                  LISTEN
tcp6       0      0 [::]:mountd             [::]:*                  LISTEN
tcp6       0      0 [::]:45043              [::]:*                  LISTEN
udp        0      0 localhost:1018          0.0.0.0:*
udp        0      0 localhost.locald:bootpc 0.0.0.0:*
udp        0      0 0.0.0.0:mountd          0.0.0.0:*
udp        0      0 0.0.0.0:46672           0.0.0.0:*
udp        0      0 0.0.0.0:sunrpc          0.0.0.0:*
udp        0      0 0.0.0.0:33494           0.0.0.0:*
udp6       0      0 [::]:33734              [::]:*
udp6       0      0 [::]:mountd             [::]:*
udp6       0      0 [::]:sunrpc             [::]:*
udp6       0      0 [::]:40243              [::]:*
4.15. Related information
- The Linux NFS wiki: https://linux-nfs.org/wiki/index.php/Main_Page