Chapter 3. Mounting NFS shares

As a system administrator, you can mount remote NFS shares on your system to access shared data.

3.1. NFS host name formats

This section describes different formats that you can use to specify a host when mounting or exporting an NFS share.

You can specify the host in the following formats:

Single machine

Any of the following:

  • A fully qualified domain name (that can be resolved by the server)
  • A host name (that can be resolved by the server)
  • An IP address
IP networks

Either of the following formats is valid:

  • a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask; for example, 192.168.0.0/24.
  • a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask; for example, 192.168.100.8/255.255.255.0.
Netgroups
The @group-name format, where group-name is the NIS netgroup name.
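
As an illustration, a hypothetical /etc/exports entry on the server could combine these formats in one line (the export path, client names, network, and netgroup name below are example values only):

    /exports/data  client1.example.com(rw)  192.168.0.0/24(ro)  @lab-hosts(ro)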

3.2. Configuring an NFSv3 client to run behind a firewall

The procedure to configure an NFSv3 client to run behind a firewall is similar to the procedure to configure an NFSv3 server to run behind a firewall.

If the machine you are configuring is both an NFS client and an NFS server, follow the procedure described in Configuring an NFSv3 server with optional NFSv4 support.

The following procedure describes how to configure a machine that is an NFS client only to run behind a firewall.

Procedure

  1. To allow the NFS server to perform callbacks to the NFS client when the client is behind a firewall, add the rpc-bind service to the firewall by running the following command on the NFS client:

    firewall-cmd --permanent --add-service rpc-bind
  2. Specify the ports to be used by the RPC service nlockmgr in the /etc/nfs.conf file as follows:

    [lockd]
    
    port=port-number
    udp-port=udp-port-number

    Alternatively, you can specify nlm_tcpport and nlm_udpport in the /etc/modprobe.d/lockd.conf file.

  3. Open the specified ports in the firewall by running the following commands on the NFS client:

    firewall-cmd --permanent --add-port=<lockd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<lockd-udp-port>/udp
  4. Add static ports for rpc.statd by editing the [statd] section of the /etc/nfs.conf file as follows:

    [statd]
    
    port=port-number
  5. Open the added ports in the firewall by running the following commands on the NFS client:

    firewall-cmd --permanent --add-port=<statd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<statd-udp-port>/udp
  6. Reload the firewall configuration:

    firewall-cmd --reload
  7. Restart the rpc-statd service:

    # systemctl restart rpc-statd.service

    Alternatively, if you specified the lockd ports in the /etc/modprobe.d/lockd.conf file:

    1. Update the current values of /proc/sys/fs/nfs/nlm_tcpport and /proc/sys/fs/nfs/nlm_udpport:

      # sysctl -w fs.nfs.nlm_tcpport=<tcp-port>
      # sysctl -w fs.nfs.nlm_udpport=<udp-port>
    2. Restart the rpc-statd service:

      # systemctl restart rpc-statd.service
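
For example, assuming you choose 32803/TCP and 32769/UDP for nlockmgr and 662 for rpc.statd (these port numbers are illustrative examples, not defaults), the /etc/nfs.conf entries from steps 2 and 4 would look like this:

    [lockd]
    port=32803
    udp-port=32769

    [statd]
    port=662

With these example values, the firewall commands in steps 3 and 5 would open 32803/tcp, 32769/udp, 662/tcp, and 662/udp.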

3.3. Configuring an NFSv4 client to run behind a firewall

Perform this procedure only if the client is using NFSv4.0. In that case, it is necessary to open a port for NFSv4.0 callbacks.

This procedure is not needed for NFSv4.1 or higher because in the later protocol versions the server performs callbacks on the same connection that was initiated by the client.

Procedure

  1. To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client as follows:

    # echo "fs.nfs.nfs_callback_tcpport = <callback-port>" >/etc/sysctl.d/90-nfs-callback-port.conf
    # sysctl -p /etc/sysctl.d/90-nfs-callback-port.conf
  2. Open the specified port in the firewall by running the following command on the NFS client:

    firewall-cmd --permanent --add-port=<callback-port>/tcp
  3. Reload the firewall configuration:

    firewall-cmd --reload
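
Optionally, verify the configuration by listing the ports in the active firewall configuration; the callback port you added (for example, 32764/tcp) should appear in the output:

    # firewall-cmd --list-ports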

3.4. Discovering NFS exports

This procedure discovers which file systems a given NFSv3 or NFSv4 server exports.

Procedure

  • With any server that supports NFSv3, use the showmount utility:

    $ showmount --exports my-server
    
    Export list for my-server
    /exports/foo
    /exports/bar
  • With any server that supports NFSv4, mount the root directory and look around:

    # mount my-server:/ /mnt/
    # ls /mnt/
    
    exports
    
    # ls /mnt/exports/
    
    foo
    bar

On servers that support both NFSv4 and NFSv3, both methods work and give the same results.

Additional resources

  • showmount(8) man page

3.5. Mounting an NFS share with mount

Mount an NFS share exported from a server by using the mount utility.

Warning

If your NFS clients have the same short host name, you can experience conflicts in your NFSv4 clientid and their sudden expiration. To avoid any possible sudden expiration of your NFSv4 clientid, use unique host names for NFS clients, or configure an identifier on each container, depending on the system you are using. For more information, see the NFSv4 clientid was expired suddenly due to use same hostname on several NFS clients Knowledgebase article.

Procedure

  • To mount an NFS share, use the following command:

    # mount -t nfs -o options host:/remote/export /local/directory

    This command uses the following variables:

    options
    A comma-delimited list of mount options.
    host
    The host name, IP address, or fully qualified domain name of the server exporting the file system you want to mount.
    /remote/export
    The file system or directory being exported from the server, that is, the directory you want to mount.
    /local/directory
    The client location where /remote/export is mounted.
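
For example, the following command mounts the /nfs/projects export from a hypothetical server named server.example.com on the local directory /mnt/projects by using NFSv4.2 with read-only access (the server name, paths, and options are illustrative):

    # mount -t nfs -o nfsvers=4.2,ro server.example.com:/nfs/projects /mnt/projects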

Additional resources

  • mount(8) man page

3.6. Setting up pNFS SCSI on the client

This procedure configures an NFS client to mount a pNFS SCSI layout.

Prerequisites

  • The NFS server is configured to export an XFS file system over pNFS SCSI.

Procedure

  • On the client, mount the exported XFS file system using NFS version 4.1 or higher:

    # mount -t nfs -o nfsvers=4.1 host:/remote/export /local/directory

    Do not mount the XFS file system directly without NFS.

3.7. Checking pNFS SCSI operations from the client using mountstats

This procedure uses the /proc/self/mountstats file to monitor pNFS SCSI operations from the client.

Procedure

  1. List the per-mount operation counters:

    # cat /proc/self/mountstats \
          | awk '/scsi_lun_0/,/^$/' \
          | egrep 'device|READ|WRITE|LAYOUT'
    
    device 192.168.122.73:/exports/scsi_lun_0 mounted on /mnt/rhel7/scsi_lun_0 with fstype nfs4 statvers=1.1
        nfsv4:  bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=LAYOUT_SCSI
                READ: 0 0 0 0 0 0 0 0
               WRITE: 0 0 0 0 0 0 0 0
            READLINK: 0 0 0 0 0 0 0 0
             READDIR: 0 0 0 0 0 0 0 0
           LAYOUTGET: 49 49 0 11172 9604 2 19448 19454
        LAYOUTCOMMIT: 28 28 0 7776 4808 0 24719 24722
        LAYOUTRETURN: 0 0 0 0 0 0 0 0
         LAYOUTSTATS: 0 0 0 0 0 0 0 0
  2. In the results:

    • The LAYOUT statistics indicate requests where the client and server use pNFS SCSI operations.
    • The READ and WRITE statistics indicate requests where the client and server fall back to NFS operations.

3.8. Common NFS mount options

The following are commonly used options for mounting NFS shares. You can use these options with manual mount commands, /etc/fstab settings, and autofs.

lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or positive.
nfsvers=version

Specifies which version of the NFS protocol to use, where version is 3, 4, 4.0, 4.1, or 4.2. This is useful for hosts that run multiple NFS servers, or to disable retrying a mount with lower versions. If no version is specified, NFS uses the highest version supported by the kernel and the mount utility.

The option vers is identical to nfsvers, and is included in this release for compatibility reasons.

noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, because the most recent ACL technology is not compatible with older systems.
nolock
Disables file locking. This setting is sometimes required when connecting to very old NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries.
nosuid
Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount queries the rpcbind service on the remote host for the port number to use. If the NFS service on the remote host is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead.
rsize=num and wsize=num

These options set the maximum number of bytes to be transferred in a single NFS read or write operation.

There is no fixed default value for rsize and wsize. By default, NFS uses the largest possible value that both the server and the client support. In Red Hat Enterprise Linux 9, the client and server maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for rsize and wsize with NFS mounts? KBase article.

sec=flavors

Security flavors to use for accessing files on the mounted export. The flavors value is a colon-separated list of one or more security flavors.

By default, the client attempts to find a security flavor that both the client and the server support. If the server does not support any of the selected flavors, the mount operation fails.

Available flavors:

  • sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS operations.
  • sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
  • sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
  • sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
tcp
Instructs the NFS mount to use the TCP protocol.
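
As an example of combining several of these options, a persistent mount in /etc/fstab might look like the following line (the server name and paths are hypothetical):

    server.example.com:/exports/data  /mnt/data  nfs  nfsvers=4.2,rsize=1048576,wsize=1048576,nosuid,sec=sys  0 0

After adding such an entry, running mount /mnt/data mounts the share with those options.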

Additional resources

  • mount(8) man page
  • nfs(5) man page

3.9. Storing user settings over NFS

If you use GNOME on a system with NFS home directories, you must set the keyfile back end for the dconf database. Otherwise, dconf might not work correctly. With this configuration, dconf stores settings in the ~/.config/dconf-keyfile/user file.

Procedure

  1. Create or edit the /etc/dconf/profile/user file on every client.
  2. At the very beginning of the /etc/dconf/profile/user file, add the following line:

    service-db:keyfile/user
  3. Users must log out and log back in.

    dconf polls the keyfile back end to determine whether updates have been made, so settings might not be updated immediately.

3.10. Getting started with FS-Cache

FS-Cache is a persistent local cache that file systems can use to take data retrieved over the network and cache it on local disk. This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS).

3.10.1. Overview of the FS-Cache

The following diagram is a high-level illustration of how FS-Cache works:

Figure 3.1. FS-Cache Overview

FS-Cache is designed to be as transparent as possible to the users and administrators of a system. Unlike cachefs on Solaris, FS-Cache allows a file system on a server to interact directly with a client’s local cache without creating an overmounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-Cache enabled. Mounting the share automatically loads two kernel modules: fscache and cachefiles. The cachefilesd daemon communicates with the kernel modules to implement the cache.

FS-Cache does not alter the basic operation of a file system that works over the network - it merely provides that file system with a persistent place in which it can cache data. For example, a client can still mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that will not fit into the cache (whether individually or collectively) as files can be partially cached and do not have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the client file system driver.

To provide caching services, FS-Cache needs a cache back end. A cache back end is a storage driver configured to provide caching services; in this case, cachefiles. FS-Cache requires a mounted block-based file system that supports bmap and extended attributes, such as ext3, as its cache back end.

File systems that support functionalities required by FS-Cache cache back end include the Red Hat Enterprise Linux 9 implementations of the following file systems:

  • ext3 (with extended attributes enabled)
  • ext4
  • XFS

FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared file system’s driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file system to support persistence: indexing keys to match file system objects to cache objects, and coherency data to determine whether the cache objects are still valid.

Note

In Red Hat Enterprise Linux 9, the cachefilesd package is not installed by default and needs to be installed manually.

3.10.2. Performance guarantee

FS-Cache does not guarantee increased performance. Using a cache incurs a performance penalty: for example, cached NFS shares add disk accesses to cross-network lookups. While FS-Cache tries to be as asynchronous as possible, there are synchronous paths, such as read operations, where this is not possible.

For example, using FS-Cache to cache an NFS share between two computers over an otherwise unladen GigE network likely will not demonstrate any performance improvements on file access. Rather, NFS requests would be satisfied faster from server memory than from local disk.

The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, for example, it may slow the client down a little, but massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth.

3.10.3. Using the cache with NFS

NFS will not use the cache unless explicitly instructed. This section shows how to configure an NFS mount to use FS-Cache.

NFS indexes cache contents using the NFS file handle, not the file name, which means that hard-linked files share the cache correctly.

NFS versions 3, 4.0, 4.1 and 4.2 support caching. However, each version uses different branches for caching.

Prerequisites

  • The cachefilesd package is installed and running. To ensure it is running, use the following command:

    # systemctl start cachefilesd
    # systemctl status cachefilesd

    The status must be active (running).

Procedure

  • Mount NFS shares with the following option:

    # mount nfs-share:/ /mount/point -o fsc

    All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O or writing.
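
To make the cached mount persistent across reboots, you can add the fsc option to the corresponding /etc/fstab entry; the share name and mount point below are the same example values as above:

    nfs-share:/  /mount/point  nfs  defaults,fsc  0 0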

3.10.4. Setting up a cache

Currently, Red Hat Enterprise Linux 9 only provides the cachefiles caching back end. The cachefilesd daemon initiates and manages cachefiles. The /etc/cachefilesd.conf file controls how cachefiles provides caching services.

The cache back end works by maintaining a certain amount of free space on the partition hosting the cache. It grows and shrinks the cache in response to other elements of the system using up free space, making it safe to use on the root file system (for example, on a laptop). FS-Cache sets defaults on this behavior, which can be configured via cache cull limits. For more information about configuring cache cull limits, see Cache cull limits configuration.

This procedure shows how to set up a cache.

Prerequisites

  • The cachefilesd package is installed and the service has started successfully. To verify that the service is running, use the following commands:

    # systemctl start cachefilesd
    # systemctl status cachefilesd

    The status must be active (running).

Procedure

  1. To configure which directory the cache back end uses as a cache, set the dir parameter in the /etc/cachefilesd.conf file:

    dir /path/to/cache
  2. Typically, the cache back end directory is set in /etc/cachefilesd.conf as /var/cache/fscache, as in:

    dir /var/cache/fscache
  3. If you want to change the cache back end directory, the SELinux context must be the same as for /var/cache/fscache:

    # semanage fcontext -a -e /var/cache/fscache /path/to/cache
    # restorecon -Rv /path/to/cache
  4. Replace /path/to/cache with the directory name when setting up the cache.
  5. If the given commands for setting the SELinux context did not work, use the following commands:

    # semanage permissive -a cachefilesd_t
    # semanage permissive -a cachefiles_kernel_t

    FS-Cache will store the cache in the file system that hosts /path/to/cache. On a laptop, it is advisable to use the root file system (/) as the host file system, but for a desktop machine it would be more prudent to mount a disk partition specifically for the cache.

  6. The host file system must support user-defined extended attributes. FS-Cache uses these attributes to store coherency maintenance information. To enable user-defined extended attributes on a device with ext3 file systems, enter:

    # tune2fs -o user_xattr /dev/device
  7. Alternatively, to enable extended attributes for a file system at mount time, use the following command:

    # mount /dev/device /path/to/cache -o user_xattr
  8. Once the configuration file is in place, start up the cachefilesd service:

    # systemctl start cachefilesd
  9. To configure cachefilesd to start at boot time, execute the following command as root:

    # systemctl enable cachefilesd

3.10.5. Configuring NFS cache sharing

There are several potential issues related to NFS cache sharing. Because the cache is persistent, blocks of data in the cache are indexed on a sequence of four keys:

  • Level 1: Server details
  • Level 2: Some mount options; security type; FSID; uniquifier
  • Level 3: File Handle
  • Level 4: Page number in file

To avoid coherency management problems between superblocks, all NFS superblocks that need to cache data have unique Level 2 keys. Normally, two NFS mounts with the same source volume and options share a superblock, and therefore share the caching, even if they mount different directories within that volume.

The following example shows how to configure cache sharing with different options.

Procedure

  1. Mount NFS shares with the following commands:

    mount home0:/disk0/fred /home/fred -o fsc
    mount home0:/disk0/jim /home/jim -o fsc

    Here, /home/fred and /home/jim likely share the superblock as they have the same options, especially if they come from the same volume/partition on the NFS server (home0).

  2. To avoid sharing the superblock, use the mount command with the following options:

    mount home0:/disk0/fred /home/fred -o fsc,rsize=8192
    mount home0:/disk0/jim /home/jim -o fsc,rsize=65536

    In this case, /home/fred and /home/jim will not share the superblock as they have different network access parameters, which are part of the Level 2 key.

  3. To cache the contents of the two subtrees (/home/fred1 and /home/fred2) twice without sharing the superblock, use the following commands:

    mount home0:/disk0/fred /home/fred1 -o fsc,rsize=8192
    mount home0:/disk0/fred /home/fred2 -o fsc,rsize=65536
  4. Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache parameter. Using the same example:

    mount home0:/disk0/fred /home/fred -o nosharecache,fsc
    mount home0:/disk0/jim /home/jim -o nosharecache,fsc

    However, in this case only one of the superblocks is permitted to use the cache, because there is nothing to distinguish the Level 2 keys of home0:/disk0/fred and home0:/disk0/jim.

  5. To address this, set a unique identifier on at least one of the mounts by using the fsc=unique-identifier mount option, for example:

    mount home0:/disk0/fred /home/fred -o nosharecache,fsc
    mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim

    Here, the unique identifier jim is added to the Level 2 key used in the cache for /home/jim.

Important

You cannot share caches between superblocks that have different communications or protocol parameters. For example, it is not possible to share caches between NFSv4.0 and NFSv3 or between NFSv4.1 and NFSv4.2, because they force different superblocks. Setting different parameters, such as the read size (rsize), also prevents cache sharing because, again, it forces a different superblock.

3.10.6. Cache limitations with NFS

There are some cache limitations with NFS:

  • Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server.
  • Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing.
  • Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache directories, symlinks, device files, FIFOs and sockets.

3.10.7. Cache cull limits configuration

The cachefilesd daemon works by caching remote data from shared file systems to free space on the disk. This could potentially consume all available free space, which is undesirable if the disk also contains the root partition. To control this, cachefilesd tries to maintain a certain amount of free space by discarding old objects, that is, objects accessed less recently, from the cache. This behavior is known as cache culling.

Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the underlying file system. There are settings in /etc/cachefilesd.conf which control six limits:

brun N% (percentage of blocks), frun N% (percentage of files)
If the amount of free space and the number of available files in the cache rise above both of these limits, then culling is turned off.
bcull N% (percentage of blocks), fcull N% (percentage of files)
If the amount of available space or the number of files in the cache falls below either of these limits, then culling is started.
bstop N% (percentage of blocks), fstop N% (percentage of files)
If the amount of available space or the number of available files in the cache falls below either of these limits, then no further allocation of disk space or files is permitted until culling has raised things above these limits again.

The default value of N for each setting is as follows:

  • brun/frun - 10%
  • bcull/fcull - 7%
  • bstop/fstop - 3%

When configuring these settings, the following must hold true:

  • 0 ≤ bstop < bcull < brun < 100
  • 0 ≤ fstop < fcull < frun < 100

These are the percentages of available space and available files and do not appear as 100 minus the percentage displayed by the df program.
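
For example, to start culling earlier and keep more free space on a small cache partition, you could set the limits in /etc/cachefilesd.conf as follows (the values are illustrative and must satisfy the ordering rules above):

    brun  15%
    frun  15%
    bcull 10%
    fcull 10%
    bstop  5%
    fstop  5%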

Important

Culling depends on both bxxx and fxxx pairs simultaneously; you cannot treat them separately.

3.10.8. Retrieving statistical information from the fscache kernel module

FS-Cache also keeps track of general statistical information. This procedure shows how to get this information.

Procedure

  1. To view the statistical information about FS-Cache, use the following command:

    # cat /proc/fs/fscache/stats

FS-Cache statistics include information about decision points and object counters. For more information, see the following kernel document:

/usr/share/doc/kernel-doc-4.18.0/Documentation/filesystems/caching/fscache.txt

3.10.9. FS-Cache references

This section provides reference information for FS-Cache.

  1. For more information about cachefilesd and how to configure it, see man cachefilesd and man cachefilesd.conf. The following kernel documents also provide additional information:

    • /usr/share/doc/cachefilesd/README
    • /usr/share/man/man5/cachefilesd.conf.5.gz
    • /usr/share/man/man8/cachefilesd.8.gz
  2. For general information about FS-Cache, including details on its design constraints, available statistics, and capabilities, see the following kernel document:

    /usr/share/doc/kernel-doc-4.18.0/Documentation/filesystems/caching/fscache.txt