Performance issues on the NFS client with sec=krb* and nconnect>1

Issue

Performance is evaluated with a test script:

#!/bin/sh
# Drop the page, dentry, and inode caches so every run starts cold
echo 3 > /proc/sys/vm/drop_caches
# Write ~50 GB to the NFS mount; conv=fsync makes dd flush the data before reporting throughput
dd if=/dev/zero of=/mnt/nfs/testfile.bin bs=1M count=50000 conv=fsync
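
Before each run, the export is mounted with the NFS version and security flavor under test. A minimal sketch of such a mount; the server name nfsserver and export path /export are placeholders, not the actual setup:

mount -t nfs -o vers=4.1,sec=krb5 nfsserver:/export /mnt/nfs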

The results are within the expected range, given the known performance overhead of Kerberos:

NFS Version sec=sys     sec=krb5    sec=krb5i   sec=krb5p
NFS 4.1     871 MB/s    869 MB/s    409 MB/s    349 MB/s
NFS 4.2     870 MB/s    872 MB/s    407 MB/s    262 MB/s

We then evaluated the impact of nconnect in this setup. Below are the numbers with nconnect=4. With sec=sys we are close to hardware saturation, so all is good there. But as soon as sec=krb* is used, the best results we measured are lower than without nconnect:

NFS Version sec=sys     sec=krb5    sec=krb5i   sec=krb5p
NFS 4.1     1.1 GB/s    323 MB/s    327 MB/s    422 MB/s
NFS 4.2     1.0 GB/s    297 MB/s    373 MB/s    380 MB/s
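
For the nconnect measurements, the only change is the additional nconnect mount option; again a sketch with the same placeholder server name and export path:

mount -t nfs -o vers=4.1,nconnect=4,sec=krb5 nfsserver:/export /mnt/nfs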

Environment

  • Red Hat Enterprise Linux 8 and above
    • kernel-4.18.0-240.el8 or later.
    • nfs-utils-2.3.3-35.el8 and libnfsidmap-2.3.3-35.el8 or later.
    • Any NFS mount using nconnect>1 and Kerberos security (see the example check after this list)
    • Seen on Red Hat Enterprise Linux 8 kernel 4.18.0-425.3.1.el8.x86 (NFS client) with a NetApp AFF A250 running ONTAP 9.11.1P5 (NFS server)
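
To check whether a client matches this environment, the installed versions and the options of the active NFS mounts can be inspected. A short sketch using standard RHEL 8 tools; the mount point is the one used in the test above:

# Kernel and NFS userspace versions on the client
uname -r
rpm -q nfs-utils libnfsidmap
# Show the options of all active NFS mounts, including sec= and nconnect=
nfsstat -m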
