CIFS performance tuning


A network drive is mounted on a Red Hat Enterprise Linux 6 client using mount.cifs and autofs. To improve file transfer performance, after searching the web, I set the CIFSMaxBufSize=130048 option for the cifs.ko kernel module. This increased the file transfer speed from ~35 Mb/s to ~80 Mb/s.
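
For anyone wanting to reproduce this, a typical way to make a cifs.ko option persistent is a modprobe configuration file; the file name below is just an example, and the module has to be reloaded (with all CIFS shares unmounted) before the new value takes effect:

# /etc/modprobe.d/cifs.conf (example file name)
options cifs CIFSMaxBufSize=130048

# reload the module, then check the value now in effect
# (assuming the parameter is exported read-only under /sys)
modprobe -r cifs && modprobe cifs
cat /sys/module/cifs/parameters/CIFSMaxBufSize
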
I found very little documentation related to CIFS performance tuning on the customer portal - are there any best practices related to this that the community would be able to share?
Thanks,
Balazs

Responses

Be careful when setting a large buffer size. If your CIFS server is a Windows system, there is a long-standing Windows bug where large reads will fail: Windows only supports a 60k read/write block, while the UNIX and Linux Samba implementations can use the full NetBIOS packet size of 128k.

This can be avoided by making sure your largest buffer is 60k (61440). If you're not specifying rsize or wsize in the mount, a Windows mount should automatically negotiate rsize=61440,wsize=61440, which you can confirm with cat /proc/mounts.
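
For example, the cap can be set explicitly in the mount options and the values actually in effect read back afterwards (server, share, mount point and user below are placeholders):

mount -t cifs //server/share /mnt/share -o username=someuser,rsize=61440,wsize=61440
grep cifs /proc/mounts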

If your workload is write-heavy and asynchronous, there is value in tuning dirty pages to flush early and flush often. The following settings allow 512 MB of dirty pages before background flushing starts, block writes at 1 GB of dirty pages, and run page flushing every 5 seconds, for anything over 2.5 seconds old:

vm.dirty_background_bytes = 536870912
vm.dirty_bytes = 1073741824
vm.dirty_expire_centisecs = 250
vm.dirty_writeback_centisecs = 500
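
A quick way to experiment with these (same values as above) is to apply them live with sysctl and only persist them in /etc/sysctl.conf once you're happy with the results:

sysctl -w vm.dirty_background_bytes=536870912
sysctl -w vm.dirty_bytes=1073741824
sysctl -w vm.dirty_expire_centisecs=250
sysctl -w vm.dirty_writeback_centisecs=500
# to persist across reboots, add the same settings (in "name = value" form)
# to /etc/sysctl.conf and reload with: sysctl -p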

This is only a starting point; feel free to modify the bytes values as you see fit. I have a system where I only allow 1 MB of dirty pages before flushing starts. As I understand it, there is little value in decreasing the writeback parameter. In EL5 there was a hard-coded one-second pause if the writeback thread started when there was already a writeback in effect. However, the code is significantly different in EL6 and EL7, and I haven't had a chance to look into how it works there. I just leave writeback alone.
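
If you want to watch how much dirty data actually builds up while experimenting, the kernel reports it in /proc/meminfo:

# dirty page cache and pages currently under writeback, in kB
grep -E 'Dirty|Writeback' /proc/meminfo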

I checked the negotiated buffer sizes and they came out as rsize=61440 and wsize=65536, so it looks like the server supports 60k buffers for read and 64k buffers for write operations. I will look into the dirty page flushing suggestion and do some experimenting.

Is there a way to reduce the CIFS connection timeout from the default value of 120s on RHEL 6.6 (when the CIFS server is not reachable) on the client side, or can it only be modified on the CIFS server (NetApp or Windows)?
