Is GFS2 suitable for concurrent writes?

I need to set up a shared filesystem between two servers. These servers will need to write and read data in different files, but within the same filesystem.

I started by setting up an environment with three servers: the first one shares a block device via iSCSI, and the other two are part of the same cluster and access that device, formatted as GFS2 and mounted at /mnt.

When I ran the following on each server

dd if=/dev/urandom of=/mnt/block${hostname}

I saw that the data rate was extremely low: around 50 kB/s. I've read the documentation at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/global_file_system_2/s1-ov-lockbounce, which states the following:

Due to the way in which GFS2's caching is implemented the best performance is obtained when either of the following takes place:
- An inode is used in a read only fashion across all nodes.
- An inode is written or modified from a single node only.
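If I'm reading the second rule correctly, a workload that follows it would look roughly like the sketch below (the directory layout and dd parameters are only placeholders for illustration, not something from the documentation), with each node writing only beneath its own directory:

# hypothetical per-node layout; the other node would do the same with its own directory
mkdir -p /mnt/$(hostname)
# /dev/zero and an explicit 1 MiB block size keep dd itself from being the bottleneck;
# conv=fsync flushes the file at the end so the reported rate reflects writes that reached the disk
dd if=/dev/zero of=/mnt/$(hostname)/testfile bs=1M count=1024 conv=fsync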

Is there any way to get good performance with concurrent writes to different files?

Responses

Hi Nico,

We know very little about your environment, for example which version of RHEL you are running.

I assume you already went through some standard checks and performance guidelines:

https://access.redhat.com/articles/628093

Here is one for RHEL 8:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/assembly_gfs2-performance-creating-mounting-gfs2

For example, it is recommended to set the noatime or nodiratime mount option on GFS2 whenever possible, with noatime preferred. This prevents reads from having to take exclusive locks just to update the atime timestamp.

If the shared block device has no write cache, or if its write cache is non-volatile, then check whether the nobarrier mount option helps you.
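As a rough illustration only (the device path below is a placeholder, not something from this thread), an /etc/fstab entry combining these options could look like:

# placeholder device path; append ",nobarrier" to the options field only if the
# write-cache condition described above is met
/dev/mapper/shared_vg-gfs2_lv  /mnt  gfs2  noatime,nodiratime  0 0

The filesystem would then need to be remounted (or unmounted and mounted again) on every node for the new options to take effect.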

Regards,

Dusan Baljevic (amateur radio VK2COT)