Why Do Applications Fail to Set or Update POSIX Locks When Running on Top of GlusterFS File Systems?

Solution Verified

Issue

  • Applications running on top of GlusterFS file systems receive error messages pointing to failures while setting POSIX locks. The exact error reported is application-specific.

    In the Gluster client logs (available at /var/log/glusterfs/client-mount-point.log), the most common error messages observed when this issue occurs are:

    [2020-04-07 08:59:47.100562] E [rpc-clnt.c:183:call_bail] 0-vol_dcs-dev_pv2-client-0: bailing out frame type(GlusterFS 4.x v1), op(LK(26)), xid = 0x84a, unique = 3262, sent = 2020-04-07 08:29:46.766853, timeout = 1800 for 192.168.1.1:49239
    [2020-04-07 08:59:47.100675] E [rpc-clnt.c:183:call_bail] 0-vol_dcs-dev_pv2-client-0: bailing out frame type(GlusterFS 4.x v1), op(LK(26)), xid = 0x840, unique = 3254, sent = 2020-04-07 08:29:46.761577, timeout = 1800 for 192.168.1.1:49239
    [2020-04-07 09:00:37.104212] E [rpc-clnt.c:183:call_bail] 0-vol_dcs-dev_pv2-client-0: bailing out frame type(GlusterFS 4.x v1), op(LK(26)), xid = 0xbd1, unique = 4381, sent = 2020-04-07 08:30:31.763128, timeout = 1800 for 192.168.1.1:49239
    

    The messages above mean that Gluster issued an RPC operation with opcode 26, which translates to LK (a lock request), and the call "bailed out" because no reply arrived from the brick within the 1800-second frame timeout.
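    To make the failure concrete, the sketch below (not taken from the article; the file path and flags are illustrative) shows the kind of POSIX record-lock call an application issues. On a Gluster mount, this `fcntl`-based lock is forwarded to the bricks as the LK (opcode 26) RPC seen in the log messages, and it is that RPC which times out.

    ```python
    import fcntl
    import os
    import tempfile

    # Hypothetical file used only for this demonstration.
    path = os.path.join(tempfile.gettempdir(), "lock-demo.txt")
    with open(path, "w") as f:
        f.write("data")

    with open(path, "r+") as f:
        # F_SETLK semantics via lockf: LOCK_NB makes the call non-blocking,
        # so a conflicting lock returns an error (EAGAIN/EACCES) immediately
        # instead of the process hanging while the LK RPC waits for a reply.
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        f.write("locked region")
        fcntl.lockf(f, fcntl.LOCK_UN)

    print("lock acquired and released")
    ```

    A blocking variant (omitting LOCK_NB) is what typically appears to "hang" in the application when the brick never answers the LK request.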

  • How to solve this issue?

Environment

  • Red Hat Gluster Storage 3.x versions.
  • Red Hat OpenShift Container Storage 3.x versions.
