After a restart of OpenShift nodes, Elasticsearch logging pod fails to start with an XFS mount failure

Solution Verified
During a rolling reboot of OpenShift nodes, one or more Elasticsearch pods using PVCs backed by GlusterFS block storage fail to start. The pod log output contains the following or similar lines:

3s        3s        1         logging-es-data-master-cfec75kl-1-hr4n5   Pod                                        Warning   FailedMount      kubelet,   MountVolume.SetUp failed for volume "" (spec.Name: "pvc-3fedde73-0554-11e8-97d9-005056a82bfc") pod "eacececf-b17d-11e8-978b-005056a81830" (UID: "eacececf-b17d-11e8-978b-005056a81830") with: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/plugins/ --scope -- mount -t xfs -o defaults /dev/dm-12 /var/lib/origin/openshift.local.volumes/plugins/
Output: Running scope as unit run-27870.scope.
mount: mount /dev/mapper/mpatha on /var/lib/origin/openshift.local.volumes/plugins/ failed: Structure needs cleaning
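The "Structure needs cleaning" message is the kernel reporting on-disk XFS metadata corruption, so the kubelet's transient mount fails before the pod can start. A typical triage sequence for this class of XFS error is sketched below; it is not necessarily the article's verified resolution. The device name `/dev/mapper/mpatha` is taken from the log output above, and `xfs_repair` must only be run once the volume is detached from every node:

```shell
# Check whether the kernel logged XFS corruption for the device
dmesg | grep -i xfs

# Confirm the device is not mounted anywhere before repairing
mount | grep mpatha || echo "not mounted"

# Dry run: report what xfs_repair would change, without writing anything
xfs_repair -n /dev/mapper/mpatha

# Repair the filesystem (only after confirming no node has it mounted)
xfs_repair /dev/mapper/mpatha

# If xfs_repair refuses to run because the log is dirty, try replaying
# the log by mounting and unmounting once. Use xfs_repair -L only as a
# last resort: it zeroes the log and can lose recent metadata updates.
mount -t xfs /dev/mapper/mpatha /mnt && umount /mnt
```

After a successful repair, the kubelet should be able to complete `MountVolume.SetUp` on the next attempt, allowing the Elasticsearch pod to start.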


  • OpenShift Container Platform 3.9 / 3.6
  • Container Native Storage (OCS) 3.9
