After a restart of OpenShift nodes, Elasticsearch logging pods fail to start with an XFS mount failure


Issue

During a rolling reboot of OpenShift nodes, one or more Elasticsearch pods using PVCs backed by GlusterFS block storage (gluster-block) fail to start. The pod events contain lines similar to the following:

3s        3s        1         logging-es-data-master-cfec75kl-1-hr4n5   Pod                                        Warning   FailedMount      kubelet, node4.cluster.hostname.com   MountVolume.SetUp failed for volume "kubernetes.io/iscsi/eacececf-b17d-11e8-978b-005056a81830-pvc-3fedde73-0554-11e8-97d9-005056a82bfc" (spec.Name: "pvc-3fedde73-0554-11e8-97d9-005056a82bfc") pod "eacececf-b17d-11e8-978b-005056a81830" (UID: "eacececf-b17d-11e8-978b-005056a81830") with: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/10.177.105.43:3260-iqn.2016-12.org.gluster-block:de1e831b-b812-4623-921d-4e8244b14db2-lun-0 --scope -- mount -t xfs -o defaults /dev/dm-12 /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/10.177.105.43:3260-iqn.2016-12.org.gluster-block:de1e831b-b812-4623-921d-4e8244b14db2-lun-0
Output: Running scope as unit run-27870.scope.
mount: mount /dev/mapper/mpatha on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/10.177.105.43:3260-iqn.2016-12.org.gluster-block:de1e831b-b812-4623-921d-4e8244b14db2-lun-0 failed: Structure needs cleaning
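The "Structure needs cleaning" message is returned by the XFS driver and indicates filesystem metadata corruption on the block device backing the PVC. As a minimal diagnostic sketch (not the verified resolution of this solution), the corruption can usually be confirmed from the affected node with a read-only xfs_repair run, assuming the multipath device is /dev/mapper/mpatha as shown in the mount output above and that the device is not currently mounted:

# Confirm the XFS error in the kernel log on the node
dmesg | grep -i xfs

# No-modify check: reports filesystem problems without writing to the device
xfs_repair -n /dev/mapper/mpatha

Running xfs_repair without -n modifies the filesystem and should only be attempted while the volume is unmounted and after the data has been backed up, ideally with guidance from Red Hat Support.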

Environment

  • OpenShift Container Platform 3.9 / 3.6
  • Container-Native Storage (CNS) 3.9
