Unable to provision volume for 3scale operator with Azure File storage class

Environment

  • Red Hat OpenShift Container Platform [OCP]
    • 4.x
  • Azure Red Hat OpenShift [ARO]
    • 4.x
  • Red Hat 3scale API Management

Issue

While installing the 3scale API Manager with the 3scale operator on ARO, the pod is unable to mount the PV provisioned by the Azure File storage class and fails with the following error message:

Pod <pod-xxx> fails when it tries to provision an Azure File (ReadWriteMany) volume with the error:
(combined from similar events): MountVolume.SetUp failed for volume "pvc-xxxxx" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/xxxxxx/volumes/kubernetes.io~azure-file/pvc-xxxxx --scope -- mount -t cifs -o cache=strict,dir_mode=xxxx,file_mode=xxxx,gid=xxxx,mfsymlinks,uid=xxxx,vers=xx,<masked> //xxxxxxx.abc.example.com/xyz-aro-pvc-xxxxx /var/lib/kubelet/pods/xxxxx/volumes/kubernetes.io~azure-file/pvc-xxxx Output: Running scope as unit: run-xxxxxx.scope mount error(2): No such file or directory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)

Resolution

As a workaround, edit the azure-file StorageClass and, in addition to dir_mode and file_mode, add the following options to the mountOptions field (an example StorageClass is shown below):

- uid=$UID-NS-RANGE
- gid=0
- mfsymlinks
- cache=strict

Replace $UID-NS-RANGE with the UID value from the namespace annotation openshift.io/sa.scc.uid-range (the number before the slash), which can be obtained with:

$ oc get ns <3scale-namespace> -o yaml | grep uid-range
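
For illustration, here is a minimal sketch of what the edited StorageClass could look like. The name, modes, and parameters are placeholders; the uid value 1000640000 stands in for the starting UID of your namespace's range (the annotation typically has the form openshift.io/sa.scc.uid-range: 1000640000/10000, and the value before the slash is the UID to use):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS        # illustrative; keep your existing parameters
mountOptions:
  - dir_mode=0777              # keep your existing dir_mode value
  - file_mode=0777             # keep your existing file_mode value
  - uid=1000640000             # starting UID of the namespace's SCC range
  - gid=0
  - mfsymlinks
  - cache=strict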

Once the StorageClass is updated, proceed with the creation of a new PVC. Mount options from a StorageClass are applied to a PersistentVolume at provisioning time, so existing volumes keep the old options.
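
For reference, a minimal PVC sketch, assuming the StorageClass above is named azure-file (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: azure-file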

Root Cause

  • There is an ongoing 3scale issue, THREESCALE-4996, seen when using Azure File for ReadWriteMany (RWX) storage. It stems from a Kubernetes Azure File storage bug that causes the UID/GID to be flipped when the volume is mounted into the pod.
  • A similar issue was also reported for 3scale: THREESCALE-3937.

Diagnostic Steps

  • Check the project events.
# oc get events
  • Check the pod logs.
# oc logs -f <pod-name>
  • Check the storage class being used.
# oc get storageclass <name> -o yaml
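
To confirm the mismatch described under Root Cause, it can help to compare the uid mount option in the StorageClass with the namespace's UID range. The commands below are illustrative; substitute your own StorageClass and namespace names:

# oc get storageclass azure-file -o yaml | grep -A6 mountOptions
# oc get ns <3scale-namespace> -o yaml | grep uid-range

A uid mount option that does not match the starting UID reported by the openshift.io/sa.scc.uid-range annotation is consistent with the mount failure shown above.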
