Glusterfs installation failure - Waiting for Glusterfs Pods


I'm trying to install GlusterFS in an OpenShift 3.11 environment (I've tried both during and after the cluster installation), and the gluster pods never report Ready. The pod logs show nothing useful, but exec'ing into a pod and manually running the readiness/liveness script shows the following:

sh-4.2# if command -v /usr/local/bin/; then /usr/local/bin/ readiness; else systemctl status glusterd.service; fi
failed check: systemctl -q is-active gluster-blockd.service
sh-4.2# systemctl is-active gluster-blockd
sh-4.2# systemctl is-active glusterd
sh-4.2# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-07-23 12:23:28 UTC; 1min 46s ago
  Process: 389 ExecStart=/usr/sbin/glusterd -p /var/run/ --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 390 (glusterd)
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97d6574_ad44_11e9_9874_0050568cf220.slice/docker-542d58a9dda4150df3ece1c171530102ccb80b70a82e223f657194ec7bc06c5f.scope/system.slice/glusterd.service
           └─390 /usr/sbin/glusterd -p /var/run/ --log-level INFO
sh-4.2# systemctl status gluster-blockd
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2019-07-23 12:23:25 UTC; 1min 52s ago
  Process: 380 ExecStart=/usr/sbin/gluster-blockd --glfs-lru-count $GB_GLFS_LRU_COUNT --log-level $GB_LOG_LEVEL $GB_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 380 (code=exited, status=1/FAILURE)
sh-4.2# /usr/sbin/gluster-blockd --glfs-lru-count $GB_GLFS_LRU_COUNT --log-level $GB_LOG_LEVEL $GB_EXTRA_ARGS
option '--log-level' needs argument <LOG-LEVEL>
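The error above is consistent with $GB_LOG_LEVEL simply being unset in the container: in POSIX shell, an unset, unquoted variable expands to nothing, so --log-level is left without its argument and the next token (if any) slides into its place. A minimal sketch of the expansion behavior (variable name chosen to mirror the unit file):

```shell
#!/bin/sh
# With GB_LOG_LEVEL unset, its unquoted expansion disappears entirely,
# so "--log-level" ends up immediately followed by the next flag
# instead of a log-level value.
unset GB_LOG_LEVEL
set -- --log-level $GB_LOG_LEVEL --no-remote-rpc
echo "$#"    # prints 2: the level argument vanished
echo "$2"    # prints --no-remote-rpc, not a log level
```

This is exactly why gluster-blockd complains that --log-level needs an argument when the environment variable is missing from the pod spec.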

I noticed the gluster DaemonSet does not include the $GB_LOG_LEVEL environment variable, and I'm not sure why.
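As a stopgap, the missing variable can be injected into the DaemonSet directly. This is only a sketch, and it assumes the default object names from openshift-ansible (DaemonSet glusterfs-storage in the glusterfs namespace); adjust both to match your inventory:

```shell
# Assumption: the DaemonSet is named "glusterfs-storage" in namespace
# "glusterfs" (the openshift-ansible defaults). Verify with:
#   oc get ds -n glusterfs
oc set env daemonset/glusterfs-storage -n glusterfs GB_LOG_LEVEL=INFO

# Confirm the variable now appears in the pod template:
oc set env daemonset/glusterfs-storage -n glusterfs --list | grep GB_LOG_LEVEL
```

The DaemonSet will roll new pods with the variable set, which should let gluster-blockd start and the readiness check pass, but it doesn't explain why the installer omitted the variable in the first place.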

I even tried disabling gluster-block storage in the installer configuration, and that didn't change anything.

Am I missing something?


If I manually edit the DaemonSet's readiness and liveness probes to just run systemctl status glusterd.service and restart the pods, everything works, but that's clearly not the correct way to fix this.
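For reference, the probe edit described above can be applied as a patch rather than by hand-editing the object; this is a hypothetical sketch of that workaround (DaemonSet name, namespace, and container index 0 are all assumptions), not a proper fix:

```shell
# Assumptions: DaemonSet "glusterfs-storage" in namespace "glusterfs",
# gluster container at index 0. Replaces both probe commands with a
# plain glusterd check, mirroring the manual edit described above.
oc patch daemonset/glusterfs-storage -n glusterfs --type=json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/readinessProbe/exec/command",
   "value": ["/bin/bash", "-c", "systemctl status glusterd.service"]},
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/livenessProbe/exec/command",
   "value": ["/bin/bash", "-c", "systemctl status glusterd.service"]}
]'
```

Note this stops probing gluster-blockd entirely, so block-storage failures would go undetected; it only masks the underlying missing-variable problem.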