OpenShift nodes going into NotReady state while configuring Azure storage in node-config.yaml

I'm using OpenShift 3.11 with the Azure cloud provider. I created storage in Azure and mounted it on the OpenShift cluster, which has 1 master and 2 nodes:
1 infra node
1 compute node
When I add these entries to node-config.yaml under /etc/origin/node on every node:

cloud-provider:
- "azure"
cloud-config:
- "/etc/origin/cloudprovider/azure.conf"

and restart the node service with

systemctl restart atomic-openshift-node.service

the nodes go into NotReady state, and it gets even worse:
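For context, as far as I understand node-config.yaml expects these arguments to be nested under kubeletArguments rather than placed at the top level; a minimal sketch of what I mean, reusing the same azure.conf path:

kubeletArguments:
  cloud-config:
  - "/etc/origin/cloudprovider/azure.conf"
  cloud-provider:
  - "azure"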
[root@master node]# oc status
The connection to the server master.azurecontinuoustest.com:8443 was refused - did you specify the right host or port?

[root@master ldap]# oc describe sc
Name: ebs
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/azure-disk
Parameters: cachingmode=None,kind=Shared,skuName=Standard_LRS,storageAccount=ocpoctest
AllowVolumeExpansion:
MountOptions:
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:
[root@master ldap]#
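For reference, that output corresponds roughly to a StorageClass defined like this (a sketch reconstructed from the oc describe output above, nothing beyond the fields shown there):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: None
  kind: Shared
  skuName: Standard_LRS
  storageAccount: ocpoctest
reclaimPolicy: Delete
volumeBindingMode: Immediate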

[root@master ldap]# oc describe pod ldap-rs-5vg5j
Name: ldap-rs-5vg5j
Namespace: corestack
Priority: 0
PriorityClassName:
Node: node2/10.1.3.38
Start Time: Thu, 30 Jan 2020 06:14:56 +0000
Labels: app=ldap-rs
Annotations: openshift.io/scc=restricted
Status: Pending
IP:
Controlled By: ReplicaSet/ldap-rs
Containers:
ldap-pod:
Container ID:
Image: accenture/adop-ldap:0.1.3
Image ID:
Port: 389/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
ldap-env ConfigMap Optional: false
Environment:
Mounts:
/etc/ldap from ldap-config (rw)
/var/lib/ldap from ldap-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dpsmq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ldap-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ldap-data
ReadOnly: false
ldap-config:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ldap-config
ReadOnly: false
default-token-dpsmq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dpsmq
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15s (x7 over 25s) default-scheduler pod has unbound PersistentVolumeClaims
Normal Scheduled 15s default-scheduler Successfully assigned corestack/ldap-rs-5vg5j to node2
Warning FailedMount 14s kubelet, node2 MountVolume.SetUp failed for volume "pvc-d022eb87-4327-11ea-93c8-000d3af2d196" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/d03b5a13-4327-11ea-93c8-000d3af2d196/volumes/kubernetes.io~azure-disk/pvc-d022eb87-4327-11ea-93c8-000d3af2d196 --scope -- mount -o bind /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/m4036575597 /var/lib/origin/openshift.local.volumes/pods/d03b5a13-4327-11ea-93c8-000d3af2d196/volumes/kubernetes.io~azure-disk/pvc-d022eb87-4327-11ea-93c8-000d3af2d196
Output: Running scope as unit run-56638.scope.
mount: special device /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/m4036575597 does not exist
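In case the cloud provider configuration itself is relevant to the failed attach/mount: /etc/origin/cloudprovider/azure.conf on each host follows the documented key/value format, roughly like the sketch below (values redacted, and I may be missing keys):

tenantId: <redacted>
subscriptionId: <redacted>
aadClientId: <redacted>
aadClientSecret: <redacted>
resourceGroup: <redacted>
location: <redacted>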
