DEBUG module.vpc.aws_subnet.public_subnet[2]: Refreshing state... [id=subnet-0e94fdf5549292b9d]
DEBUG module.vpc.aws_subnet.private_subnet[1]: Refreshing state... [id=subnet-07fc25e42fd376a5c]
DEBUG module.vpc.aws_subnet.private_subnet[0]: Refreshing state... [id=subnet-059ca6813c2580d30]
DEBUG module.vpc.aws_subnet.private_subnet[3]: Refreshing state... [id=subnet-08b5bd79fecb9e9ba]
DEBUG module.bootstrap.aws_security_group.bootstrap: Refreshing state... [id=sg-0076018975a554624]
DEBUG module.vpc.data.aws_subnet.public[0]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.public[3]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.public[2]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.public[1]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.private[2]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.private[3]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.private[0]: Refreshing state...
DEBUG module.vpc.data.aws_subnet.private[1]: Refreshing state...
DEBUG module.bootstrap.aws_security_group_rule.bootstrap_journald_gateway: Refreshing state... [id=sgrule-3068767826]
DEBUG module.bootstrap.aws_security_group_rule.ssh: Refreshing state... [id=sgrule-2704341228]
DEBUG module.bootstrap.aws_instance.bootstrap: Refreshing state... [id=i-0c492467bc75b1b33]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[2]: Refreshing state... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-aext/d92eb6d9bc7f6bb0-20200922085037812900000006]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[1]: Refreshing state... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-sint/a2696575902dea3c-20200922085038234400000007]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[0]: Refreshing state... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-aint/8dccde732e304f93-20200922085037525400000005]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[2]: Destroying... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-aext/d92eb6d9bc7f6bb0-20200922085037812900000006]
DEBUG module.bootstrap.aws_security_group_rule.ssh: Destroying... [id=sgrule-2704341228]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[1]: Destroying... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-sint/a2696575902dea3c-20200922085038234400000007]
DEBUG module.bootstrap.aws_security_group_rule.bootstrap_journald_gateway: Destroying... [id=sgrule-3068767826]
DEBUG module.bootstrap.aws_iam_role_policy.bootstrap: Destroying... [id=demo-cluster-89bp9-bootstrap-role:demo-cluster-89bp9-bootstrap-policy]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[0]: Destroying... [id=arn:aws:elasticloadbalancing:us-west-2:347786937011:targetgroup/demo-cluster-89bp9-aint/8dccde732e304f93-20200922085037525400000005]
DEBUG module.bootstrap.aws_s3_bucket_object.ignition: Destroying... [id=bootstrap.ign]
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[2]: Destruction complete after 0s
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[0]: Destruction complete after 0s
DEBUG module.bootstrap.aws_s3_bucket_object.ignition: Destruction complete after 0s
DEBUG module.bootstrap.aws_lb_target_group_attachment.bootstrap[1]: Destruction complete after 0s
DEBUG module.bootstrap.aws_instance.bootstrap: Destroying... [id=i-0c492467bc75b1b33]
DEBUG module.bootstrap.aws_security_group_rule.bootstrap_journald_gateway: Destruction complete after 0s
DEBUG module.bootstrap.aws_security_group_rule.ssh: Destruction complete after 1s
DEBUG module.bootstrap.aws_iam_role_policy.bootstrap: Destruction complete after 1s
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 10s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 20s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 30s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 40s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 50s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 1m0s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Still destroying... [id=i-0c492467bc75b1b33, 1m10s elapsed]
DEBUG module.bootstrap.aws_instance.bootstrap: Destruction complete after 1m11s
DEBUG module.bootstrap.aws_iam_instance_profile.bootstrap: Destroying... [id=demo-cluster-89bp9-bootstrap-profile]
DEBUG module.bootstrap.aws_security_group.bootstrap: Destroying... [id=sg-0076018975a554624]
DEBUG module.bootstrap.aws_s3_bucket.ignition: Destroying... [id=terraform-20200922084947895000000001]
DEBUG module.bootstrap.aws_s3_bucket.ignition: Destruction complete after 1s
DEBUG module.bootstrap.aws_security_group.bootstrap: Destruction complete after 1s
DEBUG module.bootstrap.aws_iam_instance_profile.bootstrap: Destruction complete after 1s
DEBUG module.bootstrap.aws_iam_role.bootstrap: Destroying... [id=demo-cluster-89bp9-bootstrap-role]
DEBUG module.bootstrap.aws_iam_role.bootstrap: Destruction complete after 1s
DEBUG
DEBUG Warning: Resource targeting is in effect
DEBUG
DEBUG You are creating a plan with the -target option, which means that the result
DEBUG of this plan may not represent all of the changes requested by the current
DEBUG configuration.
DEBUG
DEBUG The -target option is not for routine use, and is provided only for
DEBUG exceptional situations such as recovering from errors or mistakes, or when
DEBUG Terraform specifically suggests to use it as part of an error message.
DEBUG
DEBUG
DEBUG Warning: Applied changes may be incomplete
DEBUG
DEBUG The plan was created with the -target option in effect, so some changes
DEBUG requested in the configuration may have been ignored and the output values may
DEBUG not be fully updated. Run the following command to verify that no other
DEBUG changes are pending:
DEBUG terraform plan
DEBUG
DEBUG Note that the -target option is not suitable for routine use, and is provided
DEBUG only for exceptional situations such as recovering from errors or mistakes, or
DEBUG when Terraform specifically suggests to use it as part of an error message.
DEBUG
DEBUG
DEBUG Destroy complete! Resources: 12 destroyed.
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Using SSH Key loaded from state file
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Using Platform loaded from state file
DEBUG Using Base Domain loaded from state file
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Using Cluster Name loaded from state file
DEBUG Loading Pull Secret...
DEBUG Using Pull Secret loaded from state file
DEBUG Loading Platform...
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
INFO Waiting up to 30m0s for the cluster at https://api.demo-cluster.devcloudedge.com:6443 to initialize...
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 83% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 85% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: downloading update
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 2% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 10% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 12% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 13% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 86% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 87% complete
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 87% complete, waiting on authentication
DEBUG Still waiting for the cluster to initialize: Working towards 4.5.11: 87% complete, waiting on authentication
DEBUG Still waiting for the cluster to initialize: Cluster operator authentication is still updating
DEBUG Still waiting for the cluster to initialize: Multiple errors are preventing progress:
* Cluster operator machine-config is reporting a failure: Failed to resync 4.5.11 because: timed out waiting for the condition during waitForDaemonsetRollout: Daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
* Cluster operator monitoring is reporting a failure: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
INFO Cluster operator authentication Progressing is True with _WellKnownNotReady: Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.0.136.145:6443/.well-known/oauth-authorization-server endpoint data
INFO Cluster operator authentication Available is False with :
INFO Cluster operator dns Progressing is True with Reconciling: At least 1 DNS DaemonSet is progressing.
ERROR Cluster operator etcd Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-145.us-west-2.compute.internal" not ready since 2020-09-22 09:15:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
INFO Cluster operator insights Disabled is False with :
ERROR Cluster operator kube-apiserver Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-145.us-west-2.compute.internal" not ready since 2020-09-22 09:15:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
INFO Cluster operator kube-apiserver Progressing is True with NodeInstaller: NodeInstallerProgressing: 1 nodes are at revision 5; 2 nodes are at revision 6
ERROR Cluster operator kube-controller-manager Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-145.us-west-2.compute.internal" not ready since 2020-09-22 09:15:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
ERROR Cluster operator kube-scheduler Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-145.us-west-2.compute.internal" not ready since 2020-09-22 09:15:52 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
ERROR Cluster operator machine-config Degraded is True with MachineConfigDaemonFailed: Failed to resync 4.5.11 because: timed out waiting for the condition during waitForDaemonsetRollout: Daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
INFO Cluster operator machine-config Available is False with : Cluster not available for 4.5.11
INFO Cluster operator monitoring Available is False with :
INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack.
ERROR Cluster operator monitoring Degraded is True with UpdatingnodeExporterFailed: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
ERROR Cluster operator network Degraded is True with RolloutHung: DaemonSet "openshift-multus/multus" rollout is not making progress - last change 2020-09-22T09:15:52Z
DaemonSet "openshift-sdn/ovs" rollout is not making progress - last change 2020-09-22T09:15:53Z
DaemonSet "openshift-sdn/sdn" rollout is not making progress - last change 2020-09-22T09:15:52Z
INFO Cluster operator network Progressing is True with Deploying: DaemonSet "openshift-multus/multus" is not available (awaiting 1 nodes)
DaemonSet "openshift-sdn/ovs" is not available (awaiting 1 nodes)
DaemonSet "openshift-sdn/sdn" is not available (awaiting 1 nodes)
ERROR Cluster operator openshift-apiserver Degraded is True with APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
FATAL failed to initialize the cluster: Multiple errors are preventing progress:
* Cluster operator machine-config is reporting a failure: Failed to resync 4.5.11 because: timed out waiting for the condition during waitForDaemonsetRollout: Daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
* Cluster operator monitoring is reporting a failure: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)
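Every Degraded condition above points back at the same NotReady control-plane node, ip-10-0-136-145.us-west-2.compute.internal, rather than at the operators themselves. A minimal follow-up check, assuming the kubeconfig written by openshift-install is still in the install directory (the ./auth/kubeconfig path below is an assumption), might look like:

    export KUBECONFIG=./auth/kubeconfig    # assumed location; generated by openshift-install
    oc get nodes                           # confirm which node is NotReady
    oc describe node ip-10-0-136-145.us-west-2.compute.internal
    oc get clusteroperators                # machine-config, monitoring and network should report Degraded=True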