Result: SUCCESS
Tests: 5 failed / 42 succeeded
Started: 2020-09-24 13:46
Elapsed: 1h44m
Work namespace: ci-op-qhbmgt9g
Refs: release-4.4:8547b646, 4204:e833b3e2
Pod: 5b6d7de7-fe6c-11ea-81fc-0a580a810d1a
Repo: openshift/installer
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (41m8s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 1s of 37m30s (0%):

Sep 24 15:01:53.395 E ns/e2e-k8s-service-lb-available-6766 svc/service-test Service stopped responding to GET requests on reused connections
Sep 24 15:01:53.569 I ns/e2e-k8s-service-lb-available-6766 svc/service-test Service started responding to GET requests on reused connections
from junit_upgrade_1600960645.xml

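The --ginkgo.focus value in each repro command above is a Go regular expression matched against the full test name: spaces in the name are written as \s and the trailing $ anchors the end of the name. A minimal, illustrative Go check of that matching (an assumption for illustration, not part of the test harness) is:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the repro command above; \s matches the
	// literal spaces in the test name and $ anchors the end of the name.
	focus := regexp.MustCompile(`Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$`)
	name := "Cluster upgrade Application behind service load balancer with PDB is not disrupted"
	fmt.Println(focus.MatchString(name)) // prints: true
}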


Cluster upgrade Cluster frontend ingress remain available (40m38s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 11s of 40m29s (0%):

Sep 24 14:34:30.402 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 24 14:34:30.754 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 24 14:36:04.402 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 24 14:36:04.757 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 24 14:45:45.402 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 24 14:45:45.744 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 24 14:47:21.402 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 24 14:47:21.404 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 24 14:47:21.768 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 24 14:48:03.402 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 24 14:48:03.742 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 24 14:50:32.402 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 24 14:50:32.742 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 24 14:53:07.402 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 24 14:53:07.738 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 24 14:54:44.402 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 24 14:54:44.749 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 24 14:56:21.402 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 24 14:56:21.732 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 24 14:58:46.402 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 24 14:58:46.750 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
from junit_upgrade_1600960645.xml



Cluster upgrade Kubernetes APIs remain available (40m38s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 40m29s (0%):

Sep 24 14:40:20.445 E kube-apiserver Kube API started failing: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 44.241.8.186:6443: connect: connection refused
Sep 24 14:40:21.210 E kube-apiserver Kube API is not responding to GET requests
Sep 24 14:40:21.293 I kube-apiserver Kube API started responding to GET requests
from junit_upgrade_1600960645.xml



Cluster upgrade OpenShift APIs remain available (40m38s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 40m29s (0%):

Sep 24 15:01:36.553 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 44.238.41.64:6443: connect: connection refused
Sep 24 15:01:36.938 I openshift-apiserver OpenShift API started responding to GET requests
Sep 24 15:03:28.398 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 44.238.41.64:6443: connect: connection refused
Sep 24 15:03:29.210 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 24 15:03:29.301 I openshift-apiserver OpenShift API started responding to GET requests
from junit_upgrade_1600960645.xml



openshift-tests Monitor cluster while tests execute (46m12s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
175 error level events were detected during this test run:

Sep 24 14:31:20.869 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): /localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=15616&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:19.317584       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=17321&timeout=5m49s&timeoutSeconds=349&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:19.318677       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/imageregistry.operator.openshift.io/v1/imagepruners?allowWatchBookmarks=true&resourceVersion=16620&timeout=5m30s&timeoutSeconds=330&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:19.321307       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/consoles?allowWatchBookmarks=true&resourceVersion=19282&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:19.322485       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:6443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=15617&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:19.827822       1 cronjob_controller.go:125] Failed to extract job list: Get https://localhost:6443/apis/batch/v1/jobs?limit=500: dial tcp [::1]:6443: connect: connection refused\nI0924 14:31:19.933711       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0924 14:31:19.933798       1 controllermanager.go:291] leaderelection lost\n
Sep 24 14:31:43.921 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 14:31:47.021 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): nnect: connection refused\nE0924 14:31:46.159415       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=20765&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.161618       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=20445&timeout=8m6s&timeoutSeconds=486&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.162904       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=15617&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.163881       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:6443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=18562&timeout=6m50s&timeoutSeconds=410&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:31:46.645851       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nI0924 14:31:46.645944       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-232-115 stopped leading\nF0924 14:31:46.645975       1 policy_controller.go:94] leaderelection lost\nI0924 14:31:46.645992       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\nI0924 14:31:46.646007       1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller\n
Sep 24 14:31:47.021 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ontroller-manager/secrets?allowWatchBookmarks=true&resourceVersion=20030&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:45.999774       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=20748&timeout=8m55s&timeoutSeconds=535&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.000827       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=20769&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.007423       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=20030&timeout=7m21s&timeoutSeconds=441&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.037337       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=20178&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:31:46.040554       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=20030&timeout=8m58s&timeoutSeconds=538&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:31:46.694694       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0924 14:31:46.694743       1 leaderelection.go:67] leaderelection lost\n
Sep 24 14:35:01.581 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-66c7c464df-mkd52 node/ip-10-0-232-115.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): er-operator", UID:"3d170128-52c6-4923-96d5-f687faf7ab0b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0924 14:32:10.095447       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"3d170128-52c6-4923-96d5-f687faf7ab0b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-232-115.us-west-2.compute.internal -n openshift-kube-apiserver because it was missing\nI0924 14:35:00.665714       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 14:35:00.665801       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0924 14:35:00.665832       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0924 14:35:00.665837       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0924 14:35:00.665851       1 base_controller.go:74] Shutting down InstallerController ...\nI0924 14:35:00.665864       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0924 14:35:00.665880       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0924 14:35:00.665890       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0924 14:35:00.665904       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0924 14:35:00.665966       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nF0924 14:35:00.665990       1 builder.go:243] stopped\nI0924 14:35:00.671288       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\n
Sep 24 14:35:24.665 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6f468d66c6-jxr25 node/ip-10-0-232-115.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): een terminated\nI0924 14:35:23.736234       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0924 14:35:23.736238       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0924 14:35:23.736244       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0924 14:35:23.736244       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0924 14:35:23.736258       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0924 14:35:23.736268       1 base_controller.go:39] All PruneController workers have been terminated\nI0924 14:35:23.736273       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0924 14:35:23.736278       1 base_controller.go:39] All RevisionController workers have been terminated\nI0924 14:35:23.736280       1 base_controller.go:49] Shutting down worker of  controller ...\nI0924 14:35:23.736285       1 base_controller.go:39] All  workers have been terminated\nI0924 14:35:23.736295       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0924 14:35:23.736298       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0924 14:35:23.736300       1 base_controller.go:39] All InstallerController workers have been terminated\nI0924 14:35:23.736303       1 base_controller.go:39] All NodeController workers have been terminated\nI0924 14:35:23.736311       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0924 14:35:23.736328       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0924 14:35:23.736333       1 base_controller.go:39] All InstallerStateController workers have been terminated\nF0924 14:35:23.736383       1 builder.go:209] server exited\nI0924 14:35:23.740229       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\n
Sep 24 14:36:18.500 E ns/openshift-machine-api pod/machine-api-operator-556654dc66-hglfp node/ip-10-0-177-150.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 24 14:38:07.631 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 14:40:27.525 E ns/openshift-kube-storage-version-migrator pod/migrator-7595f64984-x2lzx node/ip-10-0-134-21.us-west-2.compute.internal container=migrator container exited with code 2 (Error): I0924 14:31:06.947206       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 24 14:40:56.618 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-177-150.us-west-2.compute.internal node/ip-10-0-177-150.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): tch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:55.419812       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=25504&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:55.424493       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=25506&timeout=9m6s&timeoutSeconds=546&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:55.426755       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=23548&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:55.427952       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23548&timeout=5m58s&timeoutSeconds=358&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:55.436738       1 reflector.go:307] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: Get https://localhost:6443/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=24386&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:40:56.102955       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0924 14:40:56.102991       1 policy_controller.go:94] leaderelection lost\nI0924 14:40:56.107147       1 resource_quota_controller.go:290] Shutting down resource quota controller\n
Sep 24 14:40:56.619 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-150.us-west-2.compute.internal node/ip-10-0-177-150.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 14:40:58.623 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-177-150.us-west-2.compute.internal node/ip-10-0-177-150.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): hift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=24764&timeoutSeconds=582&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:57.369655       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=23416&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:57.372686       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=23416&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:57.373693       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=23416&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:57.375262       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=23416&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:40:57.377393       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=25764&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:40:57.798703       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0924 14:40:57.798747       1 leaderelection.go:67] leaderelection lost\n
Sep 24 14:43:29.406 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error): 4 14:43:28.875828       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=20134&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.877122       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=20125&timeout=6m15s&timeoutSeconds=375&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.880095       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=26269&timeout=5m12s&timeoutSeconds=312&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.881995       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23725&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.884942       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=21365&timeout=7m23s&timeoutSeconds=443&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:43:29.261541       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0924 14:43:29.261636       1 server.go:257] leaderelection lost\n
Sep 24 14:43:29.488 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): 9s&timeoutSeconds=449&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.047258       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=20781&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.048327       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=20125&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.049415       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=20129&timeout=6m2s&timeoutSeconds=362&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.051005       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/featuregates?allowWatchBookmarks=true&resourceVersion=20781&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:28.051995       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/servicecas?allowWatchBookmarks=true&resourceVersion=20784&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:43:28.642017       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0924 14:43:28.642097       1 controllermanager.go:291] leaderelection lost\n
Sep 24 14:43:33.628 E ns/openshift-cluster-machine-approver pod/machine-approver-f55dd8569-k6jp5 node/ip-10-0-232-115.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:27.095676       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:28.096243       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:29.096787       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:30.097283       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:31.097809       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 14:38:32.098326       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 24 14:43:52.050 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-786d7565dc-b7p8l node/ip-10-0-232-115.us-west-2.compute.internal container=operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-786d7565dc-b7p8l_48363b4b-4467-4c90-be8a-b05d2083514b/operator/0.log": lstat /var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-786d7565dc-b7p8l_48363b4b-4467-4c90-be8a-b05d2083514b/operator/0.log: no such file or directory
Sep 24 14:43:53.834 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 14:43:55.843 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): 4.995956       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=26122&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:55.001124       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=20125&timeout=7m16s&timeoutSeconds=436&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:55.002178       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=26261&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:55.006018       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=20128&timeout=7m21s&timeoutSeconds=441&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 14:43:55.013760       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=26806&timeout=5m44s&timeoutSeconds=344&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 14:43:55.359665       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0924 14:43:55.359708       1 policy_controller.go:94] leaderelection lost\nI0924 14:43:55.366442       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-173-108 stopped leading\n
Sep 24 14:44:24.353 E ns/openshift-monitoring pod/telemeter-client-77f96bcd77-bkr5t node/ip-10-0-151-127.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Sep 24 14:44:24.353 E ns/openshift-monitoring pod/telemeter-client-77f96bcd77-bkr5t node/ip-10-0-151-127.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Sep 24 14:44:27.391 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-127.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/24 14:25:20 Watching directory: "/etc/alertmanager/config"\n
Sep 24 14:44:27.391 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-127.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/24 14:26:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:26:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/24 14:26:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/24 14:26:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/24 14:26:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/24 14:26:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:26:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0924 14:26:55.686107       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/24 14:26:55 http.go:107: HTTPS: listening on [::]:9095\n
Sep 24 14:44:32.463 E ns/openshift-controller-manager pod/controller-manager-4n25k node/ip-10-0-177-150.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0924 14:19:08.993729       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-111-gb28647e)\nI0924 14:19:08.996208       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.build02.ci.openshift.org/ci-op-qhbmgt9g/stable-initial@sha256:254af6bfa713baa27c7274ec4cebc6ffef6b4c56372c68bc5e1dea4aac828687"\nI0924 14:19:08.996351       1 controller_manager.go:56] Build controller using images from "registry.build02.ci.openshift.org/ci-op-qhbmgt9g/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0924 14:19:08.996311       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0924 14:19:08.997158       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Sep 24 14:44:40.867 E ns/openshift-monitoring pod/node-exporter-z7xfl node/ip-10-0-232-115.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -24T14:18:38Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-24T14:18:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 24 14:44:45.559 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-127.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-24T14:44:24.759Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-24T14:44:24.767Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-24T14:44:24.767Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-24T14:44:24.768Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-24T14:44:24.768Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-24T14:44:24.768Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-24T14:44:24.769Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-24T14:44:24.769Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-24
Sep 24 14:44:52.519 E ns/openshift-console-operator pod/console-operator-965469794-tnl4q node/ip-10-0-177-150.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): xpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816536       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816548       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816554       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816570       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816583       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816595       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816602       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816629       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816643       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:43:17.816661       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 14:44:51.823773       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 14:44:51.824505       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0924 14:44:51.824552       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0924 14:44:51.824587       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0924 14:44:51.824614       1 controller.go:70] Shutting down Console\nI0924 14:44:51.824643       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0924 14:44:51.824672       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0924 14:44:51.824696       1 controller.go:138] shutting down ConsoleServiceSyncController\nF0924 14:44:51.824709       1 builder.go:243] stopped\n
Sep 24 14:44:55.560 E ns/openshift-monitoring pod/node-exporter-9t9rn node/ip-10-0-151-127.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -24T14:22:57Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-24T14:22:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 24 14:45:08.945 E ns/openshift-marketplace pod/redhat-marketplace-6cb568584c-jr57c node/ip-10-0-134-21.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Sep 24 14:45:22.976 E ns/openshift-marketplace pod/certified-operators-86cc8d4456-zzzrg node/ip-10-0-134-21.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Sep 24 14:45:30.428 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-224-214.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-24T14:45:25.291Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-24T14:45:25.294Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-24T14:45:25.296Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-24T14:45:25.297Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-24T14:45:25.297Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-24T14:45:25.297Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-24T14:45:25.298Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-24T14:45:25.298Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-24T14:45:25.298Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-24T14:45:25.301Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-24T14:45:25.301Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-24
Sep 24 14:46:45.353 E ns/openshift-sdn pod/sdn-controller-cjm6r node/ip-10-0-232-115.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0924 14:13:30.334644       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0924 14:17:54.361706       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Sep 24 14:46:53.857 E ns/openshift-sdn pod/sdn-controller-pbgmz node/ip-10-0-177-150.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): cated netid 4227437 for namespace "openshift-ingress"\nI0924 14:22:00.467770       1 subnets.go:149] Created HostSubnet ip-10-0-134-21.us-west-2.compute.internal (host: "ip-10-0-134-21.us-west-2.compute.internal", ip: "10.0.134.21", subnet: "10.131.0.0/23")\nI0924 14:22:01.025202       1 subnets.go:149] Created HostSubnet ip-10-0-151-127.us-west-2.compute.internal (host: "ip-10-0-151-127.us-west-2.compute.internal", ip: "10.0.151.127", subnet: "10.128.2.0/23")\nI0924 14:22:25.389722       1 subnets.go:149] Created HostSubnet ip-10-0-224-214.us-west-2.compute.internal (host: "ip-10-0-224-214.us-west-2.compute.internal", ip: "10.0.224.214", subnet: "10.129.2.0/23")\nI0924 14:31:18.263370       1 vnids.go:115] Allocated netid 14809905 for namespace "e2e-k8s-sig-apps-job-upgrade-2829"\nI0924 14:31:18.276431       1 vnids.go:115] Allocated netid 9869223 for namespace "e2e-check-for-critical-alerts-9001"\nI0924 14:31:18.292568       1 vnids.go:115] Allocated netid 16686453 for namespace "e2e-kubernetes-api-available-5374"\nI0924 14:31:18.310893       1 vnids.go:115] Allocated netid 11555328 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-1693"\nI0924 14:31:18.322122       1 vnids.go:115] Allocated netid 8818719 for namespace "e2e-openshift-api-available-5073"\nI0924 14:31:18.332141       1 vnids.go:115] Allocated netid 14902067 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6666"\nI0924 14:31:18.362132       1 vnids.go:115] Allocated netid 16702295 for namespace "e2e-k8s-sig-apps-deployment-upgrade-7827"\nI0924 14:31:18.442723       1 vnids.go:115] Allocated netid 10582848 for namespace "e2e-k8s-service-lb-available-6766"\nI0924 14:31:18.449603       1 vnids.go:115] Allocated netid 10753313 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8804"\nI0924 14:31:18.468713       1 vnids.go:115] Allocated netid 16439019 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4916"\nI0924 14:31:18.476786       1 vnids.go:115] Allocated netid 9386320 for namespace "e2e-frontend-ingress-available-4912"\n
Sep 24 14:47:00.715 E ns/openshift-sdn pod/sdn-controller-6vblx node/ip-10-0-173-108.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0924 14:13:55.114789       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0924 14:17:54.350478       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Sep 24 14:47:13.414 E ns/openshift-multus pod/multus-h75b6 node/ip-10-0-134-21.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 24 14:47:52.870 E ns/openshift-multus pod/multus-admission-controller-jn9sz node/ip-10-0-173-108.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 24 14:48:08.154 E ns/openshift-multus pod/multus-d5nh4 node/ip-10-0-177-150.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 24 14:48:33.698 E ns/openshift-multus pod/multus-admission-controller-8rbnl node/ip-10-0-232-115.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 24 14:49:06.032 E ns/openshift-multus pod/multus-5vdfm node/ip-10-0-224-214.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 24 14:49:59.047 E ns/openshift-multus pod/multus-wrx9d node/ip-10-0-232-115.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 24 14:52:00.604 E ns/openshift-multus pod/multus-28zhz node/ip-10-0-173-108.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 24 14:52:43.464 E ns/openshift-machine-config-operator pod/machine-config-operator-7599cd47dd-tprd6 node/ip-10-0-232-115.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): nalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0924 14:14:15.521841       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0924 14:14:16.553504       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0924 14:14:20.537242       1 sync.go:61] [init mode] synced RenderConfig in 5.443177339s\nI0924 14:14:20.783059       1 sync.go:61] [init mode] synced MachineConfigPools in 245.638662ms\nI0924 14:14:51.749578       1 sync.go:61] [init mode] synced MachineConfigDaemon in 30.966487156s\nI0924 14:15:09.790814       1 sync.go:61] [init mode] synced MachineConfigController in 18.041202127s\nI0924 14:15:27.891606       1 sync.go:61] [init mode] synced MachineConfigServer in 18.100754271s\nI0924 14:15:47.901480       1 sync.go:61] [init mode] synced RequiredPools in 20.009836442s\nI0924 14:15:47.924726       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"c32edc83-6654-416b-ac48-98d0681e3dd8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 4.4.0-0.ci.test-2020-09-24-134645-ci-op-qhbmgt9g}]\nI0924 14:15:48.104384       1 sync.go:89] Initialization complete\nE0924 14:17:54.374978       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Sep 24 14:54:38.754 E ns/openshift-machine-config-operator pod/machine-config-daemon-x99wh node/ip-10-0-224-214.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 24 14:54:45.391 E ns/openshift-machine-config-operator pod/machine-config-daemon-x56cp node/ip-10-0-177-150.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 24 14:54:55.502 E ns/openshift-machine-config-operator pod/machine-config-daemon-76m8f node/ip-10-0-134-21.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 24 14:55:14.899 E ns/openshift-machine-config-operator pod/machine-config-daemon-bszv6 node/ip-10-0-151-127.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 24 14:55:49.160 E ns/openshift-machine-config-operator pod/machine-config-controller-77c9787f58-7275v node/ip-10-0-173-108.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): ft.io/desiredConfig = rendered-worker-348fec2c5f06660e0ffcd0f4c8f3aa23\nI0924 14:23:22.182024       1 node_controller.go:452] Pool worker: node ip-10-0-151-127.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0924 14:23:36.133718       1 node_controller.go:452] Pool worker: node ip-10-0-224-214.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-348fec2c5f06660e0ffcd0f4c8f3aa23\nI0924 14:23:36.133735       1 node_controller.go:452] Pool worker: node ip-10-0-224-214.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-348fec2c5f06660e0ffcd0f4c8f3aa23\nI0924 14:23:36.133740       1 node_controller.go:452] Pool worker: node ip-10-0-224-214.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0924 14:23:40.603754       1 node_controller.go:435] Pool worker: node ip-10-0-134-21.us-west-2.compute.internal is now reporting ready\nI0924 14:23:41.191329       1 node_controller.go:435] Pool worker: node ip-10-0-151-127.us-west-2.compute.internal is now reporting ready\nI0924 14:23:55.588342       1 node_controller.go:435] Pool worker: node ip-10-0-224-214.us-west-2.compute.internal is now reporting ready\nI0924 14:24:26.979269       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0924 14:24:27.027316       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0924 14:31:08.831493       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0924 14:31:09.021556       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0924 14:43:20.207751       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0924 14:43:20.262516       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Sep 24 14:57:23.753 E ns/openshift-machine-config-operator pod/machine-config-server-pc6t6 node/ip-10-0-177-150.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0924 14:15:11.822161       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-32-g287dd2cf-dirty (287dd2cfa692ecbbce7b3bc1913b99b3e2d2f5c7)\nI0924 14:15:11.822996       1 api.go:56] Launching server on :22624\nI0924 14:15:11.823077       1 api.go:56] Launching server on :22623\nI0924 14:19:23.465953       1 api.go:102] Pool worker requested by 10.0.214.78:12753\nI0924 14:19:24.377846       1 api.go:102] Pool worker requested by 10.0.214.78:9852\nI0924 14:19:29.018531       1 api.go:102] Pool worker requested by 10.0.214.78:60331\n
Sep 24 14:57:26.450 E ns/openshift-machine-config-operator pod/machine-config-server-4rnqh node/ip-10-0-173-108.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0924 14:15:12.479176       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-32-g287dd2cf-dirty (287dd2cfa692ecbbce7b3bc1913b99b3e2d2f5c7)\nI0924 14:15:12.480358       1 api.go:56] Launching server on :22624\nI0924 14:15:12.480427       1 api.go:56] Launching server on :22623\n
Sep 24 14:57:34.217 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-127.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/24 14:44:44 Watching directory: "/etc/alertmanager/config"\n
Sep 24 14:57:34.217 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-127.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/24 14:44:44 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:44:44 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/24 14:44:44 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/24 14:44:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/24 14:44:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/24 14:44:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:44:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/24 14:44:44 http.go:107: HTTPS: listening on [::]:9095\nI0924 14:44:44.979428       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 24 14:57:34.664 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-78f95tqjt node/ip-10-0-173-108.us-west-2.compute.internal container=operator container exited with code 255 (Error): ]\nI0924 14:54:44.468743       1 httplog.go:90] GET /metrics: (6.755909ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:54:53.161488       1 httplog.go:90] GET /metrics: (1.529789ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:55:14.467272       1 httplog.go:90] GET /metrics: (5.292103ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:55:15.886305       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Proxy total 0 items received\nI0924 14:55:23.162699       1 httplog.go:90] GET /metrics: (2.468706ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:55:44.467772       1 httplog.go:90] GET /metrics: (5.805131ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:55:53.161665       1 httplog.go:90] GET /metrics: (1.718578ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:56:14.468076       1 httplog.go:90] GET /metrics: (5.87759ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:56:23.161621       1 httplog.go:90] GET /metrics: (1.700311ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:56:44.468148       1 httplog.go:90] GET /metrics: (6.160689ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:56:53.161697       1 httplog.go:90] GET /metrics: (1.749238ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:57:14.470017       1 httplog.go:90] GET /metrics: (8.053057ms) 200 [Prometheus/2.15.2 10.128.2.19:39494]\nI0924 14:57:22.882366       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Namespace total 0 items received\nI0924 14:57:23.161705       1 httplog.go:90] GET /metrics: (1.689725ms) 200 [Prometheus/2.15.2 10.129.2.29:43082]\nI0924 14:57:33.406243       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 14:57:33.406716       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0924 14:57:33.406798       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0924 14:57:33.406879       1 builder.go:243] stopped\n
Sep 24 14:57:37.611 E ns/openshift-authentication-operator pod/authentication-operator-ccb6dff64-s6jrk node/ip-10-0-173-108.us-west-2.compute.internal container=operator container exited with code 255 (Error): 53656ec9-9a63-4de6-8c7c-98440d73073e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/authentication version "oauth-openshift" changed from "4.4.0-0.ci.test-2020-09-24-134645-ci-op-qhbmgt9g_openshift" to "4.4.0-0.ci.test-2020-09-24-135838-ci-op-qhbmgt9g_openshift"\nI0924 14:44:38.320868       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"53656ec9-9a63-4de6-8c7c-98440d73073e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nI0924 14:44:38.321143       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-24T14:24:24Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-24T14:44:38Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-24T14:30:05Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-24T14:14:15Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0924 14:44:38.326610       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"53656ec9-9a63-4de6-8c7c-98440d73073e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nI0924 14:57:36.309837       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 14:57:36.310317       1 logging_controller.go:93] Shutting down LogLevelController\nI0924 14:57:36.310344       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0924 14:57:36.310355       1 builder.go:210] server exited\n
Sep 24 14:57:53.968 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-134-21.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-24T14:57:48.206Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-24T14:57:48.211Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-24T14:57:48.212Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-24T14:57:48.212Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-24T14:57:48.212Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-24T14:57:48.212Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-24T14:57:48.213Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-24T14:57:48.213Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-24T14:57:48.213Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-24T14:57:48.214Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-24T14:57:48.214Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-24
Sep 24 14:59:07.937 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 24 15:00:12.337 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Sep 24 15:00:13.570 E ns/openshift-cluster-node-tuning-operator pod/tuned-9tv7m node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.570 E ns/openshift-image-registry pod/node-ca-2h72j node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.577 E ns/openshift-monitoring pod/node-exporter-nf7xv node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.588 E ns/openshift-sdn pod/sdn-226ds node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.600 E ns/openshift-sdn pod/ovs-pxzft node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.611 E ns/openshift-multus pod/multus-kg69h node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.623 E ns/openshift-dns pod/dns-default-h9fcw node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:13.640 E ns/openshift-machine-config-operator pod/machine-config-daemon-42zvw node/ip-10-0-151-127.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:22.959 E ns/openshift-machine-config-operator pod/machine-config-daemon-42zvw node/ip-10-0-151-127.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:00:32.956 E ns/openshift-monitoring pod/grafana-64c46d5fb-946kw node/ip-10-0-224-214.us-west-2.compute.internal container=grafana container exited with code 1 (Error): 
Sep 24 15:00:32.956 E ns/openshift-monitoring pod/grafana-64c46d5fb-946kw node/ip-10-0-224-214.us-west-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Sep 24 15:00:37.809 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-173-108.us-west-2.compute.internal" not ready since 2020-09-24 14:58:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-173-108.us-west-2.compute.internal is unhealthy
Sep 24 15:00:37.817 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-173-108.us-west-2.compute.internal" not ready since 2020-09-24 14:58:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 24 15:00:37.857 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-173-108.us-west-2.compute.internal" not ready since 2020-09-24 14:58:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 24 15:00:37.857 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-173-108.us-west-2.compute.internal" not ready since 2020-09-24 14:58:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 24 15:00:44.586 E ns/openshift-controller-manager pod/controller-manager-85s9x node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.598 E ns/openshift-cluster-node-tuning-operator pod/tuned-stn69 node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.615 E ns/openshift-monitoring pod/node-exporter-22xxh node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.628 E ns/openshift-image-registry pod/node-ca-x84sm node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.644 E ns/openshift-sdn pod/ovs-w7mpf node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.654 E ns/openshift-sdn pod/sdn-controller-vdgkg node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.669 E ns/openshift-sdn pod/sdn-jcqsj node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.683 E ns/openshift-multus pod/multus-wnk9k node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.693 E ns/openshift-dns pod/dns-default-m6vnv node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.704 E ns/openshift-multus pod/multus-admission-controller-s2dmh node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.715 E ns/openshift-machine-config-operator pod/machine-config-daemon-csbwk node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:44.747 E ns/openshift-machine-config-operator pod/machine-config-server-btx2v node/ip-10-0-173-108.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:00:52.116 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-127.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-24T15:00:44.914Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-24T15:00:44.919Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-24T15:00:44.920Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-24T15:00:44.921Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-24T15:00:44.921Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-24T15:00:44.921Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-24T15:00:44.922Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-24T15:00:44.926Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-24
Sep 24 15:00:55.585 E ns/openshift-machine-config-operator pod/machine-config-daemon-csbwk node/ip-10-0-173-108.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:00:57.269 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-173-108.us-west-2.compute.internal is unhealthy
Sep 24 15:01:30.507 E ns/openshift-console pod/console-6648c88454-dgtw8 node/ip-10-0-177-150.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-09-24T14:45:10Z cmd/main: cookies are secure!\n2020-09-24T14:45:10Z cmd/main: Binding to [::]:8443...\n2020-09-24T14:45:10Z cmd/main: using TLS\n2020-09-24T14:58:31Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Sep 24 15:02:35.429 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 24 15:03:02.168 E ns/openshift-cluster-node-tuning-operator pod/tuned-c52km node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.181 E ns/openshift-monitoring pod/node-exporter-2pd7x node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.195 E ns/openshift-image-registry pod/node-ca-64b68 node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.210 E ns/openshift-sdn pod/sdn-6xt2q node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.231 E ns/openshift-sdn pod/ovs-kzx2h node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.250 E ns/openshift-multus pod/multus-ng99g node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.263 E ns/openshift-machine-config-operator pod/machine-config-daemon-9fh74 node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:02.264 E ns/openshift-dns pod/dns-default-sjgm7 node/ip-10-0-224-214.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:03:12.117 E ns/openshift-machine-config-operator pod/machine-config-daemon-9fh74 node/ip-10-0-224-214.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:03:20.280 E ns/openshift-monitoring pod/thanos-querier-c6c66c8f7-bljnm node/ip-10-0-134-21.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): hproxy.go:774: basicauth: 10.130.0.40:46810 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:47:59 oauthproxy.go:774: basicauth: 10.130.0.40:49390 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:48:59 oauthproxy.go:774: basicauth: 10.130.0.40:50168 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:49:59 oauthproxy.go:774: basicauth: 10.130.0.40:50786 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:50:59 oauthproxy.go:774: basicauth: 10.130.0.40:51546 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:51:59 oauthproxy.go:774: basicauth: 10.130.0.40:52214 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:52:59 oauthproxy.go:774: basicauth: 10.130.0.40:52976 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:53:59 oauthproxy.go:774: basicauth: 10.130.0.40:53692 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:54:59 oauthproxy.go:774: basicauth: 10.130.0.40:54456 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:55:59 oauthproxy.go:774: basicauth: 10.130.0.40:55286 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 14:59:42 oauthproxy.go:774: basicauth: 10.129.0.71:53084 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 15:00:42 oauthproxy.go:774: basicauth: 10.129.0.71:54280 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 15:02:16 oauthproxy.go:774: basicauth: 10.130.0.19:49754 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/24 15:03:16 oauthproxy.go:774: basicauth: 10.130.0.19:54824 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 24 15:03:21.358 E ns/openshift-monitoring pod/telemeter-client-66787d4566-dd998 node/ip-10-0-134-21.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Sep 24 15:03:21.358 E ns/openshift-monitoring pod/telemeter-client-66787d4566-dd998 node/ip-10-0-134-21.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Sep 24 15:03:21.420 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-21.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/24 14:57:38 Watching directory: "/etc/alertmanager/config"\n
Sep 24 15:03:21.420 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-21.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/24 14:57:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:57:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/24 14:57:38 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/24 14:57:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/24 14:57:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/24 14:57:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:57:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0924 14:57:38.328005       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/24 14:57:38 http.go:107: HTTPS: listening on [::]:9095\n
Sep 24 15:03:21.432 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-21.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/24 14:44:24 Watching directory: "/etc/alertmanager/config"\n
Sep 24 15:03:21.432 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-21.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/24 14:44:26 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:44:26 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/24 14:44:26 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/24 14:44:26 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/24 14:44:26 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/24 14:44:26 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/24 14:44:26 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0924 14:44:26.067152       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/24 14:44:26 http.go:107: HTTPS: listening on [::]:9095\n
Sep 24 15:03:22.454 E ns/openshift-monitoring pod/prometheus-adapter-75b95bffdc-bzmbf node/ip-10-0-134-21.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0924 14:44:19.729854       1 adapter.go:93] successfully using in-cluster auth\nI0924 14:44:21.003028       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 24 15:03:22.492 E ns/openshift-marketplace pod/redhat-operators-68dd8fdfdf-pmwz2 node/ip-10-0-134-21.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Sep 24 15:03:27.776 E kube-apiserver Kube API started failing: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Sep 24 15:03:27.974 E kube-apiserver failed contacting the API: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=41125&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp 44.238.41.64:6443: connect: connection refused
Sep 24 15:03:34.653 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-224-214.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-24T15:03:33.197Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-24T15:03:33.203Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-24T15:03:33.204Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-24T15:03:33.205Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-24T15:03:33.205Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-24T15:03:33.205Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-24T15:03:33.206Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-24T15:03:33.206Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-24
Sep 24 15:04:03.994 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 15:04:13.109 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): -controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=41133&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.871579       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=39876&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.872721       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=40905&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.876603       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=37752&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.877537       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=34529&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.878581       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=39876&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:04:12.560220       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0924 15:04:12.560270       1 leaderelection.go:67] leaderelection lost\n
Sep 24 15:04:13.109 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): /networkpolicies?allowWatchBookmarks=true&resourceVersion=36805&timeout=6m29s&timeoutSeconds=389&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.839145       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=35643&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.840828       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=41104&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.841492       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:6443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=41135&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.843269       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=36805&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:11.844370       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=39819&timeout=8m50s&timeoutSeconds=530&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:04:11.956869       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0924 15:04:11.956915       1 policy_controller.go:94] leaderelection lost\n
Sep 24 15:04:18.907 E ns/openshift-image-registry pod/node-ca-ptwjh node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.925 E ns/openshift-controller-manager pod/controller-manager-srzvg node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.946 E ns/openshift-cluster-node-tuning-operator pod/tuned-jjlmv node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.965 E ns/openshift-monitoring pod/node-exporter-7fff4 node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.975 E ns/openshift-sdn pod/sdn-controller-kqt45 node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.987 E ns/openshift-multus pod/multus-admission-controller-z8rd8 node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:18.998 E ns/openshift-sdn pod/sdn-8vqpv node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.010 E ns/openshift-sdn pod/ovs-pn88r node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.024 E ns/openshift-multus pod/multus-dxf99 node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.033 E ns/openshift-dns pod/dns-default-vqqmw node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.058 E ns/openshift-machine-config-operator pod/machine-config-daemon-jjd87 node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.069 E ns/openshift-machine-config-operator pod/machine-config-server-kxg9g node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:19.081 E ns/openshift-cluster-version pod/cluster-version-operator-84cd54b5d9-285sd node/ip-10-0-177-150.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:04:29.916 E ns/openshift-machine-config-operator pod/machine-config-daemon-jjd87 node/ip-10-0-177-150.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:04:31.866 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-177-150.us-west-2.compute.internal is unhealthy
Sep 24 15:04:40.186 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6d5b86d974-cbfr7 node/ip-10-0-232-115.us-west-2.compute.internal container=operator container exited with code 255 (Error): tools/cache/reflector.go:105\nI0924 15:03:46.669128       1 request.go:565] Throttling request took 156.92841ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0924 15:03:46.869163       1 request.go:565] Throttling request took 195.864577ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0924 15:03:54.191237       1 httplog.go:90] GET /metrics: (6.260181ms) 200 [Prometheus/2.15.2 10.129.2.20:32818]\nI0924 15:03:54.641445       1 httplog.go:90] GET /metrics: (1.660888ms) 200 [Prometheus/2.15.2 10.128.2.17:38994]\nI0924 15:04:06.669476       1 request.go:565] Throttling request took 155.994425ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0924 15:04:06.869463       1 request.go:565] Throttling request took 195.177542ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0924 15:04:24.188142       1 httplog.go:90] GET /metrics: (5.932445ms) 200 [Prometheus/2.15.2 10.129.2.20:32818]\nI0924 15:04:24.641475       1 httplog.go:90] GET /metrics: (1.71901ms) 200 [Prometheus/2.15.2 10.128.2.17:38994]\nI0924 15:04:26.668794       1 request.go:565] Throttling request took 156.982958ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0924 15:04:26.868787       1 request.go:565] Throttling request took 196.117946ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0924 15:04:36.273226       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 15:04:36.273716       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0924 15:04:36.277378       1 builder.go:243] stopped\n
Sep 24 15:04:40.393 E ns/openshift-cluster-machine-approver pod/machine-approver-6479fc5697-n7xb6 node/ip-10-0-232-115.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:27.768314       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:28.768800       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:29.769307       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:30.769894       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:31.770410       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0924 15:04:32.770981       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 24 15:04:40.456 E ns/openshift-authentication-operator pod/authentication-operator-ccb6dff64-r4fst node/ip-10-0-232-115.us-west-2.compute.internal container=operator container exited with code 255 (Error): ed","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-24T15:01:26Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-24T14:30:05Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-24T14:14:15Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0924 15:02:39.084126       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"53656ec9-9a63-4de6-8c7c-98440d73073e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.128.0.97:44570->172.30.0.10:53: i/o timeout" to ""\nI0924 15:04:35.619290       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 15:04:35.619475       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0924 15:04:35.619537       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0924 15:04:35.619548       1 logging_controller.go:93] Shutting down LogLevelController\nI0924 15:04:35.619559       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0924 15:04:35.619569       1 controller.go:70] Shutting down AuthenticationOperator2\nI0924 15:04:35.619591       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0924 15:04:35.619601       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0924 15:04:35.619610       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0924 15:04:35.619619       1 ingress_state_controller.go:157] Shutting down IngressStateController\nF0924 15:04:35.619705       1 builder.go:243] stopped\n
Sep 24 15:04:41.474 E ns/openshift-insights pod/insights-operator-dbb67956-9zj5m node/ip-10-0-232-115.us-west-2.compute.internal container=operator container exited with code 2 (Error): ng: unexpected EOF\nI0924 15:03:27.726477       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0924 15:03:27.726982       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0924 15:03:27.735043       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0924 15:03:28.415891       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 34252 (40397)\nI0924 15:03:28.415926       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 37383 (40397)\nI0924 15:03:29.416047       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0924 15:03:29.416417       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0924 15:03:40.144625       1 httplog.go:90] GET /metrics: (6.309873ms) 200 [Prometheus/2.15.2 10.128.2.17:57666]\nI0924 15:03:56.369556       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0924 15:03:56.373191       1 status.go:314] The operator is healthy\nI0924 15:03:56.374800       1 configobserver.go:93] Found cloud.openshift.com token\nI0924 15:03:56.374820       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0924 15:03:56.404818       1 httplog.go:90] GET /metrics: (5.563321ms) 200 [Prometheus/2.15.2 10.129.2.20:56886]\nI0924 15:04:10.144682       1 httplog.go:90] GET /metrics: (6.5256ms) 200 [Prometheus/2.15.2 10.128.2.17:57666]\nI0924 15:04:26.405052       1 httplog.go:90] GET /metrics: (8.997829ms) 200 [Prometheus/2.15.2 10.129.2.20:56886]\n
Sep 24 15:04:41.716 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-115.us-west-2.compute.internal node/ip-10-0-232-115.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error): eflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=40656&timeout=7m33s&timeoutSeconds=453&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:32.832138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=39819&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:32.887711       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=36378&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:32.891316       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=37445&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:04:39.493633       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0924 15:04:40.761966       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod 1564462f-5911-4dd6-b801-8b6c5299bb09 is not added to scheduler cache, so cannot be updated\nE0924 15:04:40.762595       1 cache.go:444] Pod f0b77dae-0c28-4aed-a92c-1812fa3964f0 updated on a different node than previously added to.\nF0924 15:04:40.762685       1 cache.go:445] Schedulercache is corrupted and can badly affect scheduling decisions\n
Sep 24 15:04:41.852 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-78f9hfgpf node/ip-10-0-232-115.us-west-2.compute.internal container=operator container exited with code 255 (Error): .go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780293       1 reflector.go:185] Listing and watching *v1.Proxy from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780407       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780419       1 reflector.go:185] Listing and watching *v1.Deployment from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780924       1 reflector.go:185] Listing and watching *v1.ServiceCatalogControllerManager from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780976       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780963       1 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:28.780970       1 reflector.go:185] Listing and watching *v1.ServiceAccount from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0924 15:03:41.903745       1 httplog.go:90] GET /metrics: (6.065626ms) 200 [Prometheus/2.15.2 10.129.2.20:32794]\nI0924 15:03:46.808175       1 httplog.go:90] GET /metrics: (5.680725ms) 200 [Prometheus/2.15.2 10.128.2.17:54886]\nI0924 15:04:11.899302       1 httplog.go:90] GET /metrics: (6.265976ms) 200 [Prometheus/2.15.2 10.129.2.20:32794]\nI0924 15:04:16.804393       1 httplog.go:90] GET /metrics: (1.998662ms) 200 [Prometheus/2.15.2 10.128.2.17:54886]\nI0924 15:04:35.919040       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0924 15:04:35.919448       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0924 15:04:35.919759       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0924 15:04:35.924686       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0924 15:04:35.924710       1 builder.go:243] stopped\n
Sep 24 15:04:44.345 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7499c685fd-d24dj node/ip-10-0-232-115.us-west-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): IVersion:"v1", ResourceVersion:"37178", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 042ecb5f-1c33-4282-b0e8-4dd9131d7cfe became leader\nI0924 14:59:04.300422       1 logging_controller.go:83] Starting LogLevelController\nI0924 14:59:04.300467       1 status_controller.go:199] Starting StatusSyncer-kube-storage-version-migrator\nI0924 14:59:04.300471       1 controller.go:109] Starting KubeStorageVersionMigratorOperator\nE0924 15:01:35.335455       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=36429&timeout=6m5s&timeoutSeconds=365&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0924 15:03:18.435317       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"bf0e4b3e-c01d-40d7-8e40-acd60f5ae359", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0924 15:03:31.608885       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"bf0e4b3e-c01d-40d7-8e40-acd60f5ae359", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0924 15:04:41.920618       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0924 15:04:41.920719       1 leaderelection.go:66] leaderelection lost\n
Sep 24 15:04:47.474 E ns/openshift-monitoring pod/thanos-querier-c6c66c8f7-xh7jg node/ip-10-0-232-115.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/09/24 15:00:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/24 15:00:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/24 15:00:38 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/24 15:00:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/24 15:00:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/24 15:00:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/24 15:00:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/24 15:00:38 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/24 15:00:38 http.go:107: HTTPS: listening on [::]:9091\nI0924 15:00:38.644233       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/24 15:01:17 oauthproxy.go:774: basicauth: 10.130.0.19:43956 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 24 15:05:03.328 E ns/openshift-console pod/console-6648c88454-6n7sv node/ip-10-0-232-115.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-09-24T14:45:20Z cmd/main: cookies are secure!\n2020-09-24T14:45:20Z cmd/main: Binding to [::]:8443...\n2020-09-24T14:45:20Z cmd/main: using TLS\n
Sep 24 15:05:16.837 E ns/openshift-operator-lifecycle-manager pod/packageserver-c98c5c996-4864w node/ip-10-0-177-150.us-west-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-09-24T15:05:16Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Sep 24 15:05:19.700 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-79b54f4f55-8c856 node/ip-10-0-177-150.us-west-2.compute.internal container=manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-09-24T15:05:18Z" level=debug msg="debug logging enabled"\ntime="2020-09-24T15:05:18Z" level=info msg="setting up client for manager"\ntime="2020-09-24T15:05:18Z" level=info msg="setting up manager"\ntime="2020-09-24T15:05:18Z" level=fatal msg="unable to set up overall controller manager" error="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Sep 24 15:05:22.407 E ns/openshift-dns-operator pod/dns-operator-5989f597c4-fl7ds node/ip-10-0-177-150.us-west-2.compute.internal container=dns-operator container exited with code 1 (Error): time="2020-09-24T15:05:19Z" level=fatal msg="failed to create operator: failed to create operator manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Sep 24 15:05:24.445 E ns/openshift-operator-lifecycle-manager pod/packageserver-c98c5c996-4864w node/ip-10-0-177-150.us-west-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-09-24T15:05:23Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Sep 24 15:05:24.506 E ns/openshift-dns-operator pod/dns-operator-5989f597c4-fl7ds node/ip-10-0-177-150.us-west-2.compute.internal container=dns-operator container exited with code 1 (Error): time="2020-09-24T15:05:23Z" level=fatal msg="failed to create operator: failed to create operator manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Sep 24 15:06:01.385 E ns/openshift-monitoring pod/node-exporter-jc8ls node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.407 E ns/openshift-image-registry pod/node-ca-96675 node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.421 E ns/openshift-cluster-node-tuning-operator pod/tuned-j5j77 node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.435 E ns/openshift-multus pod/multus-pfgbs node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.456 E ns/openshift-sdn pod/sdn-r678m node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.468 E ns/openshift-sdn pod/ovs-mbnc4 node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.469 E ns/openshift-dns pod/dns-default-hcgl7 node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:01.490 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwtws node/ip-10-0-134-21.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:06:06.305 E ns/openshift-sdn pod/sdn-r678m node/ip-10-0-134-21.us-west-2.compute.internal container=sdn container exited with code 255 (Error): s:grpc to [10.129.2.16:50051]\nI0924 15:04:00.999566   83166 proxier.go:368] userspace proxy: processing 0 service events\nI0924 15:04:00.999590   83166 proxier.go:347] userspace syncProxyRules took 28.56886ms\nI0924 15:04:03.716496   83166 pod.go:540] CNI_DEL e2e-k8s-service-lb-available-6766/service-test-qlsnn\nI0924 15:04:05.144721   83166 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-marketplace/redhat-marketplace:grpc to [10.129.2.13:50051]\nI0924 15:04:05.266958   83166 proxier.go:368] userspace proxy: processing 0 service events\nI0924 15:04:05.266983   83166 proxier.go:347] userspace syncProxyRules took 30.43404ms\nI0924 15:04:05.430526   83166 pod.go:540] CNI_DEL openshift-ingress/router-default-5f5fd8f646-45pfm\nI0924 15:04:12.997181   83166 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.173.108:10257]\nI0924 15:04:12.997224   83166 roundrobin.go:217] Delete endpoint 10.0.232.115:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0924 15:04:13.115398   83166 proxier.go:368] userspace proxy: processing 0 service events\nI0924 15:04:13.115422   83166 proxier.go:347] userspace syncProxyRules took 28.271727ms\ninterrupt: Gracefully shutting down ...\nI0924 15:06:04.486680    1745 cmd.go:123] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0924 15:06:04.491346    1745 feature_gate.go:243] feature gates: &{map[]}\nI0924 15:06:04.492741    1745 cmd.go:227] Watching config file /config/kube-proxy-config.yaml for changes\nI0924 15:06:04.492859    1745 cmd.go:227] Watching config file /config/..2020_09_24_14_48_08.604117396/kube-proxy-config.yaml for changes\nF0924 15:06:04.515979    1745 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.142.41:6443: connect: connection refused\n
Sep 24 15:06:07.933 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: waiting for Prometheus GRPC secret failed: waiting for secret grpc-tls: Get https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets/grpc-tls: dial tcp 172.30.0.1:443: connect: connection refused
Sep 24 15:06:10.963 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwtws node/ip-10-0-134-21.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:06:11.049 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 24 15:06:52.991 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-177-150.us-west-2.compute.internal node/ip-10-0-177-150.us-west-2.compute.internal container=kube-controller-manager container exited with code 255 (Error):    1 endpoints_controller.go:198] Shutting down endpoint controller\nI0924 15:06:51.895037       1 pvc_protection_controller.go:112] Shutting down PVC protection controller\nI0924 15:06:51.903516       1 expand_controller.go:331] Shutting down expand controller\nI0924 15:06:51.903521       1 namespace_controller.go:212] Shutting down namespace controller\nI0924 15:06:51.903526       1 certificate_controller.go:130] Shutting down certificate controller "csrapproving"\nI0924 15:06:51.903533       1 certificate_controller.go:130] Shutting down certificate controller "csrsigning"\nI0924 15:06:51.903555       1 graph_builder.go:311] GraphBuilder stopping\nI0924 15:06:51.903606       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0924 15:06:51.903614       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0924 15:06:51.903624       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0924 15:06:51.903633       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0924 15:06:51.903641       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0924 15:06:51.903741       1 horizontal.go:202] horizontal pod autoscaler controller worker shutting down\nI0924 15:06:51.903815       1 pv_controller_base.go:421] claim worker queue shutting down\nI0924 15:06:51.903851       1 pv_controller_base.go:364] volume worker queue shutting down\nI0924 15:06:51.904119       1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key\nI0924 15:06:51.904147       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0924 15:06:51.904158       1 cronjob_controller.go:101] Shutting down CronJob Manager\nI0924 15:06:51.904164       1 tokens_controller.go:188] Shutting down\nI0924 15:06:51.904204       1 pv_protection_controller.go:93] Shutting down PV protection controller\n
Sep 24 15:07:03.016 E ns/openshift-marketplace pod/community-operators-b59546d8-7fm9p node/ip-10-0-151-127.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Sep 24 15:07:03.220 E ns/openshift-marketplace pod/certified-operators-8cbbb84d9-zt9rw node/ip-10-0-224-214.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Sep 24 15:07:17.978 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-150.us-west-2.compute.internal node/ip-10-0-177-150.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 15:07:53.296 E ns/openshift-cluster-node-tuning-operator pod/tuned-k7mkq node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.313 E ns/openshift-controller-manager pod/controller-manager-s9jvl node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.330 E ns/openshift-sdn pod/sdn-controller-vz5cd node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.345 E ns/openshift-sdn pod/ovs-c2pvh node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.367 E ns/openshift-monitoring pod/node-exporter-chkdb node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.377 E ns/openshift-image-registry pod/node-ca-q6m46 node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.394 E ns/openshift-sdn pod/sdn-dpdqd node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.410 E ns/openshift-multus pod/multus-admission-controller-8rrfr node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.435 E ns/openshift-multus pod/multus-7cn4v node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.458 E ns/openshift-dns pod/dns-default-nswk7 node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.475 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzdwn node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:07:53.488 E ns/openshift-machine-config-operator pod/machine-config-server-sldgc node/ip-10-0-232-115.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 24 15:08:03.872 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzdwn node/ip-10-0-232-115.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 24 15:09:28.501 E kube-apiserver failed contacting the API: Get https://api.ci-op-qhbmgt9g-506dd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=45848&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 44.241.8.186:6443: connect: connection refused
Sep 24 15:09:38.859 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): onnection refused\nE0924 15:09:38.460279       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=46024&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.461664       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&resourceVersion=44475&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.467731       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.EgressNetworkPolicy: Get https://localhost:6443/apis/network.openshift.io/v1/egressnetworkpolicies?allowWatchBookmarks=true&resourceVersion=39928&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.468730       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/alertmanagers?allowWatchBookmarks=true&resourceVersion=39961&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.469650       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshots?allowWatchBookmarks=true&resourceVersion=39819&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:09:38.470127       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0924 15:09:38.470195       1 controllermanager.go:291] leaderelection lost\n
Sep 24 15:09:39.859 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error): lient-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=43827&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.397128       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=45957&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.398777       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=45840&timeout=8m6s&timeoutSeconds=486&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.399740       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=43743&timeout=8m19s&timeoutSeconds=499&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.400813       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=41106&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:09:38.425140       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=38524&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:09:38.838451       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0924 15:09:38.838486       1 server.go:257] leaderelection lost\n
Sep 24 15:10:04.902 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 24 15:10:11.005 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): /localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=45841&timeout=5m44s&timeoutSeconds=344&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:09.453291       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=46030&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:09.454639       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:6443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=38524&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:09.456000       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=38520&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:09.457063       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=45849&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:09.458371       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=38520&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:10:10.216249       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0924 15:10:10.216293       1 policy_controller.go:94] leaderelection lost\n
Sep 24 15:10:13.015 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.us-west-2.compute.internal node/ip-10-0-173-108.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): &watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:12.397373       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=46027&timeout=7m39s&timeoutSeconds=459&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:12.400245       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=43031&timeout=9m33s&timeoutSeconds=573&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:12.401584       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=43300&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:12.412119       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=43031&timeout=9m24s&timeoutSeconds=564&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0924 15:10:12.413180       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=45666&timeoutSeconds=334&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0924 15:10:12.426776       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0924 15:10:12.426813       1 leaderelection.go:67] leaderelection lost\nI0924 15:10:12.427712       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\n