Result: FAILURE
Tests: 5 failed / 20 succeeded
Started: 2020-04-08 11:30
Elapsed: 2h8m
Work namespace: ci-op-sx6q4li8
Refs: openshift-4.5:fe90dcbe
      44:8b80929a
Pod: 4a9bc530-798c-11ea-944a-0a58ac102662
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (1h19m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 2s of 1h15m20s (0%):

Apr 08 12:19:41.735 E ns/e2e-k8s-service-lb-available-5384 svc/service-test Service stopped responding to GET requests on reused connections
Apr 08 12:19:42.734 E ns/e2e-k8s-service-lb-available-5384 svc/service-test Service is not responding to GET requests on reused connections
Apr 08 12:19:42.909 I ns/e2e-k8s-service-lb-available-5384 svc/service-test Service started responding to GET requests on reused connections
from junit_upgrade_1586352285.xml



Cluster upgrade Cluster frontend ingress remain available (1h18m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 38s of 1h18m31s (1%):

Apr 08 12:20:00.850 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 08 12:20:01.232 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 08 12:20:03.849 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 08 12:20:04.207 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 08 12:20:25.849 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 08 12:20:25.849 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 08 12:20:25.849 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 08 12:20:25.849 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 08 12:20:26.849 - 8s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 08 12:20:26.849 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Apr 08 12:20:26.849 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 08 12:20:26.849 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 08 12:20:34.889 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 08 12:20:34.889 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 08 12:20:34.992 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 08 12:20:34.992 I ns/openshift-console route/console Route started responding to GET requests on reused connections
from junit_upgrade_1586352285.xml



openshift-tests Monitor cluster while tests execute (1h19m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
62 error level events were detected during this test run:

Apr 08 12:08:33.631 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-685cd89cbf" has successfully progressed.
Apr 08 12:12:12.558 E kube-apiserver Kube API started failing: Get https://api.ci-op-sx6q4li8-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 08 12:12:22.497 E ns/openshift-machine-api pod/machine-api-controllers-7d8d5b6fc9-cp6w2 node/ip-10-0-155-26.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 12:13:42.482 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-58fbbdb7b9-h2sqp node/ip-10-0-137-22.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): e":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-08T11:47:05Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-08-113034"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0408 11:58:51.570667       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"91ae3132-e1a8-4e30-8506-312be23b9c61", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 11:58:51.578780       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"91ae3132-e1a8-4e30-8506-312be23b9c61", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 12:12:40.692551       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0408 12:12:40.692683       1 leaderelection.go:66] leaderelection lost\n
Apr 08 12:15:10.156 E ns/openshift-kube-storage-version-migrator pod/migrator-ffbc44dbd-dbhqp node/ip-10-0-156-60.us-west-2.compute.internal container/migrator container exited with code 2 (Error): I0408 12:13:40.674622       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 08 12:15:11.413 E ns/openshift-cluster-machine-approver pod/machine-approver-f54795df5-9hgxn node/ip-10-0-137-22.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): 0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=16149&timeoutSeconds=440&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 11:58:57.777453       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=16149&timeoutSeconds=384&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 12:00:56.279638       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=18424&timeoutSeconds=360&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 12:00:57.280126       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=18424&timeoutSeconds=408&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 12:01:01.837272       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\nI0408 12:13:40.676469       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0408 12:13:40.677142       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=25887&timeoutSeconds=509&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 08 12:15:23.191 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-67785f8d65-4gwkq node/ip-10-0-156-60.us-west-2.compute.internal container/operator container exited with code 255 (Error): 15:08.921704       1 operator.go:147] Finished syncing operator at 35.202161ms\nI0408 12:15:08.921776       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:08.921770343 +0000 UTC m=+1005.084093867\nI0408 12:15:09.262109       1 operator.go:147] Finished syncing operator at 340.328816ms\nI0408 12:15:14.292894       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:14.292881608 +0000 UTC m=+1010.455204975\nI0408 12:15:14.310784       1 operator.go:147] Finished syncing operator at 17.893659ms\nI0408 12:15:14.315135       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:14.315126454 +0000 UTC m=+1010.477449864\nI0408 12:15:14.337378       1 operator.go:147] Finished syncing operator at 22.243586ms\nI0408 12:15:14.337421       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:14.337417229 +0000 UTC m=+1010.499740403\nI0408 12:15:14.362201       1 operator.go:147] Finished syncing operator at 24.776048ms\nI0408 12:15:14.362252       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:14.362246667 +0000 UTC m=+1010.524570227\nI0408 12:15:14.703306       1 operator.go:147] Finished syncing operator at 341.045086ms\nI0408 12:15:22.372144       1 operator.go:145] Starting syncing operator at 2020-04-08 12:15:22.37213021 +0000 UTC m=+1018.534453604\nI0408 12:15:22.403158       1 operator.go:147] Finished syncing operator at 31.015722ms\nI0408 12:15:22.403740       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 12:15:22.403861       1 logging_controller.go:93] Shutting down LogLevelController\nI0408 12:15:22.403896       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0408 12:15:22.403913       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0408 12:15:22.403925       1 builder.go:243] stopped\nI0408 12:15:22.406902       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\n
Apr 08 12:15:24.486 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7d69688c5d-2st85 node/ip-10-0-137-22.us-west-2.compute.internal container/operator container exited with code 1 (Error): oller.go:172] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-08T11:47:04Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-08T12:15:19Z","message":"Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-04-08T11:52:59Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-08T11:47:04Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0408 12:15:19.101894       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"a7994b9a-aa5f-479f-b351-6dc39366acd6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0408 12:15:23.308646       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 12:15:23.308981       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0408 12:15:23.309053       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nI0408 12:15:23.309102       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nW0408 12:15:23.309146       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Apr 08 12:15:34.222 E ns/openshift-monitoring pod/openshift-state-metrics-78cd7b8c64-p4wgw node/ip-10-0-156-60.us-west-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 08 12:15:36.435 E ns/openshift-monitoring pod/kube-state-metrics-569bb8d986-7zjtp node/ip-10-0-139-220.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 08 12:15:43.864 E ns/openshift-insights pod/insights-operator-b65456f6f-k5st6 node/ip-10-0-137-34.us-west-2.compute.internal container/operator container exited with code 2 (Error): 37.195141       1 periodic.go:151] Periodic gather config completed in 72ms\nI0408 12:13:40.725002       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0408 12:13:40.725002       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0408 12:13:40.808146       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 25929 (26540)\nI0408 12:13:40.905239       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24691 (26540)\nI0408 12:13:41.808389       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 12:13:41.907039       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 12:13:44.692595       1 httplog.go:90] GET /metrics: (5.068144ms) 200 [Prometheus/2.15.2 10.128.2.14:44954]\nI0408 12:13:56.645530       1 httplog.go:90] GET /metrics: (6.235468ms) 200 [Prometheus/2.15.2 10.131.0.22:59300]\nI0408 12:14:14.693544       1 httplog.go:90] GET /metrics: (6.028185ms) 200 [Prometheus/2.15.2 10.128.2.14:44954]\nI0408 12:14:26.645156       1 httplog.go:90] GET /metrics: (5.922168ms) 200 [Prometheus/2.15.2 10.131.0.22:59300]\nI0408 12:14:44.693605       1 httplog.go:90] GET /metrics: (6.136572ms) 200 [Prometheus/2.15.2 10.128.2.14:44954]\nI0408 12:14:56.643954       1 httplog.go:90] GET /metrics: (4.802199ms) 200 [Prometheus/2.15.2 10.131.0.22:59300]\nI0408 12:15:14.699352       1 httplog.go:90] GET /metrics: (11.878136ms) 200 [Prometheus/2.15.2 10.128.2.14:44954]\nI0408 12:15:16.136201       1 status.go:298] The operator is healthy\n
Apr 08 12:15:45.050 E ns/openshift-monitoring pod/node-exporter-2d8qw node/ip-10-0-137-34.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T11:53:24Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:24Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 12:15:54.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-60.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 12:00:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/04/08 12:05:42 config map updated\n2020/04/08 12:05:42 successfully triggered reload\n
Apr 08 12:15:54.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-60.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 12:00:46 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 12:00:46 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 12:00:46 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 12:00:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 12:00:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 12:00:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 12:00:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 12:00:46 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 12:00:46 http.go:107: HTTPS: listening on [::]:9091\nI0408 12:00:46.989294       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 12:15:41 oauthproxy.go:774: basicauth: 10.131.0.27:35476 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 08 12:15:54.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-60.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T12:00:46.331049308Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T12:00:46.333076309Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T12:00:51.442685966Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T12:00:51.443232185Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-04-08T12:05:42.814089795Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 08 12:15:58.310 E ns/openshift-monitoring pod/prometheus-adapter-7749d4db95-g7rqd node/ip-10-0-156-60.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0408 11:58:58.603831       1 adapter.go:93] successfully using in-cluster auth\nI0408 11:58:59.057035       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 12:16:02.526 E ns/openshift-monitoring pod/node-exporter-4qqqs node/ip-10-0-155-26.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T11:53:39Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 12:16:04.151 E ns/openshift-monitoring pod/prometheus-adapter-7749d4db95-dpqld node/ip-10-0-139-220.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0408 11:58:39.115302       1 adapter.go:93] successfully using in-cluster auth\nI0408 11:58:39.907587       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 12:16:15.452 E ns/openshift-monitoring pod/grafana-59ff487c9d-8ckg7 node/ip-10-0-156-60.us-west-2.compute.internal container/grafana container exited with code 1 (Error): 
Apr 08 12:16:15.452 E ns/openshift-monitoring pod/grafana-59ff487c9d-8ckg7 node/ip-10-0-156-60.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 08 12:16:19.500 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-60.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T12:16:10.154Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T12:16:10.160Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T12:16:10.161Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T12:16:10.161Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T12:16:10.161Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T12:16:10.161Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T12:16:10.162Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T12:16:10.162Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T12:16:10.162Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T12:16:10.163Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T12:16:10.163Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 12:16:26.193 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-139-220.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T12:00:55.111Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T12:00:55.116Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T12:00:55.127Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T12:00:55.128Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T12:00:55.128Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T12:00:55.128Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T12:00:55.129Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T12:00:55.130Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T12:00:55.130Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 12:16:26.193 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-139-220.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T12:00:55.254112549Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T12:00:55.255659854Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T12:01:00.369569084Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T12:01:00.370076503Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-04-08T12:04:43.26874201Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T12:07:00.601961282Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 08 12:16:27.630 E ns/openshift-console-operator pod/console-operator-f586f8f8-cdf4c node/ip-10-0-155-26.us-west-2.compute.internal container/console-operator container exited with code 1 (Error): own ResourceSyncController ...\nI0408 12:16:26.670450       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0408 12:16:26.670455       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0408 12:16:26.670460       1 controller.go:181] shutting down ConsoleRouteSyncController\nI0408 12:16:26.670464       1 controller.go:70] Shutting down Console\nI0408 12:16:26.670501       1 base_controller.go:58] Shutting down worker of ManagementStateController controller ...\nI0408 12:16:26.671330       1 base_controller.go:48] All ManagementStateController workers have been terminated\nI0408 12:16:26.670509       1 base_controller.go:58] Shutting down worker of LoggingSyncer controller ...\nI0408 12:16:26.671343       1 base_controller.go:48] All LoggingSyncer workers have been terminated\nI0408 12:16:26.670515       1 base_controller.go:58] Shutting down worker of StatusSyncer_console controller ...\nI0408 12:16:26.671350       1 base_controller.go:48] All StatusSyncer_console workers have been terminated\nI0408 12:16:26.670521       1 base_controller.go:58] Shutting down worker of ResourceSyncController controller ...\nI0408 12:16:26.671359       1 base_controller.go:48] All ResourceSyncController workers have been terminated\nI0408 12:16:26.670526       1 base_controller.go:58] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0408 12:16:26.671367       1 base_controller.go:48] All UnsupportedConfigOverridesController workers have been terminated\nI0408 12:16:26.670534       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 12:16:26.670592       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 12:16:26.670607       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nW0408 12:16:26.670640       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Apr 08 12:16:36.787 E ns/openshift-monitoring pod/node-exporter-j89mh node/ip-10-0-137-22.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T11:53:45Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T11:53:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 12:16:44.381 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-139-220.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T12:16:36.506Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T12:16:36.513Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T12:16:36.513Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T12:16:36.514Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T12:16:36.515Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T12:16:36.515Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T12:16:36.515Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T12:16:36.515Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T12:16:36.515Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T12:16:36.516Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T12:16:36.516Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 12:16:47.598 E ns/openshift-monitoring pod/node-exporter-nhmqb node/ip-10-0-156-60.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T11:57:40Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T11:57:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 12:16:51.405 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c87bf6cfc-dl5s4 node/ip-10-0-139-220.us-west-2.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 08 12:17:08.731 E ns/openshift-marketplace pod/redhat-operators-55f464cdcd-rtcv8 node/ip-10-0-156-60.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Apr 08 12:17:10.730 E ns/openshift-marketplace pod/redhat-marketplace-5bb89b9db8-8lxf7 node/ip-10-0-156-60.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 08 12:17:14.730 E ns/openshift-marketplace pod/certified-operators-7f874d77f-8snvc node/ip-10-0-156-60.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 12:17:22.751 E ns/openshift-console pod/console-6477854cd9-vdbw5 node/ip-10-0-137-34.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T12:01:14Z cmd/main: cookies are secure!\n2020-04-08T12:01:14Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T12:01:24Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T12:01:34Z cmd/main: Binding to [::]:8443...\n2020-04-08T12:01:34Z cmd/main: using TLS\n
Apr 08 12:17:29.815 E ns/openshift-marketplace pod/community-operators-64b44448b6-vfvsq node/ip-10-0-156-60.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 08 12:18:57.826 E ns/openshift-sdn pod/sdn-controller-q5gg5 node/ip-10-0-137-22.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0408 11:45:50.034816       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 11:52:41.475143       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-sx6q4li8-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 08 12:18:57.856 E ns/openshift-multus pod/multus-admission-controller-5484b node/ip-10-0-137-22.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 12:19:03.854 E ns/openshift-sdn pod/sdn-n6m58 node/ip-10-0-128-157.us-west-2.compute.internal container/sdn container exited with code 255 (Error): ads-554bd75597-bxk8k got IP 10.129.2.17, ofport 18\nI0408 12:15:41.669845    2302 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-2\nI0408 12:15:41.835976    2302 pod.go:539] CNI_DEL openshift-image-registry/node-ca-4vnct\nI0408 12:15:42.038902    2302 pod.go:503] CNI_ADD openshift-monitoring/prometheus-adapter-84c6c7494d-6kr98 got IP 10.129.2.18, ofport 19\nI0408 12:15:54.673619    2302 pod.go:539] CNI_DEL openshift-monitoring/telemeter-client-b988bd977-gmn6s\nI0408 12:15:55.021545    2302 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-2 got IP 10.129.2.19, ofport 20\nI0408 12:15:55.140726    2302 pod.go:503] CNI_ADD openshift-monitoring/grafana-75fc9d74c6-s6m5q got IP 10.129.2.20, ofport 21\nI0408 12:15:55.270955    2302 pod.go:503] CNI_ADD openshift-image-registry/node-ca-c26gv got IP 10.129.2.21, ofport 22\nI0408 12:15:57.360633    2302 pod.go:539] CNI_DEL openshift-image-registry/image-registry-fcb8fbd78-tprx5\nI0408 12:15:57.585648    2302 pod.go:503] CNI_ADD openshift-image-registry/image-registry-7f64b794d5-jhmkl got IP 10.129.2.22, ofport 23\nI0408 12:16:17.665884    2302 pod.go:503] CNI_ADD openshift-marketplace/redhat-marketplace-7d59f9784b-w8t6n got IP 10.129.2.23, ofport 24\nI0408 12:16:18.305754    2302 pod.go:503] CNI_ADD openshift-marketplace/redhat-operators-68d4dd9f74-5q2bw got IP 10.129.2.24, ofport 25\nI0408 12:16:19.860228    2302 pod.go:503] CNI_ADD openshift-marketplace/certified-operators-7c59568766-78frt got IP 10.129.2.25, ofport 26\nI0408 12:16:21.041783    2302 pod.go:503] CNI_ADD openshift-marketplace/community-operators-74cf4f56db-5m99f got IP 10.129.2.26, ofport 27\nI0408 12:16:36.007074    2302 pod.go:539] CNI_DEL openshift-monitoring/thanos-querier-6b8645b86c-lwjr7\nI0408 12:16:36.477114    2302 pod.go:503] CNI_ADD openshift-cluster-storage-operator/csi-snapshot-controller-6995b94786-gtl89 got IP 10.129.2.27, ofport 28\nF0408 12:19:02.926392    2302 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 12:19:30.784 E ns/openshift-sdn pod/sdn-zrpvj node/ip-10-0-139-220.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0408 12:18:58.059931   62425 node.go:146] Initializing SDN node "ip-10-0-139-220.us-west-2.compute.internal" (10.0.139.220) of type "redhat/openshift-ovs-networkpolicy"\nI0408 12:18:58.064856   62425 cmd.go:151] Starting node networking (unknown)\nI0408 12:18:58.201468   62425 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 12:18:58.298990   62425 proxy.go:103] Using unidling+iptables Proxier.\nI0408 12:18:58.299351   62425 proxy.go:129] Tearing down userspace rules.\nI0408 12:18:58.302867   62425 networkpolicy.go:330] SyncVNIDRules: 3 unused VNIDs\nI0408 12:18:58.496546   62425 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 12:18:58.504759   62425 config.go:313] Starting service config controller\nI0408 12:18:58.504783   62425 config.go:131] Starting endpoints config controller\nI0408 12:18:58.504792   62425 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 12:18:58.504806   62425 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 12:18:58.504860   62425 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 12:18:58.609791   62425 shared_informer.go:204] Caches are synced for endpoints config \nI0408 12:18:58.609791   62425 shared_informer.go:204] Caches are synced for service config \nF0408 12:19:29.738581   62425 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 08 12:20:08.903 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): 6100       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:26.406088522 +0000 UTC m=+184.463030010\nI0408 12:18:26.426615       1 operator.go:147] Finished syncing operator at 20.520151ms\nI0408 12:18:26.426657       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:26.426651164 +0000 UTC m=+184.483592604\nI0408 12:18:26.443125       1 operator.go:147] Finished syncing operator at 16.468033ms\nI0408 12:18:27.192350       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:27.19234069 +0000 UTC m=+185.249281967\nI0408 12:18:27.209694       1 operator.go:147] Finished syncing operator at 17.345251ms\nI0408 12:18:27.291514       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:27.291504537 +0000 UTC m=+185.348446022\nI0408 12:18:27.313832       1 operator.go:147] Finished syncing operator at 22.306237ms\nI0408 12:18:27.390981       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:27.390973564 +0000 UTC m=+185.447914948\nI0408 12:18:27.413931       1 operator.go:147] Finished syncing operator at 22.951136ms\nI0408 12:18:27.494111       1 operator.go:145] Starting syncing operator at 2020-04-08 12:18:27.49410532 +0000 UTC m=+185.551046708\nI0408 12:18:28.012997       1 operator.go:147] Finished syncing operator at 518.882423ms\nI0408 12:20:08.484381       1 leaderelection.go:288] failed to renew lease openshift-cluster-storage-operator/csi-snapshot-controller-operator-lock: failed to tryAcquireOrRenew context deadline exceeded\nE0408 12:20:08.484421       1 leaderelection.go:331] error retrieving resource lock openshift-cluster-storage-operator/csi-snapshot-controller-operator-lock: Get https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps/csi-snapshot-controller-operator-lock?timeout=35s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nF0408 12:20:08.484458       1 leaderelection.go:67] leaderelection lost\nI0408 12:20:08.485787       1 logging_controller.go:93] Shutting down LogLevelController\n
Apr 08 12:20:14.069 E ns/openshift-sdn pod/sdn-pt5pt node/ip-10-0-137-22.us-west-2.compute.internal container/sdn container exited with code 255 (Error): 39] CNI_DEL openshift-operator-lifecycle-manager/olm-operator-6869bd48c8-jz2fb\nI0408 12:15:28.582396    2136 pod.go:539] CNI_DEL openshift-service-ca-operator/service-ca-operator-7dd4d97db6-wgzxg\nI0408 12:15:38.814287    2136 pod.go:539] CNI_DEL openshift-authentication-operator/authentication-operator-7f5b75fcc9-p8n4t\nI0408 12:15:43.908849    2136 pod.go:539] CNI_DEL openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5948589447-pbgrd\nI0408 12:15:48.150209    2136 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/catalog-operator-5d9db9f46b-glkh4\nI0408 12:15:49.437201    2136 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-tksq2\nI0408 12:15:58.371063    2136 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-cbqc8 got IP 10.130.0.68, ofport 69\nI0408 12:16:00.853334    2136 pod.go:539] CNI_DEL openshift-image-registry/node-ca-zrr4s\nI0408 12:16:08.504523    2136 pod.go:503] CNI_ADD openshift-image-registry/node-ca-hn6f4 got IP 10.130.0.69, ofport 70\nI0408 12:16:16.278378    2136 pod.go:503] CNI_ADD openshift-console-operator/console-operator-65d7697947-4lv7s got IP 10.130.0.70, ofport 71\nI0408 12:16:25.575078    2136 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/packageserver-77f89474fb-dhwsp\nI0408 12:16:25.836150    2136 pod.go:503] CNI_ADD openshift-operator-lifecycle-manager/packageserver-7fd8b5f4c8-9zfjl got IP 10.130.0.71, ofport 72\nI0408 12:16:45.722146    2136 pod.go:503] CNI_ADD openshift-console/console-8f46485b8-pwr7p got IP 10.130.0.72, ofport 73\nI0408 12:16:47.841595    2136 pod.go:539] CNI_DEL openshift-authentication/oauth-openshift-7db7bbdd49-r2l2x\nI0408 12:18:56.938603    2136 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-5484b\nI0408 12:19:08.379650    2136 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-kpchh got IP 10.130.0.73, ofport 74\nF0408 12:20:13.043187    2136 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 12:20:28.128 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Apr 08 12:20:33.216 E ns/openshift-sdn pod/sdn-d9x9m node/ip-10-0-156-60.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0408 12:20:18.582652  100688 node.go:146] Initializing SDN node "ip-10-0-156-60.us-west-2.compute.internal" (10.0.156.60) of type "redhat/openshift-ovs-networkpolicy"\nI0408 12:20:18.587650  100688 cmd.go:151] Starting node networking (unknown)\nI0408 12:20:18.753735  100688 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 12:20:18.865853  100688 proxy.go:103] Using unidling+iptables Proxier.\nI0408 12:20:18.866446  100688 proxy.go:129] Tearing down userspace rules.\nI0408 12:20:18.881144  100688 networkpolicy.go:330] SyncVNIDRules: 4 unused VNIDs\nI0408 12:20:19.071153  100688 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 12:20:19.078983  100688 config.go:313] Starting service config controller\nI0408 12:20:19.079037  100688 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 12:20:19.079044  100688 config.go:131] Starting endpoints config controller\nI0408 12:20:19.079067  100688 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 12:20:19.079743  100688 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 12:20:19.179229  100688 shared_informer.go:204] Caches are synced for service config \nI0408 12:20:19.179269  100688 shared_informer.go:204] Caches are synced for endpoints config \nF0408 12:20:33.074751  100688 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 12:20:39.350 E ns/openshift-multus pod/multus-admission-controller-lpwqz node/ip-10-0-155-26.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 12:20:49.688 E ns/openshift-multus pod/multus-7whqx node/ip-10-0-137-22.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 12:20:55.113 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:20:45.909868       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:20:48.410597       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:20:48.908509       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:20:48.908690       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:20:51.482489       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:20:51.909378       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:20:51.909624       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:20:54.554507       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:20:54.554551       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:21:44.622 E ns/openshift-multus pod/multus-qn9b2 node/ip-10-0-155-26.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 12:21:46.255 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:21:36.652606       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:21:39.226613       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:21:39.652449       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:21:39.652569       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:21:42.298550       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:21:42.652849       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:21:42.653600       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:21:45.370603       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:21:45.370643       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:22:42.687 E ns/openshift-multus pod/multus-cwq76 node/ip-10-0-137-34.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 12:22:47.422 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:22:37.781842       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:22:40.283489       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:22:40.780408       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:22:40.780596       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:22:43.355514       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:22:43.780449       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:22:43.780582       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:22:46.426567       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:22:46.426604       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:24:23.706 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:24:14.164509       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:24:16.666497       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:24:17.165157       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:24:17.165568       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:24:19.738502       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:24:20.164368       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:24:20.164555       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:24:22.810512       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:24:22.810558       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:26:18.894 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:26:09.876728       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:26:12.378605       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:26:12.876407       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:26:12.876555       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:26:15.450522       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:26:15.877214       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:26:15.877431       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:26:18.522515       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:26:18.522556       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:27:12.876 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator ingress is reporting a failure: Some ingresscontrollers are degraded: default
Apr 08 12:29:45.363 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:29:35.893700       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:29:38.394625       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:29:38.892561       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:29:38.892783       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:29:41.466507       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:29:41.893441       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:29:41.893701       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:29:44.538538       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:29:44.538575       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:35:30.233 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:35:21.237944       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:35:23.739529       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:35:24.236473       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:35:24.236603       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:35:26.810556       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:35:27.236547       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:35:27.236727       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:35:29.882510       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:35:29.882548       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:41:12.942 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:41:03.316624       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:41:05.818542       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:41:06.316821       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:41:06.317096       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:41:08.890535       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:41:09.316630       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:41:09.316811       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:41:11.962519       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:41:11.962563       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:46:53.733 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:46:44.028878       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:46:46.554489       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:46:47.028589       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:46:47.028794       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:46:49.626523       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:46:50.029073       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:46:50.029347       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:46:52.698520       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:46:52.698570       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:52:31.464 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:52:22.100731       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:52:24.603462       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:52:25.100459       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:52:25.100623       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:52:27.674517       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:52:28.100655       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:52:28.100822       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:52:30.747544       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:52:31.013363       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 12:58:10.227 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:58:00.981693       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:58:03.482603       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:58:03.980633       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:58:03.980838       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:58:06.554493       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 12:58:06.980379       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 12:58:06.980508       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 12:58:09.626522       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 12:58:09.805480       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 13:03:57.016 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:03:47.861758       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:03:50.362498       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:03:50.860321       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:03:50.860497       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:03:53.434509       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:03:53.860588       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:03:53.860789       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:03:56.506539       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 13:03:56.506585       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 13:09:31.768 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:09:22.772598       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:09:25.274476       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:09:25.772453       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:09:25.772650       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:09:28.346607       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:09:28.772550       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:09:28.772732       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:09:31.418623       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 13:09:31.418668       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 13:15:14.538 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:15:05.429063       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:15:07.930484       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:15:08.428967       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:15:08.429153       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:15:11.002516       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:15:11.430654       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:15:11.430849       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:15:14.074517       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 13:15:14.074561       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
Apr 08 13:18:05.118 E ns/openshift-marketplace pod/community-operators-74cf4f56db-5m99f node/ip-10-0-128-157.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 08 13:20:54.333 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86f86cd9-9jd6k node/ip-10-0-139-220.us-west-2.compute.internal container/operator container exited with code 255 (Error): p_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:20:45.396550       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:20:47.898499       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:20:48.396544       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:20:48.396738       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:20:50.970652       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nW0408 13:20:51.397619       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0408 13:20:51.397870       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nI0408 13:20:54.042610       1 server.go:48] Error initializing delegating authentication (will retry): <nil>\nF0408 13:20:54.042640       1 cmd.go:120] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: no route to host\n
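The csi-snapshot-controller-operator events above repeat the same crash loop: the operator cannot load the kube-system/extension-apiserver-authentication configmap and finally fails to reach the apiserver service VIP (dial tcp 172.30.0.1:443: connect: no route to host), so the operator container exits with code 255 and is restarted. A minimal triage sketch, assuming cluster-admin access and the standard oc CLI; the pod, container, and node names are copied verbatim from the events above, and every command is read-only inspection, not a fix:

# Confirm the configmap the operator keeps failing to load actually exists
oc -n kube-system get configmap extension-apiserver-authentication

# Inspect the crashing operator pod and the log of its previous (failed) container
oc -n openshift-cluster-storage-operator get pods -o wide
oc -n openshift-cluster-storage-operator logs csi-snapshot-controller-operator-86f86cd9-9jd6k -c operator --previous

# Check service-network reachability to the apiserver VIP from the node hosting the pod
oc debug node/ip-10-0-139-220.us-west-2.compute.internal -- chroot /host curl -sk https://172.30.0.1:443/healthz

# For the ingress degradation reported by clusterversion at 12:27:12
oc get clusteroperator ingress
oc -n openshift-ingress-operator get ingresscontroller default -o yaml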