Result:         SUCCESS
Tests:          2 failed / 23 succeeded
Started:        2020-04-07 23:59
Elapsed:        1h38m
Work namespace: ci-op-wwqd8rgz
Refs:           openshift-4.5:fe90dcbe, 44:8b80929a
Pod:            dc08073d-792b-11ea-ab07-0a58ac100bec
Repo:           openshift/etcd
Revision:       1

Test Failures


Cluster upgrade Kubernetes and OpenShift APIs remain available 50m47s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 3s of 50m46s (0%):

Apr 08 00:42:09.105 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 08 00:42:10.072 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 08 00:42:10.097 I openshift-apiserver OpenShift API started responding to GET requests
Apr 08 00:45:30.104 E kube-apiserver Kube API started failing: Get https://api.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 18.205.223.161:6443: connect: connection refused
Apr 08 00:45:31.072 E kube-apiserver Kube API is not responding to GET requests
Apr 08 00:45:31.110 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1586309349.xml
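This failure comes from the upgrade suite's availability check, which polls the Kubernetes and OpenShift API endpoints for the whole upgrade window and records every interval in which GET requests fail, then sums those intervals (here: at least 3s of 50m46s). The sketch below is a minimal illustration of that kind of disruption poller, not the actual openshift-tests monitor; the endpoint URL, poll interval, and run duration are assumed values.

// disruption_poller.go - illustrative sketch only; not the openshift-tests monitor.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const target = "https://api.example.test:6443/healthz" // assumed endpoint
	const runFor = 50 * time.Minute                        // assumed test window
	interval := time.Second

	client := &http.Client{Timeout: 15 * time.Second}
	var down time.Duration // accumulated unavailability
	var downSince time.Time

	start := time.Now()
	for time.Since(start) < runFor {
		resp, err := client.Get(target)
		if err != nil || resp.StatusCode != http.StatusOK {
			// Transition available -> unavailable: start a disruption interval.
			if downSince.IsZero() {
				downSince = time.Now()
				fmt.Printf("%s E API stopped responding to GET requests\n", time.Now().Format(time.StampMilli))
			}
		} else if !downSince.IsZero() {
			// Transition unavailable -> available: close the interval.
			down += time.Since(downSince)
			downSince = time.Time{}
			fmt.Printf("%s I API started responding to GET requests\n", time.Now().Format(time.StampMilli))
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(interval)
	}
	fmt.Printf("API was unreachable during disruption for at least %s of %s\n",
		down.Round(time.Second), time.Since(start).Round(time.Second))
}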


openshift-tests Monitor cluster while tests execute 52m49s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
196 error level events were detected during this test run:

Apr 08 00:41:55.043 E ns/openshift-machine-api pod/machine-api-operator-94485dfb7-v7ktl node/ip-10-0-141-101.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 08 00:43:20.762 E kube-apiserver failed contacting the API: Get https://api.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=26845&timeout=8m41s&timeoutSeconds=521&watch=true: dial tcp 18.205.223.161:6443: connect: connection refused
Apr 08 00:43:20.762 E kube-apiserver failed contacting the API: Get https://api.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=26870&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp 18.205.223.161:6443: connect: connection refused
Apr 08 00:43:51.178 E clusteroperator/monitoring changed Degraded to True: UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: reconciling Alertmanager ClusterRole failed: retrieving ClusterRole object failed: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/alertmanager-main: unexpected EOF
Apr 08 00:43:54.322 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-152-26.ec2.internal members are unhealthy,  members are unknown
Apr 08 00:44:19.915 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 00:45:36.148 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-89cf6489d-p2qkx node/ip-10-0-152-26.ec2.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-04-08T00:18:09Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-04-08T00:18:09Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-04-08T00:18:09Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-08T00:18:08Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-08-000012"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0408 00:28:55.663760       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"a8d6b1b1-c473-4bc9-a885-95944aaa9aa7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 00:45:35.364126       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0408 00:45:35.364261       1 leaderelection.go:66] leaderelection lost\n
Apr 08 00:47:17.542 E ns/openshift-cluster-machine-approver pod/machine-approver-5ccc7b4555-zgddx node/ip-10-0-152-26.ec2.internal container/machine-approver-controller container exited with code 2 (Error): al error\nI0408 00:27:39.992783       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-132-174.ec2.internal\nI0408 00:27:40.004488       1 main.go:198] CSR csr-s9gzl approved\nI0408 00:27:42.640755       1 main.go:148] CSR csr-bwvgl added\nI0408 00:27:42.703844       1 csr_check.go:418] retrieving serving cert from ip-10-0-134-26.ec2.internal (10.0.134.26:10250)\nW0408 00:27:42.716390       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0408 00:27:42.716433       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-134-26.ec2.internal\nI0408 00:27:42.838366       1 main.go:198] CSR csr-bwvgl approved\nE0408 00:34:40.773031       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=15865&timeoutSeconds=364&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 00:34:41.773797       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=15865&timeoutSeconds=414&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 00:45:30.007322       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=25448&timeoutSeconds=408&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 00:45:36.489934       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\n
Apr 08 00:47:28.924 E ns/openshift-monitoring pod/node-exporter-lkwqb node/ip-10-0-134-26.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:28:11Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:11Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 00:47:29.024 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-64f6c6797-jwd86 node/ip-10-0-148-174.ec2.internal container/operator container exited with code 255 (Error): -04-08 00:47:15.054755277 +0000 UTC m=+1100.879146559\nI0408 00:47:15.099347       1 operator.go:147] Finished syncing operator at 44.584579ms\nI0408 00:47:15.099396       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:15.099389179 +0000 UTC m=+1100.923781234\nI0408 00:47:15.209239       1 operator.go:147] Finished syncing operator at 109.842462ms\nI0408 00:47:15.209283       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:15.209276681 +0000 UTC m=+1101.033668049\nI0408 00:47:15.444611       1 operator.go:147] Finished syncing operator at 235.319894ms\nI0408 00:47:20.764024       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:20.764008415 +0000 UTC m=+1106.588399953\nI0408 00:47:20.799910       1 operator.go:147] Finished syncing operator at 35.894121ms\nI0408 00:47:20.800204       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:20.800196947 +0000 UTC m=+1106.624588293\nI0408 00:47:20.844717       1 operator.go:147] Finished syncing operator at 44.514038ms\nI0408 00:47:20.844759       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:20.844752646 +0000 UTC m=+1106.669143971\nI0408 00:47:20.906147       1 operator.go:147] Finished syncing operator at 61.386812ms\nI0408 00:47:20.921876       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:20.921868413 +0000 UTC m=+1106.746259920\nI0408 00:47:21.180404       1 operator.go:147] Finished syncing operator at 258.525676ms\nI0408 00:47:27.826721       1 operator.go:145] Starting syncing operator at 2020-04-08 00:47:27.826707871 +0000 UTC m=+1113.651099258\nI0408 00:47:27.935455       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 00:47:27.935518       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0408 00:47:27.935536       1 logging_controller.go:93] Shutting down LogLevelController\nF0408 00:47:27.935582       1 builder.go:210] server exited\nF0408 00:47:27.938063       1 builder.go:243] stopped\n
Apr 08 00:47:37.801 E ns/openshift-insights pod/insights-operator-589b868cbc-26plc node/ip-10-0-138-154.ec2.internal container/operator container exited with code 2 (Error): figMap total 0 items received\nI0408 00:45:29.925498       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 00:45:29.925522       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0408 00:45:30.088593       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 26878 (27914)\nI0408 00:45:30.258532       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 26878 (27914)\nI0408 00:45:31.089399       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 00:45:31.276715       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 00:45:37.741062       1 httplog.go:90] GET /metrics: (7.390867ms) 200 [Prometheus/2.15.2 10.131.0.23:48412]\nI0408 00:45:38.915890       1 httplog.go:90] GET /metrics: (1.73524ms) 200 [Prometheus/2.15.2 10.128.2.9:52448]\nI0408 00:46:07.741841       1 httplog.go:90] GET /metrics: (8.228164ms) 200 [Prometheus/2.15.2 10.131.0.23:48412]\nI0408 00:46:08.915723       1 httplog.go:90] GET /metrics: (1.669873ms) 200 [Prometheus/2.15.2 10.128.2.9:52448]\nI0408 00:46:32.419020       1 status.go:298] The operator is healthy\nI0408 00:46:37.740958       1 httplog.go:90] GET /metrics: (7.363852ms) 200 [Prometheus/2.15.2 10.131.0.23:48412]\nI0408 00:46:38.915969       1 httplog.go:90] GET /metrics: (1.830692ms) 200 [Prometheus/2.15.2 10.128.2.9:52448]\nI0408 00:47:07.741933       1 httplog.go:90] GET /metrics: (8.327018ms) 200 [Prometheus/2.15.2 10.131.0.23:48412]\nI0408 00:47:08.915944       1 httplog.go:90] GET /metrics: (1.818267ms) 200 [Prometheus/2.15.2 10.128.2.9:52448]\n
Apr 08 00:47:46.955 E ns/openshift-monitoring pod/node-exporter-j2ps6 node/ip-10-0-138-154.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:23:38Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 00:47:48.937 E ns/openshift-controller-manager pod/controller-manager-pnc2q node/ip-10-0-141-101.ec2.internal container/controller-manager container exited with code 137 (Error): I0408 00:24:16.208378       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 00:24:16.209869       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-wwqd8rgz/stable-initial@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 00:24:16.209889       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-wwqd8rgz/stable-initial@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 00:24:16.209959       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 00:24:16.210089       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 00:48:17.758 E ns/openshift-monitoring pod/node-exporter-n987q node/ip-10-0-141-101.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:23:34Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:23:34Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 00:48:21.506 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-174.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T00:48:02.454Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T00:48:02.461Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T00:48:02.461Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T00:48:02.462Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T00:48:02.462Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T00:48:02.462Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T00:48:02.463Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T00:48:02.463Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 00:48:34.326 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-174.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 00:30:29 Watching directory: "/etc/alertmanager/config"\n
Apr 08 00:48:34.326 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-174.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 00:30:29 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:30:29 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:30:29 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:30:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 00:30:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:30:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:30:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:30:29 http.go:107: HTTPS: listening on [::]:9095\nI0408 00:30:29.955256       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 00:48:34.406 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T00:30:17.402Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T00:30:17.411Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T00:30:17.414Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T00:30:17.415Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T00:30:17.415Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T00:30:17.415Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T00:30:17.416Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T00:30:17.416Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T00:30:17.416Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 00:48:34.406 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 00:30:17 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/04/08 00:36:58 config map updated\n2020/04/08 00:36:58 successfully triggered reload\n
Apr 08 00:48:34.406 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 00:30:18 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 00:30:18 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:30:18 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:30:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 00:30:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:30:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 00:30:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:30:18 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 00:30:18 http.go:107: HTTPS: listening on [::]:9091\nI0408 00:30:18.308382       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 00:48:00 oauthproxy.go:774: basicauth: 10.129.2.18:54570 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/08 00:48:01 oauthproxy.go:774: basicauth: 10.131.0.17:44880 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/08 00:48:02 oauthproxy.go:774: basicauth: 10.130.0.65:36296 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 08 00:48:34.406 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T00:30:17.588908472Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T00:30:17.590617143Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T00:30:22.678259543Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T00:30:22.678352345Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-04-08T00:36:58.30820239Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 08 00:48:39.895 E ns/openshift-console-operator pod/console-operator-67c75b5f7c-qs7c6 node/ip-10-0-141-101.ec2.internal container/console-operator container exited with code 1 (Error): ected EOF during watch stream event decoding: unexpected EOF\nI0408 00:47:35.095593       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 00:47:35.095717       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 00:48:38.912337       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 00:48:38.921379       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0408 00:48:38.921471       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.921565       1 reflector.go:181] Stopping reflector *v1.OAuthClient (10m0s) from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101\nI0408 00:48:38.921637       1 reflector.go:181] Stopping reflector *v1.Route (10m0s) from github.com/openshift/client-go/route/informers/externalversions/factory.go:101\nI0408 00:48:38.921709       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.921778       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.921844       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.921974       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.925744       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 00:48:38.925834       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0408 00:48:38.926033       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\nI0408 00:48:38.928253       1 base_controller.go:58] Shutting down worker of ResourceSyncController controller ...\n
Apr 08 00:48:41.565 E ns/openshift-monitoring pod/node-exporter-r522h node/ip-10-0-148-174.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:28:00Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:28:00Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 00:48:53.424 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T00:48:50.873Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T00:48:50.879Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T00:48:50.880Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T00:48:50.881Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T00:48:50.881Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T00:48:50.881Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T00:48:50.882Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T00:48:50.882Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 00:49:00.547 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-fb5544c5d-22h9q node/ip-10-0-132-174.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 08 00:49:14.686 E ns/openshift-marketplace pod/redhat-marketplace-776df649bc-tl4lz node/ip-10-0-148-174.ec2.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 08 00:49:15.687 E ns/openshift-marketplace pod/community-operators-85767cd98c-vnm4q node/ip-10-0-148-174.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 08 00:50:22.278 E ns/openshift-console pod/console-84c577cf8b-xjzg7 node/ip-10-0-141-101.ec2.internal container/console container exited with code 2 (Error): 2020-04-08T00:33:10Z cmd/main: cookies are secure!\n2020-04-08T00:33:10Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T00:33:20Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T00:33:30Z cmd/main: Binding to [::]:8443...\n2020-04-08T00:33:30Z cmd/main: using TLS\n
Apr 08 00:58:12.054 E ns/openshift-sdn pod/sdn-5mvfv node/ip-10-0-141-101.ec2.internal container/sdn container exited with code 255 (Error): 947    2154 pod.go:539] CNI_DEL openshift-image-registry/node-ca-qzdvt\nI0408 00:47:55.350477    2154 pod.go:539] CNI_DEL openshift-cluster-samples-operator/cluster-samples-operator-5749c8446c-hh4j7\nI0408 00:47:55.647774    2154 pod.go:539] CNI_DEL openshift-machine-api/cluster-autoscaler-operator-6b49c9bf5b-gx4t8\nI0408 00:47:57.498240    2154 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-b854h got IP 10.128.0.60, ofport 61\nI0408 00:47:58.529248    2154 pod.go:503] CNI_ADD openshift-image-registry/node-ca-7wlqj got IP 10.128.0.61, ofport 62\nI0408 00:48:08.329662    2154 pod.go:503] CNI_ADD openshift-operator-lifecycle-manager/packageserver-6c58cfb95d-n9ddp got IP 10.128.0.62, ofport 63\nI0408 00:48:10.316809    2154 pod.go:503] CNI_ADD openshift-service-ca/service-ca-854f8b5dfd-cknd6 got IP 10.128.0.63, ofport 64\nI0408 00:48:11.880343    2154 pod.go:503] CNI_ADD openshift-operator-lifecycle-manager/packageserver-64668dbcf4-ww5jz got IP 10.128.0.64, ofport 65\nI0408 00:48:12.827418    2154 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/packageserver-6c58cfb95d-n9ddp\nI0408 00:48:13.575947    2154 pod.go:503] CNI_ADD openshift-authentication/oauth-openshift-75f7c59c7b-4b9sp got IP 10.128.0.65, ofport 66\nI0408 00:48:31.149989    2154 pod.go:539] CNI_DEL openshift-console/downloads-8564cb648f-7d647\nI0408 00:48:37.395737    2154 pod.go:539] CNI_DEL openshift-authentication/oauth-openshift-86bb46ff-fh4xq\nI0408 00:48:38.934837    2154 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-b854h\nI0408 00:48:39.250069    2154 pod.go:539] CNI_DEL openshift-console-operator/console-operator-67c75b5f7c-qs7c6\nI0408 00:48:41.787965    2154 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-pvmxb got IP 10.128.0.66, ofport 67\nI0408 00:50:21.467420    2154 pod.go:539] CNI_DEL openshift-console/console-84c577cf8b-xjzg7\nF0408 00:58:11.037191    2154 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 00:58:35.208 E ns/openshift-multus pod/multus-7jlvf node/ip-10-0-134-26.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 00:58:35.586 E ns/openshift-sdn pod/sdn-controller-pq4wb node/ip-10-0-152-26.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 00:17:23.297101       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 00:24:11.825946       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 08 00:58:35.952 E ns/openshift-sdn pod/sdn-j8d9m node/ip-10-0-132-174.ec2.internal container/sdn container exited with code 255 (Error): -874c4d69b-hd9vf got IP 10.128.2.19, ofport 20\nI0408 00:47:21.478512    2296 pod.go:503] CNI_ADD openshift-cluster-storage-operator/csi-snapshot-controller-operator-75698c554c-qvf9f got IP 10.128.2.20, ofport 21\nI0408 00:47:25.436204    2296 pod.go:503] CNI_ADD openshift-monitoring/openshift-state-metrics-789887dcd4-mxcbw got IP 10.128.2.21, ofport 22\nI0408 00:47:25.607418    2296 pod.go:503] CNI_ADD openshift-console/downloads-79d46c46-j2wdj got IP 10.128.2.22, ofport 23\nI0408 00:47:27.643169    2296 pod.go:503] CNI_ADD openshift-monitoring/kube-state-metrics-7dc87dc886-2ds2t got IP 10.128.2.23, ofport 24\nI0408 00:47:54.731306    2296 pod.go:503] CNI_ADD openshift-monitoring/prometheus-adapter-75c75f4d8d-tpbwn got IP 10.128.2.24, ofport 25\nI0408 00:47:54.879551    2296 pod.go:503] CNI_ADD openshift-monitoring/thanos-querier-79cb9d45d4-9v8l2 got IP 10.128.2.25, ofport 26\nI0408 00:47:55.673104    2296 pod.go:503] CNI_ADD openshift-image-registry/image-registry-78445dff97-hwtlg got IP 10.128.2.26, ofport 27\nI0408 00:48:33.766262    2296 pod.go:539] CNI_DEL openshift-image-registry/node-ca-wsnkf\nI0408 00:48:33.966614    2296 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-0\nI0408 00:48:34.086316    2296 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-0\nI0408 00:48:34.412654    2296 pod.go:503] CNI_ADD openshift-marketplace/redhat-operators-9f48c88dd-n82rz got IP 10.128.2.27, ofport 28\nI0408 00:48:36.216945    2296 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-0 got IP 10.128.2.28, ofport 29\nI0408 00:48:36.335713    2296 pod.go:503] CNI_ADD openshift-image-registry/node-ca-b72j4 got IP 10.128.2.29, ofport 30\nI0408 00:48:47.848137    2296 pod.go:503] CNI_ADD openshift-monitoring/prometheus-k8s-0 got IP 10.128.2.30, ofport 31\nI0408 00:48:59.948863    2296 pod.go:539] CNI_DEL openshift-cluster-storage-operator/csi-snapshot-controller-fb5544c5d-22h9q\nF0408 00:58:35.505299    2296 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 08 00:58:46.027 E ns/openshift-sdn pod/sdn-controller-q8kn2 node/ip-10-0-141-101.ec2.internal container/sdn-controller container exited with code 2 (Error): 6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=12495&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp 10.0.130.104:6443: connect: connection refused\nI0408 00:24:43.223592       1 vnids.go:115] Allocated netid 749729 for namespace "openshift-console"\nI0408 00:24:43.998553       1 vnids.go:115] Allocated netid 11888923 for namespace "openshift-console-operator"\nI0408 00:27:31.438502       1 subnets.go:149] Created HostSubnet ip-10-0-148-174.ec2.internal (host: "ip-10-0-148-174.ec2.internal", ip: "10.0.148.174", subnet: "10.131.0.0/23")\nI0408 00:27:39.709841       1 subnets.go:149] Created HostSubnet ip-10-0-132-174.ec2.internal (host: "ip-10-0-132-174.ec2.internal", ip: "10.0.132.174", subnet: "10.128.2.0/23")\nI0408 00:27:42.393203       1 subnets.go:149] Created HostSubnet ip-10-0-134-26.ec2.internal (host: "ip-10-0-134-26.ec2.internal", ip: "10.0.134.26", subnet: "10.129.2.0/23")\nI0408 00:36:21.408478       1 vnids.go:115] Allocated netid 11930069 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-4580"\nI0408 00:36:21.422912       1 vnids.go:115] Allocated netid 12669974 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2364"\nI0408 00:36:21.435808       1 vnids.go:115] Allocated netid 3160443 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-303"\nI0408 00:36:21.455811       1 vnids.go:115] Allocated netid 15969673 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-237"\nI0408 00:36:21.467320       1 vnids.go:115] Allocated netid 10464025 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-3424"\nI0408 00:36:21.489516       1 vnids.go:115] Allocated netid 12953782 for namespace "e2e-k8s-sig-apps-job-upgrade-2174"\nI0408 00:36:21.500046       1 vnids.go:115] Allocated netid 9243011 for namespace "e2e-frontend-ingress-available-5845"\nI0408 00:36:21.511682       1 vnids.go:115] Allocated netid 5146228 for namespace "e2e-control-plane-available-7288"\nI0408 00:36:21.524827       1 vnids.go:115] Allocated netid 13546152 for namespace "e2e-k8s-service-lb-available-8395"\n
Apr 08 00:58:49.747 E ns/openshift-sdn pod/sdn-controller-lw747 node/ip-10-0-138-154.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 00:17:48.867908       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 00:21:46.301355       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0408 00:24:11.823780       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 08 00:59:01.740 E ns/openshift-sdn pod/sdn-qqrn6 node/ip-10-0-152-26.ec2.internal container/sdn container exited with code 255 (Error): I0408 00:58:46.080382  111580 node.go:146] Initializing SDN node "ip-10-0-152-26.ec2.internal" (10.0.152.26) of type "redhat/openshift-ovs-networkpolicy"\nI0408 00:58:46.086808  111580 cmd.go:151] Starting node networking (unknown)\nI0408 00:58:46.273948  111580 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 00:58:46.391100  111580 proxy.go:103] Using unidling+iptables Proxier.\nI0408 00:58:46.391891  111580 proxy.go:129] Tearing down userspace rules.\nI0408 00:58:46.411048  111580 networkpolicy.go:330] SyncVNIDRules: 17 unused VNIDs\nI0408 00:58:46.618139  111580 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 00:58:46.625662  111580 config.go:313] Starting service config controller\nI0408 00:58:46.625707  111580 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 00:58:46.625807  111580 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 00:58:46.626977  111580 config.go:131] Starting endpoints config controller\nI0408 00:58:46.627063  111580 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 00:58:46.725946  111580 shared_informer.go:204] Caches are synced for service config \nI0408 00:58:46.727303  111580 shared_informer.go:204] Caches are synced for endpoints config \nI0408 00:58:47.773490  111580 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-hkq56\nI0408 00:58:50.377933  111580 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-9xq95 got IP 10.129.0.82, ofport 83\nF0408 00:59:00.891966  111580 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 00:59:14.800 E ns/openshift-multus pod/multus-dpdfr node/ip-10-0-152-26.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 00:59:25.874 E ns/openshift-multus pod/multus-admission-controller-n25bw node/ip-10-0-141-101.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 00:59:29.336 E ns/openshift-sdn pod/sdn-d9wtm node/ip-10-0-134-26.ec2.internal container/sdn container exited with code 255 (Error): I0408 00:58:23.130374   70899 node.go:146] Initializing SDN node "ip-10-0-134-26.ec2.internal" (10.0.134.26) of type "redhat/openshift-ovs-networkpolicy"\nI0408 00:58:23.136258   70899 cmd.go:151] Starting node networking (unknown)\nI0408 00:58:23.313692   70899 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 00:58:23.529559   70899 proxy.go:103] Using unidling+iptables Proxier.\nI0408 00:58:23.529855   70899 proxy.go:129] Tearing down userspace rules.\nI0408 00:58:23.538414   70899 networkpolicy.go:330] SyncVNIDRules: 2 unused VNIDs\nI0408 00:58:23.728151   70899 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 00:58:23.738648   70899 config.go:131] Starting endpoints config controller\nI0408 00:58:23.738680   70899 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 00:58:23.738709   70899 config.go:313] Starting service config controller\nI0408 00:58:23.738724   70899 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 00:58:23.738779   70899 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 00:58:23.838875   70899 shared_informer.go:204] Caches are synced for endpoints config \nI0408 00:58:23.838875   70899 shared_informer.go:204] Caches are synced for service config \nF0408 00:59:29.223729   70899 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 08 00:59:56.993 E ns/openshift-multus pod/multus-nzcck node/ip-10-0-148-174.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 01:00:02.123 E ns/openshift-multus pod/multus-admission-controller-d8frt node/ip-10-0-138-154.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 01:00:05.140 E ns/openshift-sdn pod/sdn-v2qbj node/ip-10-0-138-154.ec2.internal container/sdn container exited with code 255 (Error): I0408 00:59:15.256825  108181 node.go:146] Initializing SDN node "ip-10-0-138-154.ec2.internal" (10.0.138.154) of type "redhat/openshift-ovs-networkpolicy"\nI0408 00:59:15.264968  108181 cmd.go:151] Starting node networking (unknown)\nI0408 00:59:15.430582  108181 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 00:59:15.578724  108181 proxy.go:103] Using unidling+iptables Proxier.\nI0408 00:59:15.579137  108181 proxy.go:129] Tearing down userspace rules.\nI0408 00:59:15.590808  108181 networkpolicy.go:330] SyncVNIDRules: 9 unused VNIDs\nI0408 00:59:15.828393  108181 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 00:59:15.847661  108181 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 00:59:15.848254  108181 config.go:313] Starting service config controller\nI0408 00:59:15.848289  108181 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 00:59:15.849434  108181 config.go:131] Starting endpoints config controller\nI0408 00:59:15.850838  108181 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 00:59:15.950929  108181 shared_informer.go:204] Caches are synced for service config \nI0408 00:59:15.951446  108181 shared_informer.go:204] Caches are synced for endpoints config \nE0408 01:00:01.177891  108181 pod.go:232] Error updating OVS multicast flows for VNID 13560559: exit status 1\nI0408 01:00:01.186009  108181 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-d8frt\nW0408 01:00:03.997459  108181 pod.go:274] CNI_ADD openshift-multus/multus-admission-controller-l8sdt failed: exit status 1\nI0408 01:00:04.013500  108181 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-l8sdt\nI0408 01:00:04.093809  108181 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-l8sdt\nF0408 01:00:04.449777  108181 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 01:00:30.090 E ns/openshift-sdn pod/sdn-dgtqd node/ip-10-0-148-174.ec2.internal container/sdn container exited with code 255 (Error): I0408 00:58:55.823845  128043 node.go:146] Initializing SDN node "ip-10-0-148-174.ec2.internal" (10.0.148.174) of type "redhat/openshift-ovs-networkpolicy"\nI0408 00:58:55.828588  128043 cmd.go:151] Starting node networking (unknown)\nI0408 00:58:55.947989  128043 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 00:58:56.071014  128043 proxy.go:103] Using unidling+iptables Proxier.\nI0408 00:58:56.071973  128043 proxy.go:129] Tearing down userspace rules.\nI0408 00:58:56.084381  128043 networkpolicy.go:330] SyncVNIDRules: 1 unused VNIDs\nI0408 00:58:56.270199  128043 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 00:58:56.281999  128043 config.go:313] Starting service config controller\nI0408 00:58:56.282027  128043 config.go:131] Starting endpoints config controller\nI0408 00:58:56.282033  128043 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 00:58:56.282053  128043 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 00:58:56.282120  128043 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 00:58:56.382235  128043 shared_informer.go:204] Caches are synced for endpoints config \nI0408 00:58:56.382242  128043 shared_informer.go:204] Caches are synced for service config \nF0408 01:00:29.753334  128043 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 01:00:50.128 E ns/openshift-multus pod/multus-t58tn node/ip-10-0-132-174.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 01:02:20.506 E ns/openshift-multus pod/multus-k4xb9 node/ip-10-0-141-101.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 01:02:49.679 E ns/openshift-machine-config-operator pod/machine-config-operator-84b95b86bd-wwgbx node/ip-10-0-152-26.ec2.internal container/machine-config-operator container exited with code 2 (Error): eflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0408 00:18:05.262604       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0408 00:18:06.274859       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0408 00:18:07.303955       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0408 00:18:10.672186       1 sync.go:61] [init mode] synced RenderConfig in 5.726907757s\nI0408 00:18:10.913702       1 sync.go:61] [init mode] synced MachineConfigPools in 195.797206ms\nI0408 00:18:39.914447       1 sync.go:61] [init mode] synced MachineConfigDaemon in 28.981905022s\nI0408 00:18:44.988444       1 sync.go:61] [init mode] synced MachineConfigController in 5.070867259s\nI0408 00:18:47.079212       1 sync.go:61] [init mode] synced MachineConfigServer in 2.087555023s\nI0408 00:22:06.111122       1 sync.go:61] [init mode] synced RequiredPools in 3m19.028906507s\nI0408 00:22:06.506689       1 sync.go:92] Initialization complete\nE0408 00:24:11.867677       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 08 01:04:45.240 E ns/openshift-machine-config-operator pod/machine-config-daemon-xptxm node/ip-10-0-138-154.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:04:52.708 E ns/openshift-machine-config-operator pod/machine-config-daemon-5lgb4 node/ip-10-0-132-174.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:04:57.609 E ns/openshift-machine-config-operator pod/machine-config-daemon-6sxpf node/ip-10-0-148-174.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:05:12.217 E ns/openshift-machine-config-operator pod/machine-config-daemon-jnbfd node/ip-10-0-152-26.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:05:17.119 E ns/openshift-machine-config-operator pod/machine-config-daemon-p7hjl node/ip-10-0-141-101.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:05:31.095 E ns/openshift-machine-config-operator pod/machine-config-daemon-ws6dr node/ip-10-0-134-26.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:05:38.313 E ns/openshift-machine-config-operator pod/machine-config-controller-75df7b8bd4-5rcrz node/ip-10-0-152-26.ec2.internal container/machine-config-controller container exited with code 2 (Error): dered-master-bdd12950a69699f07d8b436d6ae4d898\nI0408 00:22:00.797255       1 node_controller.go:452] Pool master: node ip-10-0-141-101.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0408 00:22:05.799083       1 status.go:82] Pool master: All nodes are updated with rendered-master-bdd12950a69699f07d8b436d6ae4d898\nI0408 00:29:00.850832       1 node_controller.go:452] Pool worker: node ip-10-0-132-174.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:00.850862       1 node_controller.go:452] Pool worker: node ip-10-0-132-174.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:00.850872       1 node_controller.go:452] Pool worker: node ip-10-0-132-174.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0408 00:29:13.545567       1 node_controller.go:452] Pool worker: node ip-10-0-134-26.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:13.545742       1 node_controller.go:452] Pool worker: node ip-10-0-134-26.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:13.545807       1 node_controller.go:452] Pool worker: node ip-10-0-134-26.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0408 00:29:32.661512       1 node_controller.go:452] Pool worker: node ip-10-0-148-174.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:32.661548       1 node_controller.go:452] Pool worker: node ip-10-0-148-174.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9b5ff081bf86fcc6e177077da6222a18\nI0408 00:29:32.661559       1 node_controller.go:452] Pool worker: node ip-10-0-148-174.ec2.internal changed machineconfiguration.openshift.io/state = Done\n
Apr 08 01:07:23.562 E ns/openshift-machine-config-operator pod/machine-config-server-jn8jb node/ip-10-0-141-101.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 00:21:21.712793       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 00:21:21.714152       1 api.go:51] Launching server on :22624\nI0408 00:21:21.714246       1 api.go:51] Launching server on :22623\n
Apr 08 01:07:31.731 E ns/openshift-machine-config-operator pod/machine-config-server-v6677 node/ip-10-0-152-26.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 00:18:46.400675       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 00:18:46.401762       1 api.go:51] Launching server on :22624\nI0408 00:18:46.401875       1 api.go:51] Launching server on :22623\nI0408 00:25:04.189303       1 api.go:97] Pool worker requested by 10.0.152.224:61749\nI0408 00:25:05.562790       1 api.go:97] Pool worker requested by 10.0.130.104:31154\n
Apr 08 01:07:35.264 E ns/openshift-kube-storage-version-migrator pod/migrator-874c4d69b-hd9vf node/ip-10-0-132-174.ec2.internal container/migrator container exited with code 2 (Error): 
Apr 08 01:07:35.367 E ns/openshift-monitoring pod/thanos-querier-79cb9d45d4-9v8l2 node/ip-10-0-132-174.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 00:47:59 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 00:47:59 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:47:59 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:47:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 00:47:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:47:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 00:47:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:47:59 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 00:47:59 http.go:107: HTTPS: listening on [::]:9091\nI0408 00:47:59.471220       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 01:07:35.412 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 00:48:52 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 08 01:07:35.412 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 00:48:52 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 00:48:52 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:48:52 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:48:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 00:48:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:48:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 00:48:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:48:52 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0408 00:48:52.807654       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 00:48:52 http.go:107: HTTPS: listening on [::]:9091\n
Apr 08 01:07:35.412 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-174.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T00:48:51.951640145Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T00:48:51.95364434Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T00:48:57.151289789Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T00:48:57.151402317Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 08 01:07:36.343 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-174.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 00:48:39 Watching directory: "/etc/alertmanager/config"\n
Apr 08 01:07:36.343 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-174.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 00:48:40 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:48:40 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:48:40 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:48:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 00:48:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:48:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:48:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:48:40 http.go:107: HTTPS: listening on [::]:9095\nI0408 00:48:40.079353       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 01:07:36.361 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-75698c554c-qvf9f node/ip-10-0-132-174.ec2.internal container/operator container exited with code 255 (Error):  m=+638.439160437\nI0408 00:58:05.735296       1 operator.go:147] Finished syncing operator at 526.941486ms\nI0408 01:03:07.106596       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:07.106587165 +0000 UTC m=+940.337402985\nI0408 01:03:07.138622       1 operator.go:147] Finished syncing operator at 32.027249ms\nI0408 01:03:07.138664       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:07.138660009 +0000 UTC m=+940.369475781\nI0408 01:03:07.164782       1 operator.go:147] Finished syncing operator at 26.115616ms\nI0408 01:03:07.895121       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:07.895106626 +0000 UTC m=+941.125922635\nI0408 01:03:07.926254       1 operator.go:147] Finished syncing operator at 31.134894ms\nI0408 01:03:07.993973       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:07.993958777 +0000 UTC m=+941.224774762\nI0408 01:03:08.023519       1 operator.go:147] Finished syncing operator at 29.550289ms\nI0408 01:03:08.092351       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:08.092340364 +0000 UTC m=+941.323156213\nI0408 01:03:08.118281       1 operator.go:147] Finished syncing operator at 25.931801ms\nI0408 01:03:08.192543       1 operator.go:145] Starting syncing operator at 2020-04-08 01:03:08.19253147 +0000 UTC m=+941.423347249\nI0408 01:03:08.721915       1 operator.go:147] Finished syncing operator at 529.370781ms\nI0408 01:07:33.366997       1 operator.go:145] Starting syncing operator at 2020-04-08 01:07:33.366986272 +0000 UTC m=+1206.597802196\nI0408 01:07:33.424419       1 operator.go:147] Finished syncing operator at 57.41859ms\nI0408 01:07:33.424473       1 operator.go:145] Starting syncing operator at 2020-04-08 01:07:33.424466751 +0000 UTC m=+1206.655282803\nI0408 01:07:33.483259       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 01:07:33.483740       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0408 01:07:33.483763       1 builder.go:210] server exited\n
Apr 08 01:07:36.466 E ns/openshift-monitoring pod/prometheus-adapter-75c75f4d8d-tpbwn node/ip-10-0-132-174.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 00:47:59.423504       1 adapter.go:93] successfully using in-cluster auth\nI0408 00:48:00.460205       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 01:07:37.827 E ns/openshift-cluster-machine-approver pod/machine-approver-5f46498d47-kvbnw node/ip-10-0-138-154.ec2.internal container/machine-approver-controller container exited with code 2 (Error): .0.0.1:6443: connect: connection refused\nE0408 00:47:35.319012       1 reflector.go:178] github.com/openshift/cluster-machine-approver/main.go:240: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nI0408 00:47:45.911956       1 main.go:148] CSR csr-s9gzl added\nI0408 00:47:45.911992       1 main.go:151] CSR csr-s9gzl is already approved\nI0408 00:47:45.912016       1 main.go:148] CSR csr-tzl2d added\nI0408 00:47:45.912027       1 main.go:151] CSR csr-tzl2d is already approved\nI0408 00:47:45.912040       1 main.go:148] CSR csr-vclqf added\nI0408 00:47:45.912049       1 main.go:151] CSR csr-vclqf is already approved\nI0408 00:47:45.923277       1 main.go:148] CSR csr-5vndt added\nI0408 00:47:45.923300       1 main.go:151] CSR csr-5vndt is already approved\nI0408 00:47:45.923321       1 main.go:148] CSR csr-bwvgl added\nI0408 00:47:45.923332       1 main.go:151] CSR csr-bwvgl is already approved\nI0408 00:47:45.923372       1 main.go:148] CSR csr-mkjbs added\nI0408 00:47:45.923412       1 main.go:151] CSR csr-mkjbs is already approved\nI0408 00:47:45.923428       1 main.go:148] CSR csr-pzjwv added\nI0408 00:47:45.923448       1 main.go:151] CSR csr-pzjwv is already approved\nI0408 00:47:45.923469       1 main.go:148] CSR csr-xhzls added\nI0408 00:47:45.923479       1 main.go:151] CSR csr-xhzls is already approved\nI0408 00:47:45.923509       1 main.go:148] CSR csr-57jgd added\nI0408 00:47:45.923519       1 main.go:151] CSR csr-57jgd is already approved\nI0408 00:47:45.923573       1 main.go:148] CSR csr-9zwg9 added\nI0408 00:47:45.923591       1 main.go:151] CSR csr-9zwg9 is already approved\nI0408 00:47:45.923610       1 main.go:148] CSR csr-h5xcr added\nI0408 00:47:45.923625       1 main.go:151] CSR csr-h5xcr is already approved\nI0408 00:47:45.923645       1 main.go:148] CSR csr-tq4r8 added\nI0408 00:47:45.923655       1 main.go:151] CSR csr-tq4r8 is already approved\n
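(Aside: the machine-approver log above simply reports that every CSR it re-lists after reconnecting is already approved. As a hedged sketch only, not the approver's real implementation, listing CSRs and testing for the Approved condition with client-go's certificates/v1beta1 API looks like this:)

package main

import (
	"context"
	"fmt"

	certsv1beta1 "k8s.io/api/certificates/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// isApproved reports whether a CSR already carries the Approved condition,
// which is what the machine-approver logs as "is already approved".
func isApproved(csr *certsv1beta1.CertificateSigningRequest) bool {
	for _, c := range csr.Status.Conditions {
		if c.Type == certsv1beta1.CertificateApproved {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	csrs, err := client.CertificatesV1beta1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range csrs.Items {
		fmt.Printf("CSR %s approved=%v\n", csrs.Items[i].Name, isApproved(&csrs.Items[i]))
	}
}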
Apr 08 01:07:44.122 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-66d4fb857d-qcppj node/ip-10-0-148-174.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 08 01:07:59.540 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-26.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T01:07:56.021Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T01:07:56.024Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T01:07:56.024Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T01:07:56.025Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T01:07:56.025Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T01:07:56.025Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T01:07:56.026Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T01:07:56.026Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 01:08:19.436 E ns/e2e-k8s-service-lb-available-8395 pod/service-test-hmpjh node/ip-10-0-132-174.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 08 01:09:59.088 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 01:10:29.350 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): I0408 00:43:58.942979       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0408 00:43:58.950172       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0408 00:43:58.953168       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0408 00:43:58.953384       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 08 01:10:29.350 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-controller-manager container exited with code 2 (Error): ", Name:"oauth-openshift", UID:"23fb7f59-730e-4ddd-8ee3-c84b319d3c1c", APIVersion:"apps/v1", ResourceVersion:"41607", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set oauth-openshift-75f7c59c7b to 1\nI0408 01:08:05.935353       1 replica_set.go:598] Too many replicas for ReplicaSet openshift-authentication/oauth-openshift-75f7c59c7b, need 1, deleting 1\nI0408 01:08:05.935392       1 replica_set.go:226] Found 5 related ReplicaSets for ReplicaSet openshift-authentication/oauth-openshift-75f7c59c7b: oauth-openshift-75f7c59c7b, oauth-openshift-9cbbb9bb4, oauth-openshift-68dbd699, oauth-openshift-86bb46ff, oauth-openshift-59c7798b59\nI0408 01:08:05.935506       1 controller_utils.go:604] Controller oauth-openshift-75f7c59c7b deleting pod openshift-authentication/oauth-openshift-75f7c59c7b-d6dw6\nI0408 01:08:05.952794       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-75f7c59c7b", UID:"8f50fd04-6429-4744-ac29-ff5058eaf214", APIVersion:"apps/v1", ResourceVersion:"41857", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: oauth-openshift-75f7c59c7b-d6dw6\nI0408 01:08:05.965042       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-authentication/oauth-openshift-9cbbb9bb4, need 2, creating 1\nI0408 01:08:05.968935       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication", Name:"oauth-openshift", UID:"23fb7f59-730e-4ddd-8ee3-c84b319d3c1c", APIVersion:"apps/v1", ResourceVersion:"41859", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set oauth-openshift-9cbbb9bb4 to 2\nI0408 01:08:06.006368       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-9cbbb9bb4", UID:"68d45456-a12d-4647-869c-ef286217bf2f", APIVersion:"apps/v1", ResourceVersion:"41863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: oauth-openshift-9cbbb9bb4-g8kkt\n
Apr 08 01:10:29.350 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 7074       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:45.507520       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:07:45.509171       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:45.509464       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:07:45.924887       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:45.925272       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:07:54.666441       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:54.667118       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:07:55.916393       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:55.916770       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:08:04.673700       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:08:04.674007       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:08:05.933576       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:08:05.933988       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 01:10:29.388 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:07:51.779255       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:07:51.779284       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:07:53.794214       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:07:53.794247       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:07:55.808227       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:07:55.808292       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:07:57.819865       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:07:57.819898       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:07:59.838775       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:07:59.838806       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:08:01.853620       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:08:01.853725       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:08:03.864073       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:08:03.864101       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:08:05.877149       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:08:05.877182       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:08:07.891691       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:08:07.891828       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:08:09.902842       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:08:09.903035       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 01:10:29.388 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-scheduler container exited with code 2 (Error): amespace "openshift-kube-scheduler"\nE0408 00:47:43.411822       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0408 00:47:43.500839       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0408 00:47:43.500950       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0408 00:47:43.501038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope\nE0408 00:47:43.501113       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0408 00:47:43.501187       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0408 00:47:43.577313       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0408 00:47:43.577496       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope\n
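(Aside: the "User \"system:kube-scheduler\" cannot list resource ..." errors above are transient RBAC denials while the apiserver restarts and re-syncs its authorizer caches. Purely as an illustration under that assumption, a self-check for the same permission can be issued with a SelfSubjectAccessReview:)

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether the current identity may list Services cluster-wide,
	// i.e. the same check that failed transiently for system:kube-scheduler above.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "services",
			},
		},
	}
	result, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", result.Status.Allowed, result.Status.Reason)
}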
Apr 08 01:10:29.426 E ns/openshift-etcd pod/etcd-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-138-154.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-138-154.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T00:43:57.135Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:43:57.135Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-138-154.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-138-154.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T00:43:57.137Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T00:43:57.137Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:43:57.137Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T00:43:57.138Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.138.154:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.138.154:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-08T00:43:58.139Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.138.154:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.138.154:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 01:10:29.457 E ns/openshift-cluster-node-tuning-operator pod/tuned-sqfk9 node/ip-10-0-138-154.ec2.internal container/tuned container exited with code 143 (Error): 408 00:47:59.193330   83857 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 00:47:59.216462   83857 tuned.go:513] tuned "rendered" added\nI0408 00:47:59.217601   83857 tuned.go:218] extracting tuned profiles\nI0408 00:48:00.143933   83857 tuned.go:392] getting recommended profile...\nI0408 00:48:00.811354   83857 tuned.go:419] active profile () != recommended profile (openshift-control-plane)\nI0408 00:48:00.811403   83857 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 00:48:00.811445   83857 tuned.go:285] starting tuned...\n2020-04-08 00:48:01,433 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 00:48:01,519 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 00:48:01,520 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 00:48:01,528 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 00:48:01,546 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 00:48:01,819 INFO     tuned.daemon.controller: starting controller\n2020-04-08 00:48:01,820 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 00:48:01,937 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 00:48:01,939 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 00:48:01,988 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 00:48:01,998 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 00:48:02,005 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 00:48:02,590 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 00:48:02,598 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 08 01:10:29.478 E ns/openshift-monitoring pod/node-exporter-245tx node/ip-10-0-138-154.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:48:03Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:10:29.494 E ns/openshift-controller-manager pod/controller-manager-gngrf node/ip-10-0-138-154.ec2.internal container/controller-manager container exited with code 1 (Error): istry.svc:5000:{}]\nI0408 00:49:22.032951       1 deleted_token_secrets.go:69] caches synced\nW0408 01:07:36.815262       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 861; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.815475       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 889; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.815616       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 725; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.815751       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 909; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.815894       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 745; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.816026       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 923; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 01:10:29.530 E ns/openshift-sdn pod/sdn-controller-8vhp7 node/ip-10-0-138-154.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 00:58:52.604473       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 08 01:10:29.567 E ns/openshift-sdn pod/ovs-8sjzl node/ip-10-0-138-154.ec2.internal container/openvswitch container exited with code 1 (Error): Z|00227|bridge|INFO|bridge br0: deleted interface vethef287e60 on port 76\n2020-04-08T01:08:00.754Z|00228|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:00.758Z|00229|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:00.801Z|00230|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:00.818Z|00231|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.509Z|00232|connmgr|INFO|br0<->unix#528: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:08:01.548Z|00233|connmgr|INFO|br0<->unix#531: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:08:01.573Z|00234|bridge|INFO|bridge br0: deleted interface veth02f9e08b on port 72\n2020-04-08T01:08:01.584Z|00235|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.587Z|00236|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.645Z|00237|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.652Z|00238|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.795Z|00239|connmgr|INFO|br0<->unix#534: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:08:01.827Z|00240|connmgr|INFO|br0<->unix#537: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:08:01.853Z|00241|bridge|INFO|bridge br0: deleted interface veth75fc19bb on port 77\n2020-04-08T01:08:01.862Z|00242|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.867Z|00243|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.907Z|00244|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08T01:08:01.913Z|00245|bridge|WARN|could not open network device veth81aaf6e0 (No such device)\n2020-04-08 01:08:10 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 01:10:29.586 E ns/openshift-multus pod/multus-admission-controller-l8sdt node/ip-10-0-138-154.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 01:10:29.615 E ns/openshift-multus pod/multus-skd7s node/ip-10-0-138-154.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:10:29.672 E ns/openshift-machine-config-operator pod/machine-config-daemon-4rqjb node/ip-10-0-138-154.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:10:29.830 E ns/openshift-machine-config-operator pod/machine-config-server-9phqn node/ip-10-0-138-154.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 01:07:29.896741       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 01:07:29.898392       1 api.go:51] Launching server on :22624\nI0408 01:07:29.898500       1 api.go:51] Launching server on :22623\n
Apr 08 01:10:31.193 E ns/openshift-monitoring pod/node-exporter-lfmlq node/ip-10-0-132-174.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:48:39Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:10:31.216 E ns/openshift-cluster-node-tuning-operator pod/tuned-pgsk9 node/ip-10-0-132-174.ec2.internal container/tuned container exited with code 143 (Error): d profile...\nI0408 00:49:09.946753   59478 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0408 00:49:09.947240   59478 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 00:49:09.947287   59478 tuned.go:285] starting tuned...\n2020-04-08 00:49:10,073 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 00:49:10,080 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 00:49:10,080 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 00:49:10,081 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 00:49:10,082 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 00:49:10,115 INFO     tuned.daemon.controller: starting controller\n2020-04-08 00:49:10,116 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 00:49:10,126 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 00:49:10,127 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 00:49:10,130 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 00:49:10,132 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 00:49:10,134 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 00:49:10,274 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 00:49:10,282 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0408 01:08:10.705899   59478 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 01:08:10.705909   59478 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 01:08:43.838881   59478 tuned.go:114] received signal: terminated\nI0408 01:08:43.838923   59478 tuned.go:326] sending TERM to PID 59540\n
Apr 08 01:10:31.232 E ns/openshift-sdn pod/ovs-9xhpq node/ip-10-0-132-174.ec2.internal container/openvswitch container exited with code 143 (Error): ds in the last 0 s (4 deletes)\n2020-04-08T01:07:35.421Z|00100|bridge|INFO|bridge br0: deleted interface vethb42e5826 on port 29\n2020-04-08T01:07:35.491Z|00101|connmgr|INFO|br0<->unix#498: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:07:35.560Z|00102|connmgr|INFO|br0<->unix#501: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:07:35.596Z|00103|bridge|INFO|bridge br0: deleted interface veth6278f301 on port 21\n2020-04-08T01:07:35.655Z|00104|connmgr|INFO|br0<->unix#504: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:07:35.725Z|00105|connmgr|INFO|br0<->unix#507: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:07:35.775Z|00106|bridge|INFO|bridge br0: deleted interface veth91d06e5b on port 24\n2020-04-08T01:07:35.839Z|00107|connmgr|INFO|br0<->unix#510: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:07:35.912Z|00108|connmgr|INFO|br0<->unix#513: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:07:35.953Z|00109|bridge|INFO|bridge br0: deleted interface veth89be93c0 on port 27\n2020-04-08T01:08:03.171Z|00110|connmgr|INFO|br0<->unix#535: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:08:03.210Z|00111|connmgr|INFO|br0<->unix#538: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:08:03.240Z|00112|bridge|INFO|bridge br0: deleted interface veth4c757a08 on port 15\n2020-04-08T01:08:15.905Z|00011|jsonrpc|WARN|unix#466: receive error: Connection reset by peer\n2020-04-08T01:08:15.905Z|00012|reconnect|WARN|unix#466: connection dropped (Connection reset by peer)\n2020-04-08T01:08:18.710Z|00113|connmgr|INFO|br0<->unix#555: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:08:18.739Z|00114|connmgr|INFO|br0<->unix#558: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:08:18.765Z|00115|bridge|INFO|bridge br0: deleted interface veth801daa8d on port 18\n2020-04-08T01:08:18.751Z|00013|jsonrpc|WARN|unix#472: receive error: Connection reset by peer\n2020-04-08T01:08:18.751Z|00014|reconnect|WARN|unix#472: connection dropped (Connection reset by peer)\n2020-04-08 01:08:43 info: Saving flows ...\n
Apr 08 01:10:31.358 E ns/openshift-multus pod/multus-hrss9 node/ip-10-0-132-174.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:10:31.431 E ns/openshift-machine-config-operator pod/machine-config-daemon-qfdzd node/ip-10-0-132-174.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:10:32.284 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-apiserver container exited with code 1 (Error):  1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:08:10.587236       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:08:10.585721       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 01:08:10.589234       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:08:10.599005       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 01:08:10.599057       1 cacher.go:166] Terminating all watchers from cacher *rbac.ClusterRoleBinding\nW0408 01:08:10.600353       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0408 01:08:10.603186       1 healthz.go:200] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-StartOAuthInformers ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\nW0408 01:08:10.625628       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\n
Apr 08 01:10:32.284 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 00:47:38.918761       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 01:10:32.284 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-154.ec2.internal node/ip-10-0-138-154.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:07:58.448139       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:07:58.448525       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:08:08.458597       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:08:08.458939       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 01:10:39.903 E ns/openshift-machine-config-operator pod/machine-config-daemon-4rqjb node/ip-10-0-138-154.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 01:10:39.940 E ns/openshift-machine-config-operator pod/machine-config-daemon-qfdzd node/ip-10-0-132-174.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 01:10:49.385 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 08 01:10:49.747 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c598f4cc7-2gkq6 node/ip-10-0-148-174.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 08 01:10:49.813 E ns/openshift-marketplace pod/certified-operators-8fd7ccd48-77lwl node/ip-10-0-148-174.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 01:10:49.834 E ns/openshift-marketplace pod/redhat-operators-9f48c88dd-b85t4 node/ip-10-0-148-174.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 08 01:10:49.853 E ns/openshift-kube-storage-version-migrator pod/migrator-874c4d69b-rbrjv node/ip-10-0-148-174.ec2.internal container/migrator container exited with code 2 (Error): 
Apr 08 01:10:49.871 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-75698c554c-mk4tz node/ip-10-0-148-174.ec2.internal container/operator container exited with code 255 (Error): at 2020-04-08 01:08:02.128011827 +0000 UTC m=+24.348034232\nI0408 01:08:02.193632       1 operator.go:147] Finished syncing operator at 65.607385ms\nI0408 01:08:10.068020       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:10.06801178 +0000 UTC m=+32.288034082\nI0408 01:08:10.095239       1 operator.go:147] Finished syncing operator at 27.221081ms\nI0408 01:08:10.095277       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:10.095273305 +0000 UTC m=+32.315295580\nI0408 01:08:10.121570       1 operator.go:147] Finished syncing operator at 26.289475ms\nI0408 01:08:10.858104       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:10.858093398 +0000 UTC m=+33.078115911\nI0408 01:08:10.882011       1 operator.go:147] Finished syncing operator at 23.911222ms\nI0408 01:08:10.957917       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:10.957908691 +0000 UTC m=+33.177930989\nI0408 01:08:11.002951       1 operator.go:147] Finished syncing operator at 45.033257ms\nI0408 01:08:11.055072       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:11.055062535 +0000 UTC m=+33.275084848\nI0408 01:08:11.079437       1 operator.go:147] Finished syncing operator at 24.366771ms\nI0408 01:08:11.159359       1 operator.go:145] Starting syncing operator at 2020-04-08 01:08:11.15934806 +0000 UTC m=+33.379370461\nI0408 01:08:11.703407       1 operator.go:147] Finished syncing operator at 544.049852ms\nI0408 01:10:47.657679       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 01:10:47.657808       1 logging_controller.go:93] Shutting down LogLevelController\nI0408 01:10:47.657847       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0408 01:10:47.657904       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0408 01:10:47.657924       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nF0408 01:10:47.658004       1 builder.go:243] stopped\n
Apr 08 01:10:50.898 E ns/openshift-monitoring pod/openshift-state-metrics-789887dcd4-m75rj node/ip-10-0-148-174.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 08 01:10:51.194 E ns/openshift-monitoring pod/prometheus-adapter-75c75f4d8d-2mvpc node/ip-10-0-148-174.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 00:48:03.979893       1 adapter.go:93] successfully using in-cluster auth\nI0408 00:48:05.137434       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 01:10:51.973 E ns/openshift-monitoring pod/thanos-querier-79cb9d45d4-xngbs node/ip-10-0-148-174.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 00:48:18 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 00:48:18 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:48:18 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:48:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 00:48:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:48:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 00:48:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:48:18 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 00:48:18 http.go:107: HTTPS: listening on [::]:9091\nI0408 00:48:18.259411       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 01:10:57.769 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 08 01:11:05.364 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-174.ec2.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T01:11:03.257Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T01:11:03.262Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T01:11:03.263Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T01:11:03.264Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T01:11:03.264Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T01:11:03.264Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T01:11:03.266Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T01:11:03.266Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 01:11:34.062 E ns/e2e-k8s-service-lb-available-8395 pod/service-test-vp2df node/ip-10-0-148-174.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 08 01:11:39.430 E kube-apiserver failed contacting the API: Get https://api.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=45320&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp 18.205.223.161:6443: connect: connection refused
Apr 08 01:11:44.890 E kube-apiserver Kube API started failing: Get https://api.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
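(Aside: the two kube-apiserver entries above show the two failure modes the availability monitor distinguishes: "connection refused" while an apiserver instance restarts, and a client-side timeout while the load balancer has no healthy backend. A rough sketch of that kind of probe, an illustration rather than the openshift-tests monitor itself, polling the same /api/v1/namespaces/kube-system path with a short timeout; the URL is a placeholder and real requests would also need the kubeconfig's TLS and bearer-token settings:)

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint; the monitor hits the cluster's external API URL,
	// e.g. https://api.<cluster-domain>:6443/api/v1/namespaces/kube-system.
	const url = "https://api.example.test:6443/api/v1/namespaces/kube-system?timeout=5s"

	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Both "connect: connection refused" and "Client.Timeout exceeded" surface here.
			fmt.Println("API unreachable:", err)
		} else {
			fmt.Println("API responding:", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}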
Apr 08 01:13:02.566 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-101.ec2.internal" not ready since 2020-04-08 01:12:37 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-141-101.ec2.internal members are unhealthy,  members are unknown
Apr 08 01:13:57.050 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): r.go:144] Started "openshift.io/resourcequota"\nI0408 00:45:15.877087       1 policy_controller.go:147] Started Origin Controllers\nI0408 00:45:15.877411       1 resource_quota_controller.go:271] Starting resource quota controller\nI0408 00:45:15.877433       1 shared_informer.go:197] Waiting for caches to sync for resource quota\nI0408 00:45:15.977688       1 shared_informer.go:204] Caches are synced for resource quota \nW0408 01:07:36.811462       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 659; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:07:36.811668       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 887; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:57.975399       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 985; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:57.984888       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 969; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:57.985051       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 967; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 01:13:57.050 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 8077       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:04.938629       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:10.793419       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:10.794200       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:14.951265       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:14.951790       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:20.804499       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:20.804885       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:24.959164       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:24.959543       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:30.824555       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:30.824875       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:11:34.973643       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:34.973993       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 01:13:57.050 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-controller-manager container exited with code 2 (Error): Endpoints", Namespace:"openshift-cluster-version", Name:"cluster-version-operator", UID:"e70b87fa-8c37-42c2-b600-21e3d5c921cb", APIVersion:"v1", ResourceVersion:"45161", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-cluster-version/cluster-version-operator: Operation cannot be fulfilled on endpoints "cluster-version-operator": the object has been modified; please apply your changes to the latest version and try again\nE0408 01:11:32.617526       1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0408 01:11:33.894298       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"f732672a-7205-475c-a9e4-bbc9d48dc729", APIVersion:"apps/v1", ResourceVersion:"45129", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-59fcddd7b to 0\nI0408 01:11:33.894936       1 replica_set.go:598] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-59fcddd7b, need 0, deleting 1\nI0408 01:11:33.895116       1 replica_set.go:226] Found 6 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-59fcddd7b: packageserver-6c58cfb95d, packageserver-54585dc7ff, packageserver-59fcddd7b, packageserver-64668dbcf4, packageserver-796bdd5d76, packageserver-75b85c8654\nI0408 01:11:33.895317       1 controller_utils.go:604] Controller packageserver-59fcddd7b deleting pod openshift-operator-lifecycle-manager/packageserver-59fcddd7b-ttjtz\nI0408 01:11:33.918629       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-59fcddd7b", UID:"0d37c26d-ce05-4ebc-a4fa-bbbd3ecf415d", APIVersion:"apps/v1", ResourceVersion:"45259", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-59fcddd7b-ttjtz\n
Apr 08 01:13:57.109 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-apiserver container exited with code 1 (Error): r.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 01:11:38.944345       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:11:38.944609       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:11:38.944692       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:11:38.983607       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 01:11:38.983678       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 01:11:38.984756       1 cacher.go:166] Terminating all watchers from cacher *core.ServiceAccount\nI0408 01:11:39.051209       1 trace.go:116] Trace[811847423]: "cacher list" type:*core.ServiceAccount (started: 2020-04-08 01:11:36.544416859 +0000 UTC m=+1694.475262253) (total time: 2.506761212s):\nTrace[811847423]: [2.50666941s] [2.506651348s] watchCache fresh enough\nI0408 01:11:39.051326       1 trace.go:116] Trace[783424770]: "List" url:/api/v1/namespaces/openshift-apiserver/serviceaccounts,user-agent:cluster-openshift-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.152.26 (started: 2020-04-08 01:11:36.544389248 +0000 UTC m=+1694.475234793) (total time: 2.506908598s):\nTrace[783424770]: [2.506840094s] [2.506819177s] Listing from storage done\nE0408 01:11:39.128911       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: leader changed"}\nI0408 01:11:39.157203       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-141-101.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0408 01:11:39.157438       1 controller.go:181] Shutting down kubernetes service endpoint reconciler\n
Apr 08 01:13:57.109 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 00:43:24.361327       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 01:13:57.109 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:11:18.804092       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:18.804660       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:11:28.833072       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:11:28.834327       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 01:13:57.161 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:19.356245       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:19.356280       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:21.367114       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:21.367205       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:23.378493       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:23.378530       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:25.393553       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:25.393673       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:27.405170       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:27.405231       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:29.420552       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:29.420586       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:31.435379       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:31.435416       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:33.444419       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:33.444557       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:35.455093       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:35.455277       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:11:37.469884       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:11:37.469999       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 01:13:57.161 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/kube-scheduler container exited with code 2 (Error): h=true: dial tcp [::1]:6443: connect: connection refused\nE0408 00:43:29.117788       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 00:43:55.384815       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 00:43:56.426162       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 00:43:57.481101       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 00:44:00.582271       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:09:11.571729       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:10:29.416703       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:10:32.311344       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:10:33.469022       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:10:35.704524       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\nE0408 01:10:39.456894       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 2b04558a-0f18-4c14-b2ad-dcff76bf561d is not added to scheduler cache, so cannot be updated\n
Apr 08 01:13:57.224 E ns/openshift-cluster-node-tuning-operator pod/tuned-2lbk9 node/ip-10-0-141-101.ec2.internal container/tuned container exited with code 143 (Error): 8 00:48:03.362226   84555 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 00:48:04.342325   84555 tuned.go:392] getting recommended profile...\nI0408 00:48:04.615970   84555 tuned.go:419] active profile () != recommended profile (openshift-control-plane)\nI0408 00:48:04.616108   84555 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 00:48:04.616197   84555 tuned.go:285] starting tuned...\n2020-04-08 00:48:04,799 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 00:48:04,809 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 00:48:04,810 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 00:48:04,811 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 00:48:04,812 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 00:48:04,866 INFO     tuned.daemon.controller: starting controller\n2020-04-08 00:48:04,866 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 00:48:04,885 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 00:48:04,888 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 00:48:04,894 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 00:48:04,896 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 00:48:04,898 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 00:48:05,062 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 00:48:05,074 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 01:11:38.868598   84555 tuned.go:527] tuned "rendered" changed\nI0408 01:11:38.868624   84555 tuned.go:218] extracting tuned profiles\n
Apr 08 01:13:57.298 E ns/openshift-monitoring pod/node-exporter-9smgp node/ip-10-0-141-101.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:48:25Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:25Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:13:57.317 E ns/openshift-controller-manager pod/controller-manager-pvmxb node/ip-10-0-141-101.ec2.internal container/controller-manager container exited with code 1 (Error): 01:09:18.156071       1 create_dockercfg_secrets.go:218] urls found\nI0408 01:09:18.156080       1 create_dockercfg_secrets.go:224] caches synced\nI0408 01:09:18.156488       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.50.184:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.50.184:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0408 01:09:18.190195       1 build_controller.go:474] Starting build controller\nI0408 01:09:18.190221       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0408 01:10:57.976438       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 487; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:57.976152       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 497; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:57.979777       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 473; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 01:10:58.020704       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 481; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 01:13:57.333 E ns/openshift-sdn pod/ovs-sgm7x node/ip-10-0-141-101.ec2.internal container/openvswitch container exited with code 1 (Error): flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:16.721Z|00168|bridge|INFO|bridge br0: added interface veth79c25865 on port 80\n2020-04-08T01:11:16.792Z|00169|connmgr|INFO|br0<->unix#823: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T01:11:16.902Z|00170|connmgr|INFO|br0<->unix#827: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:16.920Z|00171|connmgr|INFO|br0<->unix#829: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T01:11:17.679Z|00172|connmgr|INFO|br0<->unix#832: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:17.783Z|00173|connmgr|INFO|br0<->unix#835: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:17.901Z|00174|bridge|INFO|bridge br0: deleted interface veth2d5c74ea on port 78\n2020-04-08T01:11:19.483Z|00175|connmgr|INFO|br0<->unix#838: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:19.520Z|00176|connmgr|INFO|br0<->unix#841: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:19.554Z|00177|bridge|INFO|bridge br0: deleted interface vethf0c902a4 on port 79\n2020-04-08T01:11:20.471Z|00178|connmgr|INFO|br0<->unix#847: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:20.504Z|00179|connmgr|INFO|br0<->unix#850: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:20.527Z|00180|bridge|INFO|bridge br0: deleted interface veth79c25865 on port 80\n2020-04-08T01:11:23.012Z|00181|connmgr|INFO|br0<->unix#853: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:23.042Z|00182|connmgr|INFO|br0<->unix#856: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:23.065Z|00183|bridge|INFO|bridge br0: deleted interface vetha98335e5 on port 76\n2020-04-08T01:11:23.474Z|00184|connmgr|INFO|br0<->unix#859: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:23.514Z|00185|connmgr|INFO|br0<->unix#862: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:23.538Z|00186|bridge|INFO|bridge br0: deleted interface veth16200d24 on port 70\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 01:13:57.375 E ns/openshift-sdn pod/sdn-controller-f9cs7 node/ip-10-0-141-101.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 00:58:48.263799       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 00:58:48.283980       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"e6051a03-6c0e-4a71-8dd5-79b0ec1162e0", ResourceVersion:"36663", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721901825, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-141-101\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-08T00:17:05Z\",\"renewTime\":\"2020-04-08T00:58:48Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0003675e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000367600)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-141-101 became leader'\nI0408 00:58:48.284108       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0408 00:58:48.289274       1 master.go:51] Initializing SDN master\nI0408 00:58:48.304323       1 network_controller.go:61] Started OpenShift Network Controller\n
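The "no kind is registered for the type v1.ConfigMap in scheme ..." failure above is what client-go's event machinery reports when it builds an ObjectReference through a runtime.Scheme that has no core/v1 types registered and the object's TypeMeta is empty. A standalone sketch of that behaviour, not the sdn-controller's own code, showing both the failure and the effect of registering core/v1:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/tools/reference"
    )

    func main() {
        cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
            Name:      "openshift-network-controller",
            Namespace: "openshift-sdn",
        }}

        // A scheme without core/v1 registered cannot resolve the object's kind.
        empty := runtime.NewScheme()
        if _, err := reference.GetReference(empty, cm); err != nil {
            fmt.Println("without core/v1:", err)
            // no kind is registered for the type v1.ConfigMap in scheme ...
        }

        // Registering core/v1 lets GetReference fill in Kind and APIVersion.
        withCore := runtime.NewScheme()
        if err := corev1.AddToScheme(withCore); err != nil {
            panic(err)
        }
        ref, err := reference.GetReference(withCore, cm)
        if err != nil {
            panic(err)
        }
        fmt.Printf("with core/v1: %s %s/%s\n", ref.Kind, ref.Namespace, ref.Name)
    }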
Apr 08 01:13:57.390 E ns/openshift-multus pod/multus-admission-controller-54m7j node/ip-10-0-141-101.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 01:13:57.413 E ns/openshift-multus pod/multus-b9wkd node/ip-10-0-141-101.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:13:57.444 E ns/openshift-machine-config-operator pod/machine-config-daemon-6smr7 node/ip-10-0-141-101.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:13:57.461 E ns/openshift-machine-config-operator pod/machine-config-server-t6vbz node/ip-10-0-141-101.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 01:07:25.569613       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 01:07:25.570860       1 api.go:51] Launching server on :22624\nI0408 01:07:25.570986       1 api.go:51] Launching server on :22623\n
Apr 08 01:13:57.476 E ns/openshift-cluster-version pod/cluster-version-operator-756c7c58cc-g9cpm node/ip-10-0-141-101.ec2.internal container/cluster-version-operator container exited with code 255 (Error): 61       1 sync_worker.go:634] Done syncing for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (87 of 565)\nI0408 01:11:38.873228       1 task_graph.go:568] Canceled worker 13\nI0408 01:11:38.846710       1 task_graph.go:568] Canceled worker 1\nI0408 01:11:38.851842       1 task_graph.go:568] Canceled worker 3\nI0408 01:11:38.870054       1 task_graph.go:568] Canceled worker 5\nI0408 01:11:38.870068       1 task_graph.go:568] Canceled worker 4\nI0408 01:11:38.870078       1 task_graph.go:568] Canceled worker 15\nI0408 01:11:38.870097       1 task_graph.go:568] Canceled worker 2\nI0408 01:11:38.870116       1 cvo.go:439] Started syncing cluster version "openshift-cluster-version/version" (2020-04-08 01:11:38.870107983 +0000 UTC m=+8.623299677)\nI0408 01:11:38.873636       1 cvo.go:468] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-wwqd8rgz/release@sha256:ac935162c3850af3827eed7e2880041102218ed5e1c046c1092451b992587ac8", Force:true}\nI0408 01:11:38.883767       1 sync_worker.go:634] Done syncing for serviceaccount "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (97 of 565)\nI0408 01:11:38.883942       1 task_graph.go:568] Canceled worker 6\nI0408 01:11:38.884072       1 task_graph.go:588] Workers finished\nI0408 01:11:38.884176       1 task_graph.go:596] Result of work: [update was cancelled at 93 of 565 update was cancelled at 94 of 565]\nI0408 01:11:38.884033       1 task_graph.go:516] No more reachable nodes in graph, continue\nI0408 01:11:38.908949       1 sync_worker.go:771] All errors were cancellation errors: [update was cancelled at 93 of 565 update was cancelled at 94 of 565]\nI0408 01:11:38.909035       1 task_graph.go:552] No more work\nI0408 01:11:38.954151       1 cvo.go:441] Finished syncing cluster version "openshift-cluster-version/version" (84.030575ms)\nI0408 01:11:38.954339       1 cvo.go:366] Shutting down ClusterVersionOperator\nF0408 01:11:39.040317       1 start.go:148] Received shutdown signal twice, exiting\n
Apr 08 01:14:01.114 E ns/openshift-etcd pod/etcd-ip-10-0-141-101.ec2.internal node/ip-10-0-141-101.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-141-101.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-141-101.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T00:41:56.226Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:41:56.227Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-101.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-101.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-08T00:41:56.248Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.141.101:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.141.101:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-08T00:41:56.248Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T00:41:56.248Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:41:56.249Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T00:41:57.249Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.141.101:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.141.101:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 01:14:12.722 E ns/openshift-machine-config-operator pod/machine-config-daemon-6smr7 node/ip-10-0-141-101.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 01:14:30.263 E ns/openshift-cluster-machine-approver pod/machine-approver-5f46498d47-rqtpp node/ip-10-0-152-26.ec2.internal container/machine-approver-controller container exited with code 2 (Error): no such file or directory\nI0408 01:07:45.494371       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0408 01:07:45.494424       1 main.go:238] Starting Machine Approver\nI0408 01:07:45.494886       1 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:240\nI0408 01:07:45.594707       1 main.go:148] CSR csr-bwvgl added\nI0408 01:07:45.594758       1 main.go:151] CSR csr-bwvgl is already approved\nI0408 01:07:45.594777       1 main.go:148] CSR csr-mkjbs added\nI0408 01:07:45.594787       1 main.go:151] CSR csr-mkjbs is already approved\nI0408 01:07:45.594802       1 main.go:148] CSR csr-tzl2d added\nI0408 01:07:45.594813       1 main.go:151] CSR csr-tzl2d is already approved\nI0408 01:07:45.594826       1 main.go:148] CSR csr-tq4r8 added\nI0408 01:07:45.594837       1 main.go:151] CSR csr-tq4r8 is already approved\nI0408 01:07:45.594852       1 main.go:148] CSR csr-vclqf added\nI0408 01:07:45.594862       1 main.go:151] CSR csr-vclqf is already approved\nI0408 01:07:45.594875       1 main.go:148] CSR csr-57jgd added\nI0408 01:07:45.594885       1 main.go:151] CSR csr-57jgd is already approved\nI0408 01:07:45.594901       1 main.go:148] CSR csr-5vndt added\nI0408 01:07:45.594911       1 main.go:151] CSR csr-5vndt is already approved\nI0408 01:07:45.594925       1 main.go:148] CSR csr-9zwg9 added\nI0408 01:07:45.594936       1 main.go:151] CSR csr-9zwg9 is already approved\nI0408 01:07:45.594949       1 main.go:148] CSR csr-h5xcr added\nI0408 01:07:45.594959       1 main.go:151] CSR csr-h5xcr is already approved\nI0408 01:07:45.594971       1 main.go:148] CSR csr-pzjwv added\nI0408 01:07:45.594982       1 main.go:151] CSR csr-pzjwv is already approved\nI0408 01:07:45.594994       1 main.go:148] CSR csr-s9gzl added\nI0408 01:07:45.595005       1 main.go:151] CSR csr-s9gzl is already approved\nI0408 01:07:45.595017       1 main.go:148] CSR csr-xhzls added\nI0408 01:07:45.595028       1 main.go:151] CSR csr-xhzls is already approved\n
Apr 08 01:14:32.936 E ns/openshift-insights pod/insights-operator-6fb8798b48-ql6sk node/ip-10-0-152-26.ec2.internal container/operator container exited with code 2 (Error):  old resource version: 41627 (44643)\nI0408 01:11:40.558708       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 01:11:41.478703       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 01:11:48.558084       1 httplog.go:90] GET /metrics: (16.512485ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\nI0408 01:11:54.595554       1 status.go:298] The operator is healthy\nI0408 01:12:09.477055       1 httplog.go:90] GET /metrics: (8.603944ms) 200 [Prometheus/2.15.2 10.129.2.30:42560]\nI0408 01:12:18.542931       1 httplog.go:90] GET /metrics: (1.940668ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\nI0408 01:12:39.477638       1 httplog.go:90] GET /metrics: (9.185236ms) 200 [Prometheus/2.15.2 10.129.2.30:42560]\nI0408 01:12:48.544502       1 httplog.go:90] GET /metrics: (3.198965ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\nI0408 01:12:54.586161       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0408 01:12:54.592860       1 configobserver.go:93] Found cloud.openshift.com token\nI0408 01:12:54.592897       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0408 01:13:09.477218       1 httplog.go:90] GET /metrics: (8.751293ms) 200 [Prometheus/2.15.2 10.129.2.30:42560]\nI0408 01:13:18.542976       1 httplog.go:90] GET /metrics: (2.012184ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\nI0408 01:13:39.477114       1 httplog.go:90] GET /metrics: (8.738902ms) 200 [Prometheus/2.15.2 10.129.2.30:42560]\nI0408 01:13:48.542972       1 httplog.go:90] GET /metrics: (1.886488ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\nI0408 01:13:54.596313       1 status.go:298] The operator is healthy\nI0408 01:14:09.476697       1 httplog.go:90] GET /metrics: (8.332ms) 200 [Prometheus/2.15.2 10.129.2.30:42560]\nI0408 01:14:18.552398       1 httplog.go:90] GET /metrics: (11.232776ms) 200 [Prometheus/2.15.2 10.128.2.20:54328]\n
Apr 08 01:14:35.712 E ns/openshift-machine-config-operator pod/machine-config-operator-f8b7db6bf-7nvbb node/ip-10-0-152-26.ec2.internal container/machine-config-operator container exited with code 2 (Error): .go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.ControllerConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=40235&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 01:11:39.474130       1 reflector.go:307] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1beta1.CustomResourceDefinition: Get https://172.30.0.1:443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?allowWatchBookmarks=true&labelSelector=openshift.io%2Foperator-managed%3D&resourceVersion=38678&timeout=6m29s&timeoutSeconds=389&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 01:11:39.474500       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-apiserver-to-kubelet-client-ca&resourceVersion=45167&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 01:11:39.474906       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfigPool: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?allowWatchBookmarks=true&resourceVersion=44062&timeout=9m24s&timeoutSeconds=564&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 01:11:39.475344       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?allowWatchBookmarks=true&resourceVersion=45286&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 01:14:35.785 E ns/openshift-machine-api pod/machine-api-operator-7864bcc8c5-mr429 node/ip-10-0-152-26.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 08 01:14:35.814 E ns/openshift-machine-config-operator pod/machine-config-controller-8fcc95559-fh4lj node/ip-10-0-152-26.ec2.internal container/machine-config-controller container exited with code 2 (Error): Pool master: node ip-10-0-141-101.ec2.internal is now reporting unready: node ip-10-0-141-101.ec2.internal is reporting Unschedulable\nE0408 01:11:39.393666       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://172.30.0.1:443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=38670&timeout=9m50s&timeoutSeconds=590&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0408 01:12:37.597629       1 node_controller.go:433] Pool master: node ip-10-0-141-101.ec2.internal is now reporting unready: node ip-10-0-141-101.ec2.internal is reporting OutOfDisk=Unknown\nI0408 01:13:56.730947       1 node_controller.go:433] Pool master: node ip-10-0-141-101.ec2.internal is now reporting unready: node ip-10-0-141-101.ec2.internal is reporting NotReady=False\nI0408 01:14:16.450443       1 node_controller.go:433] Pool master: node ip-10-0-141-101.ec2.internal is now reporting unready: node ip-10-0-141-101.ec2.internal is reporting Unschedulable\nI0408 01:14:18.865613       1 node_controller.go:442] Pool master: node ip-10-0-141-101.ec2.internal has completed update to rendered-master-b256869b05626cd4536b62661dd0e989\nI0408 01:14:19.027773       1 node_controller.go:435] Pool master: node ip-10-0-141-101.ec2.internal is now reporting ready\nI0408 01:14:21.450942       1 node_controller.go:758] Setting node ip-10-0-152-26.ec2.internal to desired config rendered-master-b256869b05626cd4536b62661dd0e989\nI0408 01:14:21.480435       1 node_controller.go:452] Pool master: node ip-10-0-152-26.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-b256869b05626cd4536b62661dd0e989\nI0408 01:14:22.500540       1 node_controller.go:452] Pool master: node ip-10-0-152-26.ec2.internal changed machineconfiguration.openshift.io/state = Working\nI0408 01:14:23.542616       1 node_controller.go:433] Pool master: node ip-10-0-152-26.ec2.internal is now reporting unready: node ip-10-0-152-26.ec2.internal is reporting Unschedulable\n
Apr 08 01:14:36.929 E ns/openshift-machine-api pod/machine-api-controllers-889dd4dcf-fhrj2 node/ip-10-0-152-26.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 01:14:37.971 E ns/openshift-service-ca pod/service-ca-854f8b5dfd-5swkt node/ip-10-0-152-26.ec2.internal container/service-ca-controller container exited with code 1 (Error): 
Apr 08 01:15:16.776 E ns/openshift-apiserver pod/apiserver-569ddc5c5c-7dhjf node/ip-10-0-141-101.ec2.internal container/openshift-apiserver container exited with code 255 (Error): Copying system trust bundle\nF0408 01:15:15.958521       1 cmd.go:72] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 01:15:19.276 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-9d77dcb54-smv78 node/ip-10-0-141-101.ec2.internal container/manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-04-08T01:15:18Z" level=debug msg="debug logging enabled"\ntime="2020-04-08T01:15:18Z" level=info msg="setting up client for manager"\ntime="2020-04-08T01:15:18Z" level=info msg="setting up manager"\ntime="2020-04-08T01:15:18Z" level=fatal msg="unable to set up overall controller manager" error="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Apr 08 01:15:20.326 E ns/openshift-apiserver pod/apiserver-569ddc5c5c-7dhjf node/ip-10-0-141-101.ec2.internal container/openshift-apiserver container exited with code 255 (Error): Copying system trust bundle\nF0408 01:15:19.469787       1 cmd.go:72] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 01:15:21.473 E ns/openshift-operator-lifecycle-manager pod/packageserver-56f6d8d5cc-59p4v node/ip-10-0-141-101.ec2.internal container/packageserver container exited with code 1 (Error): SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-04-08T01:15:21Z" level=fatal msg="unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Apr 08 01:15:29.575 E ns/openshift-machine-api pod/machine-api-controllers-889dd4dcf-znrjb node/ip-10-0-141-101.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 01:15:29.575 E ns/openshift-machine-api pod/machine-api-controllers-889dd4dcf-znrjb node/ip-10-0-141-101.ec2.internal container/machine-healthcheck-controller container exited with code 255 (Error): 
Apr 08 01:15:29.575 E ns/openshift-machine-api pod/machine-api-controllers-889dd4dcf-znrjb node/ip-10-0-141-101.ec2.internal container/nodelink-controller container exited with code 255 (Error): 
Apr 08 01:15:31.704 E ns/openshift-operator-lifecycle-manager pod/packageserver-56f6d8d5cc-59p4v node/ip-10-0-141-101.ec2.internal container/packageserver container exited with code 1 (Error): SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-04-08T01:15:30Z" level=fatal msg="unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Apr 08 01:15:32.701 E ns/openshift-apiserver pod/apiserver-569ddc5c5c-7dhjf node/ip-10-0-141-101.ec2.internal container/openshift-apiserver container exited with code 255 (Error): Copying system trust bundle\nF0408 01:15:31.609737       1 cmd.go:72] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
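The openshift-apiserver and packageserver restarts above share one failure mode: on startup each aggregated API server reads the extension-apiserver-authentication ConfigMap in kube-system to obtain its request-header client CA, and those reads fail while 172.30.0.1:443 (the in-cluster kubernetes service address in the dial errors above) refuses connections. A minimal client-go sketch, assuming KUBECONFIG points at the cluster under test, that performs the same read those containers attempt:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the KUBECONFIG environment variable.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The same read the crash-looping containers attempt at startup.
        cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(),
            "extension-apiserver-authentication", metav1.GetOptions{})
        if err != nil {
            fmt.Println("read failed (matches the fatal errors above):", err)
            return
        }
        fmt.Println("requestheader-client-ca-file present:",
            len(cm.Data["requestheader-client-ca-file"]) > 0)
    }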
Apr 08 01:16:25.035 E ns/openshift-cluster-node-tuning-operator pod/tuned-4tldq node/ip-10-0-148-174.ec2.internal container/tuned container exited with code 143 (Error): 020-04-08 00:48:41,881 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 00:48:41,883 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 00:48:42,014 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 00:48:42,021 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0408 01:11:38.855047   97649 tuned.go:527] tuned "rendered" changed\nI0408 01:11:38.855076   97649 tuned.go:218] extracting tuned profiles\nI0408 01:11:39.589234   97649 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 01:11:39.589260   97649 tuned.go:356] reloading tuned...\nI0408 01:11:39.589268   97649 tuned.go:359] sending HUP to PID 97786\n2020-04-08 01:11:39,589 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-08 01:11:40,403 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-08 01:11:40,413 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 01:11:40,413 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 01:11:40,414 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 01:11:40,444 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 01:11:40,446 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 01:11:40,447 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 01:11:40,450 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 01:11:40,451 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 01:11:40,452 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 01:11:40,455 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 01:11:40,466 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 08 01:16:25.051 E ns/openshift-monitoring pod/node-exporter-rf9l2 node/ip-10-0-148-174.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:48:51Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:51Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:16:25.081 E ns/openshift-multus pod/multus-dzxtg node/ip-10-0-148-174.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:16:25.094 E ns/openshift-sdn pod/ovs-bvlhq node/ip-10-0-148-174.ec2.internal container/openvswitch container exited with code 1 (Error): ods in the last 0 s (2 deletes)\n2020-04-08T01:10:50.641Z|00135|connmgr|INFO|br0<->unix#640: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:10:50.668Z|00136|bridge|INFO|bridge br0: deleted interface vethad187143 on port 43\n2020-04-08T01:10:50.710Z|00137|connmgr|INFO|br0<->unix#643: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:10:50.756Z|00138|connmgr|INFO|br0<->unix#646: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:10:50.801Z|00139|bridge|INFO|bridge br0: deleted interface veth0fa96b4d on port 31\n2020-04-08T01:11:01.180Z|00015|jsonrpc|WARN|unix#559: receive error: Connection reset by peer\n2020-04-08T01:11:01.180Z|00016|reconnect|WARN|unix#559: connection dropped (Connection reset by peer)\n2020-04-08T01:11:33.461Z|00140|connmgr|INFO|br0<->unix#680: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:33.493Z|00141|connmgr|INFO|br0<->unix#683: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:33.519Z|00142|bridge|INFO|bridge br0: deleted interface veth49dcfbb7 on port 39\n2020-04-08T01:11:35.663Z|00143|connmgr|INFO|br0<->unix#688: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:11:35.698Z|00144|connmgr|INFO|br0<->unix#691: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:11:35.725Z|00145|bridge|INFO|bridge br0: deleted interface veth92dd7694 on port 32\n2020-04-08T01:12:11.863Z|00017|jsonrpc|WARN|unix#616: receive error: Connection reset by peer\n2020-04-08T01:12:11.864Z|00018|reconnect|WARN|unix#616: connection dropped (Connection reset by peer)\n2020-04-08T01:12:14.580Z|00019|jsonrpc|WARN|unix#617: receive error: Connection reset by peer\n2020-04-08T01:12:14.580Z|00020|reconnect|WARN|unix#617: connection dropped (Connection reset by peer)\n2020-04-08T01:13:31.335Z|00021|jsonrpc|WARN|unix#669: receive error: Connection reset by peer\n2020-04-08T01:13:31.335Z|00022|reconnect|WARN|unix#669: connection dropped (Connection reset by peer)\n2020-04-08 01:14:40 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 01:16:25.128 E ns/openshift-machine-config-operator pod/machine-config-daemon-scqfw node/ip-10-0-148-174.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:16:30.271 E ns/openshift-sdn pod/sdn-dgtqd node/ip-10-0-148-174.ec2.internal container/sdn container exited with code 255 (Error): F0408 01:16:29.761037    2582 cmd.go:100] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.130.104:6443: connect: connection refused\n
Apr 08 01:16:37.343 E ns/openshift-marketplace pod/certified-operators-8fd7ccd48-6llsw node/ip-10-0-132-174.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 01:16:37.877 E ns/openshift-marketplace pod/community-operators-6dc87f98f8-gmtm8 node/ip-10-0-134-26.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 08 01:16:46.632 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-152-26.ec2.internal" not ready since 2020-04-08 01:16:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-152-26.ec2.internal members are unhealthy,  members are unknown
Apr 08 01:16:58.532 E ns/openshift-machine-config-operator pod/machine-config-daemon-scqfw node/ip-10-0-148-174.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 01:17:11.274 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-134-26.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 01:07:51 Watching directory: "/etc/alertmanager/config"\n
Apr 08 01:17:11.274 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-134-26.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 01:07:52 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 01:07:52 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 01:07:52 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 01:07:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 01:07:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 01:07:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 01:07:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 01:07:52 http.go:107: HTTPS: listening on [::]:9095\nI0408 01:07:52.039497       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 01:17:11.307 E ns/openshift-monitoring pod/grafana-5c86564674-fllxw node/ip-10-0-134-26.ec2.internal container/grafana container exited with code 1 (Error): 
Apr 08 01:17:11.307 E ns/openshift-monitoring pod/grafana-5c86564674-fllxw node/ip-10-0-134-26.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 08 01:17:11.361 E ns/openshift-marketplace pod/redhat-operators-f5df45545-mlpw7 node/ip-10-0-134-26.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 08 01:17:11.384 E ns/openshift-monitoring pod/prometheus-adapter-75c75f4d8d-78nqv node/ip-10-0-134-26.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 01:07:37.484520       1 adapter.go:93] successfully using in-cluster auth\nI0408 01:07:38.568901       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 01:17:11.422 E ns/openshift-monitoring pod/telemeter-client-6bdcf54b65-b9tdf node/ip-10-0-134-26.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Apr 08 01:17:11.422 E ns/openshift-monitoring pod/telemeter-client-6bdcf54b65-b9tdf node/ip-10-0-134-26.ec2.internal container/reload container exited with code 2 (Error): 
Apr 08 01:17:11.454 E ns/openshift-marketplace pod/certified-operators-85756cc689-cqbsn node/ip-10-0-134-26.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 01:17:12.342 E ns/openshift-monitoring pod/thanos-querier-79cb9d45d4-bf45k node/ip-10-0-134-26.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 01:07:36 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 01:07:36 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 01:07:36 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 01:07:37 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 01:07:37 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 01:07:37 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 01:07:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 01:07:37 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 01:07:37 http.go:107: HTTPS: listening on [::]:9091\nI0408 01:07:37.011490       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 01:17:12.366 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-26.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 00:48:00 Watching directory: "/etc/alertmanager/config"\n
Apr 08 01:17:12.366 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-26.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 00:48:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:48:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 00:48:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 00:48:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 00:48:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 00:48:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 00:48:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 00:48:02 http.go:107: HTTPS: listening on [::]:9095\nI0408 00:48:02.676290       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0408 01:11:45.358274       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/04/08 01:11:45 oauthproxy.go:782: requestauth: 10.129.2.30:41696 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 01:17:24.904 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-148-174.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T01:17:23.024Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T01:17:23.027Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T01:17:23.027Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T01:17:23.028Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T01:17:23.028Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T01:17:23.028Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T01:17:23.031Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T01:17:23.031Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 01:17:39.402 E ns/e2e-k8s-sig-apps-job-upgrade-2174 pod/foo-ngp9t node/ip-10-0-134-26.ec2.internal container/c container exited with code 137 (Error): 
Apr 08 01:17:39.414 E ns/e2e-k8s-sig-apps-job-upgrade-2174 pod/foo-xswsr node/ip-10-0-134-26.ec2.internal container/c container exited with code 137 (Error): 
Apr 08 01:17:40.587 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): I0408 00:42:27.061803       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0408 00:42:27.063157       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0408 00:42:27.065375       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0408 00:42:27.065456       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 08 01:17:40.587 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 3858       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:36.574333       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:14:44.974876       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:44.975290       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:14:46.583465       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:46.583982       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:14:54.989469       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:54.989863       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:14:56.597263       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:56.597619       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:15:05.002754       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:15:05.003130       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 01:15:06.606571       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:15:06.607019       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 01:17:40.587 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-controller-manager container exited with code 2 (Error): eter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nW0408 01:14:59.718810       1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]\nI0408 01:15:00.202138       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6, need 3, creating 1\nE0408 01:15:00.230867       1 disruption.go:505] Error syncing PodDisruptionBudget openshift-machine-config-operator/etcd-quorum-guard, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "etcd-quorum-guard": the object has been modified; please apply your changes to the latest version and try again\nI0408 01:15:00.264579       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-6774c47dc6", UID:"5c099f8d-27c8-4002-a3da-078c38dcf89a", APIVersion:"apps/v1", ResourceVersion:"47914", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-6774c47dc6-w6zl8\nI0408 01:15:00.268211       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\nI0408 01:15:01.236605       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0408 01:15:06.814147       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\n
Apr 08 01:17:40.631 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-apiserver container exited with code 1 (Error): .Unstructured\nW0408 01:15:14.574248       1 cacher.go:166] Terminating all watchers from cacher *batch.CronJob\nW0408 01:15:14.574272       1 cacher.go:166] Terminating all watchers from cacher *storage.CSIDriver\nW0408 01:15:14.574318       1 cacher.go:166] Terminating all watchers from cacher *core.Namespace\nW0408 01:15:14.574362       1 cacher.go:166] Terminating all watchers from cacher *node.RuntimeClass\nW0408 01:15:14.574403       1 cacher.go:166] Terminating all watchers from cacher *rbac.Role\nW0408 01:15:14.575125       1 cacher.go:166] Terminating all watchers from cacher *certificates.CertificateSigningRequest\nW0408 01:15:14.575184       1 cacher.go:166] Terminating all watchers from cacher *storage.CSINode\nW0408 01:15:14.575375       1 cacher.go:166] Terminating all watchers from cacher *networking.Ingress\nW0408 01:15:14.575392       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:15:14.575739       1 cacher.go:166] Terminating all watchers from cacher *storage.StorageClass\nW0408 01:15:14.575786       1 cacher.go:166] Terminating all watchers from cacher *storage.VolumeAttachment\nW0408 01:15:14.575815       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:15:14.575844       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 01:15:14.575874       1 cacher.go:166] Terminating all watchers from cacher *autoscaling.HorizontalPodAutoscaler\nW0408 01:15:14.576055       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nI0408 01:15:14.618754       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-152-26.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0408 01:15:14.618970       1 controller.go:181] Shutting down kubernetes service endpoint reconciler\n
Apr 08 01:17:40.631 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 00:45:32.227354       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 01:17:40.631 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:14:56.671192       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:14:56.671611       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 01:15:06.683392       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 01:15:06.683793       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 01:17:40.654 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-scheduler container exited with code 2 (Error):  pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 01:15:00.301391       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6-w6zl8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nE0408 01:15:00.306307       1 factory.go:503] pod: openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6-w6zl8 is already present in unschedulable queue\nI0408 01:15:01.362619       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6-w6zl8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 01:15:04.150477       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6-w6zl8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 01:15:06.151720       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-569ddc5c5c-cv6mw: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 01:15:11.273560       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6774c47dc6-w6zl8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 08 01:17:40.654 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:14:55.682256       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:14:55.682293       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:14:57.694170       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:14:57.694199       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:14:59.721316       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:14:59.721347       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:01.733889       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:01.733927       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:03.762611       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:03.762703       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:05.779742       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:05.779901       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:07.794034       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:07.794146       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:09.809149       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:09.809302       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:11.829509       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:11.829546       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 01:15:13.842179       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 01:15:13.842216       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 01:17:40.704 E ns/openshift-monitoring pod/node-exporter-ccs8p node/ip-10-0-152-26.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:48:16Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:48:16Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:17:40.747 E ns/openshift-controller-manager pod/controller-manager-jdrtd node/ip-10-0-152-26.ec2.internal container/controller-manager container exited with code 255 (Error): mers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=45955&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0408 01:15:15.021385       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0408 01:15:15.030412       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://172.30.0.1:443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=45198&timeout=7m23s&timeoutSeconds=443&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0408 01:15:15.021395       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0408 01:15:15.030538       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://172.30.0.1:443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=48220&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 01:15:15.021415       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://172.30.0.1:443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=28155&timeout=9m0s&timeoutSeconds=540&watch=true: unexpected EOF\nI0408 01:15:15.021435       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 01:15:15.020940       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0408 01:15:15.031126       1 reflector.go:320] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://172.30.0.1:443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=45955&timeout=8m6s&timeoutSeconds=486&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 01:17:40.765 E ns/openshift-cluster-node-tuning-operator pod/tuned-fd5fb node/ip-10-0-152-26.ec2.internal container/tuned container exited with code 143 (Error): d...\nI0408 01:11:39.403338   92228 tuned.go:359] sending HUP to PID 92312\n2020-04-08 01:11:39,404 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-08 01:11:40,191 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-08 01:11:40,323 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 01:11:40,342 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 01:11:40,343 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 01:11:40,782 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 01:11:40,787 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 01:11:40,788 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 01:11:40,823 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 01:11:40,832 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 01:11:40,845 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 01:11:40,854 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 01:11:40,865 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 01:14:34.585248   92228 tuned.go:486] profile "ip-10-0-152-26.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 01:14:34.770070   92228 tuned.go:486] profile "ip-10-0-152-26.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0408 01:14:35.384104   92228 tuned.go:392] getting recommended profile...\nI0408 01:14:35.859863   92228 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n2020-04-08 01:15:14,682 INFO     tuned.daemon.controller: terminating controller\n2020-04-08 01:15:14,682 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 08 01:17:40.779 E ns/openshift-sdn pod/sdn-controller-xqxjs node/ip-10-0-152-26.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 00:58:45.155952       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 01:12:51.855816       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"e6051a03-6c0e-4a71-8dd5-79b0ec1162e0", ResourceVersion:"45900", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721901825, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-152-26\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-08T01:12:51Z\",\"renewTime\":\"2020-04-08T01:12:51Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0007d4380), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0007d43a0)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-152-26 became leader'\nI0408 01:12:51.855936       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0408 01:12:51.865405       1 master.go:51] Initializing SDN master\nI0408 01:12:51.891889       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 08 01:17:40.792 E ns/openshift-multus pod/multus-admission-controller-9xq95 node/ip-10-0-152-26.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 01:17:40.814 E ns/openshift-sdn pod/ovs-rq4rv node/ip-10-0-152-26.ec2.internal container/openvswitch container exited with code 1 (Error): es)\n2020-04-08T01:14:54.375Z|00259|connmgr|INFO|br0<->unix#1114: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:14:54.400Z|00260|bridge|INFO|bridge br0: deleted interface veth18c0a7a4 on port 105\n2020-04-08T01:14:59.658Z|00261|bridge|INFO|bridge br0: added interface veth66ac50fd on port 106\n2020-04-08T01:14:59.710Z|00262|connmgr|INFO|br0<->unix#1120: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T01:14:59.771Z|00263|connmgr|INFO|br0<->unix#1124: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T01:14:59.774Z|00264|connmgr|INFO|br0<->unix#1126: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:15:01.281Z|00265|bridge|INFO|bridge br0: deleted interface veth66ac50fd on port 106\n2020-04-08T01:15:01.296Z|00266|bridge|WARN|could not open network device veth66ac50fd (No such device)\n2020-04-08T01:15:03.356Z|00267|connmgr|INFO|br0<->unix#1130: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:15:03.400Z|00268|connmgr|INFO|br0<->unix#1133: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:15:03.636Z|00269|connmgr|INFO|br0<->unix#1139: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:15:03.688Z|00270|connmgr|INFO|br0<->unix#1142: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:15:03.726Z|00271|bridge|INFO|bridge br0: deleted interface vethfaaeb5dd on port 36\n2020-04-08T01:15:06.976Z|00272|bridge|INFO|bridge br0: added interface veth9dd9003f on port 107\n2020-04-08T01:15:07.010Z|00273|connmgr|INFO|br0<->unix#1148: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T01:15:07.062Z|00274|connmgr|INFO|br0<->unix#1151: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:15:11.334Z|00275|connmgr|INFO|br0<->unix#1157: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:15:11.367Z|00276|connmgr|INFO|br0<->unix#1160: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:15:11.397Z|00277|bridge|INFO|bridge br0: deleted interface veth9dd9003f on port 107\n2020-04-08 01:15:14 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 01:17:40.833 E ns/openshift-multus pod/multus-2x25c node/ip-10-0-152-26.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:17:40.864 E ns/openshift-machine-config-operator pod/machine-config-daemon-r7xsg node/ip-10-0-152-26.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:17:40.877 E ns/openshift-machine-config-operator pod/machine-config-server-w885t node/ip-10-0-152-26.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 01:07:48.482935       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 01:07:48.486163       1 api.go:51] Launching server on :22624\nI0408 01:07:48.487591       1 api.go:51] Launching server on :22623\n
Apr 08 01:17:44.636 E ns/openshift-etcd pod/etcd-ip-10-0-152-26.ec2.internal node/ip-10-0-152-26.ec2.internal container/etcd-metrics container exited with code 2 (Error): s/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-152-26.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-152-26.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T00:43:01.728Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:43:01.728Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-152-26.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-152-26.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T00:43:01.732Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T00:43:01.733Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T00:43:01.733Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T00:43:01.736Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.152.26:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.152.26:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-08T00:43:02.738Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.152.26:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.152.26:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 01:17:51.745 E ns/openshift-machine-config-operator pod/machine-config-daemon-r7xsg node/ip-10-0-152-26.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 01:18:01.907 E clusteroperator/console changed Degraded to True: RouteHealth_FailedGet: RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com/health): Get https://console-openshift-console.apps.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Apr 08 01:20:35.628 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator console is reporting a failure: RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com/health): Get https://console-openshift-console.apps.ci-op-wwqd8rgz-f83f1.origin-ci-int-aws.dev.rhcloud.com/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Apr 08 01:24:27.333 E ns/openshift-monitoring pod/node-exporter-fn4pj node/ip-10-0-134-26.ec2.internal container/node-exporter container exited with code 143 (Error): -08T00:47:33Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T00:47:33Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 01:24:27.350 E ns/openshift-cluster-node-tuning-operator pod/tuned-tl7gs node/ip-10-0-134-26.ec2.internal container/tuned container exited with code 143 (Error): tracting tuned profiles\nI0408 01:15:15.486361   47756 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 01:15:15.486387   47756 tuned.go:356] reloading tuned...\nI0408 01:15:15.486394   47756 tuned.go:359] sending HUP to PID 47813\n2020-04-08 01:15:15,486 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-08 01:15:15,621 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-08 01:15:15,631 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 01:15:15,632 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 01:15:15,633 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 01:15:15,669 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 01:15:15,672 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 01:15:15,673 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 01:15:15,676 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 01:15:15,678 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 01:15:15,679 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 01:15:15,683 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 01:15:15,694 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0408 01:15:17.241616   47756 tuned.go:486] profile "ip-10-0-134-26.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 01:15:17.486477   47756 tuned.go:392] getting recommended profile...\nI0408 01:15:17.605324   47756 tuned.go:428] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0408 01:22:41.062304   47756 tuned.go:114] received signal: terminated\nI0408 01:22:41.062344   47756 tuned.go:326] sending TERM to PID 47813\n
Apr 08 01:24:27.403 E ns/openshift-multus pod/multus-mnzc5 node/ip-10-0-134-26.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 01:24:27.421 E ns/openshift-sdn pod/ovs-nmdrv node/ip-10-0-134-26.ec2.internal container/openvswitch container exited with code 143 (Error): last 0 s (2 deletes)\n2020-04-08T01:17:11.199Z|00133|connmgr|INFO|br0<->unix#938: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:17:11.224Z|00134|bridge|INFO|bridge br0: deleted interface vethc5dae580 on port 28\n2020-04-08T01:17:38.491Z|00135|connmgr|INFO|br0<->unix#960: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:17:38.521Z|00136|connmgr|INFO|br0<->unix#963: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:17:38.544Z|00137|bridge|INFO|bridge br0: deleted interface vetheda44adf on port 29\n2020-04-08T01:17:38.905Z|00138|connmgr|INFO|br0<->unix#966: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:17:38.933Z|00139|connmgr|INFO|br0<->unix#969: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:17:38.962Z|00140|bridge|INFO|bridge br0: deleted interface vethc1e89ee1 on port 14\n2020-04-08T01:17:54.478Z|00013|jsonrpc|WARN|unix#873: receive error: Connection reset by peer\n2020-04-08T01:17:54.478Z|00014|reconnect|WARN|unix#873: connection dropped (Connection reset by peer)\n2020-04-08T01:17:54.415Z|00141|connmgr|INFO|br0<->unix#982: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:17:54.457Z|00142|connmgr|INFO|br0<->unix#985: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:17:54.486Z|00143|bridge|INFO|bridge br0: deleted interface veth1ce9c77b on port 15\n2020-04-08T01:17:55.625Z|00015|jsonrpc|WARN|unix#880: receive error: Connection reset by peer\n2020-04-08T01:17:55.625Z|00016|reconnect|WARN|unix#880: connection dropped (Connection reset by peer)\n2020-04-08T01:17:55.507Z|00144|connmgr|INFO|br0<->unix#990: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T01:17:55.542Z|00145|connmgr|INFO|br0<->unix#993: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T01:17:55.632Z|00146|bridge|INFO|bridge br0: deleted interface veth510a56af on port 18\n2020-04-08T01:19:33.089Z|00017|jsonrpc|WARN|unix#946: receive error: Connection reset by peer\n2020-04-08T01:19:33.089Z|00018|reconnect|WARN|unix#946: connection dropped (Connection reset by peer)\n2020-04-08 01:22:41 info: Saving flows ...\nTerminated\n
Apr 08 01:24:27.457 E ns/openshift-machine-config-operator pod/machine-config-daemon-bsgbp node/ip-10-0-134-26.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 01:24:36.112 E ns/openshift-machine-config-operator pod/machine-config-daemon-bsgbp node/ip-10-0-134-26.ec2.internal container/oauth-proxy container exited with code 1 (Error):