Result: SUCCESS
Tests: 3 failed / 22 succeeded
Started: 2020-04-07 20:32
Elapsed: 1h41m
Work namespace: ci-op-zvgsjvdr
Refs: openshift-4.5:fe90dcbe, 44:8b80929a
pod: c1c12202-790e-11ea-ab07-0a58ac100bec
repo: openshift/etcd
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (37m24s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 2s of 33m40s (0%):

Apr 07 21:50:46.384 E ns/e2e-k8s-service-lb-available-4592 svc/service-test Service stopped responding to GET requests on reused connections
Apr 07 21:50:47.383 E ns/e2e-k8s-service-lb-available-4592 svc/service-test Service is not responding to GET requests on reused connections
Apr 07 21:50:47.416 I ns/e2e-k8s-service-lb-available-4592 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1586297078.xml

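The check above repeatedly GETs the service behind the load balancer on reused connections and records any window in which requests fail; that is how the 2s outage out of 33m40s was measured. The following is a minimal sketch of the general shape of such a poller, not the openshift-tests implementation; the endpoint URL, poll interval, and timeout are placeholder assumptions.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint standing in for the service-test load balancer.
	const target = "http://service-test.example.com/"

	// A single client so TCP connections are reused between polls, mirroring
	// the "reused connections" wording in the events above.
	client := &http.Client{Timeout: 3 * time.Second}

	var downSince time.Time
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()

	for range ticker.C {
		resp, err := client.Get(target)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		switch {
		case !healthy && downSince.IsZero():
			// Transition from up to down: remember when the outage began.
			downSince = time.Now()
			fmt.Printf("%s E service stopped responding to GET requests (err=%v)\n",
				downSince.Format(time.RFC3339), err)
		case healthy && !downSince.IsZero():
			// Transition from down to up: report the length of the outage.
			fmt.Printf("%s I service started responding again after %s of disruption\n",
				time.Now().Format(time.RFC3339), time.Since(downSince).Round(time.Second))
			downSince = time.Time{}
		}
	}
}

Summing the reported outage windows over the run gives the kind of "unreachable for at least 2s of 33m40s" figure quoted in the failure message.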


Cluster upgrade Kubernetes and OpenShift APIs remain available (36m53s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 3s of 36m53s (0%):

Apr 07 21:37:52.517 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 34.235.7.175:6443: connect: connection refused
Apr 07 21:37:53.487 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 07 21:37:53.520 I openshift-apiserver OpenShift API started responding to GET requests
Apr 07 21:50:08.556 E kube-apiserver Kube API started failing: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 52.0.244.26:6443: connect: connection refused
Apr 07 21:50:08.613 I kube-apiserver Kube API started responding to GET requests
Apr 07 21:57:13.518 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 52.0.244.26:6443: connect: connection refused
Apr 07 21:57:14.487 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 07 21:57:14.582 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1586297078.xml

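The API-availability check issues the same kind of GET seen in the events above (for example /api/v1/namespaces/kube-system?timeout=15s) against the external API endpoint and counts a "connection refused" error as a disruption sample while a kube-apiserver instance restarts. A minimal sketch of such a probe, with a placeholder URL and deliberately simplified TLS handling, might look like this:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

func main() {
	// Hypothetical endpoint in the shape of the one in the events above.
	const target = "https://api.example-cluster:6443/api/v1/namespaces/kube-system?timeout=15s"

	client := &http.Client{
		Timeout: 15 * time.Second,
		Transport: &http.Transport{
			// Sketch only: a real probe would trust the cluster CA bundle
			// instead of skipping certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(target)
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The failure mode seen above while an apiserver endpoint restarts.
		fmt.Println("E kube-apiserver refused the connection")
	case err != nil:
		fmt.Printf("E kube-apiserver probe failed: %v\n", err)
	default:
		defer resp.Body.Close()
		fmt.Printf("I kube-apiserver responded with HTTP %d\n", resp.StatusCode)
	}
}

Run on a schedule, transitions between the error and success branches mark the start and end of each disruption window, matching the "stopped responding" / "started responding" pairs in the log above.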


openshift-tests Monitor cluster while tests execute (37m25s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
193 error level events were detected during this test run:

Apr 07 21:30:35.092 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-755f4c6799" has successfully progressed.
Apr 07 21:33:35.445 E kube-apiserver Kube API started failing: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Apr 07 21:33:44.845 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-scheduler container exited with code 255 (Error): ptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31072&timeout=7m24s&timeoutSeconds=444&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:33:37.532551       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30701&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:33:37.534677       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29767&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:33:37.536980       1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=31361&timeoutSeconds=597&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:33:37.548186       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30700&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:33:44.225114       1 leaderelection.go:320] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=10s: context deadline exceeded\nI0407 21:33:44.225402       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0407 21:33:44.226501       1 server.go:244] leaderelection lost\n
Apr 07 21:34:04.118 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-136-217.ec2.internal members are unhealthy,  members are unknown
Apr 07 21:34:26.174 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-bf68bffd8-jzpkm node/ip-10-0-136-217.ec2.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-04-07T20:52:33Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-04-07T20:52:33Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-04-07T20:52:33Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-07T20:52:33Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-07-203215"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0407 21:20:34.731709       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"1ed8f104-9f7d-47cb-bb05-1d59955303d8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0407 21:34:25.454582       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0407 21:34:25.454646       1 leaderelection.go:66] leaderelection lost\nI0407 21:34:25.454689       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\n
Apr 07 21:36:09.903 E ns/openshift-kube-storage-version-migrator pod/migrator-5bbc8d768f-xfntj node/ip-10-0-153-218.ec2.internal container/migrator container exited with code 2 (Error): I0407 21:24:25.835860       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:26:31.008260       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 07 21:36:17.931 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5676854f75-c527c node/ip-10-0-153-218.ec2.internal container/operator container exited with code 255 (Error): 3.398155       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:03.398143569 +0000 UTC m=+917.774906383\nI0407 21:36:03.425126       1 operator.go:147] Finished syncing operator at 26.97551ms\nI0407 21:36:03.534691       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:03.534675163 +0000 UTC m=+917.911438267\nI0407 21:36:03.579111       1 operator.go:147] Finished syncing operator at 44.426612ms\nI0407 21:36:03.677024       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:03.676997028 +0000 UTC m=+918.053771568\nI0407 21:36:03.756678       1 operator.go:147] Finished syncing operator at 79.657186ms\nI0407 21:36:11.351186       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:11.351176557 +0000 UTC m=+925.727939395\nI0407 21:36:11.401660       1 operator.go:147] Finished syncing operator at 50.470094ms\nI0407 21:36:11.401712       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:11.401707258 +0000 UTC m=+925.778470087\nI0407 21:36:11.445197       1 operator.go:147] Finished syncing operator at 43.481291ms\nI0407 21:36:11.445238       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:11.445234274 +0000 UTC m=+925.821997111\nI0407 21:36:11.492847       1 operator.go:147] Finished syncing operator at 47.602407ms\nI0407 21:36:11.495080       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:11.495072104 +0000 UTC m=+925.871835060\nI0407 21:36:11.765880       1 operator.go:147] Finished syncing operator at 270.79923ms\nI0407 21:36:17.201569       1 operator.go:145] Starting syncing operator at 2020-04-07 21:36:17.201555208 +0000 UTC m=+931.578318285\nI0407 21:36:17.237768       1 operator.go:147] Finished syncing operator at 36.202815ms\nI0407 21:36:17.239327       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0407 21:36:17.239448       1 builder.go:210] server exited\nI0407 21:36:17.243849       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\n
Apr 07 21:36:21.261 E ns/openshift-monitoring pod/node-exporter-v9b7x node/ip-10-0-143-92.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:20:38Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:36:27.239 E ns/openshift-monitoring pod/node-exporter-nb7gf node/ip-10-0-141-178.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:20:50Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:50Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:36:37.506 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-7cddd7b698" is progressing.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-image-registry-operator-75b5d458ff" is progressing.
Apr 07 21:36:46.229 E ns/openshift-monitoring pod/prometheus-adapter-59c64564b4-jcw64 node/ip-10-0-153-218.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0407 21:20:34.458826       1 adapter.go:93] successfully using in-cluster auth\nI0407 21:20:36.576204       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 07 21:36:48.722 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-178.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/07 21:36:39 Watching directory: "/etc/alertmanager/config"\n
Apr 07 21:36:48.722 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-178.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/07 21:36:40 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:36:40 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/07 21:36:40 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/07 21:36:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/07 21:36:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/07 21:36:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:36:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0407 21:36:40.249275       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/07 21:36:40 http.go:107: HTTPS: listening on [::]:9095\n
Apr 07 21:36:52.310 E ns/openshift-monitoring pod/node-exporter-bvqj7 node/ip-10-0-153-218.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:20:02Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:20:02Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:36:57.398 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:36:55.491Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:36:55.495Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:36:55.496Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:36:55.497Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:36:55.497Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:36:55.497Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:36:55.498Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:36:55.498Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=info ts=2020-04-07T21:36:55.499Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:36:59.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/07 21:36:55 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 07 21:36:59.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/07 21:36:56 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/07 21:36:56 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/07 21:36:56 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/07 21:36:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/07 21:36:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/07 21:36:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/07 21:36:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/07 21:36:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/07 21:36:56 http.go:107: HTTPS: listening on [::]:9091\nI0407 21:36:56.397440       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 07 21:36:59.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-07T21:36:55.639811689Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-07T21:36:55.641284938Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\n
Apr 07 21:37:17.449 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:37:14.684Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:37:14.689Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:37:14.689Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-07T21:37:14.691Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:37:14.691Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:37:42.494 E ns/openshift-marketplace pod/redhat-operators-58d968f48d-7rh4b node/ip-10-0-153-218.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 07 21:37:43.630 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-92.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:37:40.921Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:37:40.926Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:37:40.926Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:40.927Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:40.927Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:37:40.927Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-07T21:37:40.929Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:37:40.929Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:37:45.508 E ns/openshift-marketplace pod/redhat-marketplace-986997bb4-gk78l node/ip-10-0-153-218.ec2.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 07 21:37:46.632 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-b7c9bc8db-6qmlv node/ip-10-0-143-92.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 07 21:37:53.978 E ns/openshift-monitoring pod/node-exporter-lwhlh node/ip-10-0-145-180.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:17:12Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:17:12Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:37:54.044 E ns/openshift-controller-manager pod/controller-manager-8bvnm node/ip-10-0-145-180.ec2.internal container/controller-manager container exited with code 137 (Error): the watch stream: stream error: stream ID 615; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:32:59.717389       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 617; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:32:59.717538       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 561; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:33:24.854048       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 609; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:33:24.854737       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 669; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:33:24.854970       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 551; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:33:24.856090       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 611; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 07 21:37:57.982 E ns/openshift-console-operator pod/console-operator-57b65cb84c-ktzrl node/ip-10-0-128-164.ec2.internal container/console-operator container exited with code 1 (Error): tch stream event decoding: unexpected EOF\nI0407 21:35:46.676973       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:35:46.676984       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:35:46.676993       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:35:46.677003       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:35:46.677014       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:37:56.961296       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0407 21:37:56.962049       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0407 21:37:56.962253       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0407 21:37:56.962342       1 builder.go:219] server exited\nI0407 21:37:56.962347       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0407 21:37:56.962414       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0407 21:37:56.962421       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0407 21:37:56.962529       1 controller.go:144] shutting down ConsoleServiceSyncController\nI0407 21:37:56.962584       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0407 21:37:56.962635       1 base_controller.go:101] Shutting down StatusSyncer_console ...\nW0407 21:37:56.962651       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\nI0407 21:37:56.962715       1 secure_serving.go:222] Stopped listening on [::]:8443\n
Apr 07 21:38:06.709 E ns/openshift-monitoring pod/node-exporter-nn5m2 node/ip-10-0-136-217.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:16:05Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:16:05Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:38:47.205 E ns/openshift-console pod/console-69f6987699-zbfkt node/ip-10-0-128-164.ec2.internal container/console container exited with code 2 (Error): 2020-04-07T21:22:40Z cmd/main: cookies are secure!\n2020-04-07T21:22:40Z cmd/main: Binding to [::]:8443...\n2020-04-07T21:22:40Z cmd/main: using TLS\n
Apr 07 21:39:00.258 E ns/openshift-console pod/console-69f6987699-bm2zz node/ip-10-0-128-164.ec2.internal container/console container exited with code 2 (Error): 2020-04-07T21:21:55Z cmd/main: cookies are secure!\n2020-04-07T21:21:55Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-07T21:22:06Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-07T21:22:16Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-07T21:22:26Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-07T21:22:36Z cmd/main: Binding to [::]:8443...\n2020-04-07T21:22:36Z cmd/main: using TLS\n
Apr 07 21:39:37.443 E ns/openshift-sdn pod/sdn-controller-lhjhk node/ip-10-0-128-164.ec2.internal container/sdn-controller container exited with code 2 (Error): rnal (host: "ip-10-0-153-218.ec2.internal", ip: "10.0.153.218", subnet: "10.131.0.0/23")\nI0407 21:20:00.106280       1 subnets.go:149] Created HostSubnet ip-10-0-143-92.ec2.internal (host: "ip-10-0-143-92.ec2.internal", ip: "10.0.143.92", subnet: "10.128.2.0/23")\nI0407 21:20:21.320254       1 subnets.go:149] Created HostSubnet ip-10-0-141-178.ec2.internal (host: "ip-10-0-141-178.ec2.internal", ip: "10.0.141.178", subnet: "10.129.2.0/23")\nI0407 21:26:31.180759       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:26:31.182027       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:26:31.183390       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:26:31.185913       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:27:14.778293       1 vnids.go:115] Allocated netid 10653246 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-8050"\nI0407 21:27:14.798550       1 vnids.go:115] Allocated netid 9643002 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-2942"\nI0407 21:27:14.820278       1 vnids.go:115] Allocated netid 15200675 for namespace "e2e-frontend-ingress-available-1627"\nI0407 21:27:14.833049       1 vnids.go:115] Allocated netid 281447 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3429"\nI0407 21:27:14.863313       1 vnids.go:115] Allocated netid 8258362 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-9127"\nI0407 21:27:14.885150       1 vnids.go:115] Allocated netid 2536392 for namespace "e2e-control-plane-available-9155"\nI0407 21:27:14.913261       1 vnids.go:115] Allocated netid 3881447 for namespace "e2e-k8s-sig-apps-job-upgrade-3905"\nI0407 21:27:14.934906       1 vnids.go:115] Allocated netid 7863730 for namespace "e2e-k8s-service-lb-available-4592"\nI0407 21:27:14.996357       1 vnids.go:115] Allocated netid 169928 for namespace "e2e-k8s-sig-apps-deployment-upgrade-1546"\n
Apr 07 21:39:46.937 E ns/openshift-sdn pod/sdn-bh7b4 node/ip-10-0-143-92.ec2.internal container/sdn container exited with code 255 (Error): 4dd95\nI0407 21:36:04.239684    2287 pod.go:503] CNI_ADD openshift-kube-storage-version-migrator/migrator-77bf7cd9cb-5gnd4 got IP 10.128.2.20, ofport 21\nI0407 21:36:17.734067    2287 pod.go:503] CNI_ADD openshift-monitoring/openshift-state-metrics-7544bf54dd-zm2x7 got IP 10.128.2.21, ofport 22\nI0407 21:36:41.083392    2287 pod.go:503] CNI_ADD openshift-monitoring/prometheus-adapter-9d586d6f9-d6gmg got IP 10.128.2.22, ofport 23\nI0407 21:36:45.491282    2287 pod.go:503] CNI_ADD openshift-console/downloads-7cddd7b698-q9fdl got IP 10.128.2.23, ofport 24\nI0407 21:36:51.251275    2287 pod.go:503] CNI_ADD openshift-image-registry/image-registry-54964999c5-fcw4r got IP 10.128.2.24, ofport 25\nI0407 21:37:07.610578    2287 pod.go:539] CNI_DEL openshift-monitoring/thanos-querier-8f8b7d974-r8tkb\nI0407 21:37:07.786396    2287 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-0\nI0407 21:37:08.013603    2287 pod.go:503] CNI_ADD openshift-monitoring/thanos-querier-76bcd76dd6-5mlwt got IP 10.128.2.25, ofport 26\nI0407 21:37:08.398549    2287 pod.go:503] CNI_ADD openshift-ingress/router-default-78fbccc4c4-wfllh got IP 10.128.2.26, ofport 27\nI0407 21:37:18.282084    2287 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-0 got IP 10.128.2.27, ofport 28\nI0407 21:37:30.675509    2287 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-0\nI0407 21:37:38.469674    2287 pod.go:503] CNI_ADD openshift-monitoring/prometheus-k8s-0 got IP 10.128.2.28, ofport 29\nI0407 21:37:45.875307    2287 pod.go:539] CNI_DEL openshift-cluster-storage-operator/csi-snapshot-controller-b7c9bc8db-6qmlv\nI0407 21:38:07.386520    2287 pod.go:539] CNI_DEL openshift-image-registry/node-ca-nkz4z\nI0407 21:38:18.319411    2287 pod.go:503] CNI_ADD openshift-image-registry/node-ca-gqwpm got IP 10.128.2.29, ofport 30\nI0407 21:38:37.831543    2287 pod.go:539] CNI_DEL openshift-ingress/router-default-997dd49b7-987tt\nF0407 21:39:46.832384    2287 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 07 21:40:05.148 E ns/openshift-multus pod/multus-pxzq2 node/ip-10-0-143-92.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:40:05.487 E ns/openshift-sdn pod/sdn-controller-g5kfm node/ip-10-0-136-217.ec2.internal container/sdn-controller container exited with code 2 (Error): I0407 20:48:55.191945       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0407 21:03:29.264392       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.157.60:6443: i/o timeout\nE0407 21:03:59.915561       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.157.60:6443: i/o timeout\nE0407 21:04:34.674723       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.157.60:6443: i/o timeout\nE0407 21:14:44.696481       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 07 21:40:05.511 E ns/openshift-multus pod/multus-admission-controller-6cpwp node/ip-10-0-136-217.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 07 21:40:13.825 E ns/openshift-sdn pod/sdn-2xkcm node/ip-10-0-153-218.ec2.internal container/sdn container exited with code 255 (Error): 898044    2305 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-1\nI0407 21:36:45.067752    2305 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-1\nI0407 21:36:45.762786    2305 pod.go:539] CNI_DEL openshift-monitoring/prometheus-adapter-59c64564b4-jcw64\nI0407 21:36:50.000905    2305 pod.go:539] CNI_DEL openshift-image-registry/node-ca-f6bzk\nI0407 21:36:50.647419    2305 pod.go:539] CNI_DEL openshift-image-registry/image-registry-7c48b86f5b-6xmnm\nI0407 21:36:51.141610    2305 pod.go:539] CNI_DEL openshift-monitoring/grafana-5856968b4d-ql8g2\nI0407 21:36:52.474676    2305 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-1 got IP 10.131.0.30, ofport 31\nI0407 21:36:52.684873    2305 pod.go:503] CNI_ADD openshift-monitoring/prometheus-k8s-1 got IP 10.131.0.31, ofport 32\nI0407 21:36:53.484250    2305 pod.go:503] CNI_ADD openshift-ingress/router-default-78fbccc4c4-jfll4 got IP 10.131.0.32, ofport 33\nI0407 21:36:58.716102    2305 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-1\nI0407 21:37:02.360477    2305 pod.go:503] CNI_ADD openshift-image-registry/node-ca-bnjg4 got IP 10.131.0.33, ofport 34\nI0407 21:37:05.109930    2305 pod.go:539] CNI_DEL openshift-image-registry/image-registry-7c48b86f5b-l7psf\nI0407 21:37:12.458827    2305 pod.go:503] CNI_ADD openshift-monitoring/prometheus-k8s-1 got IP 10.131.0.34, ofport 35\nI0407 21:37:35.563089    2305 pod.go:539] CNI_DEL openshift-marketplace/community-operators-6bc667874-ckqd9\nI0407 21:37:35.674573    2305 pod.go:539] CNI_DEL openshift-marketplace/certified-operators-f5c6db58b-gbwfk\nI0407 21:37:41.695015    2305 pod.go:539] CNI_DEL openshift-marketplace/redhat-operators-58d968f48d-7rh4b\nI0407 21:37:44.836479    2305 pod.go:539] CNI_DEL openshift-marketplace/redhat-marketplace-986997bb4-gk78l\nI0407 21:38:02.564575    2305 pod.go:539] CNI_DEL openshift-ingress/router-default-997dd49b7-dhd2p\nF0407 21:40:13.204529    2305 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 07 21:40:20.696 E ns/openshift-sdn pod/sdn-controller-tqppx node/ip-10-0-145-180.ec2.internal container/sdn-controller container exited with code 2 (Error): I0407 21:17:05.731230       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 07 21:40:38.660 E ns/openshift-sdn pod/sdn-bh492 node/ip-10-0-136-217.ec2.internal container/sdn container exited with code 255 (Error): fb987988-94cc5\nI0407 21:36:26.701233    2133 pod.go:539] CNI_DEL openshift-service-ca-operator/service-ca-operator-6b566ddc88-4wjlj\nI0407 21:36:27.068503    2133 pod.go:539] CNI_DEL openshift-controller-manager-operator/openshift-controller-manager-operator-8677548d-wfnqb\nI0407 21:36:27.243982    2133 pod.go:539] CNI_DEL openshift-authentication-operator/authentication-operator-7bdb8f8685-7r2r4\nI0407 21:36:44.291248    2133 pod.go:539] CNI_DEL openshift-console/downloads-75d98576fc-6bz2l\nI0407 21:36:46.790948    2133 pod.go:503] CNI_ADD openshift-service-ca/service-ca-7dfbf9446b-ljnrx got IP 10.129.0.73, ofport 74\nI0407 21:36:47.336378    2133 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/packageserver-5f67d445b7-7w8k6\nI0407 21:36:47.785706    2133 pod.go:503] CNI_ADD openshift-operator-lifecycle-manager/packageserver-6f6c4dc9cb-h7fbq got IP 10.129.0.74, ofport 75\nI0407 21:36:53.726347    2133 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-hwzjn\nI0407 21:36:56.209727    2133 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-gg2xp got IP 10.129.0.75, ofport 76\nI0407 21:37:05.758507    2133 pod.go:539] CNI_DEL openshift-image-registry/node-ca-5msz6\nI0407 21:37:08.670869    2133 pod.go:503] CNI_ADD openshift-image-registry/node-ca-qn4cd got IP 10.129.0.76, ofport 77\nI0407 21:37:18.826674    2133 pod.go:539] CNI_DEL openshift-console/downloads-75d98576fc-fwltr\nI0407 21:37:19.593820    2133 pod.go:539] CNI_DEL openshift-authentication/oauth-openshift-599974cf57-w4v7r\nI0407 21:38:22.268922    2133 pod.go:503] CNI_ADD openshift-console/console-6566bbbd78-6fxv5 got IP 10.129.0.77, ofport 78\nI0407 21:40:04.653473    2133 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-6cpwp\nI0407 21:40:18.190175    2133 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-bmfpz got IP 10.129.0.78, ofport 79\nF0407 21:40:38.554433    2133 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 07 21:40:45.825 E ns/openshift-multus pod/multus-hcb7j node/ip-10-0-145-180.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:40:52.754 E ns/openshift-multus pod/multus-admission-controller-fxw4l node/ip-10-0-128-164.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 07 21:41:25.300 E ns/openshift-multus pod/multus-lrh2n node/ip-10-0-153-218.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:41:30.995 E ns/openshift-sdn pod/sdn-tnx5n node/ip-10-0-145-180.ec2.internal container/sdn container exited with code 255 (Error): I0407 21:39:53.633554   70959 node.go:146] Initializing SDN node "ip-10-0-145-180.ec2.internal" (10.0.145.180) of type "redhat/openshift-ovs-networkpolicy"\nI0407 21:39:53.639187   70959 cmd.go:151] Starting node networking (unknown)\nI0407 21:39:53.784728   70959 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0407 21:39:53.905679   70959 proxy.go:103] Using unidling+iptables Proxier.\nI0407 21:39:53.906060   70959 proxy.go:129] Tearing down userspace rules.\nI0407 21:39:53.918084   70959 networkpolicy.go:330] SyncVNIDRules: 8 unused VNIDs\nI0407 21:39:54.148148   70959 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0407 21:39:54.159216   70959 config.go:313] Starting service config controller\nI0407 21:39:54.159313   70959 shared_informer.go:197] Waiting for caches to sync for service config\nI0407 21:39:54.159216   70959 config.go:131] Starting endpoints config controller\nI0407 21:39:54.159435   70959 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0407 21:39:54.159494   70959 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0407 21:39:54.259484   70959 shared_informer.go:204] Caches are synced for service config \nI0407 21:39:54.259733   70959 shared_informer.go:204] Caches are synced for endpoints config \nF0407 21:41:01.366774   70959 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 07 21:41:31.034 E ns/openshift-multus pod/multus-admission-controller-n278n node/ip-10-0-145-180.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 07 21:41:37.191 E ns/openshift-sdn pod/sdn-wmpc7 node/ip-10-0-141-178.ec2.internal container/sdn container exited with code 255 (Error): I0407 21:40:36.049198   52239 node.go:146] Initializing SDN node "ip-10-0-141-178.ec2.internal" (10.0.141.178) of type "redhat/openshift-ovs-networkpolicy"\nI0407 21:40:36.054042   52239 cmd.go:151] Starting node networking (unknown)\nI0407 21:40:36.211983   52239 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0407 21:40:36.410005   52239 proxy.go:103] Using unidling+iptables Proxier.\nI0407 21:40:36.410265   52239 proxy.go:129] Tearing down userspace rules.\nI0407 21:40:36.421107   52239 networkpolicy.go:330] SyncVNIDRules: 2 unused VNIDs\nI0407 21:40:36.598767   52239 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0407 21:40:36.609374   52239 config.go:313] Starting service config controller\nI0407 21:40:36.609402   52239 shared_informer.go:197] Waiting for caches to sync for service config\nI0407 21:40:36.609384   52239 config.go:131] Starting endpoints config controller\nI0407 21:40:36.609426   52239 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0407 21:40:36.609583   52239 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0407 21:40:36.709607   52239 shared_informer.go:204] Caches are synced for endpoints config \nI0407 21:40:36.709608   52239 shared_informer.go:204] Caches are synced for service config \nF0407 21:41:36.196132   52239 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 07 21:42:05.062 E ns/openshift-sdn pod/sdn-5r9ql node/ip-10-0-128-164.ec2.internal container/sdn container exited with code 255 (Error): I0407 21:40:10.932246  125774 node.go:146] Initializing SDN node "ip-10-0-128-164.ec2.internal" (10.0.128.164) of type "redhat/openshift-ovs-networkpolicy"\nI0407 21:40:10.937892  125774 cmd.go:151] Starting node networking (unknown)\nI0407 21:40:11.087169  125774 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0407 21:40:11.206434  125774 proxy.go:103] Using unidling+iptables Proxier.\nI0407 21:40:11.208427  125774 proxy.go:129] Tearing down userspace rules.\nI0407 21:40:11.226159  125774 networkpolicy.go:330] SyncVNIDRules: 9 unused VNIDs\nI0407 21:40:11.510546  125774 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0407 21:40:11.516761  125774 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0407 21:40:11.517290  125774 config.go:313] Starting service config controller\nI0407 21:40:11.517320  125774 shared_informer.go:197] Waiting for caches to sync for service config\nI0407 21:40:11.517357  125774 config.go:131] Starting endpoints config controller\nI0407 21:40:11.517385  125774 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0407 21:40:11.617520  125774 shared_informer.go:204] Caches are synced for endpoints config \nI0407 21:40:11.618011  125774 shared_informer.go:204] Caches are synced for service config \nI0407 21:40:51.920588  125774 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-fxw4l\nI0407 21:40:56.508653  125774 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-xcll8 got IP 10.128.0.85, ofport 86\nF0407 21:42:04.953669  125774 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 07 21:42:05.070 E ns/openshift-multus pod/multus-mn9w8 node/ip-10-0-128-164.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:42:47.306 E ns/openshift-multus pod/multus-htmbf node/ip-10-0-136-217.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:43:37.491 E ns/openshift-multus pod/multus-wmblr node/ip-10-0-141-178.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 07 21:44:16.647 E ns/openshift-machine-config-operator pod/machine-config-operator-5f59fb9c85-wt2xt node/ip-10-0-136-217.ec2.internal container/machine-config-operator container exited with code 2 (Error): onfig.openshift.io/v1/proxies?allowWatchBookmarks=true&resourceVersion=456&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0407 21:03:15.413311       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-apiserver-to-kubelet-client-ca&resourceVersion=8204&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0407 21:03:15.416794       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?allowWatchBookmarks=true&resourceVersion=5904&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0407 21:03:15.418053       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts?allowWatchBookmarks=true&resourceVersion=5771&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0407 21:03:15.419130       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.ControllerConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=5752&timeout=8m20s&timeoutSeconds=500&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0407 21:14:44.734961       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 07 21:46:13.126 E ns/openshift-machine-config-operator pod/machine-config-daemon-zzbbx node/ip-10-0-145-180.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:46:25.073 E ns/openshift-machine-config-operator pod/machine-config-daemon-m66dk node/ip-10-0-136-217.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:46:29.918 E ns/openshift-machine-config-operator pod/machine-config-daemon-qr87w node/ip-10-0-141-178.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:46:33.971 E ns/openshift-machine-config-operator pod/machine-config-daemon-l856w node/ip-10-0-153-218.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:47:00.043 E ns/openshift-machine-config-operator pod/machine-config-daemon-gct9h node/ip-10-0-143-92.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:47:14.248 E ns/openshift-machine-config-operator pod/machine-config-controller-6f8f4b9b88-5dm2q node/ip-10-0-136-217.ec2.internal container/machine-config-controller container exited with code 2 (Error): ing OutOfDisk=Unknown\nI0407 21:17:00.435115       1 node_controller.go:433] Pool master: node ip-10-0-145-180.ec2.internal is now reporting unready: node ip-10-0-145-180.ec2.internal is reporting NotReady=False\nI0407 21:17:30.512836       1 node_controller.go:435] Pool master: node ip-10-0-145-180.ec2.internal is now reporting ready\nI0407 21:21:15.171262       1 node_controller.go:452] Pool worker: node ip-10-0-153-218.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:21:15.171414       1 node_controller.go:452] Pool worker: node ip-10-0-153-218.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:21:15.171465       1 node_controller.go:452] Pool worker: node ip-10-0-153-218.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0407 21:21:21.248774       1 node_controller.go:452] Pool worker: node ip-10-0-143-92.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:21:21.248804       1 node_controller.go:452] Pool worker: node ip-10-0-143-92.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:21:21.248815       1 node_controller.go:452] Pool worker: node ip-10-0-143-92.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0407 21:22:02.502482       1 node_controller.go:452] Pool worker: node ip-10-0-141-178.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:22:02.502515       1 node_controller.go:452] Pool worker: node ip-10-0-141-178.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9f5d324d64ca62f2f85bd17033111ace\nI0407 21:22:02.502526       1 node_controller.go:452] Pool worker: node ip-10-0-141-178.ec2.internal changed machineconfiguration.openshift.io/state = Done\n
Apr 07 21:49:18.945 E ns/openshift-machine-config-operator pod/machine-config-server-6h49t node/ip-10-0-128-164.ec2.internal container/machine-config-server container exited with code 2 (Error): I0407 20:53:49.181437       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0407 20:53:49.182218       1 api.go:51] Launching server on :22624\nI0407 20:53:49.182289       1 api.go:51] Launching server on :22623\nI0407 21:17:14.161583       1 api.go:97] Pool worker requested by 10.0.140.127:45990\n
Apr 07 21:49:22.855 E ns/openshift-machine-config-operator pod/machine-config-server-dcfxp node/ip-10-0-145-180.ec2.internal container/machine-config-server container exited with code 2 (Error): I0407 21:17:03.999894       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0407 21:17:04.031723       1 api.go:51] Launching server on :22623\nI0407 21:17:04.040678       1 api.go:51] Launching server on :22624\n
Apr 07 21:49:30.633 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6c6f85dc9f-w44hv node/ip-10-0-141-178.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 07 21:49:31.948 E ns/openshift-marketplace pod/redhat-operators-7867697888-gf4tx node/ip-10-0-141-178.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 07 21:49:32.798 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-178.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/07 21:37:00 Watching directory: "/etc/alertmanager/config"\n
Apr 07 21:49:32.798 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-178.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_alertmanager-main-2_439b67e5-8161-4802-8dc7-5d2ad4870017/alertmanager-proxy/0.log": lstat /var/log/pods/openshift-monitoring_alertmanager-main-2_439b67e5-8161-4802-8dc7-5d2ad4870017/alertmanager-proxy/0.log: no such file or directory
Apr 07 21:49:59.917 E ns/e2e-k8s-sig-apps-job-upgrade-3905 pod/foo-rsxb9 node/ip-10-0-141-178.ec2.internal container/c container exited with code 137 (Error): 
Apr 07 21:50:15.944 E ns/e2e-k8s-service-lb-available-4592 pod/service-test-wnhp2 node/ip-10-0-141-178.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 07 21:50:41.830 E ns/openshift-marketplace pod/certified-operators-5b889bd988-tf2hd node/ip-10-0-153-218.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 07 21:50:44.836 E ns/openshift-marketplace pod/redhat-marketplace-7d64b6d6b6-cdnvn node/ip-10-0-153-218.ec2.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 07 21:50:48.855 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6c6f85dc9f-6hgk4 node/ip-10-0-153-218.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 07 21:51:40.478 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 07 21:51:45.274 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-164.ec2.internal" not ready since 2020-04-07 21:50:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-128-164.ec2.internal members are unhealthy,  members are unknown
Apr 07 21:52:04.495 E ns/openshift-cluster-node-tuning-operator pod/tuned-zrpnx node/ip-10-0-141-178.ec2.internal container/tuned container exited with code 143 (Error): 020-04-07 21:36:24,878 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-07 21:36:24,880 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-07 21:36:24,991 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-07 21:36:25,007 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0407 21:50:07.271437   35157 tuned.go:527] tuned "rendered" changed\nI0407 21:50:07.271464   35157 tuned.go:218] extracting tuned profiles\nI0407 21:50:07.513908   35157 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0407 21:50:07.513951   35157 tuned.go:356] reloading tuned...\nI0407 21:50:07.513961   35157 tuned.go:359] sending HUP to PID 35239\n2020-04-07 21:50:07,514 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-07 21:50:07,873 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-07 21:50:07,884 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-07 21:50:07,885 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-07 21:50:07,885 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-07 21:50:07,945 INFO     tuned.daemon.daemon: starting tuning\n2020-04-07 21:50:07,949 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-07 21:50:07,950 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-07 21:50:07,957 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-07 21:50:07,959 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-07 21:50:07,960 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-07 21:50:07,964 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-07 21:50:07,979 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 07 21:52:04.516 E ns/openshift-monitoring pod/node-exporter-svfg9 node/ip-10-0-141-178.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:36:37Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:52:04.572 E ns/openshift-sdn pod/ovs-jhqfg node/ip-10-0-141-178.ec2.internal container/openvswitch container exited with code 1 (Error): 47: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:31.949Z|00102|bridge|INFO|bridge br0: deleted interface vethe29a4dfa on port 17\n2020-04-07T21:49:31.996Z|00103|connmgr|INFO|br0<->unix#450: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:32.038Z|00104|connmgr|INFO|br0<->unix#453: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:32.091Z|00105|bridge|INFO|bridge br0: deleted interface veth21dff6c9 on port 21\n2020-04-07T21:49:32.149Z|00106|connmgr|INFO|br0<->unix#456: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:32.230Z|00107|connmgr|INFO|br0<->unix#459: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:32.277Z|00108|bridge|INFO|bridge br0: deleted interface veth9ba2f139 on port 29\n2020-04-07T21:49:32.344Z|00109|connmgr|INFO|br0<->unix#462: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:32.444Z|00110|connmgr|INFO|br0<->unix#465: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:32.504Z|00111|bridge|INFO|bridge br0: deleted interface veth28a79913 on port 18\n2020-04-07T21:49:32.562Z|00112|connmgr|INFO|br0<->unix#468: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:32.642Z|00113|connmgr|INFO|br0<->unix#471: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:32.684Z|00114|bridge|INFO|bridge br0: deleted interface vethd480202b on port 19\n2020-04-07T21:49:59.656Z|00115|connmgr|INFO|br0<->unix#495: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:59.686Z|00116|connmgr|INFO|br0<->unix#498: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:59.710Z|00117|bridge|INFO|bridge br0: deleted interface veth726226b0 on port 12\n2020-04-07T21:50:15.047Z|00118|connmgr|INFO|br0<->unix#514: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:50:15.078Z|00119|connmgr|INFO|br0<->unix#517: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:50:15.100Z|00120|bridge|INFO|bridge br0: deleted interface veth61b59c4b on port 13\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 07 21:52:04.613 E ns/openshift-multus pod/multus-bjbt4 node/ip-10-0-141-178.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:52:04.637 E ns/openshift-machine-config-operator pod/machine-config-daemon-b6npz node/ip-10-0-141-178.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:52:12.084 E ns/openshift-machine-config-operator pod/machine-config-daemon-b6npz node/ip-10-0-141-178.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 21:52:23.183 E ns/openshift-monitoring pod/telemeter-client-554466756d-n4jbg node/ip-10-0-143-92.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Apr 07 21:52:23.183 E ns/openshift-monitoring pod/telemeter-client-554466756d-n4jbg node/ip-10-0-143-92.ec2.internal container/reload container exited with code 2 (Error): 
Apr 07 21:52:23.293 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-92.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/07 21:37:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 07 21:52:23.293 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-92.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/07 21:37:42 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/07 21:37:42 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/07 21:37:42 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/07 21:37:42 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/07 21:37:42 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/07 21:37:42 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/07 21:37:42 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/07 21:37:42 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0407 21:37:42.478011       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/07 21:37:42 http.go:107: HTTPS: listening on [::]:9091\n2020/04/07 21:41:03 oauthproxy.go:774: basicauth: 10.129.2.17:33386 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/07 21:45:34 oauthproxy.go:774: basicauth: 10.129.2.17:35274 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/07 21:50:01 oauthproxy.go:774: basicauth: 10.129.0.83:48512 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 07 21:52:23.293 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-92.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-07T21:37:41.851538484Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-07T21:37:41.853064111Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-07T21:37:47.026878422Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-07T21:37:47.026983579Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 07 21:52:23.368 E ns/openshift-marketplace pod/redhat-operators-78c78956c6-jtppb node/ip-10-0-143-92.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 07 21:52:23.443 E ns/openshift-monitoring pod/kube-state-metrics-77dccfc898-z2hcd node/ip-10-0-143-92.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 07 21:52:24.283 E ns/openshift-monitoring pod/openshift-state-metrics-7544bf54dd-zm2x7 node/ip-10-0-143-92.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 07 21:52:24.302 E ns/openshift-kube-storage-version-migrator pod/migrator-77bf7cd9cb-5gnd4 node/ip-10-0-143-92.ec2.internal container/migrator container exited with code 2 (Error): 
Apr 07 21:52:24.333 E ns/openshift-monitoring pod/grafana-7cfcb54f98-9lmcb node/ip-10-0-143-92.ec2.internal container/grafana container exited with code 1 (Error): 
Apr 07 21:52:24.333 E ns/openshift-monitoring pod/grafana-7cfcb54f98-9lmcb node/ip-10-0-143-92.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 07 21:52:27.653 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:46.993590       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:46.993617       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:49.008290       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:49.008319       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:51.027469       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:51.027556       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:53.040011       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:53.040037       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:55.112157       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:55.112188       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:57.150628       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:57.150656       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:59.179993       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:59.180018       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:01.195687       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:01.195717       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:03.210742       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:03.210770       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:05.226811       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:05.226843       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 07 21:52:27.653 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-scheduler container exited with code 2 (Error): ication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-07 20:36:47 +0000 UTC to 2020-04-08 20:36:47 +0000 UTC (now=2020-04-07 21:33:48.792700715 +0000 UTC))\nI0407 21:33:48.793213       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1586292761" (2020-04-07 20:53:02 +0000 UTC to 2022-04-07 20:53:03 +0000 UTC (now=2020-04-07 21:33:48.793192878 +0000 UTC))\nI0407 21:33:48.793679       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586295228" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586295227" (2020-04-07 20:33:46 +0000 UTC to 2021-04-07 20:33:46 +0000 UTC (now=2020-04-07 21:33:48.793659783 +0000 UTC))\nI0407 21:33:48.873754       1 node_tree.go:86] Added node "ip-10-0-128-164.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0407 21:33:48.875551       1 node_tree.go:86] Added node "ip-10-0-136-217.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0407 21:33:48.875703       1 node_tree.go:86] Added node "ip-10-0-141-178.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0407 21:33:48.875759       1 node_tree.go:86] Added node "ip-10-0-143-92.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0407 21:33:48.875914       1 node_tree.go:86] Added node "ip-10-0-145-180.ec2.internal" in group "us-east-1:\x00:us-east-1c" to NodeTree\nI0407 21:33:48.875980       1 node_tree.go:86] Added node "ip-10-0-153-218.ec2.internal" in group "us-east-1:\x00:us-east-1c" to NodeTree\nI0407 21:33:48.884500       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Apr 07 21:52:27.748 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): I0407 21:34:36.893936       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0407 21:34:36.896476       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0407 21:34:36.899005       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0407 21:34:36.899128       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 07 21:52:27.748 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 1032       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:34.941493       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:37.855806       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:37.856246       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:44.997141       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:44.997494       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:47.876350       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:47.876752       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:55.032330       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:55.032691       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:57.891115       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:57.891535       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:50:05.048386       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:50:05.048776       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 07 21:52:27.748 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-07 20:36:45 +0000 UTC to 2030-04-05 20:36:45 +0000 UTC (now=2020-04-07 21:34:36.794906814 +0000 UTC))\nI0407 21:34:36.794958       1 tlsconfig.go:178] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-04-07 20:36:47 +0000 UTC to 2020-04-08 20:36:47 +0000 UTC (now=2020-04-07 21:34:36.794944482 +0000 UTC))\nI0407 21:34:36.795348       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1586292761" (2020-04-07 20:52:56 +0000 UTC to 2022-04-07 20:52:57 +0000 UTC (now=2020-04-07 21:34:36.795325315 +0000 UTC))\nI0407 21:34:36.795665       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586295276" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586295276" (2020-04-07 20:34:35 +0000 UTC to 2021-04-07 20:34:35 +0000 UTC (now=2020-04-07 21:34:36.795644404 +0000 UTC))\nI0407 21:34:36.795721       1 secure_serving.go:178] Serving securely on [::]:10257\nI0407 21:34:36.795764       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0407 21:34:36.795777       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\n
Apr 07 21:52:27.792 E ns/openshift-cluster-node-tuning-operator pod/tuned-zbktp node/ip-10-0-128-164.ec2.internal container/tuned container exited with code 143 (Error): o:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0407 21:36:58.375516  117700 tuned.go:285] starting tuned...\n2020-04-07 21:36:58,509 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-07 21:36:58,517 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-07 21:36:58,517 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-07 21:36:58,518 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-07 21:36:58,519 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-07 21:36:58,557 INFO     tuned.daemon.controller: starting controller\n2020-04-07 21:36:58,557 INFO     tuned.daemon.daemon: starting tuning\n2020-04-07 21:36:58,568 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-07 21:36:58,569 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-07 21:36:58,572 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-07 21:36:58,574 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-07 21:36:58,576 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-07 21:36:58,724 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-07 21:36:58,734 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0407 21:49:38.996949  117700 tuned.go:486] profile "ip-10-0-128-164.ec2.internal" changed, tuned profile requested: openshift-node\nI0407 21:49:39.104368  117700 tuned.go:486] profile "ip-10-0-128-164.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0407 21:49:39.227525  117700 tuned.go:392] getting recommended profile...\nI0407 21:49:39.435521  117700 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 07 21:52:27.818 E ns/openshift-controller-manager pod/controller-manager-cf985 node/ip-10-0-128-164.ec2.internal container/controller-manager container exited with code 1 (Error): I0407 21:37:09.915090       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0407 21:37:09.917323       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zvgsjvdr/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0407 21:37:09.917355       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zvgsjvdr/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0407 21:37:09.917472       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0407 21:37:09.918366       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 07 21:52:27.908 E ns/openshift-sdn pod/sdn-controller-hq9zd node/ip-10-0-128-164.ec2.internal container/sdn-controller container exited with code 2 (Error): I0407 21:39:40.634548       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0407 21:39:40.654169       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"a201c43b-b77d-4b9a-91f7-c477a79e8a77", ResourceVersion:"37580", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721889333, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-128-164\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-07T20:48:53Z\",\"renewTime\":\"2020-04-07T21:39:40Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0006a42e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006a4300)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-128-164 became leader'\nI0407 21:39:40.654337       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0407 21:39:40.660404       1 master.go:51] Initializing SDN master\nI0407 21:39:40.725368       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 07 21:52:27.953 E ns/openshift-multus pod/multus-admission-controller-xcll8 node/ip-10-0-128-164.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 07 21:52:28.016 E ns/openshift-sdn pod/ovs-gp6x2 node/ip-10-0-128-164.ec2.internal container/openvswitch container exited with code 1 (Error): gr|INFO|br0<->unix#470: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:49:46.097Z|00122|connmgr|INFO|br0<->unix#475: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:46.107Z|00123|connmgr|INFO|br0<->unix#477: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07T21:49:49.531Z|00124|connmgr|INFO|br0<->unix#482: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:49.569Z|00125|connmgr|INFO|br0<->unix#485: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:49.605Z|00126|bridge|INFO|bridge br0: deleted interface veth1f53a89a on port 88\n2020-04-07T21:49:50.218Z|00127|bridge|INFO|bridge br0: added interface veth92890f0a on port 89\n2020-04-07T21:49:50.272Z|00128|connmgr|INFO|br0<->unix#488: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:49:50.379Z|00129|connmgr|INFO|br0<->unix#492: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07T21:49:50.394Z|00130|connmgr|INFO|br0<->unix#494: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:50.234Z|00013|jsonrpc|WARN|unix#411: send error: Broken pipe\n2020-04-07T21:49:50.234Z|00014|reconnect|WARN|unix#411: connection dropped (Broken pipe)\n2020-04-07T21:49:53.571Z|00015|jsonrpc|WARN|unix#420: receive error: Connection reset by peer\n2020-04-07T21:49:53.571Z|00016|reconnect|WARN|unix#420: connection dropped (Connection reset by peer)\n2020-04-07T21:49:53.528Z|00131|connmgr|INFO|br0<->unix#500: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:53.559Z|00132|connmgr|INFO|br0<->unix#503: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:53.584Z|00133|bridge|INFO|bridge br0: deleted interface veth92890f0a on port 89\n2020-04-07T21:49:57.230Z|00134|connmgr|INFO|br0<->unix#507: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:49:57.264Z|00135|connmgr|INFO|br0<->unix#510: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:49:57.298Z|00136|bridge|INFO|bridge br0: deleted interface veth1a4e42e1 on port 83\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 07 21:52:28.085 E ns/openshift-multus pod/multus-hkjz9 node/ip-10-0-128-164.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:52:28.132 E ns/openshift-machine-config-operator pod/machine-config-daemon-gs9j4 node/ip-10-0-128-164.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:52:28.154 E ns/openshift-cluster-version pod/cluster-version-operator-8f95777df-bd7q8 node/ip-10-0-128-164.ec2.internal container/cluster-version-operator container exited with code 255 (Error): 568] Canceled worker 8\nI0407 21:50:06.886599       1 task_graph.go:568] Canceled worker 7\nI0407 21:50:06.886639       1 task_graph.go:568] Canceled worker 5\nI0407 21:50:06.886677       1 task_graph.go:568] Canceled worker 6\nI0407 21:50:06.886750       1 task_graph.go:568] Canceled worker 15\nI0407 21:50:06.886794       1 task_graph.go:568] Canceled worker 14\nI0407 21:50:06.886832       1 task_graph.go:568] Canceled worker 13\nI0407 21:50:06.886913       1 task_graph.go:568] Canceled worker 12\nI0407 21:50:06.886968       1 task_graph.go:568] Canceled worker 11\nI0407 21:50:06.887017       1 cvo.go:439] Started syncing cluster version "openshift-cluster-version/version" (2020-04-07 21:50:06.887009112 +0000 UTC m=+3.652273850)\nI0407 21:50:06.895627       1 cvo.go:468] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-zvgsjvdr/release@sha256:7c6c266c8b0b2b3e03b2972852ba10d1222815b23127b6e0a128c09ca9186d1e", Force:true}\nI0407 21:50:06.887009       1 task_graph.go:568] Canceled worker 10\nI0407 21:50:06.887059       1 task_graph.go:568] Canceled worker 4\nI0407 21:50:06.902801       1 sync_worker.go:634] Done syncing for prometheusrule "openshift-cluster-version/cluster-version-operator" (9 of 565)\nI0407 21:50:06.905071       1 task_graph.go:516] No more reachable nodes in graph, continue\nI0407 21:50:06.905087       1 task_graph.go:552] No more work\nI0407 21:50:06.923790       1 task_graph.go:568] Canceled worker 9\nI0407 21:50:06.924007       1 task_graph.go:588] Workers finished\nI0407 21:50:06.924081       1 task_graph.go:596] Result of work: [update was cancelled at 9 of 565]\nI0407 21:50:06.924181       1 sync_worker.go:771] All errors were cancellation errors: [update was cancelled at 9 of 565]\nI0407 21:50:06.924301       1 cvo.go:441] Finished syncing cluster version "openshift-cluster-version/version" (37.281142ms)\nI0407 21:50:06.924402       1 cvo.go:366] Shutting down ClusterVersionOperator\nF0407 21:50:06.958781       1 start.go:148] Received shutdown signal twice, exiting\n
Apr 07 21:52:28.183 E ns/openshift-monitoring pod/node-exporter-tf6jw node/ip-10-0-128-164.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:36:51Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:51Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:52:28.240 E ns/openshift-machine-config-operator pod/machine-config-server-zgsm9 node/ip-10-0-128-164.ec2.internal container/machine-config-server container exited with code 2 (Error): I0407 21:49:21.114198       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0407 21:49:21.115628       1 api.go:51] Launching server on :22624\nI0407 21:49:21.115696       1 api.go:51] Launching server on :22623\n
Apr 07 21:52:31.115 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-apiserver container exited with code 1 (Error): om cacher *unstructured.Unstructured\nW0407 21:50:07.253691       1 cacher.go:166] Terminating all watchers from cacher *certificates.CertificateSigningRequest\nW0407 21:50:07.256382       1 cacher.go:166] Terminating all watchers from cacher *core.PersistentVolume\nW0407 21:50:07.256520       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.256751       1 cacher.go:166] Terminating all watchers from cacher *storage.StorageClass\nW0407 21:50:07.256990       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0407 21:50:07.257033       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.257154       1 cacher.go:166] Terminating all watchers from cacher *batch.Job\nW0407 21:50:07.257291       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.258734       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.260766       1 cacher.go:166] Terminating all watchers from cacher *core.ResourceQuota\nW0407 21:50:07.261046       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.261209       1 cacher.go:166] Terminating all watchers from cacher *networking.IngressClass\nW0407 21:50:07.261303       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0407 21:50:07.261572       1 cacher.go:166] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration\nW0407 21:50:07.261701       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.266541       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.271708       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0407 21:50:07.287481       1 cacher.go:166] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration\n
Apr 07 21:52:31.115 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0407 21:33:39.017986       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 07 21:52:31.115 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:49:49.752397       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:49.776636       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:49:59.760388       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:59.760768       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 07 21:52:31.176 E ns/openshift-etcd pod/etcd-ip-10-0-128-164.ec2.internal node/ip-10-0-128-164.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-128-164.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-128-164.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-07T21:32:05.567Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:32:05.568Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-128-164.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-128-164.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-07T21:32:05.570Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-07T21:32:05.570Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:32:05.570Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-07T21:32:05.571Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.128.164:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.128.164:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-07T21:32:06.572Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.128.164:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.128.164:9978: connect: connection refused\". Reconnecting..."}\n
Apr 07 21:52:38.523 E ns/openshift-machine-config-operator pod/machine-config-daemon-gs9j4 node/ip-10-0-128-164.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 21:52:40.296 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-164.ec2.internal" not ready since 2020-04-07 21:52:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal container="kube-scheduler" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal container="kube-scheduler" is terminated: "Error" - "ication::requestheader-client-ca-file\"]: \"aggregator-signer\" [] issuer=\"<self>\" (2020-04-07 20:36:47 +0000 UTC to 2020-04-08 20:36:47 +0000 UTC (now=2020-04-07 21:33:48.792700715 +0000 UTC))\nI0407 21:33:48.793213       1 tlsconfig.go:200] loaded serving cert [\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\"]: \"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1586292761\" (2020-04-07 20:53:02 +0000 UTC to 2022-04-07 20:53:03 +0000 UTC (now=2020-04-07 21:33:48.793192878 +0000 UTC))\nI0407 21:33:48.793679       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1586295228\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1586295227\" (2020-04-07 20:33:46 +0000 UTC to 2021-04-07 20:33:46 +0000 UTC (now=2020-04-07 21:33:48.793659783 +0000 UTC))\nI0407 21:33:48.873754       1 node_tree.go:86] Added node \"ip-10-0-128-164.ec2.internal\" in group \"us-east-1:\\x00:us-east-1b\" to NodeTree\nI0407 21:33:48.875551       1 node_tree.go:86] Added node \"ip-10-0-136-217.ec2.internal\" in group \"us-east-1:\\x00:us-east-1b\" to NodeTree\nI0407 21:33:48.875703       1 node_tree.go:86] Added node \"ip-10-0-141-178.ec2.internal\" in group \"us-east-1:\\x00:us-east-1b\" to NodeTree\nI0407 21:33:48.875759       1 node_tree.go:86] Added node \"ip-10-0-143-92.ec2.internal\" in group \"us-east-1:\\x00:us-east-1b\" to NodeTree\nI0407 21:33:48.875914       1 node_tree.go:86] Added node \"ip-10-0-145-180.ec2.internal\" in group \"us-east-1:\\x00:us-east-1c\" to NodeTree\nI0407 21:33:48.875980       1 node_tree.go:86] Added node \"ip-10-0-153-218.ec2.internal\" in group \"us-east-1:\\x00:us-east-1c\" to NodeTree\nI0407 21:33:48.884500       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n"\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal container="kube-scheduler-cert-syncer" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/openshift-kube-scheduler-ip-10-0-128-164.ec2.internal container="kube-scheduler-cert-syncer" is terminated: "Error" - "1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:46.993590       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:46.993617       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:49.008290       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:49.008319       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:51.027469       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:51.027556       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:53.040011       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:53.040037       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:55.112157       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:55.112188       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:57.150628       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:57.150656       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:49:59.179993       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:49:59.180018       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:01.195687       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:01.195717       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:03.210742       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:03.210770       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:50:05.226811       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:50:05.226843       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n"
Apr 07 21:52:40.309 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-164.ec2.internal" not ready since 2020-04-07 21:52:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 07 21:52:40.312 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-164.ec2.internal" not ready since 2020-04-07 21:52:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="cluster-policy-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="cluster-policy-controller" is terminated: "Error" - "I0407 21:34:36.893936       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0407 21:34:36.896476       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0407 21:34:36.899005       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0407 21:34:36.899128       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n"\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager" is terminated: "Error" - "loaded client CA [5/\"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\"]: \"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"<self>\" (2020-04-07 20:36:45 +0000 UTC to 2030-04-05 20:36:45 +0000 UTC (now=2020-04-07 21:34:36.794906814 +0000 UTC))\nI0407 21:34:36.794958       1 tlsconfig.go:178] loaded client CA [6/\"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\"]: \"aggregator-signer\" [] issuer=\"<self>\" (2020-04-07 20:36:47 +0000 UTC to 2020-04-08 20:36:47 +0000 UTC (now=2020-04-07 21:34:36.794944482 +0000 UTC))\nI0407 21:34:36.795348       1 tlsconfig.go:200] loaded serving cert [\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\"]: \"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1586292761\" (2020-04-07 20:52:56 +0000 UTC to 2022-04-07 20:52:57 +0000 UTC (now=2020-04-07 21:34:36.795325315 +0000 UTC))\nI0407 21:34:36.795665       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1586295276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1586295276\" (2020-04-07 20:34:35 +0000 UTC to 2021-04-07 20:34:35 +0000 UTC (now=2020-04-07 21:34:36.795644404 +0000 UTC))\nI0407 21:34:36.795721       1 secure_serving.go:178] Serving securely on [::]:10257\nI0407 21:34:36.795764       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0407 
21:34:36.795777       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\n"\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager-cert-syncer" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager-cert-syncer" is terminated: "Error" - "1032       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:34.941493       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:37.855806       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:37.856246       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:44.997141       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:44.997494       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:47.876350       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:47.876752       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:55.032330       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:55.032691       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:49:57.891115       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:49:57.891535       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:50:05.048386       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:50:05.048776       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n"\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager-recovery-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-128-164.ec2.internal pods/kube-controller-manager-ip-10-0-128-164.ec2.internal container="kube-controller-manager-recovery-controller" is terminated: "Completed" - ""
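
The two events above show the kube-apiserver and kube-controller-manager operators setting Degraded=True because the kubelet on ip-10-0-128-164 briefly reported NotReady with "Missing CNI default network", which is typical while a node's network pods are replaced during an upgrade. Those messages are carried as conditions on the ClusterOperator objects; a minimal, hypothetical way to read them back with openshift/client-go (the kubeconfig handling and operator names below are assumptions, not the operators' own code) would be:

    package main

    import (
        "context"
        "fmt"
        "os"

        configv1 "github.com/openshift/api/config/v1"
        configclient "github.com/openshift/client-go/config/clientset/versioned"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes KUBECONFIG points at the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := configclient.NewForConfigOrDie(cfg)

        for _, name := range []string{"kube-apiserver", "kube-controller-manager", "etcd"} {
            co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            // Print only the Degraded condition, which is what the monitor flags above.
            for _, cond := range co.Status.Conditions {
                if cond.Type == configv1.OperatorDegraded {
                    fmt.Printf("%s Degraded=%s (%s): %s\n", name, cond.Status, cond.Reason, cond.Message)
                }
            }
        }
    }
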
Apr 07 21:52:51.425 E ns/e2e-k8s-sig-apps-job-upgrade-3905 pod/foo-glbpg node/ip-10-0-143-92.ec2.internal container/c container exited with code 137 (Error): 
Apr 07 21:52:51.625 E ns/openshift-cluster-version pod/cluster-version-operator-8f95777df-bd7q8 node/ip-10-0-128-164.ec2.internal container/cluster-version-operator container exited with code 255 (Error): I0407 21:52:33.010359       1 start.go:19] ClusterVersionOperator v1.0.0-207-gd9e62836-dirty\nI0407 21:52:33.011580       1 merged_client_builder.go:122] Using in-cluster configuration\nI0407 21:52:33.074080       1 payload.go:210] Loading updatepayload from "/"\nI0407 21:52:36.967510       1 cvo.go:264] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-openshift-ci (D04761B116203B0C0859B61628B76E05B923888E: openshift-ci) - will check for signatures in containers/image format at https://storage.googleapis.com/openshift-release/test-1/signatures/openshift/release and from config maps in openshift-config-managed with label "release.openshift.io/verification-signatures"\nI0407 21:52:36.974020       1 leaderelection.go:242] attempting to acquire leader lease  openshift-cluster-version/version...\nI0407 21:52:41.352441       1 leaderelection.go:352] lock is held by ip-10-0-128-164_8d03c210-1aa4-4e7a-8cfb-038b271dbb47 and has not yet expired\nI0407 21:52:41.357229       1 leaderelection.go:247] failed to acquire lease openshift-cluster-version/version\nI0407 21:52:46.026239       1 start.go:140] Shutting down due to terminated\nF0407 21:52:51.026945       1 start.go:146] Exiting\n
Apr 07 21:53:05.134 E ns/openshift-machine-config-operator pod/machine-config-operator-5759f6797d-pvdx5 node/ip-10-0-136-217.ec2.internal container/machine-config-operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-machine-config-operator_machine-config-operator-5759f6797d-pvdx5_6b6af00b-e42d-4fb4-9567-6040213efabc/machine-config-operator/0.log": lstat /var/log/pods/openshift-machine-config-operator_machine-config-operator-5759f6797d-pvdx5_6b6af00b-e42d-4fb4-9567-6040213efabc/machine-config-operator/0.log: no such file or directory
Apr 07 21:53:05.189 E ns/openshift-cluster-version pod/cluster-version-operator-8f95777df-dwpzt node/ip-10-0-136-217.ec2.internal container/cluster-version-operator container exited with code 255 (Error): I0407 21:52:47.154682       1 start.go:19] ClusterVersionOperator v1.0.0-207-gd9e62836-dirty\nI0407 21:52:47.155772       1 merged_client_builder.go:122] Using in-cluster configuration\nI0407 21:52:47.163897       1 payload.go:210] Loading updatepayload from "/"\nI0407 21:52:48.551273       1 cvo.go:264] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-openshift-ci (D04761B116203B0C0859B61628B76E05B923888E: openshift-ci) - will check for signatures in containers/image format at https://storage.googleapis.com/openshift-release/test-1/signatures/openshift/release and from config maps in openshift-config-managed with label "release.openshift.io/verification-signatures"\nI0407 21:52:48.551850       1 leaderelection.go:242] attempting to acquire leader lease  openshift-cluster-version/version...\nI0407 21:52:48.573504       1 leaderelection.go:352] lock is held by ip-10-0-128-164_8d03c210-1aa4-4e7a-8cfb-038b271dbb47 and has not yet expired\nI0407 21:52:48.573531       1 leaderelection.go:247] failed to acquire lease openshift-cluster-version/version\nI0407 21:52:57.930561       1 start.go:140] Shutting down due to terminated\nF0407 21:53:02.930941       1 start.go:146] Exiting\n
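
Both cluster-version-operator replicas above come up, find the openshift-cluster-version/version lock still held by the previous leader, fail to acquire the lease, and exit once the pod is terminated; that is the normal client-go leader-election handoff during a rolling replacement rather than a CVO failure. A minimal sketch of that pattern, assuming a Lease lock and made-up names (the CVO's real lock type, namespace wiring, and timings differ), looks roughly like:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "default", Name: "example-lock"}, // hypothetical lock
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   90 * time.Second, // while the old holder's lease is still valid, acquisition fails
            RenewDeadline:   45 * time.Second,
            RetryPeriod:     5 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* run the operator's sync loops */ },
                OnStoppedLeading: func() { os.Exit(1) }, // exit so the replacement pod restarts cleanly and retries
            },
        })
    }
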
Apr 07 21:53:07.463 E ns/e2e-k8s-service-lb-available-4592 pod/service-test-2nr9x node/ip-10-0-143-92.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 07 21:53:09.136 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-178.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:53:05.182Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:53:05.187Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:53:05.188Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:53:05.188Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:53:05.188Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:53:05.188Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:53:05.189Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:53:05.189Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:53:05.189Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-07T21:53:05.190Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:53:05.190Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:53:27.481 E ns/openshift-console pod/console-6566bbbd78-6fxv5 node/ip-10-0-136-217.ec2.internal container/console container exited with code 2 (Error): 2020-04-07T21:38:24Z cmd/main: cookies are secure!\n2020-04-07T21:38:24Z cmd/main: Binding to [::]:8443...\n2020-04-07T21:38:24Z cmd/main: using TLS\n2020-04-07T21:40:23Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com: EOF\n2020-04-07T21:40:24Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com: EOF\n
Apr 07 21:53:39.879 E kube-apiserver failed contacting the API: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=47646&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp 52.0.244.26:6443: connect: connection refused
Apr 07 21:53:59.895 E ns/openshift-marketplace pod/redhat-operators-78c78956c6-njkw6 node/ip-10-0-141-178.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Apr 07 21:54:00.390 E ns/openshift-marketplace pod/redhat-marketplace-85df5c486d-mm597 node/ip-10-0-153-218.ec2.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 07 21:54:05.890 E ns/openshift-marketplace pod/community-operators-6cf44466b8-v99bq node/ip-10-0-141-178.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 07 21:55:18.222 E ns/openshift-monitoring pod/node-exporter-gxk69 node/ip-10-0-143-92.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:36:25Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:36:25Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:55:18.240 E ns/openshift-cluster-node-tuning-operator pod/tuned-b5vg7 node/ip-10-0-143-92.ec2.internal container/tuned container exited with code 143 (Error): e[1747278511]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:50:09.353740166 +0000 UTC m=+770.054376001) (total time: 39.050956846s):\nTrace[1747278511]: [39.050956846s] [39.050956846s] END\nE0407 21:50:48.404762   55472 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:51:31.320074   55472 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:50:52.290593913 +0000 UTC m=+812.991229792) (total time: 39.029451862s):\nTrace[817455089]: [39.029451862s] [39.029451862s] END\nE0407 21:51:31.320100   55472 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:52:18.787062   55472 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:51:39.755854238 +0000 UTC m=+860.456490020) (total time: 39.031174522s):\nTrace[1006933274]: [39.031174522s] [39.031174522s] END\nE0407 21:52:18.787090   55472 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:53:14.241589   55472 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:52:35.210131468 +0000 UTC m=+915.910767256) (total time: 39.031430982s):\nTrace[629431445]: [39.031430982s] [39.031430982s] END\nE0407 21:53:14.241610   55472 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\n
Apr 07 21:55:18.323 E ns/openshift-multus pod/multus-frz55 node/ip-10-0-143-92.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:55:18.371 E ns/openshift-machine-config-operator pod/machine-config-daemon-rjxc7 node/ip-10-0-143-92.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:55:20.610 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-217.ec2.internal" not ready since 2020-04-07 21:54:38 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-145-180.ec2.internal,ip-10-0-136-217.ec2.internal members are unhealthy,  members are unknown
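
The etcd operator going Degraded with "members are unhealthy" here reflects failed health probes against members whose node (ip-10-0-136-217) is rebooting for the upgrade. A rough, hypothetical equivalent of "etcdctl endpoint status" written against the etcd v3 Go client (certificate paths are placeholders, and the member endpoints are inferred from the node names above; OpenShift's etcd only accepts the client certificates present on the control-plane nodes):

    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Placeholder paths; on a control-plane node the client cert/key and CA live under
        // /etc/kubernetes/static-pod-resources (exact file names vary by release).
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://10.0.136.217:2379", "https://10.0.145.180:2379"}, // inferred from events above
            DialTimeout: 5 * time.Second,
            TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        members, err := cli.MemberList(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Printf("cluster reports %d members\n", len(members.Members))

        // A member is "healthy" in the etcdctl sense if it answers a Status call.
        for _, ep := range cli.Endpoints() {
            st, err := cli.Status(ctx, ep)
            if err != nil {
                fmt.Printf("%s: unhealthy: %v\n", ep, err)
                continue
            }
            fmt.Printf("%s: healthy, leader=%x, raft term=%d\n", ep, st.Leader, st.RaftTerm)
        }
    }
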
Apr 07 21:55:26.396 E ns/openshift-machine-config-operator pod/machine-config-daemon-rjxc7 node/ip-10-0-143-92.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 21:55:42.893 E ns/openshift-monitoring pod/prometheus-adapter-9d586d6f9-zcm7z node/ip-10-0-153-218.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0407 21:49:36.011509       1 adapter.go:93] successfully using in-cluster auth\nI0407 21:49:37.729711       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 07 21:55:44.057 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:37:14.684Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:37:14.689Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:37:14.689Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:37:14.690Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:37:14.690Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-07T21:37:14.691Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:37:14.691Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:55:44.057 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-218.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_prometheus-k8s-1_a894a1bf-923c-4081-be8f-2a45108e1995/prometheus-config-reloader/0.log": lstat /var/log/pods/openshift-monitoring_prometheus-k8s-1_a894a1bf-923c-4081-be8f-2a45108e1995/prometheus-config-reloader/0.log: no such file or directory
Apr 07 21:55:44.085 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-153-218.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/07 21:49:50 Watching directory: "/etc/alertmanager/config"\n
Apr 07 21:55:44.085 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-153-218.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/07 21:49:50 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:49:50 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/07 21:49:50 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/07 21:49:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/07 21:49:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/07 21:49:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:49:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/07 21:49:50 http.go:107: HTTPS: listening on [::]:9095\nI0407 21:49:50.585941       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 07 21:55:44.192 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-153-218.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/07 21:36:56 Watching directory: "/etc/alertmanager/config"\n
Apr 07 21:55:44.192 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-153-218.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/07 21:36:56 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:36:56 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/07 21:36:56 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/07 21:36:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/07 21:36:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/07 21:36:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/07 21:36:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/07 21:36:56 http.go:107: HTTPS: listening on [::]:9095\nI0407 21:36:56.525525       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 07 21:55:56.721 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-92.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-07T21:55:54.878Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-07T21:55:54.884Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-07T21:55:54.885Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-07T21:55:54.886Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-07T21:55:54.886Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-07T21:55:54.886Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-07T21:55:54.887Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-07T21:55:54.887Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-07
Apr 07 21:56:02.101 E ns/openshift-cluster-node-tuning-operator pod/tuned-tdv7t node/ip-10-0-136-217.ec2.internal container/tuned container exited with code 143 (Error):  to list *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?fieldSelector=metadata.name%3Drendered&resourceVersion=31507: unexpected EOF\nI0407 21:50:09.362804  112308 tuned.go:527] tuned "rendered" changed\nI0407 21:50:09.362835  112308 tuned.go:218] extracting tuned profiles\nI0407 21:50:09.541217  112308 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0407 21:50:09.541245  112308 tuned.go:356] reloading tuned...\nI0407 21:50:09.541254  112308 tuned.go:359] sending HUP to PID 112343\n2020-04-07 21:50:09,541 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-07 21:50:10,307 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-07 21:50:10,322 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-07 21:50:10,324 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-07 21:50:10,325 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-07 21:50:10,490 INFO     tuned.daemon.daemon: starting tuning\n2020-04-07 21:50:10,494 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-07 21:50:10,495 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-07 21:50:10,501 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-07 21:50:10,504 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-07 21:50:10,509 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-07 21:50:10,515 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-07 21:50:10,529 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0407 21:53:39.350309  112308 tuned.go:114] received signal: terminated\nI0407 21:53:39.350360  112308 tuned.go:326] sending TERM to PID 112343\n
Apr 07 21:56:02.155 E ns/openshift-controller-manager pod/controller-manager-gg2xp node/ip-10-0-136-217.ec2.internal container/controller-manager container exited with code 1 (Error): rs/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nW0407 21:50:07.543850       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: very short watch: github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nW0407 21:50:07.543900       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: very short watch: github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nW0407 21:50:07.543945       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nW0407 21:52:59.462977       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 5; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:52:59.463256       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 93; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:52:59.463467       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 17; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 07 21:56:02.185 E ns/openshift-monitoring pod/node-exporter-gt68c node/ip-10-0-136-217.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:38:10Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:10Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:56:02.209 E ns/openshift-multus pod/multus-admission-controller-bmfpz node/ip-10-0-136-217.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 07 21:56:02.235 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-apiserver container exited with code 1 (Error): <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.136.217:2379: connect: connection refused". Reconnecting...\nI0407 21:53:39.575616       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.575944       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.576308       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.576532       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.576754       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.577285       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0407 21:53:39.577426       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nI0407 21:53:39.578291       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.578517       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0407 21:53:39.578902       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0407 21:53:39.579431       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.136.217:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.136.217:2379: connect: connection refused". Reconnecting...\nI0407 21:53:39.591384       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 07 21:56:02.235 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): e_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0407 21:36:48.771371       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0407 21:36:48.771389       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0407 21:36:48.771398       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0407 21:36:48.771439       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0407 21:36:48.771459       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0407 21:36:48.771468       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0407 21:36:48.771498       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0407 21:36:48.771508       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0407 21:36:48.771516       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0407 21:36:48.771554       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0407 21:36:48.771572       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0407 21:36:48.771580       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0407 21:46:48.683501       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0407 21:46:48.688274       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0407 21:53:39.359043       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0407 21:53:39.364733       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\n
Apr 07 21:56:02.235 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0407 21:35:49.409889       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 07 21:56:02.235 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:53:20.110335       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:20.110712       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:53:30.121896       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:30.122374       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 07 21:56:02.284 E ns/openshift-multus pod/multus-2wbvh node/ip-10-0-136-217.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:56:02.301 E ns/openshift-machine-config-operator pod/machine-config-server-kp2z4 node/ip-10-0-136-217.ec2.internal container/machine-config-server container exited with code 2 (Error): I0407 21:49:44.959838       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0407 21:49:44.985166       1 api.go:51] Launching server on :22624\nI0407 21:49:44.985332       1 api.go:51] Launching server on :22623\n
Apr 07 21:56:02.360 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-scheduler container exited with code 2 (Error):        1 scheduler.go:728] pod openshift-marketplace/redhat-marketplace-7946876c57-hfm8l is bound successfully on node "ip-10-0-141-178.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0407 21:53:26.845047       1 scheduler.go:728] pod openshift-marketplace/redhat-marketplace-69f7bbfd6c-wppm6 is bound successfully on node "ip-10-0-141-178.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0407 21:53:27.098722       1 scheduler.go:728] pod openshift-operator-lifecycle-manager/packageserver-6c4885b769-mk68f is bound successfully on node "ip-10-0-128-164.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0407 21:53:30.575882       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-565mb: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:53:30.624652       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-565mb: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:53:34.997951       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-565mb: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:53:35.504549       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-647484c4c4-ggxc9: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 07 21:56:02.360 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:19.152722       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:19.152754       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:21.175303       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:21.175806       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:23.200464       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:23.200634       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:25.208675       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:25.208709       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:27.227834       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:27.228573       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:29.254348       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:29.256558       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:31.268505       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:31.268926       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:33.311345       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:33.311467       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:35.337815       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:35.337855       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:53:37.351399       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:53:37.351548       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
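
The repeated "Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-565mb: no fit" lines in the kube-scheduler log above are expected while masters are being drained: the quorum-guard pods combine a master node selector with required per-host pod anti-affinity, so with two masters unschedulable no remaining node satisfies both constraints. A hedged sketch of that kind of constraint in Go API types (labels, image, and names are made up, not the actual quorum-guard manifest):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"app": "quorum-guard-example"} // hypothetical label set

        spec := corev1.PodSpec{
            // Only schedulable onto control-plane nodes...
            NodeSelector: map[string]string{"node-role.kubernetes.io/master": ""},
            Affinity: &corev1.Affinity{
                PodAntiAffinity: &corev1.PodAntiAffinity{
                    // ...and never two replicas on the same host, so a cordoned master
                    // leaves one replica Pending until that node is schedulable again.
                    RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                        LabelSelector: &metav1.LabelSelector{MatchLabels: labels},
                        TopologyKey:   "kubernetes.io/hostname",
                    }},
                },
            },
            Containers: []corev1.Container{{Name: "guard", Image: "registry.example/guard:latest"}},
        }

        out, _ := json.MarshalIndent(spec, "", "  ")
        fmt.Println(string(out))
    }
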
Apr 07 21:56:02.446 E ns/openshift-machine-config-operator pod/machine-config-daemon-hd6lr node/ip-10-0-136-217.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:56:02.468 E ns/openshift-sdn pod/ovs-jtwth node/ip-10-0-136-217.ec2.internal container/openvswitch container exited with code 143 (Error): 0-04-07T21:53:26.931Z|00224|bridge|INFO|bridge br0: deleted interface veth3adf8c98 on port 78\n2020-04-07T21:53:27.289Z|00225|bridge|INFO|bridge br0: added interface veth76c1e302 on port 98\n2020-04-07T21:53:27.332Z|00226|connmgr|INFO|br0<->unix#941: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:53:27.402Z|00227|connmgr|INFO|br0<->unix#946: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:53:27.412Z|00228|connmgr|INFO|br0<->unix#948: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07T21:53:30.682Z|00229|connmgr|INFO|br0<->unix#953: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:53:30.726Z|00230|connmgr|INFO|br0<->unix#956: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:53:30.763Z|00231|bridge|INFO|bridge br0: deleted interface veth76c1e302 on port 98\n2020-04-07T21:53:33.276Z|00232|connmgr|INFO|br0<->unix#961: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:53:33.400Z|00233|connmgr|INFO|br0<->unix#964: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:53:33.546Z|00234|bridge|INFO|bridge br0: deleted interface veth77b80029 on port 25\n2020-04-07T21:53:36.821Z|00235|bridge|INFO|bridge br0: added interface vethf04e44cf on port 99\n2020-04-07T21:53:36.876Z|00236|connmgr|INFO|br0<->unix#968: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:53:36.930Z|00237|connmgr|INFO|br0<->unix#971: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:53:37.045Z|00238|bridge|INFO|bridge br0: added interface veth4358c13e on port 100\n2020-04-07T21:53:37.087Z|00239|connmgr|INFO|br0<->unix#974: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:53:37.147Z|00240|connmgr|INFO|br0<->unix#978: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07T21:53:37.151Z|00241|connmgr|INFO|br0<->unix#980: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07 21:53:39 info: Saving flows ...\n2020-04-07T21:53:39Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\nTerminated\n
Apr 07 21:56:02.502 E ns/openshift-etcd pod/revision-pruner-3-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/pruner init container exited with code 2 (Error): 
Apr 07 21:56:02.502 E ns/openshift-etcd pod/revision-pruner-3-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal reason/Failed (): 
Apr 07 21:56:02.502 E ns/openshift-etcd pod/revision-pruner-3-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/pruner container exited with code 2 (Error): 
Apr 07 21:56:02.527 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): ng: unknown (get rolebindings.rbac.authorization.k8s.io)\nE0407 21:35:53.565065       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0407 21:35:53.565094       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling)\nE0407 21:35:53.565145       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: unknown (get buildconfigs.build.openshift.io)\nE0407 21:35:53.565206       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: unknown (get deployments.apps)\nW0407 21:49:33.473562       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 489; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:49:33.473741       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 511; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:49:33.473804       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 459; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:52:59.469319       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 527; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 07 21:56:02.527 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-controller-manager container exited with code 2 (Error): 0.517916       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8, need 3, creating 1\nI0407 21:53:30.542812       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-7b6dcd6d8", UID:"1e2a2e33-2b8e-4cbc-ace7-40e29237f368", APIVersion:"apps/v1", ResourceVersion:"47141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-7b6dcd6d8-565mb\nI0407 21:53:33.089274       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"345eccf4-9463-4dcb-9912-1a8e08644387", APIVersion:"apps/v1", ResourceVersion:"47399", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-7d7b8877d4 to 0\nI0407 21:53:33.089627       1 replica_set.go:598] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-7d7b8877d4, need 0, deleting 1\nI0407 21:53:33.089672       1 replica_set.go:226] Found 12 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-7d7b8877d4: packageserver-bb745b4d8, packageserver-6d9884678b, packageserver-764f68f8d6, packageserver-fb474d986, packageserver-6f6c4dc9cb, packageserver-6c4885b769, packageserver-644899ff69, packageserver-7c5fcd986d, packageserver-7d7b8877d4, packageserver-f99777cf8, packageserver-5f67d445b7, packageserver-866c576cd5\nI0407 21:53:33.089837       1 controller_utils.go:604] Controller packageserver-7d7b8877d4 deleting pod openshift-operator-lifecycle-manager/packageserver-7d7b8877d4-sfxrq\nI0407 21:53:33.124704       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-7d7b8877d4", UID:"256a89bd-2394-4085-9557-ed6091a223b6", APIVersion:"apps/v1", ResourceVersion:"47563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-7d7b8877d4-sfxrq\n
Apr 07 21:56:02.527 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 9999       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:08.560570       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:15.354277       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:15.354745       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:18.605708       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:18.606083       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:25.363341       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:25.363675       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:28.621769       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:28.622300       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:35.385152       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:35.385528       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:53:38.647402       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:53:38.648431       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 07 21:56:02.596 E ns/openshift-sdn pod/sdn-controller-2pfhb node/ip-10-0-136-217.ec2.internal container/sdn-controller container exited with code 2 (Error): I0407 21:40:19.357773       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 07 21:56:05.699 E ns/openshift-etcd pod/etcd-ip-10-0-136-217.ec2.internal node/ip-10-0-136-217.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-136-217.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-136-217.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-07T21:33:01.602Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:33:01.603Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-136-217.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-136-217.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-07T21:33:01.604Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.136.217:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.136.217:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-07T21:33:01.610Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-07T21:33:01.610Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:33:01.610Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-07T21:33:02.608Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.136.217:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.136.217:9978: connect: connection refused\". Reconnecting..."}\n
Apr 07 21:56:12.178 E ns/e2e-k8s-sig-apps-job-upgrade-3905 pod/foo-62lrt node/ip-10-0-153-218.ec2.internal container/c container exited with code 137 (Error): 
Apr 07 21:56:12.989 E ns/openshift-machine-config-operator pod/machine-config-daemon-hd6lr node/ip-10-0-136-217.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 21:56:27.203 E ns/e2e-k8s-service-lb-available-4592 pod/service-test-dzwm8 node/ip-10-0-153-218.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 07 21:57:13.041 E kube-apiserver failed contacting the API: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=51136&timeout=9m13s&timeoutSeconds=553&watch=true: dial tcp 34.235.7.175:6443: connect: connection refused
Apr 07 21:57:13.279 E kube-apiserver Kube API started failing: Get https://api.ci-op-zvgsjvdr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 52.0.244.26:6443: connect: connection refused
Apr 07 21:57:14.246 E kube-apiserver Kube API is not responding to GET requests
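
These three entries come from the monitor repeatedly issuing GETs against the external API endpoint and logging state transitions; the brief connection-refused windows at 34.235.7.175 and 52.0.244.26 most likely correspond to the load balancer pointing at an apiserver instance that was restarting. A toy version of such a reachability poller (the endpoint, interval, and log format are placeholders, not the openshift-tests implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder endpoint; the real monitor targets https://api.<cluster-domain>:6443/...
        const url = "https://api.example.com:6443/api/v1/namespaces/kube-system?timeout=5s"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // We only care about TCP/TLS reachability here, so certificate verification
            // and authentication are deliberately ignored (do not do this in real clients).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        healthy := true

        // Runs until interrupted, printing only the edges between reachable and unreachable.
        for range time.Tick(time.Second) {
            resp, err := client.Get(url)
            ok := err == nil
            if ok {
                resp.Body.Close()
            }
            switch {
            case healthy && !ok:
                fmt.Printf("%s E Kube API started failing: %v\n", time.Now().Format(time.RFC3339), err)
            case !healthy && ok:
                fmt.Printf("%s I Kube API started responding to GET requests\n", time.Now().Format(time.RFC3339))
            }
            healthy = ok
        }
    }
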
Apr 07 21:57:21.813 E ns/openshift-machine-api pod/machine-api-controllers-676cd8b99f-fdjfh node/ip-10-0-136-217.ec2.internal container/nodelink-controller container exited with code 255 (Error): 
Apr 07 21:57:21.863 E ns/openshift-monitoring pod/cluster-monitoring-operator-865689c7c8-p26x8 node/ip-10-0-136-217.ec2.internal container/cluster-monitoring-operator container exited with code 1 (Error): W0407 21:57:20.944601       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\n
Apr 07 21:58:15.172 E ns/openshift-cluster-node-tuning-operator pod/tuned-bvpxd node/ip-10-0-153-218.ec2.internal container/tuned container exited with code 143 (Error): ing profile: openshift-node\n2020-04-07 21:50:07,763 INFO     tuned.daemon.daemon: starting tuning\n2020-04-07 21:50:07,765 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-07 21:50:07,766 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-07 21:50:07,769 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-07 21:50:07,772 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-07 21:50:07,774 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-07 21:50:07,779 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-07 21:50:07,793 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0407 21:53:39.815109   74642 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0407 21:53:39.815166   74642 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0407 21:53:39.819488   74642 reflector.go:402] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574: watch of *v1.Profile ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574: Unexpected watch close - watch lasted less than a second and no items received\nI0407 21:53:39.905402   74642 tuned.go:486] profile "ip-10-0-153-218.ec2.internal" changed, tuned profile requested: openshift-node\nI0407 21:53:40.413540   74642 tuned.go:392] getting recommended profile...\nI0407 21:53:40.534043   74642 tuned.go:428] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0407 21:56:31.582611   74642 tuned.go:114] received signal: terminated\nI0407 21:56:31.582681   74642 tuned.go:326] sending TERM to PID 74908\n2020-04-07 21:56:31,582 INFO     tuned.daemon.controller: terminating controller\n2020-04-07 21:56:31,582 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 07 21:58:15.186 E ns/openshift-monitoring pod/node-exporter-6zms9 node/ip-10-0-153-218.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:37:03Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:37:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:58:15.216 E ns/openshift-sdn pod/ovs-w454n node/ip-10-0-153-218.ec2.internal container/openvswitch container exited with code 1 (Error): : deleted interface veth638cbc1d on port 31\n2020-04-07T21:55:43.687Z|00173|connmgr|INFO|br0<->unix#943: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:55:43.733Z|00174|connmgr|INFO|br0<->unix#946: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:55:43.763Z|00175|bridge|INFO|bridge br0: deleted interface veth62eb4917 on port 35\n2020-04-07T21:55:43.814Z|00176|connmgr|INFO|br0<->unix#949: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:55:43.855Z|00177|connmgr|INFO|br0<->unix#952: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:55:43.885Z|00178|bridge|INFO|bridge br0: deleted interface vethe197056c on port 30\n2020-04-07T21:55:43.869Z|00009|jsonrpc|WARN|unix#813: receive error: Connection reset by peer\n2020-04-07T21:55:43.869Z|00010|reconnect|WARN|unix#813: connection dropped (Connection reset by peer)\n2020-04-07T21:56:11.669Z|00179|connmgr|INFO|br0<->unix#976: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:56:11.697Z|00180|connmgr|INFO|br0<->unix#979: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:56:11.718Z|00181|bridge|INFO|bridge br0: deleted interface vethe91d2fdb on port 46\n2020-04-07T21:56:24.750Z|00011|jsonrpc|WARN|unix#850: receive error: Connection reset by peer\n2020-04-07T21:56:24.750Z|00012|reconnect|WARN|unix#850: connection dropped (Connection reset by peer)\n2020-04-07T21:56:26.883Z|00182|connmgr|INFO|br0<->unix#995: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:56:26.916Z|00183|connmgr|INFO|br0<->unix#998: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:56:26.940Z|00184|bridge|INFO|bridge br0: deleted interface veth3e21a397 on port 43\n2020-04-07T21:56:28.622Z|00185|connmgr|INFO|br0<->unix#1001: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:56:28.651Z|00186|connmgr|INFO|br0<->unix#1004: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:56:28.672Z|00187|bridge|INFO|bridge br0: deleted interface veth2f3d324b on port 33\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 07 21:58:15.247 E ns/openshift-multus pod/multus-rbrdb node/ip-10-0-153-218.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:58:15.281 E ns/openshift-machine-config-operator pod/machine-config-daemon-mmjsg node/ip-10-0-153-218.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:58:48.408 E ns/openshift-machine-config-operator pod/machine-config-daemon-mmjsg node/ip-10-0-153-218.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 21:59:19.797 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-145-180.ec2.internal" not ready since 2020-04-07 21:58:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-145-180.ec2.internal members are unhealthy,  members are unknown
Apr 07 21:59:36.300 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-apiserver container exited with code 1 (Error):  grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\nI0407 21:57:12.735509       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0407 21:57:12.735533       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\nW0407 21:57:12.735815       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\nI0407 21:57:12.735878       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0407 21:57:12.735979       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\nI0407 21:57:12.736085       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0407 21:57:12.736089       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\nW0407 21:57:12.736187       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.145.180:2379: connect: connection refused". Reconnecting...\n
Apr 07 21:59:36.300 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): W0407 21:37:55.110139       1 cmd.go:200] Using insecure, self-signed certificates\nI0407 21:37:55.110638       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1586295475 cert, and key in /tmp/serving-cert-468269580/serving-signer.crt, /tmp/serving-cert-468269580/serving-signer.key\nI0407 21:37:56.485747       1 observer_polling.go:155] Starting file observer\nI0407 21:37:59.863706       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\n
Apr 07 21:59:36.300 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0407 21:37:55.619053       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 07 21:59:36.300 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:57:08.485055       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:57:08.486136       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0407 21:57:10.678664       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:57:10.679190       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 07 21:59:36.363 E ns/openshift-controller-manager pod/controller-manager-btc7r node/ip-10-0-145-180.ec2.internal container/controller-manager container exited with code 1 (Error): 1:54:49.291268       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0407 21:54:49.502966       1 docker_registry_service.go:154] caches synced\nI0407 21:54:49.502966       1 create_dockercfg_secrets.go:218] urls found\nI0407 21:54:49.503028       1 create_dockercfg_secrets.go:224] caches synced\nI0407 21:54:49.503171       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.97.250:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.97.250:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0407 21:54:49.564147       1 build_controller.go:474] Starting build controller\nI0407 21:54:49.564172       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nI0407 21:54:49.599932       1 deleted_token_secrets.go:69] caches synced\nI0407 21:54:49.599949       1 deleted_dockercfg_secrets.go:74] caches synced\nW0407 21:56:31.464447       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 323; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:56:31.464639       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 343; INTERNAL_ERROR") has prevented the request from succeeding\nW0407 21:56:31.464742       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 321; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 07 21:59:36.388 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-controller-manager container exited with code 2 (Error): 2020-04-07 21:19:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,}\nI0407 21:57:09.554977       1 node_lifecycle_controller.go:1127] node ip-10-0-153-218.ec2.internal hasn't been updated for 40.007197124s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-04-07 21:55:15 +0000 UTC,LastTransitionTime:2020-04-07 21:19:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,}\nI0407 21:57:09.572705       1 controller_utils.go:182] Recording status change NodeNotReady event message for node ip-10-0-153-218.ec2.internal\nI0407 21:57:09.572762       1 controller_utils.go:122] Update ready status of pods on node [ip-10-0-153-218.ec2.internal]\nI0407 21:57:09.572826       1 controller_utils.go:139] Updating ready status of pod node-ca-bnjg4 to false\nI0407 21:57:09.573435       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-153-218.ec2.internal", UID:"8e2c36a4-6961-4de7-8740-e894e8f69e72", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node ip-10-0-153-218.ec2.internal status is now: NodeNotReady\nI0407 21:57:09.591229       1 controller_utils.go:139] Updating ready status of pod tuned-bvpxd to false\nI0407 21:57:09.611677       1 controller_utils.go:139] Updating ready status of pod multus-rbrdb to false\nI0407 21:57:09.650240       1 controller_utils.go:139] Updating ready status of pod node-exporter-6zms9 to false\nI0407 21:57:09.683876       1 controller_utils.go:139] Updating ready status of pod ds1-5df7j to false\nI0407 21:57:09.730780       1 controller_utils.go:139] Updating ready status of pod ovs-w454n to false\nI0407 21:57:09.762277       1 controller_utils.go:139] Updating ready status of pod machine-config-daemon-mmjsg to false\nI0407 21:57:09.814377       1 controller_utils.go:139] Updating ready status of pod dns-default-6xggk to false\nI0407 21:57:09.870697       1 controller_utils.go:139] Updating ready status of pod sdn-kc6qm to false\n
Apr 07 21:59:36.388 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 4455       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:56:35.625019       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:56:44.742992       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:56:44.743388       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:56:45.640096       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:56:45.640612       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:56:54.769407       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:56:54.770005       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:56:55.650616       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:56:55.651063       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:57:04.782326       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:57:04.782891       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0407 21:57:05.669048       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0407 21:57:05.669876       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 07 21:59:36.388 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): utSeconds=496&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:57:49.339202       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=47353&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:57:49.340489       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=47660&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:57:49.341673       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=36557&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:57:49.348715       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=43559&timeout=8m38s&timeoutSeconds=518&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0407 21:57:49.352228       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=43557&timeout=6m5s&timeoutSeconds=365&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0407 21:57:49.782966       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0407 21:57:49.783021       1 policy_controller.go:94] leaderelection lost\nI0407 21:57:49.790236       1 resource_quota_controller.go:290] Shutting down resource quota controller\n
Apr 07 21:59:36.420 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-scheduler container exited with code 2 (Error): 't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:57:03.446743       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-82474: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nE0407 21:57:03.484905       1 factory.go:503] pod: openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-82474 is already present in the active queue\nI0407 21:57:03.508208       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-82474: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:57:07.652970       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-647484c4c4-ggxc9: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:57:08.257263       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-82474: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0407 21:57:10.668001       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7b6dcd6d8-82474: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 07 21:59:36.420 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:56:53.424041       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:56:53.424183       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:56:55.457284       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:56:55.457421       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:56:57.478051       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:56:57.478315       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:56:59.494302       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:56:59.494335       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:01.518422       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:01.518866       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:03.565640       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:03.565772       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:05.604329       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:05.604601       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:07.616486       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:07.616519       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:09.667052       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:09.667083       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0407 21:57:11.682172       1 certsync_controller.go:65] Syncing configmaps: []\nI0407 21:57:11.682345       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 07 21:59:36.434 E ns/openshift-monitoring pod/node-exporter-gl56m node/ip-10-0-145-180.ec2.internal container/node-exporter container exited with code 143 (Error): -07T21:38:05Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-07T21:38:05Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 07 21:59:36.447 E ns/openshift-cluster-node-tuning-operator pod/tuned-47vnp node/ip-10-0-145-180.ec2.internal container/tuned container exited with code 143 (Error):   66716 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:52:35.941244921 +0000 UTC m=+871.586246224) (total time: 39.029151785s):\nTrace[1006933274]: [39.029151785s] [39.029151785s] END\nE0407 21:53:14.970542   66716 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:54:26.854429   66716 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:53:47.81644957 +0000 UTC m=+943.461450825) (total time: 39.03795061s):\nTrace[629431445]: [39.03795061s] [39.03795061s] END\nE0407 21:54:26.854453   66716 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:55:56.252425   66716 trace.go:116] Trace[436340495]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578 (started: 2020-04-07 21:55:17.22717684 +0000 UTC m=+1032.872177968) (total time: 39.025205433s):\nTrace[436340495]: [39.025205433s] [39.025205433s] END\nE0407 21:55:56.252457   66716 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:578: Failed to list *v1.Tuned: Timeout: Too large resource version: 43559, current: 42751\nI0407 21:56:42.333119   66716 tuned.go:486] profile "ip-10-0-145-180.ec2.internal" changed, tuned profile requested: openshift-node\nI0407 21:56:42.377002   66716 tuned.go:392] getting recommended profile...\nI0407 21:56:42.388822   66716 tuned.go:486] profile "ip-10-0-145-180.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0407 21:56:42.567536   66716 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 07 21:59:36.480 E ns/openshift-sdn pod/sdn-controller-s8s7w node/ip-10-0-145-180.ec2.internal container/sdn-controller container exited with code 2 (Error): I0407 21:40:31.381792       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0407 21:51:16.024935       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"a201c43b-b77d-4b9a-91f7-c477a79e8a77", ResourceVersion:"44373", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721889333, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-145-180\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-07T21:51:16Z\",\"renewTime\":\"2020-04-07T21:51:16Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0004a9960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0004a9a20)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-145-180 became leader'\nI0407 21:51:16.025160       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0407 21:51:16.038477       1 master.go:51] Initializing SDN master\nI0407 21:51:16.062643       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 07 21:59:36.498 E ns/openshift-multus pod/multus-29wvq node/ip-10-0-145-180.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 07 21:59:36.518 E ns/openshift-sdn pod/ovs-kgpkb node/ip-10-0-145-180.ec2.internal container/openvswitch container exited with code 143 (Error): ort 69\n2020-04-07T21:56:56.717Z|00178|connmgr|INFO|br0<->unix#928: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:56:56.814Z|00179|connmgr|INFO|br0<->unix#931: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:56:56.843Z|00180|bridge|INFO|bridge br0: deleted interface vethb4fd60e1 on port 55\n2020-04-07T21:56:58.184Z|00181|connmgr|INFO|br0<->unix#934: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:56:58.229Z|00182|connmgr|INFO|br0<->unix#937: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:56:58.272Z|00183|bridge|INFO|bridge br0: deleted interface veth20fd124e on port 59\n2020-04-07T21:56:59.437Z|00184|bridge|INFO|bridge br0: added interface veth5d299139 on port 70\n2020-04-07T21:56:59.472Z|00185|connmgr|INFO|br0<->unix#941: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:56:59.522Z|00186|connmgr|INFO|br0<->unix#945: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07T21:56:59.524Z|00187|connmgr|INFO|br0<->unix#947: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:57:03.568Z|00188|connmgr|INFO|br0<->unix#955: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:57:03.614Z|00189|connmgr|INFO|br0<->unix#958: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:57:03.659Z|00190|bridge|INFO|bridge br0: deleted interface veth5d299139 on port 70\n2020-04-07T21:57:06.242Z|00191|connmgr|INFO|br0<->unix#964: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:57:06.288Z|00192|connmgr|INFO|br0<->unix#967: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-07T21:57:06.356Z|00193|bridge|INFO|bridge br0: deleted interface vetha196438d on port 3\n2020-04-07T21:57:09.878Z|00194|bridge|INFO|bridge br0: added interface veth96c5b169 on port 71\n2020-04-07T21:57:10.018Z|00195|connmgr|INFO|br0<->unix#971: 5 flow_mods in the last 0 s (5 adds)\n2020-04-07T21:57:10.135Z|00196|connmgr|INFO|br0<->unix#976: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-07T21:57:10.150Z|00197|connmgr|INFO|br0<->unix#978: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-07 21:57:12 info: Saving flows ...\nTerminated\n
Apr 07 21:59:36.575 E ns/openshift-machine-config-operator pod/machine-config-daemon-dg5h9 node/ip-10-0-145-180.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 07 21:59:36.594 E ns/openshift-machine-config-operator pod/machine-config-server-ghrp9 node/ip-10-0-145-180.ec2.internal container/machine-config-server container exited with code 2 (Error): I0407 21:49:31.497181       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0407 21:49:31.498974       1 api.go:51] Launching server on :22624\nI0407 21:49:31.499141       1 api.go:51] Launching server on :22623\n
Apr 07 21:59:36.611 E ns/openshift-kube-scheduler pod/revision-pruner-9-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/pruner init container exited with code 2 (Error): 
Apr 07 21:59:36.611 E ns/openshift-kube-scheduler pod/revision-pruner-9-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal reason/Failed (): 
Apr 07 21:59:36.611 E ns/openshift-kube-scheduler pod/revision-pruner-9-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/pruner container exited with code 2 (Error): 
Apr 07 21:59:36.625 E ns/openshift-multus pod/multus-admission-controller-v7rnj node/ip-10-0-145-180.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 07 21:59:39.683 E ns/openshift-etcd pod/etcd-ip-10-0-145-180.ec2.internal node/ip-10-0-145-180.ec2.internal container/etcd-metrics container exited with code 2 (Error): h = false, crl-file = "}\n{"level":"info","ts":"2020-04-07T21:34:04.839Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:34:04.840Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-145-180.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-145-180.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-07T21:34:04.846Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.145.180:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-07T21:34:04.847Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-07T21:34:04.847Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-07T21:34:04.848Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-07T21:34:05.847Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.145.180:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-07T21:34:07.212Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.145.180:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.145.180:9978: connect: connection refused\". Reconnecting..."}\n
Apr 07 21:59:46.779 E ns/openshift-machine-config-operator pod/machine-config-daemon-dg5h9 node/ip-10-0-145-180.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 07 22:00:28.436 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
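Many of the error-level event excerpts above reduce to the same symptoms during the rolling upgrade (connection refused to the apiservers, watches closing early, containers exiting while nodes drain), so a quick per-component tally can help with triage. The sketch below is an illustrative helper only, not part of openshift-tests: it assumes the event lines have been saved verbatim to a local file (the name monitor-events.txt is a placeholder) in the `Apr 07 21:56:05.699 E <locator> <message>` shape shown in this report.

```go
// tally_events.go: rough triage helper for monitor event lines like the ones above.
// Assumption: the events were copied into monitor-events.txt, one event per line,
// as "<month> <day> <time> <level> <locator> <message...>".
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	f, err := os.Open("monitor-events.txt") // hypothetical dump of the event lines
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some excerpts are very long lines
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// fields: [month day time level locator ...]; keep only error-level events.
		if len(fields) < 5 || fields[3] != "E" {
			continue
		}
		counts[fields[4]]++ // e.g. "ns/openshift-etcd", "kube-apiserver", "clusteroperator/etcd"
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Print locators from most to least frequent.
	type kv struct {
		locator string
		n       int
	}
	var sorted []kv
	for l, n := range counts {
		sorted = append(sorted, kv{l, n})
	}
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].n > sorted[j].n })
	for _, e := range sorted {
		fmt.Printf("%4d %s\n", e.n, e.locator)
	}
}
```

Running `go run tally_events.go` against such a dump prints one count per locator, which makes it easier to see whether the errors cluster on a particular namespace or on the apiserver endpoints rather than reading the excerpts one by one.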