Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-09-19 19:08
Elapsed: 1h22m
Work namespace: ci-op-4p53dwbl
Pod: 554fc7c5-faab-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (37m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
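
To re-run only this monitor test, the focus regex above can be passed to the e2e wrapper unchanged; a minimal sketch, assuming a local checkout of openshift/origin with its hack/ scripts and a reachable test cluster:

# Match the exact test name; spaces and hyphens are escaped for the ginkgo focus regex,
# and the trailing $ anchors the match so no other tests are selected.
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'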
213 error-level events were detected during this test run:

Sep 19 19:51:36.341 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-57d98456bd-zx2d6 node/ip-10-0-132-91.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): g ended with: too old resource version: 11632 (14979)\nW0919 19:46:57.992012       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 13091 (15502)\nW0919 19:46:57.992111       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10385 (14974)\nW0919 19:46:58.037454       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 11626 (14979)\nW0919 19:46:58.037604       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10376 (14974)\nW0919 19:46:58.037710       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 12598 (14976)\nW0919 19:46:58.037828       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 12571 (14975)\nW0919 19:46:58.048044       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10378 (14974)\nW0919 19:46:58.057545       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 13089 (15504)\nW0919 19:46:58.161339       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 12176 (15507)\nW0919 19:46:58.424475       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 13087 (15509)\nI0919 19:51:35.734680       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 19:51:35.734757       1 leaderelection.go:65] leaderelection lost\n
Sep 19 19:51:47.393 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6cd9c58688-4d7px node/ip-10-0-132-91.ec2.internal container=kube-scheduler-operator-container container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:53:14.728 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6b5dd676c9-n8dsr node/ip-10-0-132-91.ec2.internal container=openshift-apiserver-operator container exited with code 255 (Error): 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14959 (15383)\nW0919 19:46:58.237898       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 13092 (14337)\nW0919 19:46:58.238023       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14959 (15385)\nW0919 19:46:58.259437       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10375 (14300)\nW0919 19:46:58.259525       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14959 (15385)\nW0919 19:46:58.279193       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 12571 (14301)\nW0919 19:46:58.290017       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.DaemonSet ended with: too old resource version: 13339 (14303)\nW0919 19:46:58.290316       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 11246 (14301)\nW0919 19:46:58.290994       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10378 (14300)\nW0919 19:46:58.298679       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 13333 (14301)\nW0919 19:46:58.299704       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 13091 (14337)\nI0919 19:53:13.776849       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 19:53:13.776924       1 leaderelection.go:65] leaderelection lost\n
Sep 19 19:53:27.666 E ns/openshift-machine-api pod/machine-api-operator-59b4994479-rpvb9 node/ip-10-0-132-91.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 19 19:54:47.161 E ns/openshift-apiserver pod/apiserver-xrx84 node/ip-10-0-154-177.ec2.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:55:07.820 E ns/openshift-machine-api pod/machine-api-controllers-75df6f4c8-zlgbw node/ip-10-0-140-69.ec2.internal container=controller-manager container exited with code 1 (Error): 
Sep 19 19:55:07.820 E ns/openshift-machine-api pod/machine-api-controllers-75df6f4c8-zlgbw node/ip-10-0-140-69.ec2.internal container=nodelink-controller container exited with code 2 (Error): 
Sep 19 19:55:28.046 E ns/openshift-cluster-machine-approver pod/machine-approver-84875498d8-5xcq5 node/ip-10-0-132-91.ec2.internal container=machine-approver-controller container exited with code 2 (Error): ded\nI0919 19:40:06.967573       1 main.go:110] CSR csr-2k4kf is already approved\nI0919 19:40:06.967583       1 main.go:107] CSR csr-56ntx added\nI0919 19:40:06.986059       1 main.go:147] CSR csr-56ntx approved\nI0919 19:40:19.497535       1 main.go:107] CSR csr-lvqkv added\nI0919 19:40:19.518555       1 main.go:147] CSR csr-lvqkv approved\nI0919 19:40:19.628376       1 main.go:107] CSR csr-nbs95 added\nI0919 19:40:19.644947       1 main.go:147] CSR csr-nbs95 approved\nI0919 19:40:20.539041       1 main.go:107] CSR csr-lm88p added\nI0919 19:40:20.550454       1 main.go:147] CSR csr-lm88p approved\nI0919 19:46:57.764673       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 19:46:57.765242       1 reflector.go:322] github.com/openshift/cluster-machine-approver/main.go:185: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=10835&timeoutSeconds=582&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 19:46:58.765955       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 19:46:59.766727       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 19:47:00.767628       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 19 19:55:44.653 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-57ff559bdf-krq7q node/ip-10-0-140-69.ec2.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:55:48.893 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator cluster-autoscaler is still updating\n* Cluster operator monitoring is still updating\n* Cluster operator node-tuning is still updating\n* Cluster operator service-ca is still updating\n* Cluster operator service-catalog-apiserver is still updating\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (107 of 350)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (185 of 350)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (199 of 350)\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (133 of 350)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (282 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (217 of 350)
Sep 19 19:56:03.463 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-3.ec2.internal container=prometheus container exited with code 1 (Error): 
Sep 19 19:56:03.463 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-3.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Sep 19 19:56:03.463 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-3.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 
Sep 19 19:56:03.463 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-3.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Sep 19 19:56:09.853 E ns/openshift-monitoring pod/prometheus-operator-89569dd85-hhw5t node/ip-10-0-155-3.ec2.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:56:14.744 E ns/openshift-cluster-node-tuning-operator pod/tuned-wj2rc node/ip-10-0-140-69.ec2.internal container=tuned container exited with code 143 (Error): 55:29.511432   18776 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:55:29.513209   18776 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:55:29.681473   18776 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:55:32.052145   18776 openshift-tuned.go:435] Pod (openshift-machine-api/cluster-autoscaler-operator-6dfd97c5c4-jhwrs) labels changed node wide: true\nI0919 19:55:34.512651   18776 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:55:34.514668   18776 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:55:34.707517   18776 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:55:50.436629   18776 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-57ff559bdf-krq7q) labels changed node wide: true\nI0919 19:55:54.511461   18776 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:55:54.513102   18776 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:55:54.680010   18776 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:06.717524   18776 openshift-tuned.go:435] Pod (openshift-apiserver/apiserver-6z2cp) labels changed node wide: true\nI0919 19:56:09.511363   18776 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:09.513084   18776 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:09.644997   18776 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:10.436795   18776 openshift-tuned.go:435] Pod (openshift-cluster-storage-operator/cluster-storage-operator-7948b6bff5-2jn59) labels changed node wide: true\n
Sep 19 19:56:23.627 E ns/openshift-marketplace pod/community-operators-75566bd5c7-265nh node/ip-10-0-131-93.ec2.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:56:25.678 E ns/openshift-console pod/downloads-5c59f467dc-qd2g8 node/ip-10-0-154-177.ec2.internal container=download-server container exited with code 137 (Error): 
Sep 19 19:56:38.098 E ns/openshift-ingress pod/router-default-5d8ddbf59f-45qr8 node/ip-10-0-134-91.ec2.internal container=router container exited with code 2 (Error): 19:55:34.000073       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:55:39.067276       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:55:44.063087       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:55:49.055236       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:55:54.074779       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:55:59.053670       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:04.057802       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:09.108835       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:14.100890       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:19.073848       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:24.086649       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:29.067369       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 19:56:34.070972       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 19 19:56:45.778 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-3.ec2.internal container=prometheus container exited with code 1 (Error): 
Sep 19 19:56:59.815 E ns/openshift-marketplace pod/redhat-operators-59f99d94-cvg5k node/ip-10-0-131-93.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 19:57:07.844 E ns/openshift-marketplace pod/certified-operators-5bbb48c7cd-j5kcx node/ip-10-0-131-93.ec2.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 19:57:09.281 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-91.ec2.internal container=prometheus container exited with code 1 (Error): 
Sep 19 19:57:09.899 E ns/openshift-controller-manager pod/controller-manager-298bx node/ip-10-0-132-91.ec2.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 19:57:10.061 E ns/openshift-console pod/downloads-5c59f467dc-zbhw2 node/ip-10-0-132-91.ec2.internal container=download-server container exited with code 137 (Error): 
Sep 19 19:57:12.863 E ns/openshift-monitoring pod/node-exporter-jx8nf node/ip-10-0-132-91.ec2.internal container=node-exporter container exited with code 143 (Error): 
Sep 19 19:57:15.665 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6d85b897cc-x9zz6 node/ip-10-0-132-91.ec2.internal container=operator container exited with code 2 (Error): /v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0919 19:55:44.045042       1 wrap.go:47] GET /metrics: (9.606196ms) 200 [Prometheus/2.7.2 10.128.2.7:42714]\nI0919 19:55:44.046216       1 wrap.go:47] GET /metrics: (9.303965ms) 200 [Prometheus/2.7.2 10.129.2.8:46402]\nI0919 19:55:53.614299       1 request.go:530] Throttling request took 85.761566ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0919 19:55:53.817901       1 request.go:530] Throttling request took 195.963801ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0919 19:55:53.913358       1 status_controller.go:160] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T19:34:00Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T19:55:53Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T19:34:40Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T19:34:00Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0919 19:55:53.928546       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"f262c96f-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from True to False ("")\n
Sep 19 19:57:15.873 E ns/openshift-marketplace pod/community-operators-69b84d5d85-6vkgv node/ip-10-0-131-93.ec2.internal container=community-operators container exited with code 2 (Error): 
Sep 19 19:57:17.471 E ns/openshift-service-ca pod/configmap-cabundle-injector-6764848b9d-75qzl node/ip-10-0-132-91.ec2.internal container=configmap-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:57:21.661 E ns/openshift-cluster-node-tuning-operator pod/tuned-gmsgg node/ip-10-0-132-91.ec2.internal container=tuned container exited with code 143 (Error): 31 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:06.101231   16431 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:06.232571   16431 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:10.320913   16431 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-132-91.ec2.internal) labels changed node wide: false\nI0919 19:56:17.217364   16431 openshift-tuned.go:435] Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-132-91.ec2.internal) labels changed node wide: false\nI0919 19:56:26.092159   16431 openshift-tuned.go:435] Pod (openshift-console-operator/console-operator-57b75f5867-ctmj2) labels changed node wide: true\nI0919 19:56:26.099171   16431 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:26.101243   16431 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:26.288105   16431 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:57:02.987871   16431 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-77d598d7d8-5s4pk) labels changed node wide: true\nI0919 19:57:06.099285   16431 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:57:06.102595   16431 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:57:06.220468   16431 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0919 19:57:08.281063   16431 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=9, ErrCode=NO_ERROR, debug=""\nE0919 19:57:08.285246   16431 openshift-tuned.go:720] Pod event watch channel closed.\nI0919 19:57:08.285268   16431 openshift-tuned.go:722] Increasing resyncPeriod to 120\n
Sep 19 19:57:25.660 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7b45d6d55b-tq2bk node/ip-10-0-132-91.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:57:28.860 E ns/openshift-service-ca pod/service-serving-cert-signer-865648474d-f89lv node/ip-10-0-132-91.ec2.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Sep 19 19:57:33.191 E ns/openshift-monitoring pod/node-exporter-ggqkx node/ip-10-0-154-177.ec2.internal container=node-exporter container exited with code 143 (Error): 
Sep 19 19:57:39.172 E ns/openshift-cluster-node-tuning-operator pod/tuned-c7pt5 node/ip-10-0-154-177.ec2.internal container=tuned container exited with code 143 (Error): ne) match.  Label changes will not trigger profile reload.\nI0919 19:56:24.562632   17817 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-5db5547b4d-lgx9c) labels changed node wide: true\nI0919 19:56:26.476211   17817 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:26.478240   17817 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:26.608198   17817 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:26.708688   17817 openshift-tuned.go:435] Pod (openshift-console/downloads-5c59f467dc-qd2g8) labels changed node wide: true\nI0919 19:56:31.476229   17817 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:31.477827   17817 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:31.596746   17817 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:35.710146   17817 openshift-tuned.go:435] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-154-177.ec2.internal) labels changed node wide: true\nI0919 19:56:36.479257   17817 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:36.481040   17817 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:36.634680   17817 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 19:56:58.467460   17817 openshift-tuned.go:691] Lowering resyncPeriod to 56\nE0919 19:57:08.278345   17817 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""\nE0919 19:57:08.280905   17817 openshift-tuned.go:720] Pod event watch channel closed.\nI0919 19:57:08.281003   17817 openshift-tuned.go:722] Increasing resyncPeriod to 112\n
Sep 19 19:57:40.893 E ns/openshift-monitoring pod/node-exporter-p55pb node/ip-10-0-155-3.ec2.internal container=node-exporter container exited with code 143 (Error): 
Sep 19 19:57:46.129 E ns/openshift-monitoring pod/node-exporter-jvxwq node/ip-10-0-140-69.ec2.internal container=node-exporter container exited with code 143 (Error): 
Sep 19 19:57:46.364 E ns/openshift-cluster-node-tuning-operator pod/tuned-66nwv node/ip-10-0-134-91.ec2.internal container=tuned container exited with code 143 (Error): tive and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 19:56:29.666878    2418 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-lf9qn) labels changed node wide: true\nI0919 19:56:29.839875    2418 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:29.842867    2418 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:30.004052    2418 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 19:56:39.108003    2418 openshift-tuned.go:435] Pod (openshift-ingress/router-default-5d8ddbf59f-45qr8) labels changed node wide: true\nI0919 19:56:39.839843    2418 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:39.841472    2418 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:39.957440    2418 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 19:56:54.624405    2418 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0919 19:56:54.839799    2418 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 19:56:54.841476    2418 openshift-tuned.go:326] Getting recommended profile...\nI0919 19:56:54.974151    2418 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 19:57:04.847279    2418 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nE0919 19:57:08.281580    2418 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""\nE0919 19:57:08.281855    2418 openshift-tuned.go:720] Pod event watch channel closed.\nI0919 19:57:08.281881    2418 openshift-tuned.go:722] Increasing resyncPeriod to 102\n
Sep 19 19:57:47.460 E ns/openshift-operator-lifecycle-manager pod/packageserver-659d6fdd7c-xp4qq node/ip-10-0-132-91.ec2.internal container=packageserver container exited with code 137 (Error): 4       1 wrap.go:47] GET /: (131.295µs) 200 [Go-http-client/2.0 10.128.0.1:46964]\nI0919 19:57:08.255599       1 wrap.go:47] GET /: (136.32µs) 200 [Go-http-client/2.0 10.128.0.1:46964]\nE0919 19:57:08.276567       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=227, ErrCode=NO_ERROR, debug=""\nI0919 19:57:08.276859       1 reflector.go:337] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: Watch close - *v1alpha1.CatalogSource total 52 items received\ntime="2020-09-19T19:57:08Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T19:57:08Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\nI0919 19:57:09.942461       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-09-19T19:57:14Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T19:57:14Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T19:57:15Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-09-19T19:57:15Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-09-19T19:57:16Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-09-19T19:57:16Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Sep 19 19:57:55.449 E ns/openshift-image-registry pod/node-ca-bv52s node/ip-10-0-134-91.ec2.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:57:57.354 E ns/openshift-operator-lifecycle-manager pod/packageserver-659d6fdd7c-ctbrv node/ip-10-0-154-177.ec2.internal container=packageserver container exited with code 137 (Error):  TLS handshake error from 10.128.0.1:47860: remote error: tls: bad certificate\nI0919 19:57:23.160622       1 wrap.go:47] GET /healthz: (145.686µs) 200 [kube-probe/1.13+ 10.128.0.1:47864]\nI0919 19:57:23.521427       1 log.go:172] http: TLS handshake error from 10.129.0.1:51320: remote error: tls: bad certificate\nI0919 19:57:23.923282       1 log.go:172] http: TLS handshake error from 10.129.0.1:51326: remote error: tls: bad certificate\nI0919 19:57:24.722604       1 log.go:172] http: TLS handshake error from 10.129.0.1:51332: remote error: tls: bad certificate\nI0919 19:57:25.123402       1 log.go:172] http: TLS handshake error from 10.129.0.1:51334: remote error: tls: bad certificate\nI0919 19:57:26.129285       1 wrap.go:47] GET /: (220.245µs) 200 [Go-http-client/2.0 10.129.0.1:49562]\nI0919 19:57:26.134144       1 wrap.go:47] GET /: (4.3929ms) 200 [Go-http-client/2.0 10.129.0.1:49562]\nI0919 19:57:26.134726       1 wrap.go:47] GET /: (142.313µs) 200 [Go-http-client/2.0 10.130.0.1:58074]\nI0919 19:57:26.134868       1 wrap.go:47] GET /: (106.536µs) 200 [Go-http-client/2.0 10.128.0.1:44156]\nI0919 19:57:26.134743       1 wrap.go:47] GET /: (123.841µs) 200 [Go-http-client/2.0 10.128.0.1:44156]\nI0919 19:57:26.321683       1 log.go:172] http: TLS handshake error from 10.129.0.1:51346: remote error: tls: bad certificate\nI0919 19:57:26.664530       1 wrap.go:47] GET /: (11.041889ms) 200 [Go-http-client/2.0 10.129.0.1:49562]\nI0919 19:57:26.668556       1 wrap.go:47] GET /: (16.432675ms) 200 [Go-http-client/2.0 10.129.0.1:49562]\nI0919 19:57:26.671394       1 wrap.go:47] GET /: (120.938µs) 200 [Go-http-client/2.0 10.128.0.1:44156]\nI0919 19:57:26.671620       1 wrap.go:47] GET /: (16.051055ms) 200 [Go-http-client/2.0 10.130.0.1:58074]\nI0919 19:57:26.671742       1 wrap.go:47] GET /: (101.606µs) 200 [Go-http-client/2.0 10.128.0.1:44156]\nI0919 19:57:26.673812       1 wrap.go:47] GET /: (19.230251ms) 200 [Go-http-client/2.0 10.130.0.1:58074]\nI0919 19:57:26.719396       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Sep 19 19:58:12.344 E ns/openshift-controller-manager pod/controller-manager-xq7hl node/ip-10-0-154-177.ec2.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 19:59:06.394 E ns/openshift-controller-manager pod/controller-manager-55549 node/ip-10-0-140-69.ec2.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 19:59:16.363 E ns/openshift-console pod/console-7bb65f8f6-nkskl node/ip-10-0-132-91.ec2.internal container=console container exited with code 2 (Error): 2020/09/19 19:44:04 cmd/main: cookies are secure!\n2020/09/19 19:44:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 19:44:14 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com: EOF\n2020/09/19 19:44:24 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com: EOF\n2020/09/19 19:44:34 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/19 19:44:34 cmd/main: using TLS\n
Sep 19 19:59:33.765 E ns/openshift-network-operator pod/network-operator-7d4d7b7fc5-jcjth node/ip-10-0-154-177.ec2.internal container=network-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:59:36.456 E ns/openshift-dns pod/dns-default-s7wlj node/ip-10-0-132-91.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 19:59:36.456 E ns/openshift-dns pod/dns-default-s7wlj node/ip-10-0-132-91.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:00:17.351 E ns/openshift-multus pod/multus-6v6g7 node/ip-10-0-131-93.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:00:26.585 E ns/openshift-sdn pod/ovs-gbf52 node/ip-10-0-132-91.ec2.internal container=openvswitch container exited with code 137 (Error): |00362|connmgr|INFO|br0<->unix#858: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T19:57:40.498Z|00363|bridge|INFO|bridge br0: deleted interface veth90f64d29 on port 36\n2020-09-19T19:57:47.362Z|00364|bridge|INFO|bridge br0: added interface vethdc82167b on port 60\n2020-09-19T19:57:47.424Z|00365|connmgr|INFO|br0<->unix#861: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:57:47.492Z|00366|connmgr|INFO|br0<->unix#864: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:57:54.222Z|00367|connmgr|INFO|br0<->unix#867: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:57:54.255Z|00368|connmgr|INFO|br0<->unix#870: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T19:57:54.281Z|00369|bridge|INFO|bridge br0: deleted interface vethdc82167b on port 60\n2020-09-19T19:59:15.890Z|00370|connmgr|INFO|br0<->unix#882: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:59:15.927Z|00371|connmgr|INFO|br0<->unix#885: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T19:59:15.961Z|00372|bridge|INFO|bridge br0: deleted interface veth398ed1c9 on port 33\n2020-09-19T19:59:30.707Z|00373|bridge|INFO|bridge br0: added interface vethf6e3b344 on port 61\n2020-09-19T19:59:30.744Z|00374|connmgr|INFO|br0<->unix#892: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:59:30.794Z|00375|connmgr|INFO|br0<->unix#896: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:59:30.796Z|00376|connmgr|INFO|br0<->unix#898: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T19:59:35.406Z|00377|connmgr|INFO|br0<->unix#901: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:59:35.442Z|00378|connmgr|INFO|br0<->unix#904: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T19:59:35.467Z|00379|bridge|INFO|bridge br0: deleted interface veth08ce3202 on port 17\n2020-09-19T19:59:52.136Z|00380|bridge|INFO|bridge br0: added interface veth0b0975a8 on port 62\n2020-09-19T19:59:52.169Z|00381|connmgr|INFO|br0<->unix#907: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:59:52.209Z|00382|connmgr|INFO|br0<->unix#910: 2 flow_mods in the last 0 s (2 deletes)\n
Sep 19 20:00:36.782 E ns/openshift-dns pod/dns-default-5hj56 node/ip-10-0-134-91.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:00:36.782 E ns/openshift-dns pod/dns-default-5hj56 node/ip-10-0-134-91.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:00:37.619 E ns/openshift-sdn pod/sdn-tspwp node/ip-10-0-132-91.ec2.internal container=sdn container exited with code 255 (Error): vswitch/db.sock: connect: connection refused\nI0919 20:00:35.779872    3082 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:00:35.779902    3082 proxier.go:346] userspace syncProxyRules took 60.529588ms\nI0919 20:00:35.791887    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:35.891927    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:35.991936    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.091937    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.191880    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.291870    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.391912    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.491868    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.591963    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:00:36.592057    3082 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0919 20:00:36.592073    3082 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Sep 19 20:01:02.300 E ns/openshift-multus pod/multus-5g4x6 node/ip-10-0-155-3.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:01:08.774 E ns/openshift-sdn pod/sdn-controller-zxfs2 node/ip-10-0-140-69.ec2.internal container=sdn-controller container exited with code 137 (Error): I0919 19:33:27.096472       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0919 19:37:40.170375       1 leaderelection.go:270] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.140.74:6443: i/o timeout\n
Sep 19 20:01:18.804 E ns/openshift-sdn pod/ovs-4vv9k node/ip-10-0-140-69.ec2.internal container=openvswitch container exited with code 137 (Error): 19T19:59:05.584Z|00340|bridge|INFO|bridge br0: deleted interface veth63c32ade on port 30\n2020-09-19T19:59:16.672Z|00341|bridge|INFO|bridge br0: added interface veth36f9dfee on port 54\n2020-09-19T19:59:16.704Z|00342|connmgr|INFO|br0<->unix#801: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:59:16.745Z|00343|connmgr|INFO|br0<->unix#804: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:00:29.330Z|00344|connmgr|INFO|br0<->unix#820: 2 flow_mods in the last 0 s (2 adds)\n2020-09-19T20:00:29.443Z|00345|connmgr|INFO|br0<->unix#826: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.470Z|00346|connmgr|INFO|br0<->unix#829: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.499Z|00347|connmgr|INFO|br0<->unix#832: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.525Z|00348|connmgr|INFO|br0<->unix#835: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.551Z|00349|connmgr|INFO|br0<->unix#838: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.578Z|00350|connmgr|INFO|br0<->unix#841: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:00:29.801Z|00351|connmgr|INFO|br0<->unix#844: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:00:29.836Z|00352|connmgr|INFO|br0<->unix#847: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:00:29.863Z|00353|connmgr|INFO|br0<->unix#850: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:00:29.889Z|00354|connmgr|INFO|br0<->unix#853: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:00:29.917Z|00355|connmgr|INFO|br0<->unix#856: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:00:29.949Z|00356|connmgr|INFO|br0<->unix#859: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:00:29.979Z|00357|connmgr|INFO|br0<->unix#862: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:00:30.008Z|00358|connmgr|INFO|br0<->unix#865: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:00:30.042Z|00359|connmgr|INFO|br0<->unix#868: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:00:30.073Z|00360|connmgr|INFO|br0<->unix#871: 1 flow_mods in the last 0 s (1 adds)\n
Sep 19 20:01:29.840 E ns/openshift-sdn pod/sdn-t8742 node/ip-10-0-140-69.ec2.internal container=sdn container exited with code 255 (Error): nvswitch/db.sock: database connection failed (Connection refused)\nI0919 20:01:28.090923   71242 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\nI0919 20:01:28.098219   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.142757   71242 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\nI0919 20:01:28.198298   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.298368   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.398292   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.498295   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.598281   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.698298   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.798283   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:01:28.798361   71242 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0919 20:01:28.798372   71242 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Sep 19 20:01:43.318 E ns/openshift-multus pod/multus-8954s node/ip-10-0-132-91.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:01:43.882 E ns/openshift-machine-api pod/cluster-autoscaler-operator-6dfd97c5c4-jhwrs node/ip-10-0-140-69.ec2.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Sep 19 20:02:02.618 E ns/openshift-sdn pod/ovs-dp82p node/ip-10-0-131-93.ec2.internal container=openvswitch container exited with code 137 (Error): es)\n2020-09-19T19:59:11.987Z|00164|bridge|INFO|bridge br0: deleted interface veth8fecbd05 on port 13\n2020-09-19T19:59:30.876Z|00165|bridge|INFO|bridge br0: added interface vethbeb49646 on port 27\n2020-09-19T19:59:30.904Z|00166|connmgr|INFO|br0<->unix#411: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:59:30.939Z|00167|connmgr|INFO|br0<->unix#414: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:00:16.548Z|00168|connmgr|INFO|br0<->unix#423: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:00:16.588Z|00169|connmgr|INFO|br0<->unix#426: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:00:16.624Z|00170|bridge|INFO|bridge br0: deleted interface veth9e29cba9 on port 3\n2020-09-19T20:00:31.713Z|00171|bridge|INFO|bridge br0: added interface veth9b4e0cc5 on port 28\n2020-09-19T20:00:31.741Z|00172|connmgr|INFO|br0<->unix#429: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T20:00:31.778Z|00173|connmgr|INFO|br0<->unix#432: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:01:54.338Z|00174|connmgr|INFO|br0<->unix#451: 2 flow_mods in the last 0 s (2 adds)\n2020-09-19T20:01:54.728Z|00175|connmgr|INFO|br0<->unix#457: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:54.757Z|00176|connmgr|INFO|br0<->unix#460: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:54.779Z|00177|connmgr|INFO|br0<->unix#463: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:54.806Z|00178|connmgr|INFO|br0<->unix#466: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:54.834Z|00179|connmgr|INFO|br0<->unix#469: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:54.857Z|00180|connmgr|INFO|br0<->unix#472: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:54.879Z|00181|connmgr|INFO|br0<->unix#475: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:54.904Z|00182|connmgr|INFO|br0<->unix#478: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:54.925Z|00183|connmgr|INFO|br0<->unix#481: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:54.948Z|00184|connmgr|INFO|br0<->unix#484: 1 flow_mods in the last 0 s (1 adds)\n
Sep 19 20:02:13.641 E ns/openshift-sdn pod/sdn-rtjzs node/ip-10-0-131-93.ec2.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:11.706742   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:11.806744   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:11.906743   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.006759   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.106745   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.206743   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.306754   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.406761   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.506745   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.606750   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:12.606839   64124 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0919 20:02:12.606852   64124 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Sep 19 20:02:34.400 E ns/openshift-multus pod/multus-p5jc9 node/ip-10-0-154-177.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:02:56.039 E ns/openshift-sdn pod/sdn-4l4f4 node/ip-10-0-134-91.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:54.519731   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:54.619793   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:54.719821   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:54.819814   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:54.919825   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.019842   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.119841   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.219830   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.319795   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.419841   34538 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:02:55.524299   34538 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0919 20:02:55.524388   34538 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Sep 19 20:03:21.094 E ns/openshift-multus pod/multus-phl47 node/ip-10-0-134-91.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:03:26.567 E ns/openshift-sdn pod/ovs-k88mc node/ip-10-0-154-177.ec2.internal container=openvswitch container exited with code 137 (Error): r|INFO|br0<->unix#1018: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T20:01:10.467Z|00429|connmgr|INFO|br0<->unix#1021: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:01:40.131Z|00430|connmgr|INFO|br0<->unix#1033: 2 flow_mods in the last 0 s (2 adds)\n2020-09-19T20:01:40.250Z|00431|connmgr|INFO|br0<->unix#1039: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.277Z|00432|connmgr|INFO|br0<->unix#1042: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.305Z|00433|connmgr|INFO|br0<->unix#1045: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.334Z|00434|connmgr|INFO|br0<->unix#1048: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.363Z|00435|connmgr|INFO|br0<->unix#1051: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.392Z|00436|connmgr|INFO|br0<->unix#1054: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.420Z|00437|connmgr|INFO|br0<->unix#1057: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.444Z|00438|connmgr|INFO|br0<->unix#1060: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:40.600Z|00439|connmgr|INFO|br0<->unix#1063: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:40.628Z|00440|connmgr|INFO|br0<->unix#1066: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:40.653Z|00441|connmgr|INFO|br0<->unix#1069: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:40.679Z|00442|connmgr|INFO|br0<->unix#1072: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:40.705Z|00443|connmgr|INFO|br0<->unix#1075: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:40.730Z|00444|connmgr|INFO|br0<->unix#1078: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:40.755Z|00445|connmgr|INFO|br0<->unix#1081: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:40.782Z|00446|connmgr|INFO|br0<->unix#1084: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:40.807Z|00447|connmgr|INFO|br0<->unix#1087: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:40.844Z|00448|connmgr|INFO|br0<->unix#1090: 1 flow_mods in the last 0 s (1 adds)\n
Sep 19 20:03:35.607 E ns/openshift-sdn pod/sdn-k9655 node/ip-10-0-154-177.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:33.964964   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.064946   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.164957   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.264956   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.367270   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.466283   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.564958   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.664998   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.764956   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.864972   68678 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:03:34.972285   68678 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0919 20:03:34.972356   68678 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Sep 19 20:04:02.265 E ns/openshift-multus pod/multus-hwpwm node/ip-10-0-140-69.ec2.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 20:04:06.725 E ns/openshift-sdn pod/ovs-qd64g node/ip-10-0-155-3.ec2.internal container=openvswitch container exited with code 137 (Error): \n2020-09-19T19:58:41.899Z|00138|bridge|INFO|bridge br0: added interface vethc79c8dc2 on port 22\n2020-09-19T19:58:41.929Z|00139|connmgr|INFO|br0<->unix#350: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T19:58:41.965Z|00140|connmgr|INFO|br0<->unix#353: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:59:59.823Z|00141|connmgr|INFO|br0<->unix#366: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T19:59:59.861Z|00142|connmgr|INFO|br0<->unix#369: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T19:59:59.892Z|00143|bridge|INFO|bridge br0: deleted interface veth258c8e0a on port 3\n2020-09-19T20:00:11.903Z|00144|bridge|INFO|bridge br0: added interface veth1750b66d on port 23\n2020-09-19T20:00:11.933Z|00145|connmgr|INFO|br0<->unix#372: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T20:00:11.970Z|00146|connmgr|INFO|br0<->unix#375: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:01:24.022Z|00147|connmgr|INFO|br0<->unix#391: 2 flow_mods in the last 0 s (2 adds)\n2020-09-19T20:01:24.114Z|00148|connmgr|INFO|br0<->unix#397: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-19T20:01:24.436Z|00149|connmgr|INFO|br0<->unix#400: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:24.467Z|00150|connmgr|INFO|br0<->unix#403: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:24.496Z|00151|connmgr|INFO|br0<->unix#406: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:24.520Z|00152|connmgr|INFO|br0<->unix#409: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:24.544Z|00153|connmgr|INFO|br0<->unix#412: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:24.567Z|00154|connmgr|INFO|br0<->unix#415: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:24.597Z|00155|connmgr|INFO|br0<->unix#418: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:24.624Z|00156|connmgr|INFO|br0<->unix#421: 1 flow_mods in the last 0 s (1 adds)\n2020-09-19T20:01:24.654Z|00157|connmgr|INFO|br0<->unix#424: 3 flow_mods in the last 0 s (3 adds)\n2020-09-19T20:01:24.682Z|00158|connmgr|INFO|br0<->unix#427: 1 flow_mods in the last 0 s (1 adds)\n
Sep 19 20:04:08.755 E ns/openshift-sdn pod/sdn-2dfxs node/ip-10-0-155-3.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:07.615555   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:07.715504   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:07.815602   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:07.915554   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.015511   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.115493   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.215553   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.315519   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.415547   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.515826   36711 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0919 20:04:08.619972   36711 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0919 20:04:08.620036   36711 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Sep 19 20:04:25.764 E ns/openshift-machine-config-operator pod/machine-config-operator-57567677cc-xbsdl node/ip-10-0-132-91.ec2.internal container=machine-config-operator container exited with code 2 (Error): 
Sep 19 20:09:17.564 E ns/openshift-machine-config-operator pod/machine-config-server-lq4lw node/ip-10-0-132-91.ec2.internal container=machine-config-server container exited with code 2 (Error): 
Sep 19 20:10:01.894 E ns/openshift-monitoring pod/prometheus-adapter-7ffc899476-m2xwd node/ip-10-0-134-91.ec2.internal container=prometheus-adapter container exited with code 2 (Error): 
Sep 19 20:10:16.934 E ns/openshift-console pod/console-69bb4d5668-qbtns node/ip-10-0-140-69.ec2.internal container=console container exited with code 2 (Error): 2020/09/19 19:58:50 cmd/main: cookies are secure!\n2020/09/19 19:58:50 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/19 19:58:50 cmd/main: using TLS\n
Sep 19 20:10:20.620 E ns/openshift-operator-lifecycle-manager pod/packageserver-688ffd847f-tvbk4 node/ip-10-0-132-91.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:10:29.737 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-93.ec2.internal container=prometheus container exited with code 1 (Error): 
Sep 19 20:10:55.395 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 20:11:02.355 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus Role "prometheus-k8s" failed: updating Role object failed: Put https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1beta1/namespaces/kube-system/roles/prometheus-k8s: read tcp 10.128.0.54:43422->172.30.0.1:443: read: connection reset by peer
Sep 19 20:11:08.049 E ns/openshift-operator-lifecycle-manager pod/packageserver-688ffd847f-njh8g node/ip-10-0-154-177.ec2.internal container=packageserver container exited with code 137 (Error): :10:36.941344       1 wrap.go:47] GET /: (201.869µs) 200 [Go-http-client/2.0 10.130.0.1:59344]\nI0919 20:10:36.943738       1 wrap.go:47] GET /: (87.012µs) 200 [Go-http-client/2.0 10.130.0.1:59344]\nI0919 20:10:37.018792       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-09-19T20:10:38Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:38Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:45Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:45Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:47Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:47Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:53Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:53Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:59Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:10:59Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Sep 19 20:11:25.395 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 20:12:14.283 E ns/openshift-monitoring pod/node-exporter-5hhl8 node/ip-10-0-134-91.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:12:14.283 E ns/openshift-monitoring pod/node-exporter-5hhl8 node/ip-10-0-134-91.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:12:14.297 E ns/openshift-cluster-node-tuning-operator pod/tuned-zqz7l node/ip-10-0-134-91.ec2.internal container=tuned container exited with code 255 (Error): g-operator/machine-config-daemon-sgbn8) labels changed node wide: true\nI0919 20:06:40.560355   31661 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:06:40.562122   31661 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:06:40.721312   31661 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:10:00.652703   31661 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-deployment-upgrade-zccvz/dp-57cc5d77b4-gtz74) labels changed node wide: true\nI0919 20:10:05.560856   31661 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:05.563075   31661 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:05.698894   31661 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:10:14.635934   31661 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-replicaset-upgrade-5hk4l/rs-lhr2j) labels changed node wide: true\nI0919 20:10:15.560354   31661 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:15.562113   31661 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:15.694592   31661 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:10:21.179192   31661 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-25p9b) labels changed node wide: true\nI0919 20:10:25.560375   31661 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:25.562060   31661 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:25.690694   31661 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:10:34.635205   31661 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-8wk66/foo-58bxg) labels changed node wide: true\n
Sep 19 20:12:14.502 E ns/openshift-image-registry pod/node-ca-pfrlp node/ip-10-0-134-91.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:12:18.663 E ns/openshift-dns pod/dns-default-qpgs8 node/ip-10-0-134-91.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T20:00:54.254Z [INFO] CoreDNS-1.3.1\n2020-09-19T20:00:54.254Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T20:00:54.254Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:12:18.663 E ns/openshift-dns pod/dns-default-qpgs8 node/ip-10-0-134-91.ec2.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (87) - No such process\n
Sep 19 20:12:19.032 E ns/openshift-sdn pod/ovs-cwcmf node/ip-10-0-134-91.ec2.internal container=openvswitch container exited with code 255 (Error): og <==\n2020-09-19T20:10:01.227Z|00124|connmgr|INFO|br0<->unix#150: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.292Z|00125|bridge|INFO|bridge br0: deleted interface veth5d31e6b7 on port 4\n2020-09-19T20:10:01.357Z|00126|connmgr|INFO|br0<->unix#153: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.412Z|00127|bridge|INFO|bridge br0: deleted interface veth12fe8503 on port 10\n2020-09-19T20:10:01.474Z|00128|connmgr|INFO|br0<->unix#156: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.523Z|00129|bridge|INFO|bridge br0: deleted interface veth4bc1cdf1 on port 6\n2020-09-19T20:10:01.582Z|00130|connmgr|INFO|br0<->unix#159: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.627Z|00131|bridge|INFO|bridge br0: deleted interface veth8a664139 on port 5\n2020-09-19T20:10:01.687Z|00132|connmgr|INFO|br0<->unix#162: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.721Z|00133|bridge|INFO|bridge br0: deleted interface veth2eb3d256 on port 12\n2020-09-19T20:10:01.774Z|00134|connmgr|INFO|br0<->unix#165: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.813Z|00135|bridge|INFO|bridge br0: deleted interface veth78807f06 on port 9\n2020-09-19T20:10:01.870Z|00136|connmgr|INFO|br0<->unix#168: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:01.919Z|00137|bridge|INFO|bridge br0: deleted interface vethe47ca57c on port 13\n2020-09-19T20:10:27.375Z|00138|bridge|INFO|bridge br0: added interface veth5f760695 on port 14\n2020-09-19T20:10:27.403Z|00139|connmgr|INFO|br0<->unix#174: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T20:10:27.450Z|00140|connmgr|INFO|br0<->unix#178: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:10:27.452Z|00141|connmgr|INFO|br0<->unix#180: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T20:10:30.930Z|00142|connmgr|INFO|br0<->unix#183: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:10:30.957Z|00143|bridge|INFO|bridge br0: deleted interface vethe0746cea on port 11\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 19 20:12:19.399 E ns/openshift-machine-config-operator pod/machine-config-daemon-ggn4p node/ip-10-0-134-91.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Sep 19 20:12:19.769 E ns/openshift-multus pod/multus-7n27p node/ip-10-0-134-91.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:12:20.875 E ns/openshift-operator-lifecycle-manager pod/olm-operators-25p9b node/ip-10-0-134-91.ec2.internal container=configmap-registry-server container exited with code 255 (Error): 
Sep 19 20:12:21.616 E ns/openshift-sdn pod/sdn-4l4f4 node/ip-10-0-134-91.ec2.internal container=sdn container exited with code 255 (Error): 91 for service "openshift-monitoring/prometheus-operated:web"\nI0919 20:10:31.329240   38319 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-k8s:web to [10.129.2.18:9091 10.131.0.34:9091]\nI0919 20:10:31.329253   38319 roundrobin.go:240] Delete endpoint 10.131.0.34:9091 for service "openshift-monitoring/prometheus-k8s:web"\nI0919 20:10:31.329268   38319 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-k8s:tenancy to [10.129.2.18:9092 10.131.0.34:9092]\nI0919 20:10:31.329279   38319 roundrobin.go:240] Delete endpoint 10.131.0.34:9092 for service "openshift-monitoring/prometheus-k8s:tenancy"\nI0919 20:10:31.530376   38319 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:10:31.530410   38319 proxier.go:346] userspace syncProxyRules took 72.864517ms\nI0919 20:10:31.715488   38319 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:10:31.715523   38319 proxier.go:346] userspace syncProxyRules took 62.157007ms\ninterrupt: Gracefully shutting down ...\nE0919 20:10:34.775588   38319 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0919 20:10:34.775710   38319 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:10:34.925211   38319 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:10:34.976335   38319 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:10:35.076901   38319 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:10:35.176682   38319 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 20:12:38.123 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0919 19:55:08.642479       1 observer_polling.go:106] Starting file observer\nI0919 19:55:08.642887       1 certsync_controller.go:269] Starting CertSyncer\nW0919 20:04:42.952834       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22645 (25802)\n
Sep 19 20:12:38.123 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-apiserver-8 container exited with code 255 (Error): tem v1.template.openshift.io\nI0919 20:10:25.219151       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0919 20:10:28.708901       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0919 20:10:30.855102       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 20:10:30.886378       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nE0919 20:10:30.890213       1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")'\nTrying to reach: 'https://10.128.0.72:5443/openapi/v2', Header: map[]\nI0919 20:10:30.890240       1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.\nI0919 20:10:34.350963       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 20:10:35.383549       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0919 20:10:35.386505       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0919 20:10:36.432780       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0919 20:10:39.928237       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0919 20:10:40.094279       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0919 20:10:42.375814       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0919 20:10:50.692113       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Sep 19 20:12:44.871 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): : "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-09-19 19:16:48 +0000 UTC to 2021-09-19 19:16:48 +0000 UTC (now=2020-09-19 19:55:09.466859611 +0000 UTC))\nI0919 19:55:09.466893       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-09-19 19:16:48 +0000 UTC to 2021-09-19 19:16:48 +0000 UTC (now=2020-09-19 19:55:09.466881132 +0000 UTC))\nI0919 19:55:09.473986       1 controllermanager.go:169] Version: v1.13.4-138-g41dc99c\nI0919 19:55:09.475774       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600544042" (2020-09-19 19:34:15 +0000 UTC to 2022-09-19 19:34:16 +0000 UTC (now=2020-09-19 19:55:09.475748774 +0000 UTC))\nI0919 19:55:09.475814       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600544042" [] issuer="<self>" (2020-09-19 19:34:01 +0000 UTC to 2021-09-19 19:34:02 +0000 UTC (now=2020-09-19 19:55:09.475798533 +0000 UTC))\nI0919 19:55:09.475846       1 secure_serving.go:136] Serving securely on [::]:10257\nI0919 19:55:09.476038       1 serving.go:77] Starting DynamicLoader\nI0919 19:55:09.476311       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 19:55:12.969283       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0919 20:10:50.724376       1 controllermanager.go:282] leaderelection lost\n
Sep 19 20:12:44.871 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0919 19:55:08.711975       1 observer_polling.go:106] Starting file observer\nI0919 19:55:08.712375       1 certsync_controller.go:269] Starting CertSyncer\nE0919 19:55:12.924310       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0919 19:55:12.935658       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0919 20:03:24.952228       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20386 (25291)\nW0919 20:09:48.958070       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25463 (27414)\n
Sep 19 20:12:48.856 E ns/openshift-apiserver pod/apiserver-cnj8b node/ip-10-0-140-69.ec2.internal container=openshift-apiserver container exited with code 255 (Error): ng"\nI0919 20:10:50.757379       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757416       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757511       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:10:50.757579       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:10:50.757630       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757646       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757687       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757711       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757690       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.757828       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:10:50.977330       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0919 20:10:50.977940       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0919 20:10:50.978077       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0919 20:10:50.978122       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0919 20:10:50.978138       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:10:50.982507       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Sep 19 20:12:49.253 E ns/openshift-image-registry pod/node-ca-lws7r node/ip-10-0-140-69.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:12:49.654 E ns/openshift-cluster-node-tuning-operator pod/tuned-gc7v9 node/ip-10-0-140-69.ec2.internal container=tuned container exited with code 255 (Error): s changed node wide: false\nI0919 20:10:14.718258   61533 openshift-tuned.go:435] Pod (openshift-kube-apiserver/revision-pruner-5-ip-10-0-140-69.ec2.internal) labels changed node wide: false\nI0919 20:10:14.908630   61533 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-140-69.ec2.internal) labels changed node wide: false\nI0919 20:10:14.934805   61533 openshift-tuned.go:435] Pod (openshift-machine-api/cluster-autoscaler-operator-6dfd97c5c4-jhwrs) labels changed node wide: true\nI0919 20:10:15.898657   61533 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:15.900428   61533 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:16.056954   61533 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:10:16.057498   61533 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-2-ip-10-0-140-69.ec2.internal) labels changed node wide: true\nI0919 20:10:20.898580   61533 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:20.900070   61533 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:21.180283   61533 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:10:21.181029   61533 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-sjf7d) labels changed node wide: true\nI0919 20:10:25.898678   61533 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:25.900085   61533 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:26.020776   61533 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:10:50.455929   61533 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-688ffd847f-bp4w8) labels changed node wide: true\n
Sep 19 20:12:50.055 E ns/openshift-monitoring pod/node-exporter-q5jqj node/ip-10-0-140-69.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:12:50.055 E ns/openshift-monitoring pod/node-exporter-q5jqj node/ip-10-0-140-69.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:12:52.654 E ns/openshift-dns pod/dns-default-2zlzp node/ip-10-0-140-69.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T20:01:38.869Z [INFO] CoreDNS-1.3.1\n2020-09-19T20:01:38.869Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T20:01:38.869Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:12:52.654 E ns/openshift-dns pod/dns-default-2zlzp node/ip-10-0-140-69.ec2.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (88) - No such process\n
Sep 19 20:12:53.054 E ns/openshift-multus pod/multus-stzw5 node/ip-10-0-140-69.ec2.internal invariant violation: pod may not transition Running->Pending
Sep 19 20:12:53.054 E ns/openshift-multus pod/multus-stzw5 node/ip-10-0-140-69.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:13:00.458 E ns/openshift-controller-manager pod/controller-manager-5fqxh node/ip-10-0-140-69.ec2.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 20:13:05.395 E kube-apiserver Kube API started failing: Get https://api.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 19 20:13:15.672 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): : "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-09-19 19:16:48 +0000 UTC to 2021-09-19 19:16:48 +0000 UTC (now=2020-09-19 19:55:09.466859611 +0000 UTC))\nI0919 19:55:09.466893       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-09-19 19:16:48 +0000 UTC to 2021-09-19 19:16:48 +0000 UTC (now=2020-09-19 19:55:09.466881132 +0000 UTC))\nI0919 19:55:09.473986       1 controllermanager.go:169] Version: v1.13.4-138-g41dc99c\nI0919 19:55:09.475774       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600544042" (2020-09-19 19:34:15 +0000 UTC to 2022-09-19 19:34:16 +0000 UTC (now=2020-09-19 19:55:09.475748774 +0000 UTC))\nI0919 19:55:09.475814       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600544042" [] issuer="<self>" (2020-09-19 19:34:01 +0000 UTC to 2021-09-19 19:34:02 +0000 UTC (now=2020-09-19 19:55:09.475798533 +0000 UTC))\nI0919 19:55:09.475846       1 secure_serving.go:136] Serving securely on [::]:10257\nI0919 19:55:09.476038       1 serving.go:77] Starting DynamicLoader\nI0919 19:55:09.476311       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 19:55:12.969283       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0919 20:10:50.724376       1 controllermanager.go:282] leaderelection lost\n
Sep 19 20:13:15.672 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0919 19:55:08.711975       1 observer_polling.go:106] Starting file observer\nI0919 19:55:08.712375       1 certsync_controller.go:269] Starting CertSyncer\nE0919 19:55:12.924310       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0919 19:55:12.935658       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0919 20:03:24.952228       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20386 (25291)\nW0919 20:09:48.958070       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25463 (27414)\n
Sep 19 20:13:16.071 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 20:09:22.958066 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 20:09:22.959351 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 20:09:22.960338 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/09/19 20:09:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.140.69:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-09-19 20:09:23.975385 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 20:13:16.071 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=etcd-member container exited with code 255 (Error): f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:10:51.257148 E | rafthttp: failed to read ed8c14e47e6f3f2 on stream MsgApp v2 (context canceled)\n2020-09-19 20:10:51.257157 I | rafthttp: peer ed8c14e47e6f3f2 became inactive (message send to peer failed)\n2020-09-19 20:10:51.257166 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:10:51.257243 W | rafthttp: lost the TCP streaming connection with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:10:51.257313 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:10:51.257330 I | rafthttp: stopped peer ed8c14e47e6f3f2\n2020-09-19 20:10:51.257339 I | rafthttp: stopping peer e8413b8492a6582e...\n2020-09-19 20:10:51.257775 I | rafthttp: closed the TCP streaming connection with peer e8413b8492a6582e (stream MsgApp v2 writer)\n2020-09-19 20:10:51.257792 I | rafthttp: stopped streaming with peer e8413b8492a6582e (writer)\n2020-09-19 20:10:51.258190 I | rafthttp: closed the TCP streaming connection with peer e8413b8492a6582e (stream Message writer)\n2020-09-19 20:10:51.258270 I | rafthttp: stopped streaming with peer e8413b8492a6582e (writer)\n2020-09-19 20:10:51.258340 I | rafthttp: stopped HTTP pipelining with peer e8413b8492a6582e\n2020-09-19 20:10:51.258483 W | rafthttp: lost the TCP streaming connection with peer e8413b8492a6582e (stream MsgApp v2 reader)\n2020-09-19 20:10:51.258497 E | rafthttp: failed to read e8413b8492a6582e on stream MsgApp v2 (context canceled)\n2020-09-19 20:10:51.258536 I | rafthttp: peer e8413b8492a6582e became inactive (message send to peer failed)\n2020-09-19 20:10:51.258547 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream MsgApp v2 reader)\n2020-09-19 20:10:51.258611 W | rafthttp: lost the TCP streaming connection with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:10:51.258626 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:10:51.258634 I | rafthttp: stopped peer e8413b8492a6582e\n
Sep 19 20:13:17.070 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0919 19:55:08.642479       1 observer_polling.go:106] Starting file observer\nI0919 19:55:08.642887       1 certsync_controller.go:269] Starting CertSyncer\nW0919 20:04:42.952834       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22645 (25802)\n
Sep 19 20:13:17.070 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=kube-apiserver-8 container exited with code 255 (Error): tem v1.template.openshift.io\nI0919 20:10:25.219151       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0919 20:10:28.708901       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0919 20:10:30.855102       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 20:10:30.886378       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nE0919 20:10:30.890213       1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")'\nTrying to reach: 'https://10.128.0.72:5443/openapi/v2', Header: map[]\nI0919 20:10:30.890240       1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.\nI0919 20:10:34.350963       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 20:10:35.383549       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0919 20:10:35.386505       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0919 20:10:36.432780       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0919 20:10:39.928237       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0919 20:10:40.094279       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0919 20:10:42.375814       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0919 20:10:50.692113       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Sep 19 20:13:17.469 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=scheduler container exited with code 255 (Error): ory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0919 19:55:13.020837       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0919 19:55:16.090877       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nE0919 19:55:17.043206       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nE0919 19:56:08.963474       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nW0919 20:09:16.388331       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 19168 (27373)\nW0919 20:09:16.599250       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 21733 (27376)\nW0919 20:09:16.615222       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19161 (27376)\nW0919 20:09:16.685074       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19161 (27376)\nW0919 20:09:16.695625       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19169 (27376)\nI0919 20:10:50.762457       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:10:50.762624       1 secure_serving.go:180] Stopped listening on [::]:10251\nE0919 20:10:50.762739       1 server.go:259] lost master\n
Sep 19 20:13:22.959 E ns/openshift-monitoring pod/telemeter-client-d4f8856f7-nkpvw node/ip-10-0-155-3.ec2.internal container=reload container exited with code 2 (Error): 
Sep 19 20:13:22.959 E ns/openshift-monitoring pod/telemeter-client-d4f8856f7-nkpvw node/ip-10-0-155-3.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Sep 19 20:13:23.069 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=scheduler container exited with code 255 (Error): ory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0919 19:55:13.020837       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0919 19:55:16.090877       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nE0919 19:55:17.043206       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nE0919 19:56:08.963474       1 factory.go:832] scheduler cache UpdatePod failed: pod 04c03fd1-fab2-11ea-b76b-0ee9b5dee8a3 is not added to scheduler cache, so cannot be updated\nW0919 20:09:16.388331       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 19168 (27373)\nW0919 20:09:16.599250       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 21733 (27376)\nW0919 20:09:16.615222       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19161 (27376)\nW0919 20:09:16.685074       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19161 (27376)\nW0919 20:09:16.695625       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19169 (27376)\nI0919 20:10:50.762457       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:10:50.762624       1 secure_serving.go:180] Stopped listening on [::]:10251\nE0919 20:10:50.762739       1 server.go:259] lost master\n
Sep 19 20:13:23.872 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 20:09:22.958066 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 20:09:22.959351 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 20:09:22.960338 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/09/19 20:09:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.140.69:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-09-19 20:09:23.975385 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 20:13:23.872 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-69.ec2.internal node/ip-10-0-140-69.ec2.internal container=etcd-member container exited with code 255 (Error): f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:10:51.257148 E | rafthttp: failed to read ed8c14e47e6f3f2 on stream MsgApp v2 (context canceled)\n2020-09-19 20:10:51.257157 I | rafthttp: peer ed8c14e47e6f3f2 became inactive (message send to peer failed)\n2020-09-19 20:10:51.257166 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:10:51.257243 W | rafthttp: lost the TCP streaming connection with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:10:51.257313 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:10:51.257330 I | rafthttp: stopped peer ed8c14e47e6f3f2\n2020-09-19 20:10:51.257339 I | rafthttp: stopping peer e8413b8492a6582e...\n2020-09-19 20:10:51.257775 I | rafthttp: closed the TCP streaming connection with peer e8413b8492a6582e (stream MsgApp v2 writer)\n2020-09-19 20:10:51.257792 I | rafthttp: stopped streaming with peer e8413b8492a6582e (writer)\n2020-09-19 20:10:51.258190 I | rafthttp: closed the TCP streaming connection with peer e8413b8492a6582e (stream Message writer)\n2020-09-19 20:10:51.258270 I | rafthttp: stopped streaming with peer e8413b8492a6582e (writer)\n2020-09-19 20:10:51.258340 I | rafthttp: stopped HTTP pipelining with peer e8413b8492a6582e\n2020-09-19 20:10:51.258483 W | rafthttp: lost the TCP streaming connection with peer e8413b8492a6582e (stream MsgApp v2 reader)\n2020-09-19 20:10:51.258497 E | rafthttp: failed to read e8413b8492a6582e on stream MsgApp v2 (context canceled)\n2020-09-19 20:10:51.258536 I | rafthttp: peer e8413b8492a6582e became inactive (message send to peer failed)\n2020-09-19 20:10:51.258547 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream MsgApp v2 reader)\n2020-09-19 20:10:51.258611 W | rafthttp: lost the TCP streaming connection with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:10:51.258626 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:10:51.258634 I | rafthttp: stopped peer e8413b8492a6582e\n
Sep 19 20:13:24.134 E ns/openshift-monitoring pod/kube-state-metrics-7b4d49f7bd-w9gb4 node/ip-10-0-155-3.ec2.internal container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:13:24.134 E ns/openshift-monitoring pod/kube-state-metrics-7b4d49f7bd-w9gb4 node/ip-10-0-155-3.ec2.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:13:24.134 E ns/openshift-monitoring pod/kube-state-metrics-7b4d49f7bd-w9gb4 node/ip-10-0-155-3.ec2.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:13:52.097 E ns/openshift-console pod/downloads-795f496c64-4v2s2 node/ip-10-0-155-3.ec2.internal container=download-server container exited with code 137 (Error): 
Sep 19 20:14:04.956 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-67768db889-n8h68 node/ip-10-0-154-177.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ion: 19169 (27376)\\nI0919 20:10:50.762457       1 serving.go:88] Shutting down DynamicLoader\\nI0919 20:10:50.762624       1 secure_serving.go:180] Stopped listening on [::]:10251\\nE0919 20:10:50.762739       1 server.go:259] lost master\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-140-69.ec2.internal pods/openshift-kube-scheduler-ip-10-0-140-69.ec2.internal container=\"scheduler\" is not ready"\nI0919 20:13:34.078017       1 status_controller.go:164] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T19:38:08Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T19:57:35Z","message":"Progressing: 3 nodes are at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T19:36:31Z","message":"Available: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T19:34:00Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 20:13:34.086581       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f2130600-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-140-69.ec2.internal pods/openshift-kube-scheduler-ip-10-0-140-69.ec2.internal container=\"scheduler\" is not ready" to ""\nW0919 20:13:55.933105       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26869 (30113)\nI0919 20:13:57.352328       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 20:13:57.352407       1 leaderelection.go:65] leaderelection lost\nI0919 20:13:57.352592       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\n
Sep 19 20:14:05.567 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-54bdf4c4c7-jzghl node/ip-10-0-154-177.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): ntroller-manager\\\"\\nE0919 19:55:12.935658       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User \\\"system:kube-controller-manager\\\" cannot list resource \\\"configmaps\\\" in API group \\\"\\\" in the namespace \\\"openshift-kube-controller-manager\\\"\\nW0919 20:03:24.952228       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20386 (25291)\\nW0919 20:09:48.958070       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25463 (27414)\\n\"" to ""\nI0919 20:13:26.098890       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"f2067568-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-140-69.ec2.internal pods/kube-controller-manager-ip-10-0-140-69.ec2.internal container=\"kube-controller-manager-5\" is not ready"\nI0919 20:13:28.683969       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"f2067568-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-140-69.ec2.internal pods/kube-controller-manager-ip-10-0-140-69.ec2.internal container=\"kube-controller-manager-5\" is not ready" to ""\nI0919 20:13:54.399167       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 20:13:54.399317       1 leaderelection.go:65] leaderelection lost\n
Sep 19 20:14:08.756 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7d77f99f6f-sdtbz node/ip-10-0-154-177.ec2.internal container=operator container exited with code 2 (Error): rsions/factory.go:101: watch of *v1.Build ended with: too old resource version: 18256 (29669)\nW0919 20:13:08.492209       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 18114 (29669)\nI0919 20:13:08.849625       1 request.go:530] Throttling request took 184.314054ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0919 20:13:09.462367       1 reflector.go:169] Listing and watching *v1.OpenShiftControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0919 20:13:09.484552       1 reflector.go:169] Listing and watching *v1.Build from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0919 20:13:09.484961       1 reflector.go:169] Listing and watching *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0919 20:13:09.495311       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0919 20:13:21.925357       1 request.go:530] Throttling request took 194.968644ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0919 20:13:36.561342       1 wrap.go:47] GET /metrics: (5.750295ms) 200 [Prometheus/2.7.2 10.131.0.34:55510]\nI0919 20:13:41.708744       1 request.go:530] Throttling request took 162.495015ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0919 20:13:41.908768       1 request.go:530] Throttling request took 196.418108ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0919 20:13:49.512325       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\n
Sep 19 20:14:30.956 E ns/openshift-operator-lifecycle-manager pod/packageserver-d4d8848d-s8xjc node/ip-10-0-154-177.ec2.internal container=packageserver container exited with code 137 (Error): hift-marketplace\ntime="2020-09-19T20:14:20Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:14:20Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:14:20Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:20Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:21Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:21Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:21Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:14:21Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:14:21Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:21Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-09-19T20:14:24Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-09-19T20:14:24Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Sep 19 20:14:32.399 E kube-apiserver Kube API started failing: Get https://api.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: unexpected EOF
Sep 19 20:14:32.430 E kube-apiserver failed contacting the API: Get https://api.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=30994&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp 54.90.43.248:6443: connect: connection refused
Sep 19 20:15:10.395 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 20:15:36.857 E ns/openshift-cluster-node-tuning-operator pod/tuned-dr82g node/ip-10-0-155-3.ec2.internal container=tuned container exited with code 255 (Error): go:435] Pod (openshift-monitoring/prometheus-adapter-7ffc899476-jkbjq) labels changed node wide: true\nI0919 20:10:01.562228   28755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:01.564166   28755 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:01.699665   28755 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:13:21.363770   28755 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0919 20:13:21.572521   28755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:13:21.583305   28755 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:13:21.791501   28755 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:13:25.939228   28755 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-d4f8856f7-nkpvw) labels changed node wide: true\nI0919 20:13:26.562180   28755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:13:26.565502   28755 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:13:26.676024   28755 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:13:35.520413   28755 openshift-tuned.go:435] Pod (openshift-ingress/router-default-869968cd88-nrqz5) labels changed node wide: true\nI0919 20:13:36.562219   28755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:13:36.563938   28755 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:13:36.675126   28755 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:13:55.532109   28755 openshift-tuned.go:435] Pod (openshift-console/downloads-795f496c64-4v2s2) labels changed node wide: true\n
Sep 19 20:15:36.942 E ns/openshift-monitoring pod/node-exporter-wvzjf node/ip-10-0-155-3.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:15:36.942 E ns/openshift-monitoring pod/node-exporter-wvzjf node/ip-10-0-155-3.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:15:39.823 E ns/openshift-image-registry pod/node-ca-j68pw node/ip-10-0-155-3.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:15:41.873 E ns/openshift-dns pod/dns-default-mcgth node/ip-10-0-155-3.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T20:00:15.813Z [INFO] CoreDNS-1.3.1\n2020-09-19T20:00:15.813Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T20:00:15.813Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:15:41.873 E ns/openshift-dns pod/dns-default-mcgth node/ip-10-0-155-3.ec2.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (119) - No such process\n
Sep 19 20:15:42.240 E ns/openshift-multus pod/multus-f6kfn node/ip-10-0-155-3.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:15:42.615 E ns/openshift-sdn pod/sdn-2dfxs node/ip-10-0-155-3.ec2.internal container=sdn container exited with code 255 (Error): rators-coreos-com:" at 172.30.4.105:443/TCP\nI0919 20:13:54.759026   40099 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-controller-manager-operator/metrics:https\nI0919 20:13:54.759475   40099 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:13:54.759494   40099 proxier.go:346] userspace syncProxyRules took 52.622578ms\nI0919 20:13:54.914496   40099 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:13:54.914520   40099 proxier.go:346] userspace syncProxyRules took 52.525988ms\nI0919 20:13:55.725868   40099 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-kube-apiserver-operator/metrics:https\nI0919 20:13:55.892646   40099 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:13:55.892672   40099 proxier.go:346] userspace syncProxyRules took 53.433817ms\nE0919 20:13:56.424214   40099 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0919 20:13:56.424329   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0919 20:13:56.533298   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:13:56.629594   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:13:56.728072   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:13:56.829735   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:13:56.924671   40099 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 20:15:43.368 E ns/openshift-sdn pod/ovs-k4hr4 node/ip-10-0-155-3.ec2.internal container=openvswitch container exited with code 255 (Error): s (5 adds)\n2020-09-19T20:10:07.048Z|00125|connmgr|INFO|br0<->unix#140: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:13:21.969Z|00126|connmgr|INFO|br0<->unix#166: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.020Z|00127|bridge|INFO|bridge br0: deleted interface veth35f1eab9 on port 7\n2020-09-19T20:13:22.138Z|00128|connmgr|INFO|br0<->unix#169: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.188Z|00129|bridge|INFO|bridge br0: deleted interface veth5477bb43 on port 13\n2020-09-19T20:13:22.433Z|00130|connmgr|INFO|br0<->unix#172: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.475Z|00131|bridge|INFO|bridge br0: deleted interface veth7193b537 on port 5\n2020-09-19T20:13:22.559Z|00132|connmgr|INFO|br0<->unix#175: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.612Z|00133|bridge|INFO|bridge br0: deleted interface veth01378e66 on port 9\n2020-09-19T20:13:22.680Z|00134|connmgr|INFO|br0<->unix#178: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:13:22.722Z|00135|connmgr|INFO|br0<->unix#181: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.751Z|00136|bridge|INFO|bridge br0: deleted interface veth4e9d9cfe on port 14\n2020-09-19T20:13:22.806Z|00137|connmgr|INFO|br0<->unix#184: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.853Z|00138|bridge|INFO|bridge br0: deleted interface vethe1d2361b on port 12\n2020-09-19T20:13:22.918Z|00139|connmgr|INFO|br0<->unix#187: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:22.949Z|00140|bridge|INFO|bridge br0: deleted interface veth1b04e8f6 on port 8\n2020-09-19T20:13:23.002Z|00141|connmgr|INFO|br0<->unix#190: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:23.038Z|00142|bridge|INFO|bridge br0: deleted interface veth2d3e9151 on port 6\n2020-09-19T20:13:51.697Z|00143|connmgr|INFO|br0<->unix#196: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:13:51.718Z|00144|bridge|INFO|bridge br0: deleted interface veth61095f69 on port 10\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 19 20:15:44.366 E ns/openshift-machine-config-operator pod/machine-config-daemon-s5wpt node/ip-10-0-155-3.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Sep 19 20:16:05.113 E ns/openshift-cluster-node-tuning-operator pod/tuned-2mbd4 node/ip-10-0-132-91.ec2.internal container=tuned container exited with code 143 (Error): ch.  Label changes will not trigger profile reload.\nI0919 20:10:11.590410   58203 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-854c98469b-86bvd) labels changed node wide: true\nI0919 20:10:14.386523   58203 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:14.388381   58203 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:14.553002   58203 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:10:20.839466   58203 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-688ffd847f-tvbk4) labels changed node wide: true\nI0919 20:10:24.386501   58203 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:10:24.399917   58203 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:10:24.559378   58203 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:13:54.960536   58203 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-69bc4fdbb7-nrgpw) labels changed node wide: true\nI0919 20:13:59.386603   58203 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:13:59.388749   58203 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:13:59.533259   58203 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:14:29.682806   58203 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-f94df49dd-dzmmf) labels changed node wide: true\nI0919 20:14:32.394035   58203 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 20:14:32.415257   58203 openshift-tuned.go:720] Pod event watch channel closed.\nI0919 20:14:32.415364   58203 openshift-tuned.go:722] Increasing resyncPeriod to 122\n
Sep 19 20:16:05.164 E ns/openshift-cluster-node-tuning-operator pod/tuned-zqz7l node/ip-10-0-134-91.ec2.internal container=tuned container exited with code 143 (Error): 020-09-19 20:14:27,143 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 20:14:27,144 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-19 20:14:27,145 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-19 20:14:27,188 WARNING  tuned.daemon.application: Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config\n2020-09-19 20:14:27,188 INFO     tuned.daemon.controller: starting controller\n2020-09-19 20:14:27,188 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 20:14:27,194 INFO     tuned.daemon.controller: terminating controller\n2020-09-19 20:14:27,194 INFO     tuned.daemon.daemon: stopping tuning\n2020-09-19 20:14:27,200 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 20:14:27,201 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 20:14:27,205 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 20:14:27,206 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-09-19 20:14:27,208 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-19 20:14:27,366 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 20:14:27,381 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-09-19 20:14:27,390 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0919 20:14:31.806380    2113 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-7b47d49494-5mrgv) labels changed node wide: true\nI0919 20:14:32.396175    2113 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 20:14:32.407134    2113 openshift-tuned.go:720] Pod event watch channel closed.\nI0919 20:14:32.407160    2113 openshift-tuned.go:722] Increasing resyncPeriod to 236\n
Sep 19 20:16:05.269 E ns/openshift-cluster-node-tuning-operator pod/tuned-gc7v9 node/ip-10-0-140-69.ec2.internal container=tuned container exited with code 143 (Error): recommended for your configuration.\n2020-09-19 20:14:55,447 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-09-19 20:14:55,448 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-09-19 20:14:55,510 WARNING  tuned.daemon.application: Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config\n2020-09-19 20:14:55,510 INFO     tuned.daemon.controller: starting controller\n2020-09-19 20:14:55,510 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 20:14:55,514 INFO     tuned.daemon.controller: terminating controller\n2020-09-19 20:14:55,514 INFO     tuned.daemon.daemon: stopping tuning\n2020-09-19 20:14:55,526 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 20:14:55,528 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 20:14:55,532 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 20:14:55,533 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-09-19 20:14:55,536 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-19 20:14:55,682 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 20:14:55,703 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n2020-09-19 20:14:55,713 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0919 20:15:02.068012    4111 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-d4d8848d-d7wdc) labels changed node wide: true\nI0919 20:15:04.712330    4111 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:15:04.714065    4111 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:15:04.834985    4111 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 20:16:08.145 E ns/openshift-operator-lifecycle-manager pod/packageserver-d4d8848d-pntl4 node/ip-10-0-132-91.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:16:17.212 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0919 19:53:45.434613       1 certsync_controller.go:269] Starting CertSyncer\nI0919 19:53:45.435497       1 observer_polling.go:106] Starting file observer\nW0919 20:01:24.463019       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20386 (24293)\nW0919 20:10:07.467980       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24642 (27529)\nE0919 20:14:32.396882       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=27792&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 20:14:32.398071       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18163&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Sep 19 20:16:17.212 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): eating 1\nI0919 20:14:31.980739       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"community-operators", UID:"c178ca6c-faaf-11ea-99cb-0ee9b5dee8a3", APIVersion:"apps/v1", ResourceVersion:"31190", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set community-operators-66564b9cd4 to 1\nI0919 20:14:31.992264       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0919 20:14:31.992508       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/community-operators: Operation cannot be fulfilled on deployments.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0919 20:14:32.021725       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-66564b9cd4", UID:"ba96b582-fab4-11ea-b76b-0ee9b5dee8a3", APIVersion:"apps/v1", ResourceVersion:"31191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-66564b9cd4-n4htz\nE0919 20:14:32.121326       1 reflector.go:237] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0919 20:14:32.125450       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)\nW0919 20:14:32.145683       1 reflector.go:256] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nE0919 20:14:32.273725       1 controllermanager.go:282] leaderelection lost\nI0919 20:14:32.273765       1 serving.go:88] Shutting down DynamicLoader\n
Sep 19 20:16:23.749 E ns/openshift-apiserver pod/apiserver-w6m9x node/ip-10-0-154-177.ec2.internal container=openshift-apiserver container exited with code 255 (Error): alancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:14:30.413113       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:14:30.415186       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:14:30.416192       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:14:32.110752       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0919 20:14:32.111334       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0919 20:14:32.111528       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0919 20:14:32.111545       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0919 20:14:32.111568       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:14:32.112813       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.113519       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.113684       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.113815       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.114073       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.114210       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.114346       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:14:32.114377       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Sep 19 20:16:26.948 E ns/openshift-monitoring pod/node-exporter-bbmf2 node/ip-10-0-154-177.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:16:26.948 E ns/openshift-monitoring pod/node-exporter-bbmf2 node/ip-10-0-154-177.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:16:27.375 E ns/openshift-cluster-node-tuning-operator pod/tuned-jbnl6 node/ip-10-0-154-177.ec2.internal container=tuned container exited with code 255 (Error): 7 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:14:04.381663   60887 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:14:04.504732   60887 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:14:04.553456   60887 openshift-tuned.go:435] Pod (openshift-kube-apiserver-operator/kube-apiserver-operator-779f7d99b7-tkhcs) labels changed node wide: true\nI0919 20:14:09.380117   60887 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:14:09.381697   60887 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:14:09.506393   60887 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:14:10.155925   60887 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-67798bc-lmlfr) labels changed node wide: true\nI0919 20:14:14.380085   60887 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:14:14.381531   60887 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:14:14.503140   60887 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:14:14.754491   60887 openshift-tuned.go:435] Pod (openshift-cluster-machine-approver/machine-approver-5d4667777c-2j9k6) labels changed node wide: true\nI0919 20:14:19.380562   60887 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:14:19.381946   60887 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:14:19.499407   60887 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 20:14:31.159499   60887 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-d4d8848d-s8xjc) labels changed node wide: true\n
Sep 19 20:16:27.948 E ns/openshift-controller-manager pod/controller-manager-rzws9 node/ip-10-0-154-177.ec2.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 20:16:30.151 E ns/openshift-sdn pod/sdn-k9655 node/ip-10-0-154-177.ec2.internal container=sdn container exited with code 255 (Error): ame: community-operators,csc-owner-namespace: openshift-marketplace,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ServiceSpec{Ports:[{grpc TCP 50051 {0 50051 } 0}],Selector:map[string]string{marketplace.catalogSourceConfig: community-operators,},ClusterIP:172.30.157.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[],},},}\nI0919 20:14:32.046023   72058 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-marketplace/community-operators:grpc to [10.131.0.23:50051]\nI0919 20:14:32.046127   72058 roundrobin.go:240] Delete endpoint 10.131.0.23:50051 for service "openshift-marketplace/community-operators:grpc"\ninterrupt: Gracefully shutting down ...\nE0919 20:14:32.125803   72058 proxier.go:356] Failed to ensure iptables: error creating chain "KUBE-PORTALS-HOST": signal: terminated: \nI0919 20:14:32.126043   72058 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:14:32.126440   72058 proxier.go:346] userspace syncProxyRules took 24.830133ms\nI0919 20:14:32.126534   72058 service.go:321] Updating existing service port "openshift-marketplace/community-operators:grpc" at 172.30.157.190:50051/TCP\nE0919 20:14:32.396985   72058 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0919 20:14:32.397578   72058 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:14:32.497830   72058 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:14:32.601084   72058 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 20:16:30.947 E ns/openshift-multus pod/multus-q2ms2 node/ip-10-0-154-177.ec2.internal invariant violation: pod may not transition Running->Pending
Sep 19 20:16:30.947 E ns/openshift-multus pod/multus-q2ms2 node/ip-10-0-154-177.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:16:36.947 E ns/openshift-image-registry pod/node-ca-qqv99 node/ip-10-0-154-177.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:16:38.948 E ns/openshift-dns pod/dns-default-lp59w node/ip-10-0-154-177.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Sep 19 20:16:38.948 E ns/openshift-dns pod/dns-default-lp59w node/ip-10-0-154-177.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T20:01:13.970Z [INFO] CoreDNS-1.3.1\n2020-09-19T20:01:13.970Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T20:01:13.970Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:16:39.348 E ns/openshift-machine-config-operator pod/machine-config-server-dl8k8 node/ip-10-0-154-177.ec2.internal container=machine-config-server container exited with code 255 (Error): 
Sep 19 20:16:44.395 E kube-apiserver Kube API started failing: Get https://api.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 20:16:55.440 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-93.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Sep 19 20:16:55.440 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-93.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Sep 19 20:16:55.440 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-93.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 
Sep 19 20:16:55.637 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=scheduler container exited with code 255 (Error): InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{}]'\nW0919 19:56:40.113704       1 authorization.go:47] Authorization is disabled\nW0919 19:56:40.113805       1 authentication.go:55] Authentication is disabled\nI0919 19:56:40.113863       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0919 19:56:40.115997       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1600544042" (2020-09-19 19:34:14 +0000 UTC to 2022-09-19 19:34:15 +0000 UTC (now=2020-09-19 19:56:40.115968076 +0000 UTC))\nI0919 19:56:40.116088       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600544042" [] issuer="<self>" (2020-09-19 19:34:01 +0000 UTC to 2021-09-19 19:34:02 +0000 UTC (now=2020-09-19 19:56:40.116068215 +0000 UTC))\nI0919 19:56:40.116169       1 secure_serving.go:136] Serving securely on [::]:10259\nI0919 19:56:40.116355       1 serving.go:77] Starting DynamicLoader\nI0919 19:56:41.024011       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0919 19:56:41.124375       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0919 19:56:41.124410       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0919 20:13:08.450185       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 18113 (29669)\nW0919 20:13:08.470050       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18123 (29669)\nE0919 20:14:32.321503       1 server.go:259] lost master\n
Sep 19 20:16:56.034 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-apiserver-8 container exited with code 255 (Error): :184] [-]terminating failed: reason withheld\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/kube-apiserver-requestheader-reload ok\n[+]poststarthook/kube-apiserver-clientCA-reload ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]crd-informer-synced ok\n[+]crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\nI0919 20:14:32.300313       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0919 20:14:32.301456       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0919 20:14:32.322924       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0919 20:14:32.323161       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0919 20:14:32.323403       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0919 20:14:32.323582       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\n
Sep 19 20:16:56.034 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0919 19:53:16.288934       1 observer_polling.go:106] Starting file observer\nI0919 19:53:16.289242       1 certsync_controller.go:269] Starting CertSyncer\nW0919 20:00:58.373823       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22645 (24025)\nW0919 20:10:56.380180       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24361 (28213)\n
Sep 19 20:16:56.435 E ns/openshift-etcd pod/etcd-member-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 20:13:11.159232 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 20:13:11.160171 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 20:13:11.160782 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 20:13:11.207928 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 20:16:56.435 E ns/openshift-etcd pod/etcd-member-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=etcd-member container exited with code 255 (Error): f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:14:32.639670 E | rafthttp: failed to read ed8c14e47e6f3f2 on stream MsgApp v2 (context canceled)\n2020-09-19 20:14:32.639718 I | rafthttp: peer ed8c14e47e6f3f2 became inactive (message send to peer failed)\n2020-09-19 20:14:32.639761 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream MsgApp v2 reader)\n2020-09-19 20:14:32.639866 W | rafthttp: lost the TCP streaming connection with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:14:32.639928 I | rafthttp: stopped streaming with peer ed8c14e47e6f3f2 (stream Message reader)\n2020-09-19 20:14:32.640009 I | rafthttp: stopped peer ed8c14e47e6f3f2\n2020-09-19 20:14:32.640057 I | rafthttp: stopping peer 78f4297774a82174...\n2020-09-19 20:14:32.641183 I | rafthttp: closed the TCP streaming connection with peer 78f4297774a82174 (stream MsgApp v2 writer)\n2020-09-19 20:14:32.641265 I | rafthttp: stopped streaming with peer 78f4297774a82174 (writer)\n2020-09-19 20:14:32.642476 I | rafthttp: closed the TCP streaming connection with peer 78f4297774a82174 (stream Message writer)\n2020-09-19 20:14:32.642613 I | rafthttp: stopped streaming with peer 78f4297774a82174 (writer)\n2020-09-19 20:14:32.642736 I | rafthttp: stopped HTTP pipelining with peer 78f4297774a82174\n2020-09-19 20:14:32.642872 W | rafthttp: lost the TCP streaming connection with peer 78f4297774a82174 (stream MsgApp v2 reader)\n2020-09-19 20:14:32.642962 E | rafthttp: failed to read 78f4297774a82174 on stream MsgApp v2 (context canceled)\n2020-09-19 20:14:32.643012 I | rafthttp: peer 78f4297774a82174 became inactive (message send to peer failed)\n2020-09-19 20:14:32.643060 I | rafthttp: stopped streaming with peer 78f4297774a82174 (stream MsgApp v2 reader)\n2020-09-19 20:14:32.643176 W | rafthttp: lost the TCP streaming connection with peer 78f4297774a82174 (stream Message reader)\n2020-09-19 20:14:32.643234 I | rafthttp: stopped streaming with peer 78f4297774a82174 (stream Message reader)\n2020-09-19 20:14:32.643282 I | rafthttp: stopped peer 78f4297774a82174\n
Sep 19 20:16:56.838 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0919 19:53:45.434613       1 certsync_controller.go:269] Starting CertSyncer\nI0919 19:53:45.435497       1 observer_polling.go:106] Starting file observer\nW0919 20:01:24.463019       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20386 (24293)\nW0919 20:10:07.467980       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24642 (27529)\nE0919 20:14:32.396882       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=27792&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 20:14:32.398071       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18163&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Sep 19 20:16:56.838 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-177.ec2.internal node/ip-10-0-154-177.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): eating 1\nI0919 20:14:31.980739       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"community-operators", UID:"c178ca6c-faaf-11ea-99cb-0ee9b5dee8a3", APIVersion:"apps/v1", ResourceVersion:"31190", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set community-operators-66564b9cd4 to 1\nI0919 20:14:31.992264       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0919 20:14:31.992508       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/community-operators: Operation cannot be fulfilled on deployments.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0919 20:14:32.021725       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-66564b9cd4", UID:"ba96b582-fab4-11ea-b76b-0ee9b5dee8a3", APIVersion:"apps/v1", ResourceVersion:"31191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-66564b9cd4-n4htz\nE0919 20:14:32.121326       1 reflector.go:237] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0919 20:14:32.125450       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)\nW0919 20:14:32.145683       1 reflector.go:256] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nE0919 20:14:32.273725       1 controllermanager.go:282] leaderelection lost\nI0919 20:14:32.273765       1 serving.go:88] Shutting down DynamicLoader\n
Sep 19 20:17:20.895 E ns/openshift-console pod/downloads-795f496c64-c9fg8 node/ip-10-0-131-93.ec2.internal container=download-server container exited with code 137 (Error): 
Sep 19 20:17:35.766 E ns/openshift-console-operator pod/console-operator-57b75f5867-ctmj2 node/ip-10-0-132-91.ec2.internal container=console-operator container exited with code 255 (Error): console status"\ntime="2020-09-19T20:17:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-19T20:17:32Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-09-19T20:17:32Z" level=info msg="finished syncing operator \"cluster\" (59.453µs) \n\n"\ntime="2020-09-19T20:17:32Z" level=info msg="started syncing operator \"cluster\" (2020-09-19 20:17:32.076540038 +0000 UTC m=+1255.374304013)"\ntime="2020-09-19T20:17:32Z" level=info msg="console is in a managed state."\ntime="2020-09-19T20:17:32Z" level=info msg="running sync loop 4.0.0"\ntime="2020-09-19T20:17:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-19T20:17:32Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-09-19T20:17:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-19T20:17:32Z" level=info msg=-----------------------\ntime="2020-09-19T20:17:32Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-09-19T20:17:32Z" level=info msg=-----------------------\ntime="2020-09-19T20:17:32Z" level=info msg="deployment is available, ready replicas: 1 \n"\ntime="2020-09-19T20:17:32Z" level=info msg="sync_v400: updating console status"\ntime="2020-09-19T20:17:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-19T20:17:32Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-09-19T20:17:32Z" level=info msg="finished syncing operator \"cluster\" (31.003µs) \n\n"\nI0919 20:17:33.524595       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 20:17:33.524661       1 leaderelection.go:65] leaderelection lost\n
Sep 19 20:17:37.767 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-69bbd49fb5-8c9l9 node/ip-10-0-132-91.ec2.internal container=openshift-apiserver-operator container exited with code 255 (Error):  ready: 503"\nI0919 20:16:22.315522       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f20a31e7-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.apps.openshift.io is not ready: 503\nAvailable: v1.build.openshift.io is not ready: 503\nAvailable: v1.route.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503" to "Available: v1.image.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503"\nI0919 20:16:25.416547       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f20a31e7-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.image.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503" to "Available: v1.apps.openshift.io is not ready: 503"\nI0919 20:16:25.698023       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f20a31e7-faae-11ea-84f5-0acd27906edb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0919 20:17:32.175217       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 20:17:32.175382       1 leaderelection.go:65] leaderelection lost\nF0919 20:17:32.175740       1 builder.go:217] server exited\n
Sep 19 20:17:47.565 E ns/openshift-machine-config-operator pod/machine-config-controller-7555cd54f8-797lp node/ip-10-0-132-91.ec2.internal container=machine-config-controller container exited with code 2 (Error): 
Sep 19 20:17:49.567 E ns/openshift-service-ca-operator pod/service-ca-operator-5ddff586cc-s6xt7 node/ip-10-0-132-91.ec2.internal container=operator container exited with code 2 (Error): 
Sep 19 20:17:50.167 E ns/openshift-machine-api pod/machine-api-operator-6bb5f6c8fd-8j9sb node/ip-10-0-132-91.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 19 20:17:50.767 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7577c6bc4c-9vf7s node/ip-10-0-132-91.ec2.internal container=operator container exited with code 2 (Error): ived\nI0919 20:16:58.828781       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 20:17:08.837588       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 20:17:13.518355       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0919 20:17:13.519658       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28892 (32659)\nI0919 20:17:14.519901       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0919 20:17:18.847985       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 20:17:25.550315       1 reflector.go:357] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.ServiceCatalogAPIServer total 0 items received\nI0919 20:17:28.855575       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 20:17:29.111341       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nI0919 20:17:33.528210       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Secret total 0 items received\nI0919 20:17:35.524231       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0919 20:17:35.525694       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28892 (33195)\nI0919 20:17:36.525915       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\n
Sep 19 20:17:51.964 E ns/openshift-service-ca pod/apiservice-cabundle-injector-9bdb4c5bd-xmcj7 node/ip-10-0-132-91.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Sep 19 20:17:52.566 E ns/openshift-machine-api pod/machine-api-controllers-854c98469b-86bvd node/ip-10-0-132-91.ec2.internal container=machine-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:17:52.566 E ns/openshift-machine-api pod/machine-api-controllers-854c98469b-86bvd node/ip-10-0-132-91.ec2.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:17:52.566 E ns/openshift-machine-api pod/machine-api-controllers-854c98469b-86bvd node/ip-10-0-132-91.ec2.internal container=nodelink-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 20:17:53.565 E ns/openshift-machine-config-operator pod/machine-config-operator-644ddc467c-lmd6t node/ip-10-0-132-91.ec2.internal container=machine-config-operator container exited with code 2 (Error): 
Sep 19 20:18:06.965 E ns/openshift-operator-lifecycle-manager pod/packageserver-f94df49dd-dzmmf node/ip-10-0-132-91.ec2.internal container=packageserver container exited with code 137 (Error):  10.129.0.1:57526]\nI0919 20:17:19.896641       1 wrap.go:47] GET /: (190.321µs) 200 [Go-http-client/2.0 10.129.0.1:57526]\nI0919 20:17:23.562252       1 wrap.go:47] GET /healthz: (1.714354ms) 200 [kube-probe/1.13+ 10.130.0.1:56764]\nI0919 20:17:24.721434       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (1.79413ms) 200 [openshift-apiserver/v1.13.4 (linux/amd64) kubernetes/6458880 10.130.0.1:53170]\nI0919 20:17:26.590163       1 wrap.go:47] GET /healthz: (160.921µs) 200 [kube-probe/1.13+ 10.130.0.1:56786]\nI0919 20:17:30.043636       1 wrap.go:47] GET /: (2.587583ms) 200 [Go-http-client/2.0 10.128.0.1:49888]\nI0919 20:17:30.043898       1 wrap.go:47] GET /: (6.022246ms) 200 [Go-http-client/2.0 10.128.0.1:49888]\nI0919 20:17:30.044069       1 wrap.go:47] GET /: (5.600827ms) 200 [Go-http-client/2.0 10.128.0.1:49888]\nI0919 20:17:30.044501       1 wrap.go:47] GET /: (4.853735ms) 200 [Go-http-client/2.0 10.128.0.1:49888]\nI0919 20:17:30.049315       1 wrap.go:47] GET /: (8.585121ms) 200 [Go-http-client/2.0 10.128.0.1:49888]\nI0919 20:17:30.540467       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (452.393µs) 200 [openshift-kube-apiserver/v1.13.4 (linux/amd64) kubernetes/6458880 10.128.0.1:49912]\nI0919 20:17:33.292003       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (2.836841ms) 200 [cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format 10.129.0.1:57602]\nI0919 20:17:33.564227       1 wrap.go:47] GET /healthz: (2.236672ms) 200 [kube-probe/1.13+ 10.130.0.1:56854]\nI0919 20:17:34.136981       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (2.933466ms) 200 [hyperkube/v1.13.4 (linux/amd64) kubernetes/6458880/controller-discovery 10.129.0.1:57602]\nI0919 20:17:34.805596       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (2.107162ms) 200 [openshift-apiserver/v1.13.4 (linux/amd64) kubernetes/6458880 10.130.0.1:53170]\nI0919 20:17:36.150803       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Sep 19 20:18:09.120 E kube-apiserver failed contacting the API: Get https://api.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=34141&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 54.227.40.184:6443: connect: connection refused
Sep 19 20:19:06.504 E ns/openshift-monitoring pod/node-exporter-mqqn6 node/ip-10-0-131-93.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:19:06.504 E ns/openshift-monitoring pod/node-exporter-mqqn6 node/ip-10-0-131-93.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:19:06.525 E ns/openshift-image-registry pod/node-ca-8m9n8 node/ip-10-0-131-93.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:19:06.725 E ns/openshift-multus pod/multus-v48pv node/ip-10-0-131-93.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:19:09.741 E ns/openshift-dns pod/dns-default-ffkvq node/ip-10-0-131-93.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T20:00:36.060Z [INFO] CoreDNS-1.3.1\n2020-09-19T20:00:36.061Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T20:00:36.061Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0919 20:09:16.561937       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 19307 (27376)\nW0919 20:09:16.593677       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 21733 (27376)\nW0919 20:16:48.616311       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 27376 (32689)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:19:09.741 E ns/openshift-dns pod/dns-default-ffkvq node/ip-10-0-131-93.ec2.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (139) - No such process\n
Sep 19 20:19:11.031 E ns/openshift-sdn pod/sdn-rtjzs node/ip-10-0-131-93.ec2.internal container=sdn container exited with code 255 (Error): default:http"\nI0919 20:17:24.979730   65014 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:metrics to [10.128.2.31:1936 10.129.2.38:1936]\nI0919 20:17:24.979743   65014 roundrobin.go:240] Delete endpoint 10.129.2.38:1936 for service "openshift-ingress/router-internal-default:metrics"\nI0919 20:17:25.159142   65014 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:17:25.159174   65014 proxier.go:346] userspace syncProxyRules took 64.855632ms\nE0919 20:17:25.183443   65014 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0919 20:17:25.183533   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nE0919 20:17:25.226268   65014 proxier.go:692] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: signal: terminated: \nI0919 20:17:25.286067   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:17:25.410123   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:17:25.434082   65014 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:17:25.434103   65014 proxier.go:346] userspace syncProxyRules took 207.800081ms\nI0919 20:17:25.483977   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:17:25.585129   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:17:25.683841   65014 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 20:19:11.400 E ns/openshift-sdn pod/ovs-98lq8 node/ip-10-0-131-93.ec2.internal container=openvswitch container exited with code 255 (Error): he last 0 s (4 deletes)\n2020-09-19T20:16:51.892Z|00180|bridge|INFO|bridge br0: deleted interface vethce003203 on port 20\n2020-09-19T20:16:51.936Z|00181|connmgr|INFO|br0<->unix#304: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:16:51.996Z|00182|connmgr|INFO|br0<->unix#307: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:16:52.021Z|00183|bridge|INFO|bridge br0: deleted interface vetha8e29ade on port 21\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-09-19T20:16:52.007Z|00021|jsonrpc|WARN|Dropped 6 log messages in last 876 seconds (most recently, 876 seconds ago) due to excessive rate\n2020-09-19T20:16:52.007Z|00022|jsonrpc|WARN|unix#235: receive error: Connection reset by peer\n2020-09-19T20:16:52.007Z|00023|reconnect|WARN|unix#235: connection dropped (Connection reset by peer)\n2020-09-19T20:16:52.012Z|00024|jsonrpc|WARN|unix#236: receive error: Connection reset by peer\n2020-09-19T20:16:52.012Z|00025|reconnect|WARN|unix#236: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-09-19T20:17:20.367Z|00184|connmgr|INFO|br0<->unix#313: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:17:20.403Z|00185|connmgr|INFO|br0<->unix#316: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:20.432Z|00186|bridge|INFO|bridge br0: deleted interface veth42412c47 on port 15\n2020-09-19T20:17:20.495Z|00187|connmgr|INFO|br0<->unix#319: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:20.518Z|00188|bridge|INFO|bridge br0: deleted interface veth30a6b5b5 on port 8\n2020-09-19T20:17:20.567Z|00189|connmgr|INFO|br0<->unix#322: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:20.600Z|00190|bridge|INFO|bridge br0: deleted interface veth2eaa56c3 on port 12\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-09-19T20:17:20.505Z|00026|jsonrpc|WARN|unix#247: receive error: Connection reset by peer\n2020-09-19T20:17:20.506Z|00027|reconnect|WARN|unix#247: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 19 20:19:12.696 E ns/openshift-machine-config-operator pod/machine-config-daemon-2rfxj node/ip-10-0-131-93.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Sep 19 20:19:13.798 E ns/openshift-cluster-node-tuning-operator pod/tuned-glf74 node/ip-10-0-131-93.ec2.internal container=tuned container exited with code 255 (Error):    tuned.plugins.base: instance net: assigning devices ens3\n2020-09-19 20:16:16,775 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 20:16:16,777 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-09-19 20:16:16,794 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0919 20:16:50.048661  100590 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-deployment-upgrade-zccvz/dp-57cc5d77b4-h6t27) labels changed node wide: true\nI0919 20:16:51.152773  100590 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:16:51.157968  100590 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:16:51.421128  100590 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:16:55.042872  100590 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-7ffc899476-4fhf8) labels changed node wide: true\nI0919 20:16:56.152676  100590 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:16:56.154569  100590 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:16:56.267539  100590 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:16:56.268122  100590 openshift-tuned.go:435] Pod (openshift-ingress/router-default-869968cd88-hmw57) labels changed node wide: true\nI0919 20:17:01.152728  100590 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 20:17:01.154584  100590 openshift-tuned.go:326] Getting recommended profile...\nI0919 20:17:01.268460  100590 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 20:17:24.482347  100590 openshift-tuned.go:435] Pod (openshift-console/downloads-795f496c64-c9fg8) labels changed node wide: true\nI0919 20:17:25.145244  100590 openshift-tuned.go:126] Received signal: terminated\n
Sep 19 20:19:14.906 E ns/openshift-multus pod/multus-v48pv node/ip-10-0-131-93.ec2.internal invariant violation: pod may not transition Running->Pending
Sep 19 20:19:51.573 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): -19 19:55:55.599115095 +0000 UTC))\nI0919 19:55:55.608022       1 controllermanager.go:169] Version: v1.13.4-138-g41dc99c\nI0919 19:55:55.610312       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600544042" (2020-09-19 19:34:15 +0000 UTC to 2022-09-19 19:34:16 +0000 UTC (now=2020-09-19 19:55:55.610285799 +0000 UTC))\nI0919 19:55:55.610362       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600544042" [] issuer="<self>" (2020-09-19 19:34:01 +0000 UTC to 2021-09-19 19:34:02 +0000 UTC (now=2020-09-19 19:55:55.610337032 +0000 UTC))\nI0919 19:55:55.610393       1 secure_serving.go:136] Serving securely on [::]:10257\nI0919 19:55:55.610590       1 serving.go:77] Starting DynamicLoader\nI0919 19:55:55.611976       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 19:57:08.762422       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:16.255749       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nI0919 20:18:08.812188       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:18:08.812332       1 secure_serving.go:180] Stopped listening on [::]:10257\nE0919 20:18:08.812368       1 controllermanager.go:282] leaderelection lost\n
Sep 19 20:19:51.573 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): efused\nE0919 19:57:09.279049       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:09.280586       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:10.280078       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:10.281435       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:16.225033       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0919 19:57:16.225195       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0919 20:06:49.238927       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22280 (26401)\nW0919 20:13:39.243979       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26631 (29775)\n
Sep 19 20:19:58.255 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=scheduler container exited with code 255 (Error): usterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found]\nW0919 20:09:16.733337       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 22191 (27378)\nW0919 20:09:16.797468       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 22179 (27381)\nW0919 20:09:16.804283       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22551 (27381)\nW0919 20:16:48.847849       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 22179 (32720)\nW0919 20:16:48.853382       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 22179 (32720)\nE0919 20:18:08.777509       1 server.go:259] lost master\n
Sep 19 20:20:01.455 E ns/openshift-monitoring pod/node-exporter-nfqg7 node/ip-10-0-132-91.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Sep 19 20:20:01.455 E ns/openshift-monitoring pod/node-exporter-nfqg7 node/ip-10-0-132-91.ec2.internal container=node-exporter container exited with code 255 (Error): 
Sep 19 20:20:01.853 E ns/openshift-multus pod/multus-qpjtz node/ip-10-0-132-91.ec2.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 20:20:03.652 E ns/openshift-apiserver pod/apiserver-p5d9p node/ip-10-0-132-91.ec2.internal container=openshift-apiserver container exited with code 255 (Error): from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.666941       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.667100       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:18:08.667146       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.667181       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.667339       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:18:08.667385       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.667402       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 20:18:08.747421       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0919 20:18:08.747737       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0919 20:18:08.747852       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0919 20:18:08.747862       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0919 20:18:08.748034       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:18:08.749840       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:18:08.749956       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:18:08.750053       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 20:18:08.750180       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Sep 19 20:20:04.251 E ns/openshift-controller-manager pod/controller-manager-rd998 node/ip-10-0-132-91.ec2.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 20:20:04.652 E ns/openshift-machine-config-operator pod/machine-config-daemon-bb4p5 node/ip-10-0-132-91.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Sep 19 20:20:05.051 E ns/openshift-machine-config-operator pod/machine-config-server-gvhpr node/ip-10-0-132-91.ec2.internal container=machine-config-server container exited with code 255 (Error): 
Sep 19 20:20:06.251 E ns/openshift-image-registry pod/node-ca-x5c4d node/ip-10-0-132-91.ec2.internal container=node-ca container exited with code 255 (Error): 
Sep 19 20:20:06.652 E ns/openshift-dns pod/dns-default-dq5bd node/ip-10-0-132-91.ec2.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T19:59:56.367Z [INFO] CoreDNS-1.3.1\n2020-09-19T19:59:56.368Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-09-19T19:59:56.368Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0919 20:16:48.618326       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 22179 (32689)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 20:20:06.652 E ns/openshift-dns pod/dns-default-dq5bd node/ip-10-0-132-91.ec2.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (149) - No such process\n
Sep 19 20:20:12.054 E ns/openshift-sdn pod/sdn-tb7f5 node/ip-10-0-132-91.ec2.internal container=sdn container exited with code 255 (Error): 9100 10.0.154.177:9100 10.0.155.3:9100]\nI0919 20:18:08.061109   65084 roundrobin.go:240] Delete endpoint 10.0.131.93:9100 for service "openshift-monitoring/node-exporter:https"\nI0919 20:18:08.093236   65084 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.132.91:9101 10.0.134.91:9101 10.0.140.69:9101 10.0.154.177:9101 10.0.155.3:9101]\nI0919 20:18:08.093275   65084 roundrobin.go:240] Delete endpoint 10.0.131.93:9101 for service "openshift-sdn/sdn:metrics"\nI0919 20:18:08.264565   65084 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:18:08.264589   65084 proxier.go:346] userspace syncProxyRules took 58.753ms\nI0919 20:18:08.461055   65084 proxier.go:367] userspace proxy: processing 0 service events\nI0919 20:18:08.461089   65084 proxier.go:346] userspace syncProxyRules took 65.254527ms\nE0919 20:18:08.698714   65084 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0919 20:18:08.698918   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0919 20:18:08.799295   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:18:08.899279   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:18:08.999305   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:18:09.099266   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 20:18:09.200542   65084 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 20:20:15.251 E ns/openshift-sdn pod/ovs-k5f6d node/ip-10-0-132-91.ec2.internal container=openvswitch container exited with code 255 (Error): ace vethf38ff86a on port 16\n2020-09-19T20:17:37.321Z|00228|connmgr|INFO|br0<->unix#392: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:17:37.378Z|00229|connmgr|INFO|br0<->unix#395: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:37.426Z|00230|bridge|INFO|bridge br0: deleted interface vethffa90eab on port 25\n2020-09-19T20:17:37.730Z|00231|connmgr|INFO|br0<->unix#398: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:37.778Z|00232|bridge|INFO|bridge br0: deleted interface veth8ad63dd9 on port 8\n2020-09-19T20:17:37.836Z|00233|connmgr|INFO|br0<->unix#401: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:37.874Z|00234|bridge|INFO|bridge br0: deleted interface vethc6c0629c on port 10\n2020-09-19T20:17:38.109Z|00235|connmgr|INFO|br0<->unix#404: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:17:38.144Z|00236|connmgr|INFO|br0<->unix#407: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:38.171Z|00237|bridge|INFO|bridge br0: deleted interface vetha239f236 on port 24\n2020-09-19T20:17:38.233Z|00238|connmgr|INFO|br0<->unix#410: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:17:38.246Z|00239|bridge|INFO|bridge br0: deleted interface veth3ad76cb2 on port 13\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-09-19T20:17:38.163Z|00025|jsonrpc|WARN|unix#321: receive error: Connection reset by peer\n2020-09-19T20:17:38.163Z|00026|reconnect|WARN|unix#321: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-09-19T20:18:06.458Z|00240|connmgr|INFO|br0<->unix#416: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T20:18:06.490Z|00241|connmgr|INFO|br0<->unix#419: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T20:18:06.512Z|00242|bridge|INFO|bridge br0: deleted interface vethec01feec on port 29\nTerminated\novs-vswitchd is not running.\n2020-09-19T20:18:08Z|00001|unixctl|WARN|failed to connect to /var/run/openvswitch/ovsdb-server.64788.ctl\novs-appctl: cannot connect to "/var/run/openvswitch/ovsdb-server.64788.ctl" (No such file or directory)\n
Sep 19 20:20:29.255 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 20:17:06.590342 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 20:17:06.591707 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 20:17:06.592860 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/09/19 20:17:06 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.132.91:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-4p53dwbl-b7012.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-09-19 20:17:07.613897 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 20:20:29.255 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=etcd-member container exited with code 255 (Error): stream MsgApp v2 reader)\n2020-09-19 20:18:09.167284 E | rafthttp: failed to read e8413b8492a6582e on stream MsgApp v2 (context canceled)\n2020-09-19 20:18:09.167328 I | rafthttp: peer e8413b8492a6582e became inactive (message send to peer failed)\n2020-09-19 20:18:09.167366 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream MsgApp v2 reader)\n2020-09-19 20:18:09.167471 W | rafthttp: lost the TCP streaming connection with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:18:09.167530 I | rafthttp: stopped streaming with peer e8413b8492a6582e (stream Message reader)\n2020-09-19 20:18:09.167570 I | rafthttp: stopped peer e8413b8492a6582e\n2020-09-19 20:18:09.167607 I | rafthttp: stopping peer 78f4297774a82174...\n2020-09-19 20:18:09.168046 I | rafthttp: closed the TCP streaming connection with peer 78f4297774a82174 (stream MsgApp v2 writer)\n2020-09-19 20:18:09.168113 I | rafthttp: stopped streaming with peer 78f4297774a82174 (writer)\n2020-09-19 20:18:09.168567 I | rafthttp: closed the TCP streaming connection with peer 78f4297774a82174 (stream Message writer)\n2020-09-19 20:18:09.168627 I | rafthttp: stopped streaming with peer 78f4297774a82174 (writer)\n2020-09-19 20:18:09.168760 I | rafthttp: stopped HTTP pipelining with peer 78f4297774a82174\n2020-09-19 20:18:09.168872 W | rafthttp: lost the TCP streaming connection with peer 78f4297774a82174 (stream MsgApp v2 reader)\n2020-09-19 20:18:09.168928 E | rafthttp: failed to read 78f4297774a82174 on stream MsgApp v2 (context canceled)\n2020-09-19 20:18:09.168966 I | rafthttp: peer 78f4297774a82174 became inactive (message send to peer failed)\n2020-09-19 20:18:09.169003 I | rafthttp: stopped streaming with peer 78f4297774a82174 (stream MsgApp v2 reader)\n2020-09-19 20:18:09.169095 W | rafthttp: lost the TCP streaming connection with peer 78f4297774a82174 (stream Message reader)\n2020-09-19 20:18:09.169173 I | rafthttp: stopped streaming with peer 78f4297774a82174 (stream Message reader)\n2020-09-19 20:18:09.169232 I | rafthttp: stopped peer 78f4297774a82174\n
Sep 19 20:20:29.659 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=scheduler container exited with code 255 (Error): usterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found]\nW0919 20:09:16.733337       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 22191 (27378)\nW0919 20:09:16.797468       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 22179 (27381)\nW0919 20:09:16.804283       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22551 (27381)\nW0919 20:16:48.847849       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 22179 (32720)\nW0919 20:16:48.853382       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 22179 (32720)\nE0919 20:18:08.777509       1 server.go:259] lost master\n
Sep 19 20:20:30.054 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0919 19:57:11.663961       1 observer_polling.go:106] Starting file observer\nI0919 19:57:11.668229       1 certsync_controller.go:269] Starting CertSyncer\nW0919 20:04:57.334456       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22645 (25877)\nW0919 20:12:32.339943       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26012 (29001)\n
Sep 19 20:20:30.054 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-apiserver-8 container exited with code 255 (Error): roller.go:176] Shutting down kubernetes service endpoint reconciler\nI0919 20:18:08.751075       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.751525       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.752568       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.752684       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.752967       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.753046       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.753319       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.762751       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.763000       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\nI0919 20:18:08.763077       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=1417, ErrCode=NO_ERROR, debug=""\n
Sep 19 20:20:30.452 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): -19 19:55:55.599115095 +0000 UTC))\nI0919 19:55:55.608022       1 controllermanager.go:169] Version: v1.13.4-138-g41dc99c\nI0919 19:55:55.610312       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600544042" (2020-09-19 19:34:15 +0000 UTC to 2022-09-19 19:34:16 +0000 UTC (now=2020-09-19 19:55:55.610285799 +0000 UTC))\nI0919 19:55:55.610362       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600544042" [] issuer="<self>" (2020-09-19 19:34:01 +0000 UTC to 2021-09-19 19:34:02 +0000 UTC (now=2020-09-19 19:55:55.610337032 +0000 UTC))\nI0919 19:55:55.610393       1 secure_serving.go:136] Serving securely on [::]:10257\nI0919 19:55:55.610590       1 serving.go:77] Starting DynamicLoader\nI0919 19:55:55.611976       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 19:57:08.762422       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:16.255749       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nI0919 20:18:08.812188       1 serving.go:88] Shutting down DynamicLoader\nI0919 20:18:08.812332       1 secure_serving.go:180] Stopped listening on [::]:10257\nE0919 20:18:08.812368       1 controllermanager.go:282] leaderelection lost\n
Sep 19 20:20:30.452 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): efused\nE0919 19:57:09.279049       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:09.280586       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:10.280078       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:10.281435       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0919 19:57:16.225033       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0919 19:57:16.225195       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0919 20:06:49.238927       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22280 (26401)\nW0919 20:13:39.243979       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26631 (29775)\n
Sep 19 20:20:33.652 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-91.ec2.internal node/ip-10-0-132-91.ec2.internal container=scheduler container exited with code 255 (Error): usterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found]\nW0919 20:09:16.733337       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 22191 (27378)\nW0919 20:09:16.797468       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 22179 (27381)\nW0919 20:09:16.804283       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22551 (27381)\nW0919 20:16:48.847849       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 22179 (32720)\nW0919 20:16:48.853382       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 22179 (32720)\nE0919 20:18:08.777509       1 server.go:259] lost master\n