Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-04-14 09:35
Elapsed: 1h15m
Work namespace: ci-op-dcmxx0nc
Refs: release-4.1:514189df, 826:8cbe0949
Pod: 298ad818-7e33-11ea-8cb9-0a58ac10724d
Repo: openshift/cluster-kube-apiserver-operator
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (40m20s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
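
To rerun only this test locally, the same focused invocation can be issued from a checkout of the repository that provides hack/e2e.go, pointed at a running cluster. A minimal sketch follows; the KUBECONFIG path is an assumption for illustration, not taken from this job:

# assumed: KUBECONFIG points at an admin kubeconfig for the cluster under test
export KUBECONFIG=$HOME/.kube/config
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'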
236 error-level events were detected during this test run:

Apr 14 10:09:12.834 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-77bb6c8dcc-kfh7r node/ip-10-0-146-77.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:10:25.995 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6cfc4c4b44-tgfrw node/ip-10-0-146-77.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ctor.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 4943 (13810)\nW0414 10:04:12.996540       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12560 (13810)\nW0414 10:04:13.058706       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 4888 (13943)\nW0414 10:04:13.058913       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12537 (13810)\nW0414 10:04:13.177730       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12280 (13810)\nW0414 10:04:13.177928       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 12809 (13810)\nW0414 10:04:13.178179       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14231 (14245)\nW0414 10:04:13.227381       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 8279 (14245)\nW0414 10:04:13.227563       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 4888 (13949)\nW0414 10:04:13.227677       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 8193 (13923)\nW0414 10:04:13.227737       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 8902 (13810)\nI0414 10:10:25.352698       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:10:25.352834       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:10:37.027 E ns/openshift-machine-api pod/machine-api-operator-78bf987f7-mfxrc node/ip-10-0-146-77.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 14 10:12:46.293 E ns/openshift-machine-api pod/machine-api-controllers-79f76677b4-4vlzg node/ip-10-0-141-120.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 14 10:12:46.293 E ns/openshift-machine-api pod/machine-api-controllers-79f76677b4-4vlzg node/ip-10-0-141-120.us-east-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 14 10:12:49.940 E ns/openshift-apiserver pod/apiserver-5m2n7 node/ip-10-0-135-38.us-east-2.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:02.475 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator monitoring is still updating\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (107 of 350)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (94 of 350)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (162 of 350)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (185 of 350)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (199 of 350)\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (133 of 350)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (122 of 350)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (282 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (290 of 350)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (209 of 350)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (217 of 350)
Apr 14 10:13:16.161 E ns/openshift-monitoring pod/prometheus-operator-697786dc6d-vv78d node/ip-10-0-132-253.us-east-2.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:19.519 E ns/openshift-image-registry pod/node-ca-6p9wn node/ip-10-0-141-120.us-east-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:27.491 E ns/openshift-ingress pod/router-default-76b7d6488-n72bd node/ip-10-0-149-215.us-east-2.compute.internal container=router container exited with code 2 (Error): 10:12:03.016036       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:08.017727       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:24.605515       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:29.566528       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:34.569938       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:39.566713       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:48.144886       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:53.125426       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:12:58.126756       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:13:03.125444       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:13:08.124580       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:13:13.150970       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0414 10:13:18.162014       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 14 10:13:28.700 E ns/openshift-monitoring pod/node-exporter-nw5dm node/ip-10-0-141-120.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:28.700 E ns/openshift-monitoring pod/node-exporter-nw5dm node/ip-10-0-141-120.us-east-2.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:35.096 E ns/openshift-console pod/downloads-5c64fbc479-q255k node/ip-10-0-128-162.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 14 10:13:38.840 E ns/openshift-monitoring pod/grafana-5c777d9557-kcc6c node/ip-10-0-132-253.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Apr 14 10:13:45.409 E ns/openshift-console pod/downloads-5c64fbc479-5r7qc node/ip-10-0-149-215.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 14 10:13:50.330 E ns/openshift-ingress pod/router-default-76b7d6488-v9hpr node/ip-10-0-132-253.us-east-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:13:52.130 E ns/openshift-monitoring pod/node-exporter-5bfmc node/ip-10-0-128-162.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 14 10:13:54.319 E ns/openshift-monitoring pod/prometheus-adapter-5d687c7584-tz498 node/ip-10-0-132-253.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 14 10:13:59.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-215.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 14 10:14:07.382 E ns/openshift-controller-manager pod/controller-manager-nn9b4 node/ip-10-0-135-38.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 14 10:14:07.449 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7bfbb69cb9-5jgwg node/ip-10-0-135-38.us-east-2.compute.internal container=operator container exited with code 2 (Error): 14 10:12:27.537759       1 request.go:530] Throttling request took 83.333875ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0\nI0414 10:12:27.740457       1 request.go:530] Throttling request took 285.745245ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0\nI0414 10:12:27.937674       1 request.go:530] Throttling request took 482.749255ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0414 10:12:28.137682       1 request.go:530] Throttling request took 675.269642ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver/pods?limit=500&resourceVersion=0\nI0414 10:12:28.337673       1 request.go:530] Throttling request took 872.902615ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config/secrets?limit=500&resourceVersion=0\nI0414 10:12:28.545427       1 request.go:530] Throttling request took 1.078549154s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver/configmaps?limit=500&resourceVersion=0\nI0414 10:12:28.747644       1 request.go:530] Throttling request took 802.059014ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0414 10:12:37.228495       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:12:47.242801       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:12:57.260765       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:13:07.628555       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 14 10:14:15.984 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-76f7r2dqz node/ip-10-0-135-38.us-east-2.compute.internal container=operator container exited with code 2 (Error): ctory.go:132: watch of *v1.Service ended with: too old resource version: 13810 (17114)\nW0414 10:12:26.311215       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 13810 (17114)\nW0414 10:12:26.342482       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14382 (17726)\nW0414 10:12:26.402221       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 14379 (17179)\nW0414 10:12:26.406820       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 14777 (17114)\nW0414 10:12:26.406942       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 13947 (17123)\nI0414 10:12:27.311311       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0414 10:12:27.311324       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0414 10:12:27.342755       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0414 10:12:27.402557       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0414 10:12:27.407140       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0414 10:12:27.407153       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\nI0414 10:12:42.110195       1 wrap.go:47] GET /metrics: (8.89128ms) 200 [Prometheus/2.7.2 10.131.0.8:59152]\nI0414 10:12:42.111161       1 wrap.go:47] GET /metrics: (6.995102ms) 200 [Prometheus/2.7.2 10.128.2.9:53338]\n
Apr 14 10:14:17.009 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7bfbbcc9-xkjtn node/ip-10-0-135-38.us-east-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): /pkg/cache/internal/informers_map.go:196: watch of *v1.ClusterRole ended with: too old resource version: 13812 (17118)\nW0414 10:12:26.420603       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ClusterRoleBinding ended with: too old resource version: 13812 (17118)\nW0414 10:12:26.454696       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ServiceAccount ended with: too old resource version: 13810 (17114)\nW0414 10:12:26.461668       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 17169 (17726)\nI0414 10:12:27.314136       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0414 10:12:27.314168       1 status.go:26] syncOperatorStatus()\nI0414 10:12:27.322339       1 tuned_controller.go:187] syncServiceAccount()\nI0414 10:12:27.322497       1 tuned_controller.go:215] syncClusterRole()\nI0414 10:12:27.448272       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0414 10:12:27.551988       1 tuned_controller.go:277] syncClusterConfigMap()\nI0414 10:12:27.557476       1 tuned_controller.go:277] syncClusterConfigMap()\nI0414 10:12:27.567464       1 tuned_controller.go:315] syncDaemonSet()\nI0414 10:12:27.584351       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0414 10:12:27.584372       1 status.go:26] syncOperatorStatus()\nI0414 10:12:27.598904       1 tuned_controller.go:187] syncServiceAccount()\nI0414 10:12:27.599060       1 tuned_controller.go:215] syncClusterRole()\nI0414 10:12:27.688617       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0414 10:12:27.855681       1 tuned_controller.go:277] syncClusterConfigMap()\nI0414 10:12:27.866349       1 tuned_controller.go:277] syncClusterConfigMap()\nI0414 10:12:27.876991       1 tuned_controller.go:315] syncDaemonSet()\nF0414 10:13:02.431434       1 main.go:85] <nil>\n
Apr 14 10:14:18.573 E ns/openshift-authentication-operator pod/authentication-operator-769576f595-dw8vb node/ip-10-0-135-38.us-east-2.compute.internal container=operator container exited with code 255 (Error): lastTransitionTime":"2020-04-14T10:04:17Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-14T09:56:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0414 10:13:11.676845       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"8d8764e2-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: deployment's observed generation did not reach the expected generation")\nI0414 10:13:14.889788       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-14T10:00:02Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-14T10:13:11Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-04-14T10:04:17Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-14T09:56:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0414 10:13:14.943054       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"8d8764e2-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0414 10:13:16.377417       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:13:16.377477       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:14:18.728 E ns/openshift-service-ca-operator pod/service-ca-operator-66b49b4ffd-pdq8h node/ip-10-0-146-77.us-east-2.compute.internal container=operator container exited with code 2 (Error): 
Apr 14 10:14:22.377 E ns/openshift-monitoring pod/node-exporter-rwsvf node/ip-10-0-132-253.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 14 10:14:29.691 E ns/openshift-authentication pod/oauth-openshift-5c8d9b898b-h6ppb node/ip-10-0-141-120.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:14:32.169 E ns/openshift-monitoring pod/node-exporter-v9xg8 node/ip-10-0-135-38.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:14:32.169 E ns/openshift-monitoring pod/node-exporter-v9xg8 node/ip-10-0-135-38.us-east-2.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:14:37.446 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-253.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 14 10:14:37.464 E ns/openshift-cluster-node-tuning-operator pod/tuned-85wkk node/ip-10-0-132-253.us-east-2.compute.internal container=tuned container exited with code 143 (Error): t-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:39.858863    3207 openshift-tuned.go:435] Pod (openshift-monitoring/grafana-5c777d9557-kcc6c) labels changed node wide: true\nI0414 10:13:44.331167    3207 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:44.332643    3207 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:44.442759    3207 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:45.455162    3207 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-6bc6cdf549-w4mjp) labels changed node wide: true\nI0414 10:13:49.331193    3207 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:49.332532    3207 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:49.444538    3207 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:55.898457    3207 openshift-tuned.go:435] Pod (openshift-ingress/router-default-76b7d6488-v9hpr) labels changed node wide: true\nI0414 10:13:59.331195    3207 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:59.332677    3207 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:59.445526    3207 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:14:05.462613    3207 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-5d687c7584-tz498) labels changed node wide: true\nE0414 10:14:06.267704    3207 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""\nE0414 10:14:06.272842    3207 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:14:06.272862    3207 openshift-tuned.go:722] Increasing resyncPeriod to 204\n
Apr 14 10:14:38.352 E ns/openshift-marketplace pod/certified-operators-5864d94bb-6cl48 node/ip-10-0-149-215.us-east-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:14:48.882 E ns/openshift-cluster-node-tuning-operator pod/tuned-5xjc8 node/ip-10-0-141-120.us-east-2.compute.internal container=tuned container exited with code 143 (Error):  Getting recommended profile...\nI0414 10:13:30.732491   15182 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:13:42.426410   15182 openshift-tuned.go:435] Pod (openshift-image-registry/node-ca-x6n88) labels changed node wide: true\nI0414 10:13:45.589398   15182 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:45.590996   15182 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:45.749318   15182 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:13:47.468684   15182 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-nw5dm) labels changed node wide: true\nI0414 10:13:50.589392   15182 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:50.591532   15182 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:50.726467   15182 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:13:57.743561   15182 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-zk2w5) labels changed node wide: true\nI0414 10:14:00.592699   15182 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:14:00.596044   15182 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:14:00.758384   15182 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0414 10:14:06.264479   15182 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""\nE0414 10:14:06.271150   15182 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:14:06.271259   15182 openshift-tuned.go:722] Increasing resyncPeriod to 118\n
Apr 14 10:14:53.357 E ns/openshift-cluster-node-tuning-operator pod/tuned-cxfv9 node/ip-10-0-128-162.us-east-2.compute.internal container=tuned container exited with code 143 (Error): :523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:41.421897    3578 openshift-tuned.go:435] Pod (openshift-console/downloads-5c64fbc479-q255k) labels changed node wide: true\nI0414 10:13:45.325346    3578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:45.327116    3578 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:45.459505    3578 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:14:01.421380    3578 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-5bfmc) labels changed node wide: true\nI0414 10:14:05.325334    3578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:14:05.333704    3578 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:14:05.473682    3578 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:14:11.443937    3578 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0414 10:14:15.325362    3578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:14:15.327470    3578 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:14:15.452629    3578 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:14:38.503924    3578 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-5f5f86cb8f-w7jd7) labels changed node wide: true\nI0414 10:14:40.325366    3578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:14:40.326743    3578 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:14:40.449834    3578 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 14 10:15:02.510 E ns/openshift-marketplace pod/redhat-operators-6f588bdd6c-cxmzb node/ip-10-0-132-253.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 14 10:15:03.930 E ns/openshift-controller-manager pod/controller-manager-5p2qc node/ip-10-0-141-120.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 14 10:15:04.675 E ns/openshift-cluster-node-tuning-operator pod/tuned-r8tps node/ip-10-0-149-215.us-east-2.compute.internal container=tuned container exited with code 143 (Error):  profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:42.416237    3524 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-6f75d557cc-kvcwf) labels changed node wide: true\nI0414 10:13:47.327012    3524 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:47.328384    3524 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:47.438595    3524 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:48.821955    3524 openshift-tuned.go:435] Pod (openshift-ingress/router-default-794c47cb65-mb6sd) labels changed node wide: true\nI0414 10:13:52.327055    3524 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:52.328400    3524 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:52.447427    3524 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:13:53.331523    3524 openshift-tuned.go:435] Pod (openshift-console/downloads-5c64fbc479-5r7qc) labels changed node wide: true\nI0414 10:13:57.327016    3524 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:57.328395    3524 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:57.441548    3524 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:14:03.333590    3524 openshift-tuned.go:435] Pod (openshift-image-registry/node-ca-bkf8f) labels changed node wide: true\nE0414 10:14:06.247007    3524 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""\nE0414 10:14:06.249017    3524 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:14:06.249035    3524 openshift-tuned.go:722] Increasing resyncPeriod to 220\n
Apr 14 10:15:15.401 E ns/openshift-marketplace pod/certified-operators-74b657f79d-qtncq node/ip-10-0-128-162.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 14 10:15:16.111 E ns/openshift-cluster-node-tuning-operator pod/tuned-j6fsp node/ip-10-0-135-38.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 414 10:12:52.753359   15484 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:12:54.509127   15484 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-7869b9b7f9-vgrlj) labels changed node wide: true\nI0414 10:12:57.614771   15484 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:12:57.616539   15484 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:12:57.782782   15484 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:12:57.783560   15484 openshift-tuned.go:435] Pod (openshift-apiserver/apiserver-5m2n7) labels changed node wide: true\nI0414 10:13:02.614761   15484 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:02.616799   15484 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:02.797433   15484 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:13:11.423189   15484 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-5c8d9b898b-xslj9) labels changed node wide: true\nI0414 10:13:12.614839   15484 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:13:12.616410   15484 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:13:12.749016   15484 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0414 10:14:06.270257   15484 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=15, ErrCode=NO_ERROR, debug=""\nE0414 10:14:06.273524   15484 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:14:06.273610   15484 openshift-tuned.go:722] Increasing resyncPeriod to 130\n
Apr 14 10:15:17.561 E ns/openshift-marketplace pod/community-operators-5f8d8ff5c8-f98tb node/ip-10-0-132-253.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 14 10:15:22.084 E ns/openshift-service-ca pod/service-serving-cert-signer-647fcd9f9-vmqfz node/ip-10-0-141-120.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 14 10:15:22.128 E ns/openshift-service-ca pod/apiservice-cabundle-injector-54fb96f84c-v5rkz node/ip-10-0-135-38.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 14 10:15:47.122 E ns/openshift-controller-manager pod/controller-manager-j456b node/ip-10-0-146-77.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 14 10:18:59.648 E ns/openshift-operator-lifecycle-manager pod/packageserver-7fbf856684-j5llx node/ip-10-0-141-120.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:19:00.789 E ns/openshift-operator-lifecycle-manager pod/packageserver-79d8fdfcd7-t4bnq node/ip-10-0-141-120.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:19:17.475 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator openshift-samples is still updating\n* Could not update deployment "openshift-operator-lifecycle-manager/packageserver" (266 of 350)
Apr 14 10:20:03.005 E ns/openshift-operator-lifecycle-manager pod/packageserver-66c64945c-cgxns node/ip-10-0-135-38.us-east-2.compute.internal container=packageserver container exited with code 137 (Error):  certificate\nI0414 10:19:27.630786       1 log.go:172] http: TLS handshake error from 10.128.0.1:36088: remote error: tls: bad certificate\nI0414 10:19:28.031275       1 log.go:172] http: TLS handshake error from 10.128.0.1:36092: remote error: tls: bad certificate\nI0414 10:19:28.711706       1 wrap.go:47] GET /healthz: (135.095µs) 200 [kube-probe/1.13+ 10.130.0.1:51738]\nI0414 10:19:28.830591       1 log.go:172] http: TLS handshake error from 10.128.0.1:36116: remote error: tls: bad certificate\nI0414 10:19:29.230568       1 log.go:172] http: TLS handshake error from 10.128.0.1:36118: remote error: tls: bad certificate\nI0414 10:19:30.030666       1 log.go:172] http: TLS handshake error from 10.128.0.1:36128: remote error: tls: bad certificate\nI0414 10:19:30.765040       1 log.go:172] http: TLS handshake error from 10.128.0.1:36132: remote error: tls: bad certificate\nI0414 10:19:31.231049       1 log.go:172] http: TLS handshake error from 10.128.0.1:36136: remote error: tls: bad certificate\nI0414 10:19:31.844450       1 wrap.go:47] GET /: (16.758726ms) 200 [Go-http-client/2.0 10.129.0.1:37198]\nI0414 10:19:31.844494       1 wrap.go:47] GET /: (16.780938ms) 200 [Go-http-client/2.0 10.129.0.1:37198]\nI0414 10:19:31.844523       1 wrap.go:47] GET /: (16.916307ms) 200 [Go-http-client/2.0 10.129.0.1:37198]\nI0414 10:19:32.027378       1 wrap.go:47] GET /: (248.766µs) 200 [Go-http-client/2.0 10.130.0.1:48912]\nI0414 10:19:32.027795       1 wrap.go:47] GET /: (722.93µs) 200 [Go-http-client/2.0 10.130.0.1:48912]\nI0414 10:19:32.028205       1 wrap.go:47] GET /: (179.589µs) 200 [Go-http-client/2.0 10.129.0.1:37198]\nI0414 10:19:32.028461       1 wrap.go:47] GET /: (1.282658ms) 200 [Go-http-client/2.0 10.130.0.1:48912]\nI0414 10:19:32.028741       1 wrap.go:47] GET /: (190.585µs) 200 [Go-http-client/2.0 10.129.0.1:37198]\nI0414 10:19:32.043655       1 log.go:172] http: TLS handshake error from 10.128.0.1:36146: remote error: tls: bad certificate\nI0414 10:19:32.100676       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 14 10:24:20.766 E ns/openshift-sdn pod/sdn-controller-8q7cw node/ip-10-0-135-38.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0414 09:52:08.138755       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 14 10:24:44.264 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 14 10:24:48.682 E ns/openshift-machine-api pod/cluster-autoscaler-operator-66b7df7c7d-qp4lz node/ip-10-0-146-77.us-east-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 14 10:24:54.719 E ns/openshift-sdn pod/sdn-controller-wrk46 node/ip-10-0-146-77.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error):    1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 8900 (14354)\nW0414 10:04:09.842936       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 5924 (14354)\nI0414 10:04:34.983656       1 vnids.go:115] Allocated netid 7609189 for namespace "e2e-tests-sig-apps-deployment-upgrade-5kdk6"\nI0414 10:04:34.991289       1 vnids.go:115] Allocated netid 6510815 for namespace "e2e-tests-sig-apps-daemonset-upgrade-h8vcx"\nI0414 10:04:35.001452       1 vnids.go:115] Allocated netid 2501756 for namespace "e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-ql5g6"\nI0414 10:04:35.013025       1 vnids.go:115] Allocated netid 6009366 for namespace "e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-7qcsk"\nI0414 10:04:35.019923       1 vnids.go:115] Allocated netid 15266073 for namespace "e2e-tests-sig-apps-job-upgrade-qrfj5"\nI0414 10:04:35.039691       1 vnids.go:115] Allocated netid 13885842 for namespace "e2e-tests-sig-apps-replicaset-upgrade-9lzj8"\nI0414 10:04:35.071551       1 vnids.go:115] Allocated netid 15830998 for namespace "e2e-tests-service-upgrade-8hthq"\nW0414 10:12:26.331803       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 14354 (17965)\nW0414 10:12:26.331966       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 14565 (17965)\nW0414 10:12:26.463680       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 14777 (17114)\nE0414 10:19:08.742775       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
Apr 14 10:25:00.739 E ns/openshift-multus pod/multus-k9cfd node/ip-10-0-146-77.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 14 10:25:14.981 E ns/openshift-sdn pod/sdn-q82tq node/ip-10-0-149-215.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 128.162:9101 10.0.135.38:9101 10.0.141.120:9101 10.0.146.77:9101 10.0.149.215:9101]\nI0414 10:25:13.513109   55234 roundrobin.go:240] Delete endpoint 10.0.132.253:9101 for service "openshift-sdn/sdn:metrics"\nI0414 10:25:13.557986   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:13.568082   55234 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:25:13.568102   55234 proxier.go:346] userspace syncProxyRules took 61.649155ms\nI0414 10:25:13.657993   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:13.749012   55234 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:25:13.749049   55234 proxier.go:346] userspace syncProxyRules took 63.424609ms\nI0414 10:25:13.758025   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:13.858019   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:13.957996   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:14.058353   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:14.158026   55234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:14.263594   55234 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0414 10:25:14.263646   55234 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 14 10:25:28.713 E ns/openshift-sdn pod/sdn-controller-7wbqq node/ip-10-0-141-120.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0414 09:52:11.083258       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 14 10:25:40.032 E ns/openshift-multus pod/multus-8t2c9 node/ip-10-0-135-38.us-east-2.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:25:45.757 E ns/openshift-sdn pod/ovs-j6hzh node/ip-10-0-141-120.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): -14T10:23:42.204Z|00312|connmgr|INFO|br0<->unix#795: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:23:42.241Z|00313|bridge|INFO|bridge br0: deleted interface vethcaa17a0d on port 3\n2020-04-14T10:23:53.876Z|00314|bridge|INFO|bridge br0: added interface veth92fddf06 on port 52\n2020-04-14T10:23:53.907Z|00315|connmgr|INFO|br0<->unix#801: 5 flow_mods in the last 0 s (5 adds)\n2020-04-14T10:23:53.947Z|00316|connmgr|INFO|br0<->unix#804: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:24:22.335Z|00317|connmgr|INFO|br0<->unix#813: 2 flow_mods in the last 0 s (2 adds)\n2020-04-14T10:24:22.523Z|00318|connmgr|INFO|br0<->unix#819: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:24:22.561Z|00319|connmgr|INFO|br0<->unix#822: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:24:22.591Z|00320|connmgr|INFO|br0<->unix#825: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:24:22.618Z|00321|connmgr|INFO|br0<->unix#828: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:24:22.651Z|00322|connmgr|INFO|br0<->unix#831: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:24:22.910Z|00323|connmgr|INFO|br0<->unix#834: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:22.936Z|00324|connmgr|INFO|br0<->unix#837: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:22.959Z|00325|connmgr|INFO|br0<->unix#840: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:22.983Z|00326|connmgr|INFO|br0<->unix#843: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:23.015Z|00327|connmgr|INFO|br0<->unix#846: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:23.045Z|00328|connmgr|INFO|br0<->unix#849: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:23.076Z|00329|connmgr|INFO|br0<->unix#852: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:23.105Z|00330|connmgr|INFO|br0<->unix#855: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:23.133Z|00331|connmgr|INFO|br0<->unix#858: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:23.159Z|00332|connmgr|INFO|br0<->unix#861: 1 flow_mods in the last 0 s (1 adds)\n
Apr 14 10:25:56.791 E ns/openshift-sdn pod/sdn-9kw2k node/ip-10-0-141-120.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:54.848684   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:54.948680   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.048677   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.148653   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.248662   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.348647   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.448662   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.548687   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.648668   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.748654   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:25:55.748732   74251 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0414 10:25:55.748746   74251 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 14 10:26:23.494 E ns/openshift-multus pod/multus-jkjcm node/ip-10-0-128-162.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 14 10:26:29.844 E ns/openshift-sdn pod/ovs-97k8z node/ip-10-0-132-253.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): br0: deleted interface vethe1e5fa60 on port 3\n2020-04-14T10:24:08.011Z|00160|bridge|INFO|bridge br0: added interface veth331b2998 on port 24\n2020-04-14T10:24:08.044Z|00161|connmgr|INFO|br0<->unix#463: 5 flow_mods in the last 0 s (5 adds)\n2020-04-14T10:24:08.097Z|00162|connmgr|INFO|br0<->unix#466: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:25:17.946Z|00163|connmgr|INFO|br0<->unix#482: 2 flow_mods in the last 0 s (2 adds)\n2020-04-14T10:25:18.031Z|00164|connmgr|INFO|br0<->unix#488: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:25:18.054Z|00165|connmgr|INFO|br0<->unix#491: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:25:18.076Z|00166|connmgr|INFO|br0<->unix#494: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:25:18.099Z|00167|connmgr|INFO|br0<->unix#497: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:25:18.347Z|00168|connmgr|INFO|br0<->unix#500: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:18.369Z|00169|connmgr|INFO|br0<->unix#503: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:18.390Z|00170|connmgr|INFO|br0<->unix#506: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:18.414Z|00171|connmgr|INFO|br0<->unix#509: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:18.437Z|00172|connmgr|INFO|br0<->unix#512: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:18.466Z|00173|connmgr|INFO|br0<->unix#515: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:18.491Z|00174|connmgr|INFO|br0<->unix#518: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:18.517Z|00175|connmgr|INFO|br0<->unix#521: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:18.545Z|00176|connmgr|INFO|br0<->unix#524: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:18.573Z|00177|connmgr|INFO|br0<->unix#527: 1 flow_mods in the last 0 s (1 adds)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:25:47.980Z|00025|jsonrpc|WARN|unix#316: receive error: Connection reset by peer\n2020-04-14T10:25:47.980Z|00026|reconnect|WARN|unix#316: connection dropped (Connection reset by peer)\n
Apr 14 10:26:36.859 E ns/openshift-sdn pod/sdn-f2cpw node/ip-10-0-132-253.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 75 proxier.go:346] userspace syncProxyRules took 55.135386ms\nI0414 10:26:35.331343   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.431378   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.531353   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.631355   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.731342   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.831417   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:35.931376   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:36.031384   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:36.131366   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:36.231365   63675 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:26:36.335647   63675 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0414 10:26:36.335716   63675 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 14 10:27:07.257 E ns/openshift-sdn pod/ovs-dj59m node/ip-10-0-135-38.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 397|connmgr|INFO|br0<->unix#996: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.740Z|00398|connmgr|INFO|br0<->unix#999: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.779Z|00399|connmgr|INFO|br0<->unix#1002: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.813Z|00400|connmgr|INFO|br0<->unix#1005: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.848Z|00401|connmgr|INFO|br0<->unix#1008: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.878Z|00402|connmgr|INFO|br0<->unix#1011: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:23:59.933Z|00403|connmgr|INFO|br0<->unix#1014: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:23:59.956Z|00404|connmgr|INFO|br0<->unix#1017: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:23:59.979Z|00405|connmgr|INFO|br0<->unix#1020: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:00.005Z|00406|connmgr|INFO|br0<->unix#1023: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:00.035Z|00407|connmgr|INFO|br0<->unix#1026: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:00.069Z|00408|connmgr|INFO|br0<->unix#1029: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:00.097Z|00409|connmgr|INFO|br0<->unix#1032: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:00.122Z|00410|connmgr|INFO|br0<->unix#1035: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:00.151Z|00411|connmgr|INFO|br0<->unix#1038: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:24:00.178Z|00412|connmgr|INFO|br0<->unix#1041: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:24:12.929Z|00413|connmgr|INFO|br0<->unix#1044: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:24:12.963Z|00414|bridge|INFO|bridge br0: deleted interface vetha28e1f14 on port 3\n2020-04-14T10:24:31.908Z|00415|bridge|INFO|bridge br0: added interface vethed5cb479 on port 64\n2020-04-14T10:24:31.940Z|00416|connmgr|INFO|br0<->unix#1050: 5 flow_mods in the last 0 s (5 adds)\n2020-04-14T10:24:31.980Z|00417|connmgr|INFO|br0<->unix#1053: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 14 10:27:08.218 E ns/openshift-multus pod/multus-btcz6 node/ip-10-0-149-215.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 14 10:27:10.340 E ns/openshift-sdn pod/sdn-xqqch node/ip-10-0-135-38.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.242469   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.342406   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.442494   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.542437   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.642512   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.742367   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.842432   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:08.942445   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:09.042416   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:09.142511   69200 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:09.254214   69200 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0414 10:27:09.263720   69200 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 14 10:27:39.619 E ns/openshift-sdn pod/ovs-5mkgt node/ip-10-0-128-162.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): unix#342: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:24:57.034Z|00131|connmgr|INFO|br0<->unix#410: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:24:57.063Z|00132|connmgr|INFO|br0<->unix#413: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:24:57.089Z|00133|bridge|INFO|bridge br0: deleted interface vethfb3d131f on port 3\n2020-04-14T10:25:09.015Z|00134|bridge|INFO|bridge br0: added interface vethf37ed053 on port 20\n2020-04-14T10:25:09.042Z|00135|connmgr|INFO|br0<->unix#416: 5 flow_mods in the last 0 s (5 adds)\n2020-04-14T10:25:09.077Z|00136|connmgr|INFO|br0<->unix#419: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:25:34.142Z|00137|connmgr|INFO|br0<->unix#431: 2 flow_mods in the last 0 s (2 adds)\n2020-04-14T10:25:34.244Z|00138|connmgr|INFO|br0<->unix#437: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-14T10:25:34.569Z|00139|connmgr|INFO|br0<->unix#440: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:34.594Z|00140|connmgr|INFO|br0<->unix#443: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:34.617Z|00141|connmgr|INFO|br0<->unix#446: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:34.645Z|00142|connmgr|INFO|br0<->unix#449: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:34.673Z|00143|connmgr|INFO|br0<->unix#452: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:34.704Z|00144|connmgr|INFO|br0<->unix#455: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:34.731Z|00145|connmgr|INFO|br0<->unix#458: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:34.755Z|00146|connmgr|INFO|br0<->unix#461: 1 flow_mods in the last 0 s (1 adds)\n2020-04-14T10:25:34.784Z|00147|connmgr|INFO|br0<->unix#464: 3 flow_mods in the last 0 s (3 adds)\n2020-04-14T10:25:34.808Z|00148|connmgr|INFO|br0<->unix#467: 1 flow_mods in the last 0 s (1 adds)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:26:04.178Z|00021|jsonrpc|WARN|unix#279: receive error: Connection reset by peer\n2020-04-14T10:26:04.178Z|00022|reconnect|WARN|unix#279: connection dropped (Connection reset by peer)\n
Apr 14 10:27:49.389 E ns/openshift-service-ca pod/service-serving-cert-signer-774bbff6b6-pwvf5 node/ip-10-0-135-38.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 14 10:27:50.640 E ns/openshift-sdn pod/sdn-q4228 node/ip-10-0-128-162.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:48.709099   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:48.809168   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:48.909162   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.009201   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.109187   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.209070   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.309160   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.409075   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.509050   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.609159   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0414 10:27:49.609249   56019 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0414 10:27:49.609272   56019 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 14 10:27:51.002 E ns/openshift-multus pod/multus-lth82 node/ip-10-0-132-253.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 14 10:27:52.399 E ns/openshift-service-ca pod/apiservice-cabundle-injector-685cd447d4-kd75t node/ip-10-0-135-38.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 14 10:28:12.270 E ns/openshift-machine-config-operator pod/machine-config-operator-85c4d5dc44-n4n58 node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 14 10:30:51.914 E ns/openshift-machine-config-operator pod/machine-config-controller-78fb8bccdf-hc6c4 node/ip-10-0-135-38.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 14 10:32:48.004 E ns/openshift-machine-config-operator pod/machine-config-server-mxvb7 node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 14 10:32:57.995 E ns/openshift-machine-config-operator pod/machine-config-server-7s8t4 node/ip-10-0-141-120.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 14 10:33:05.825 E ns/openshift-service-ca pod/service-serving-cert-signer-774bbff6b6-pwvf5 node/ip-10-0-135-38.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 14 10:33:06.429 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-774c779dc4-xg46q node/ip-10-0-135-38.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): eGate ended with: too old resource version: 14379 (17171)\nW0414 10:14:06.378751       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 16983 (17114)\nW0414 10:14:06.451944       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 14786 (17118)\nW0414 10:14:06.452100       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16816 (20913)\nW0414 10:14:06.457551       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 16458 (17114)\nW0414 10:22:23.887773       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29492)\nW0414 10:23:09.407954       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29690)\nW0414 10:24:03.366498       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (30151)\nW0414 10:28:19.413898       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29908 (32017)\nW0414 10:29:27.370985       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30400 (32456)\nW0414 10:30:46.336477       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 18379 (18837)\nW0414 10:31:21.892079       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29619 (33104)\nI0414 10:32:58.943035       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:32:58.943098       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:33:07.829 E ns/openshift-service-ca pod/configmap-cabundle-injector-566f47b48d-9gg7z node/ip-10-0-135-38.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 14 10:33:09.628 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5466659df6-jcnqj node/ip-10-0-135-38.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error):   1 reflector.go:270] k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:117: watch of *v1.APIService ended with: too old resource version: 14428 (17134)\nW0414 10:14:06.507487       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17759 (20916)\nW0414 10:19:39.460558       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (28658)\nW0414 10:23:14.607029       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29714)\nW0414 10:23:36.460412       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29826)\nW0414 10:26:12.425686       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.DaemonSet ended with: too old resource version: 26392 (27134)\nW0414 10:27:23.464386       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28873 (31691)\nW0414 10:29:04.464523       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30162 (32344)\nW0414 10:32:01.611928       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29975 (33277)\nW0414 10:32:58.959828       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 17183 (33780)\nW0414 10:32:58.970969       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 23992 (33780)\nI0414 10:32:59.031193       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:32:59.031247       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:33:12.025 E ns/openshift-machine-config-operator pod/machine-config-controller-754b969c7b-q499n node/ip-10-0-135-38.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 14 10:33:13.120 E ns/openshift-marketplace pod/redhat-operators-d86f78946-cff5s node/ip-10-0-149-215.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 14 10:33:14.429 E ns/openshift-machine-config-operator pod/machine-config-operator-74b5dcc77c-jsmb4 node/ip-10-0-135-38.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 14 10:33:15.027 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-544768b675-vxmql node/ip-10-0-135-38.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): rnal pods/kube-apiserver-ip-10-0-135-38.us-east-2.compute.internal container=\"kube-apiserver-7\" is not ready" to ""\nI0414 10:14:33.382808       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"85fe69b4-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-135-38.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0414 10:19:15.356298       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (28438)\nW0414 10:20:49.493045       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (29079)\nW0414 10:22:35.685792       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29545)\nW0414 10:23:33.416571       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24939 (29807)\nW0414 10:28:05.692162       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29672 (31945)\nW0414 10:28:48.501661       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29210 (32260)\nW0414 10:28:54.362708       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28709 (32289)\nW0414 10:30:33.422338       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30153 (32744)\nI0414 10:32:58.979299       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:32:58.979711       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:33:16.919 E ns/openshift-monitoring pod/prometheus-adapter-6f75d557cc-kvcwf node/ip-10-0-149-215.us-east-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:33:19.320 E ns/openshift-image-registry pod/image-registry-5fcd6467d-zkj9b node/ip-10-0-149-215.us-east-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:33:32.321 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-162.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 14 10:33:51.359 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-162.us-east-2.compute.internal container=alertmanager-proxy container exited with code 1 (Error): 
Apr 14 10:33:59.401 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-162.us-east-2.compute.internal container=prometheus-proxy container exited with code 1 (Error): 
Apr 14 10:34:29.264 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 14 10:34:39.626 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 14 10:34:58.202 E ns/openshift-image-registry pod/node-ca-ztrbl node/ip-10-0-149-215.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:34:58.214 E ns/openshift-monitoring pod/node-exporter-czw9w node/ip-10-0-149-215.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:34:58.214 E ns/openshift-monitoring pod/node-exporter-czw9w node/ip-10-0-149-215.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:34:58.440 E ns/openshift-cluster-node-tuning-operator pod/tuned-5zmn7 node/ip-10-0-149-215.us-east-2.compute.internal container=tuned container exited with code 255 (Error): ed profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:30:15.592187   35955 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-5n7ch) labels changed node wide: true\nI0414 10:30:15.624540   35955 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:30:15.626876   35955 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:30:15.738171   35955 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:33:07.777540   35955 openshift-tuned.go:435] Pod (e2e-tests-service-upgrade-8hthq/service-test-48984) labels changed node wide: true\nI0414 10:33:10.624222   35955 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:10.625642   35955 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:10.733927   35955 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:33:12.315724   35955 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-865cd4cbb5-7zl67) labels changed node wide: true\nI0414 10:33:15.624190   35955 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:15.626189   35955 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:15.736096   35955 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:33:16.524259   35955 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0414 10:33:20.624213   35955 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:20.626401   35955 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:20.735374   35955 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 14 10:35:01.512 E ns/openshift-dns pod/dns-default-gdpzk node/ip-10-0-149-215.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 14 10:35:01.512 E ns/openshift-dns pod/dns-default-gdpzk node/ip-10-0-149-215.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:23:39.690Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:23:39.690Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:23:39.690Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:35:02.802 E ns/openshift-sdn pod/sdn-q82tq node/ip-10-0-149-215.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 5 proxier.go:346] userspace syncProxyRules took 129.581524ms\nI0414 10:33:08.125095   56005 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-marketplace/certified-operators:grpc\nI0414 10:33:08.324954   56005 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.324989   56005 proxier.go:346] userspace syncProxyRules took 92.951359ms\nI0414 10:33:08.337491   56005 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-adapter:https to [10.131.0.14:6443]\nI0414 10:33:08.337527   56005 roundrobin.go:240] Delete endpoint 10.128.2.21:6443 for service "openshift-monitoring/prometheus-adapter:https"\nI0414 10:33:08.617309   56005 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.617361   56005 proxier.go:346] userspace syncProxyRules took 106.505149ms\nI0414 10:33:08.910442   56005 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.910484   56005 proxier.go:346] userspace syncProxyRules took 103.477649ms\ninterrupt: Gracefully shutting down ...\nE0414 10:33:20.943718   56005 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0414 10:33:20.943822   56005 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:21.044995   56005 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:21.144160   56005 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:21.244196   56005 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:21.347041   56005 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 14 10:35:03.172 E ns/openshift-sdn pod/ovs-b4psw node/ip-10-0-149-215.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): vethef308386 on port 7\n2020-04-14T10:33:09.414Z|00158|connmgr|INFO|br0<->unix#189: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.447Z|00159|bridge|INFO|bridge br0: deleted interface veth786c6ea8 on port 8\n2020-04-14T10:33:09.501Z|00160|connmgr|INFO|br0<->unix#192: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.538Z|00161|bridge|INFO|bridge br0: deleted interface veth74e9b7a1 on port 6\n2020-04-14T10:33:09.586Z|00162|connmgr|INFO|br0<->unix#195: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.619Z|00163|bridge|INFO|bridge br0: deleted interface veth1134df95 on port 12\n2020-04-14T10:33:09.665Z|00164|connmgr|INFO|br0<->unix#198: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.712Z|00165|bridge|INFO|bridge br0: deleted interface vethd93c952d on port 10\n2020-04-14T10:33:09.813Z|00166|connmgr|INFO|br0<->unix#201: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.869Z|00167|bridge|INFO|bridge br0: deleted interface vethe5f2a307 on port 13\n2020-04-14T10:33:09.952Z|00168|connmgr|INFO|br0<->unix#204: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:09.998Z|00169|bridge|INFO|bridge br0: deleted interface vethebbc72ee on port 4\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:33:09.429Z|00018|jsonrpc|WARN|Dropped 3 log messages in last 473 seconds (most recently, 473 seconds ago) due to excessive rate\n2020-04-14T10:33:09.429Z|00019|jsonrpc|WARN|unix#168: receive error: Connection reset by peer\n2020-04-14T10:33:09.429Z|00020|reconnect|WARN|unix#168: connection dropped (Connection reset by peer)\n2020-04-14T10:33:09.977Z|00021|jsonrpc|WARN|unix#194: receive error: Connection reset by peer\n2020-04-14T10:33:09.977Z|00022|reconnect|WARN|unix#194: connection dropped (Connection reset by peer)\n2020-04-14T10:33:16.148Z|00023|jsonrpc|WARN|unix#197: receive error: Connection reset by peer\n2020-04-14T10:33:16.148Z|00024|reconnect|WARN|unix#197: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 14 10:35:03.545 E ns/openshift-multus pod/multus-xhjsx node/ip-10-0-149-215.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:35:04.095 E ns/openshift-machine-config-operator pod/machine-config-daemon-292ms node/ip-10-0-149-215.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 14 10:35:05.122 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): e\nE0414 10:14:15.905059       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0414 10:14:15.910021       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0414 10:14:16.278045       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0414 10:14:16.278163       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope\nE0414 10:14:16.324539       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nW0414 10:32:58.809219       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 25355 (33746)\nW0414 10:32:58.959611       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 25260 (33780)\nW0414 10:32:59.028929       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 25258 (33784)\nW0414 10:32:59.045489       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (33784)\nE0414 10:33:17.175155       1 server.go:259] lost master\n
Apr 14 10:35:07.134 E ns/openshift-apiserver pod/apiserver-n6ltq node/ip-10-0-135-38.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): alancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:11.047751       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:11.069377       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:11.096158       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0414 10:33:11.096213       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:11.096625       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:11.111346       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:33:17.198380       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0414 10:33:17.198436       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0414 10:33:17.198542       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0414 10:33:17.198553       1 serving.go:88] Shutting down DynamicLoader\nI0414 10:33:17.198571       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0414 10:33:17.199549       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:33:17.200155       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:33:17.200233       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:33:17.200399       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:33:17.200810       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 14 10:35:07.520 E ns/openshift-controller-manager pod/controller-manager-qnrwz node/ip-10-0-135-38.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 14 10:35:07.922 E ns/openshift-monitoring pod/node-exporter-jd79p node/ip-10-0-135-38.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:35:07.922 E ns/openshift-monitoring pod/node-exporter-jd79p node/ip-10-0-135-38.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:35:08.721 E ns/openshift-image-registry pod/node-ca-2ntbw node/ip-10-0-135-38.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:35:10.321 E ns/openshift-multus pod/multus-vx2q5 node/ip-10-0-135-38.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:35:11.922 E ns/openshift-sdn pod/ovs-wn5d6 node/ip-10-0-135-38.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error):  br0: deleted interface vethc0895429 on port 12\n2020-04-14T10:33:00.861Z|00175|connmgr|INFO|br0<->unix#212: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:00.888Z|00176|bridge|INFO|bridge br0: deleted interface veth07bc1b55 on port 6\n2020-04-14T10:33:02.624Z|00177|connmgr|INFO|br0<->unix#215: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:02.647Z|00178|bridge|INFO|bridge br0: deleted interface veth1217f8db on port 7\n2020-04-14T10:33:02.835Z|00179|connmgr|INFO|br0<->unix#218: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:33:02.872Z|00180|connmgr|INFO|br0<->unix#221: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:02.896Z|00181|bridge|INFO|bridge br0: deleted interface veth9c98a0b3 on port 21\n2020-04-14T10:33:03.694Z|00182|connmgr|INFO|br0<->unix#224: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:33:03.744Z|00183|connmgr|INFO|br0<->unix#227: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:03.778Z|00184|bridge|INFO|bridge br0: deleted interface vethfa2a32cb on port 20\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:33:03.704Z|00019|jsonrpc|WARN|Dropped 4 log messages in last 352 seconds (most recently, 352 seconds ago) due to excessive rate\n2020-04-14T10:33:03.704Z|00020|jsonrpc|WARN|unix#188: send error: Broken pipe\n2020-04-14T10:33:03.704Z|00021|reconnect|WARN|unix#188: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-14T10:33:04.459Z|00185|connmgr|INFO|br0<->unix#230: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:04.488Z|00186|bridge|INFO|bridge br0: deleted interface vethc32d374f on port 19\n2020-04-14T10:33:04.600Z|00187|connmgr|INFO|br0<->unix#233: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:04.635Z|00188|bridge|INFO|bridge br0: deleted interface vetha82719ab on port 5\n2020-04-14T10:33:04.882Z|00189|connmgr|INFO|br0<->unix#236: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:33:04.903Z|00190|bridge|INFO|bridge br0: deleted interface veth0beeced1 on port 8\nTerminated\nTerminated\n
Apr 14 10:35:12.323 E ns/openshift-machine-config-operator pod/machine-config-daemon-ksvt7 node/ip-10-0-135-38.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 14 10:35:13.123 E ns/openshift-sdn pod/sdn-xqqch node/ip-10-0-135-38.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 4 for service "openshift-monitoring/alertmanager-main:web"\nI0414 10:33:08.014455   74787 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.014487   74787 proxier.go:346] userspace syncProxyRules took 125.06936ms\nI0414 10:33:08.123043   74787 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-marketplace/certified-operators:grpc\nI0414 10:33:08.224496   74787 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.224523   74787 proxier.go:346] userspace syncProxyRules took 69.191213ms\nI0414 10:33:08.336499   74787 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-adapter:https to [10.131.0.14:6443]\nI0414 10:33:08.336554   74787 roundrobin.go:240] Delete endpoint 10.128.2.21:6443 for service "openshift-monitoring/prometheus-adapter:https"\nI0414 10:33:08.427987   74787 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.428016   74787 proxier.go:346] userspace syncProxyRules took 80.190332ms\nI0414 10:33:08.609270   74787 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:33:08.609412   74787 proxier.go:346] userspace syncProxyRules took 57.609451ms\ninterrupt: Gracefully shutting down ...\nE0414 10:33:17.245713   74787 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0414 10:33:17.245915   74787 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:17.346368   74787 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:17.446309   74787 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:33:17.546317   74787 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 14 10:35:13.520 E ns/openshift-sdn pod/sdn-controller-5jbf9 node/ip-10-0-135-38.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0414 10:24:22.892472       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 14 10:35:21.122 E ns/openshift-cluster-node-tuning-operator pod/tuned-tlght node/ip-10-0-135-38.us-east-2.compute.internal container=tuned container exited with code 255 (Error): .go:435] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-135-38.us-east-2.compute.internal) labels changed node wide: false\nI0414 10:33:01.957458   57244 openshift-tuned.go:435] Pod (openshift-kube-apiserver/installer-7-ip-10-0-135-38.us-east-2.compute.internal) labels changed node wide: false\nI0414 10:33:02.158998   57244 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/installer-4-ip-10-0-135-38.us-east-2.compute.internal) labels changed node wide: true\nI0414 10:33:06.590047   57244 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:06.591770   57244 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:06.732845   57244 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:33:06.733659   57244 openshift-tuned.go:435] Pod (openshift-kube-controller-manager-operator/kube-controller-manager-operator-774c779dc4-xg46q) labels changed node wide: true\nI0414 10:33:11.589956   57244 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:11.591511   57244 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:11.715320   57244 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:33:11.715874   57244 openshift-tuned.go:435] Pod (openshift-dns-operator/dns-operator-75d56bb7b5-9zwbs) labels changed node wide: true\nI0414 10:33:16.590018   57244 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:33:16.591463   57244 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:33:16.718732   57244 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:33:16.823510   57244 openshift-tuned.go:435] Pod (openshift-service-ca-operator/service-ca-operator-59986bbb4d-54k8k) labels changed node wide: true\n
Apr 14 10:35:21.522 E ns/openshift-dns pod/dns-default-n8z4m node/ip-10-0-135-38.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:24:34.832Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:24:34.832Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:24:34.832Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0414 10:32:59.046685       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 28567 (33784)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:35:21.522 E ns/openshift-dns pod/dns-default-n8z4m node/ip-10-0-135-38.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (80) - No such process\n
Apr 14 10:35:27.542 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:35:02.797419 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:35:02.805479 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:35:02.809956 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:35:02 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:03 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:05 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:07 Failed to dial etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.135.38:9978: connect: connection refused\n
Apr 14 10:35:34.594 E ns/openshift-console pod/console-5c7d8775d-z8c82 node/ip-10-0-141-120.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/14 10:15:19 cmd/main: cookies are secure!\n2020/04/14 10:15:19 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/14 10:15:19 cmd/main: using TLS\n
Apr 14 10:35:39.193 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6474855d77-lsnqb node/ip-10-0-141-120.us-east-2.compute.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:35:40.395 E ns/openshift-machine-api pod/machine-api-controllers-7bbd58f679-4npzc node/ip-10-0-141-120.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 14 10:35:40.395 E ns/openshift-machine-api pod/machine-api-controllers-7bbd58f679-4npzc node/ip-10-0-141-120.us-east-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 14 10:35:57.569 E ns/openshift-console pod/downloads-7f545c6fcb-kv685 node/ip-10-0-132-253.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 14 10:35:58.311 E ns/openshift-operator-lifecycle-manager pod/packageserver-c6dbc5c44-wfrbm node/ip-10-0-141-120.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): d-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:54Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-14T10:35:54Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-14T10:35:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:55Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:55Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-14T10:35:55Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-14T10:35:55Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:57Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:35:57Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\n
Apr 14 10:35:58.333 E ns/openshift-console pod/downloads-7f545c6fcb-v7zfd node/ip-10-0-141-120.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 14 10:36:00.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): ction refused\nE0414 10:14:08.233267       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0414 10:14:08.233401       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0414 10:14:15.199632       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:14:15.202408       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:14:16.235679       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:14:16.235775       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0414 10:24:01.281895       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25938 (30139)\nW0414 10:31:57.287432       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30392 (33261)\n
Apr 14 10:36:00.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): test version and try again\nI0414 10:33:10.929026       1 controller_utils.go:598] Controller machine-config-server deleting pod openshift-machine-config-operator/machine-config-server-msfqd\nI0414 10:33:10.938487       1 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"openshift-machine-config-operator", Name:"machine-config-server", UID:"ba2e1909-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"34037", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: machine-config-server-msfqd\nI0414 10:33:12.650989       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\nI0414 10:33:12.850069       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0414 10:33:13.619740       1 garbagecollector.go:409] processing item [v1/ConfigMap, namespace: openshift-network-operator, name: cluster-network-operator, uid: 02de18f9-7e3a-11ea-bf65-028f1ca4205e]\nI0414 10:33:13.625832       1 garbagecollector.go:522] delete object [v1/ConfigMap, namespace: openshift-network-operator, name: cluster-network-operator, uid: 02de18f9-7e3a-11ea-bf65-028f1ca4205e] with propagation policy Background\nI0414 10:33:16.607698       1 event.go:221] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"openshift-monitoring", Name:"prometheus-k8s", UID:"5fa91a7e-7e36-11ea-9a74-0207acdf813e", APIVersion:"apps/v1", ResourceVersion:"34319", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful\nE0414 10:33:17.212878       1 controllermanager.go:282] leaderelection lost\nI0414 10:33:17.212918       1 serving.go:88] Shutting down DynamicLoader\n
Apr 14 10:36:01.521 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): e\nE0414 10:14:15.905059       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0414 10:14:15.910021       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0414 10:14:16.278045       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0414 10:14:16.278163       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope\nE0414 10:14:16.324539       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nW0414 10:32:58.809219       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 25355 (33746)\nW0414 10:32:58.959611       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 25260 (33780)\nW0414 10:32:59.028929       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 25258 (33784)\nW0414 10:32:59.045489       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (33784)\nE0414 10:33:17.175155       1 server.go:259] lost master\n
Apr 14 10:36:01.685 E ns/openshift-marketplace pod/certified-operators-dbf7766d7-gq57r node/ip-10-0-149-215.us-east-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:36:01.921 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error):        1 cacher.go:125] Terminating all watchers from cacher *unstructured.Unstructured\nW0414 10:32:58.994179       1 cacher.go:125] Terminating all watchers from cacher *core.PersistentVolume\nW0414 10:32:59.000955       1 cacher.go:125] Terminating all watchers from cacher *core.PodTemplate\nW0414 10:32:59.001129       1 cacher.go:125] Terminating all watchers from cacher *unstructured.Unstructured\nW0414 10:32:59.001618       1 cacher.go:125] Terminating all watchers from cacher *apiregistration.APIService\nW0414 10:32:59.001738       1 cacher.go:125] Terminating all watchers from cacher *unstructured.Unstructured\nW0414 10:32:59.001838       1 cacher.go:125] Terminating all watchers from cacher *unstructured.Unstructured\nW0414 10:32:59.060404       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (33784)\nW0414 10:32:59.060497       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 25258 (33784)\nW0414 10:32:59.060551       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.LimitRange ended with: too old resource version: 25253 (33784)\nW0414 10:32:59.061811       1 reflector.go:256] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: watch of *v1.ClusterResourceQuota ended with: too old resource version: 25801 (33784)\nW0414 10:32:59.064109       1 reflector.go:256] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: watch of *apiregistration.APIService ended with: too old resource version: 30767 (33784)\nW0414 10:32:59.064962       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PriorityClass ended with: too old resource version: 25350 (33784)\nI0414 10:33:07.858345       1 cacher.go:606] cacher (*apps.ReplicaSet): 1 objects queued in incoming channel.\nI0414 10:33:17.151961       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 14 10:36:01.921 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0414 10:14:08.826386       1 certsync_controller.go:269] Starting CertSyncer\nI0414 10:14:08.827353       1 observer_polling.go:106] Starting file observer\nW0414 10:20:05.244028       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (28851)\nW0414 10:27:15.249308       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29006 (31654)\nW0414 10:33:01.255080       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31831 (33537)\n
Apr 14 10:36:02.345 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error):  v2 (context canceled)\n2020-04-14 10:33:17.088960 I | rafthttp: peer 532e13531728c06a became inactive (message send to peer failed)\n2020-04-14 10:33:17.088966 I | rafthttp: stopped streaming with peer 532e13531728c06a (stream MsgApp v2 reader)\n2020-04-14 10:33:17.089013 W | rafthttp: lost the TCP streaming connection with peer 532e13531728c06a (stream Message reader)\n2020-04-14 10:33:17.089028 I | rafthttp: stopped streaming with peer 532e13531728c06a (stream Message reader)\n2020-04-14 10:33:17.089036 I | rafthttp: stopped peer 532e13531728c06a\n2020-04-14 10:33:17.089041 I | rafthttp: stopping peer 866e24bff86ef2f4...\n2020-04-14 10:33:17.089381 I | rafthttp: closed the TCP streaming connection with peer 866e24bff86ef2f4 (stream MsgApp v2 writer)\n2020-04-14 10:33:17.089388 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (writer)\n2020-04-14 10:33:17.089709 I | rafthttp: closed the TCP streaming connection with peer 866e24bff86ef2f4 (stream Message writer)\n2020-04-14 10:33:17.089715 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (writer)\n2020-04-14 10:33:17.089800 I | rafthttp: stopped HTTP pipelining with peer 866e24bff86ef2f4\n2020-04-14 10:33:17.089851 W | rafthttp: lost the TCP streaming connection with peer 866e24bff86ef2f4 (stream MsgApp v2 reader)\n2020-04-14 10:33:17.089866 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream MsgApp v2 reader)\n2020-04-14 10:33:17.089915 W | rafthttp: lost the TCP streaming connection with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:33:17.089924 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:33:17.089930 I | rafthttp: stopped peer 866e24bff86ef2f4\n2020-04-14 10:33:17.114709 I | embed: rejected connection from "10.0.141.120:41710" (error "set tcp 10.0.135.38:2380: use of closed network connection", ServerName "")\n2020-04-14 10:33:17.114793 I | embed: rejected connection from "10.0.146.77:38000" (error "set tcp 10.0.135.38:2380: use of closed network connection", ServerName "")\n
Apr 14 10:36:02.345 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-14 10:33:06.532182 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:33:06.533309 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:33:06.534080 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:33:06 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-14 10:33:07.548639 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 14 10:36:05.522 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-38.us-east-2.compute.internal node/ip-10-0-135-38.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:35:02.797419 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:35:02.805479 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:35:02.809956 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:35:02 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:03 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:05 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.38:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:35:07 Failed to dial etcd-2.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.135.38:9978: connect: connection refused\n
Apr 14 10:36:14.264 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 14 10:36:30.975 E ns/openshift-operator-lifecycle-manager pod/packageserver-c6dbc5c44-ts6f6 node/ip-10-0-146-77.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): perators namespace=openshift-marketplace\ntime="2020-04-14T10:36:12Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:16Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:16Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:18Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:18Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:26Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:26Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:27Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:27Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\nI0414 10:36:28.099448       1 reflector.go:337] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: Watch close - *v1alpha1.CatalogSource total 109 items received\n
Apr 14 10:36:54.763 E ns/openshift-marketplace pod/certified-operators-57fbf5d5dd-vjk5g node/ip-10-0-128-162.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 14 10:36:59.777 E ns/openshift-cluster-node-tuning-operator pod/tuned-dcl4q node/ip-10-0-128-162.us-east-2.compute.internal container=tuned container exited with code 143 (Error): le (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:33:16.621798   39584 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0414 10:33:17.245785   39584 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0414 10:33:17.251309   39584 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:33:17.251326   39584 openshift-tuned.go:722] Increasing resyncPeriod to 118\nI0414 10:35:15.251565   39584 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:35:15.253614   39584 openshift-tuned.go:623] Resync period to pull node/pod labels: 118 [s]\nI0414 10:35:15.266231   39584 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0414 10:35:20.263929   39584 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:20.265475   39584 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0414 10:35:20.266705   39584 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:20.377425   39584 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:35:58.100061   39584 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-57fbf5d5dd-vjk5g) labels changed node wide: true\nI0414 10:36:00.263977   39584 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:36:00.265595   39584 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:36:00.381152   39584 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:36:08.520574   39584 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0414 10:36:08.523363   39584 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:36:08.523381   39584 openshift-tuned.go:722] Increasing resyncPeriod to 236\n
Apr 14 10:36:59.817 E ns/openshift-cluster-node-tuning-operator pod/tuned-5zmn7 node/ip-10-0-149-215.us-east-2.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0414 10:35:04.056513    3311 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:35:04.061020    3311 openshift-tuned.go:623] Resync period to pull node/pod labels: 54 [s]\nE0414 10:35:07.509171    3311 openshift-tuned.go:720] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-149-215.us-east-2.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0414 10:35:07.509298    3311 openshift-tuned.go:722] Increasing resyncPeriod to 108\nI0414 10:36:55.509516    3311 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:36:55.516281    3311 openshift-tuned.go:623] Resync period to pull node/pod labels: 108 [s]\nI0414 10:36:55.557093    3311 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-ffd45d797-5rmkt) labels changed node wide: true\n
Apr 14 10:37:00.063 E ns/openshift-cluster-node-tuning-operator pod/tuned-tlght node/ip-10-0-135-38.us-east-2.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0414 10:35:08.722412    5112 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:35:08.728458    5112 openshift-tuned.go:623] Resync period to pull node/pod labels: 52 [s]\nE0414 10:35:12.838954    5112 openshift-tuned.go:720] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-135-38.us-east-2.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0414 10:35:12.838988    5112 openshift-tuned.go:722] Increasing resyncPeriod to 104\nI0414 10:36:56.839198    5112 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:36:56.842088    5112 openshift-tuned.go:623] Resync period to pull node/pod labels: 104 [s]\nI0414 10:36:56.875116    5112 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-65d46f8f9d-p2grt) labels changed node wide: true\n
Apr 14 10:37:00.093 E ns/openshift-cluster-node-tuning-operator pod/tuned-s5wq8 node/ip-10-0-146-77.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 6.555546   56675 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:33:17.244836   56675 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0414 10:33:17.263163   56675 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:33:17.263278   56675 openshift-tuned.go:722] Increasing resyncPeriod to 114\nI0414 10:35:11.263589   56675 openshift-tuned.go:187] Extracting tuned profiles\nI0414 10:35:11.265757   56675 openshift-tuned.go:623] Resync period to pull node/pod labels: 114 [s]\nI0414 10:35:11.285011   56675 openshift-tuned.go:435] Pod (openshift-monitoring/cluster-monitoring-operator-5f8bcdd4d7-kqc2z) labels changed node wide: true\nI0414 10:35:16.282412   56675 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:16.284206   56675 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0414 10:35:16.285415   56675 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:16.442656   56675 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:35:27.887223   56675 openshift-tuned.go:435] Pod (openshift-cluster-version/cluster-version-operator-847ff6c77-9mt74) labels changed node wide: true\nI0414 10:35:31.282456   56675 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:31.284285   56675 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:31.455828   56675 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:36:08.520421   56675 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0414 10:36:08.539244   56675 openshift-tuned.go:720] Pod event watch channel closed.\nI0414 10:36:08.539391   56675 openshift-tuned.go:722] Increasing resyncPeriod to 228\n
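The four tuned logs above all show the same cycle: the pod event watch channel closes, the daemon doubles its resyncPeriod, then re-lists and re-watches once the API server is reachable again. A condensed sketch of that back-off-and-rewatch loop using client-go's watch API (hypothetical starting period and handler; not the openshift-tuned source):

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	resync := 60 * time.Second // grows each time the watch channel closes
	for {
		w, err := client.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Printf("watch failed: %v", err)
		} else {
			for ev := range w.ResultChan() {
				log.Printf("pod event: %s", ev.Type) // react to label changes here
			}
			log.Printf("pod event watch channel closed")
		}
		// Back off before re-establishing the watch, mirroring
		// "Increasing resyncPeriod to N" in the logs above.
		resync *= 2
		log.Printf("increasing resyncPeriod to %s", resync)
		time.Sleep(resync)
	}
}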
Apr 14 10:37:10.125 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Apr 14 10:37:24.166 E ns/openshift-operator-lifecycle-manager pod/packageserver-f59c7f974-q8mdn node/ip-10-0-146-77.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): tp-client/2.0 10.129.0.1:44532]\nI0414 10:36:53.153541       1 wrap.go:47] GET /: (119.444µs) 200 [Go-http-client/2.0 10.129.0.1:44532]\nI0414 10:36:53.153730       1 wrap.go:47] GET /: (74.414239ms) 200 [Go-http-client/2.0 10.129.0.1:44532]\nI0414 10:36:53.166450       1 wrap.go:47] GET /: (86.279181ms) 200 [Go-http-client/2.0 10.130.0.1:35398]\ntime="2020-04-14T10:36:53Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:53Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\nI0414 10:36:53.229876       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-14T10:36:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-14T10:36:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:00Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:00Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:05Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:05Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:06Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-14T10:37:06Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 14 10:37:29.264 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 14 10:37:46.483 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0414 10:10:44.378839       1 observer_polling.go:106] Starting file observer\nI0414 10:10:44.379067       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:19:51.739662       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (28761)\nW0414 10:26:09.746185       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28932 (31237)\nW0414 10:33:09.753613       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31437 (33563)\nE0414 10:36:08.519107       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?resourceVersion=17114&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0414 10:36:08.627220       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?resourceVersion=34252&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Apr 14 10:37:46.483 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369372       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369476       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369489       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369606       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369619       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369726       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369737       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369902       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369914       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.395856       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
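The "watch of *v1.ConfigMap ended with: too old resource version" warnings in the cert-syncer log above (and in many of the operator logs in this run) come from client-go's reflector: when a watch outlives etcd's compaction window it is torn down, re-listed, and resumed, so by itself the message is noise rather than a failure. A minimal sketch of the informer pattern that emits and survives those warnings, with a hypothetical namespace and handler:

package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps in a single (hypothetical) namespace, as a cert syncer would.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-kube-apiserver"))
	informer := factory.Core().V1().ConfigMaps().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			cm := newObj.(*corev1.ConfigMap)
			log.Printf("configmap updated: %s", cm.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	// If a watch expires ("too old resource version"), the reflector logs a
	// warning, re-lists, and keeps the cache in sync without intervention.
	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		log.Fatal("cache never synced")
	}
	select {} // run until the process is killed
}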
Apr 14 10:37:46.920 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0414 10:11:24.512724       1 observer_polling.go:106] Starting file observer\nI0414 10:11:24.513263       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:17:44.532783       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28002)\nW0414 10:25:43.537995       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28133 (31037)\nW0414 10:31:12.543071       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31260 (33048)\n
Apr 14 10:37:46.920 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): Path:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-665f6cdc69 to 1\nI0414 10:36:00.456116       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-665f6cdc69, need 1, creating 1\nI0414 10:36:00.466099       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0414 10:36:00.470987       1 service_controller.go:734] Service has been deleted openshift-marketplace/redhat-operators. Attempting to cleanup load balancer resources\nI0414 10:36:00.490480       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-665f6cdc69", UID:"bba46524-7e3b-11ea-8f05-0207acdf813e", APIVersion:"apps/v1", ResourceVersion:"37086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-665f6cdc69-8xbzr\nI0414 10:36:01.796223       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/cluster-monitoring-operator: Operation cannot be fulfilled on deployments.apps "cluster-monitoring-operator": the object has been modified; please apply your changes to the latest version and try again\nE0414 10:36:08.381158       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0414 10:36:08.381208       1 reflector.go:237] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0414 10:36:08.421832       1 controllermanager.go:282] leaderelection lost\nI0414 10:36:08.421866       1 serving.go:88] Shutting down DynamicLoader\n
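Several of the exits above end with a fatal "leaderelection lost": these operators deliberately crash when they can no longer renew their lease (for example while the API server is restarting) and count on the kubelet to restart them, which is why the containers report code 255 rather than shutting down gracefully. A minimal sketch of that pattern with client-go's leaderelection package, using a hypothetical lease name, namespace, and identity:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-operator-lock", Namespace: "example-namespace"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Print("acquired lease, running controller loops")
				<-ctx.Done() // controller work would go here
			},
			OnStoppedLeading: func() {
				// Matches the "leaderelection lost" fatal exits in the logs:
				// give up rather than risk two active leaders.
				log.Fatal("leaderelection lost")
			},
		},
	})
}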
Apr 14 10:37:53.512 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): 9:52:39 +0000 UTC to 2021-04-14 09:52:40 +0000 UTC (now=2020-04-14 10:12:08.885217843 +0000 UTC))\nI0414 10:12:08.885246       1 secure_serving.go:136] Serving securely on [::]:10259\nI0414 10:12:08.892441       1 serving.go:77] Starting DynamicLoader\nI0414 10:12:09.787021       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0414 10:12:09.887230       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0414 10:12:09.887262       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0414 10:13:42.309212       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0414 10:35:27.223079       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17114 (35936)\nW0414 10:35:27.240719       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17114 (35940)\nW0414 10:35:27.300996       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17118 (35949)\nW0414 10:35:27.466684       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35981)\nI0414 10:35:27.640498       1 trace.go:76] Trace[388167971]: "Scheduling openshift-console/downloads-7f545c6fcb-hmmr5" (started: 2020-04-14 10:35:27.489002103 +0000 UTC m=+1399.159064729) (total time: 151.453535ms):\nTrace[388167971]: [143.520803ms] [143.47674ms] Prioritizing\nI0414 10:35:27.953325       1 trace.go:76] Trace[1222864583]: "Scheduling openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-8595697d78-t2h55" (started: 2020-04-14 10:35:27.847312922 +0000 UTC m=+1399.517375427) (total time: 105.970099ms):\nTrace[1222864583]: [105.962771ms] [105.608536ms] Selecting host\nE0414 10:36:08.474547       1 server.go:259] lost master\n
Apr 14 10:37:55.136 E ns/openshift-monitoring pod/node-exporter-9hcpn node/ip-10-0-132-253.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:37:55.136 E ns/openshift-monitoring pod/node-exporter-9hcpn node/ip-10-0-132-253.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:37:55.177 E ns/openshift-image-registry pod/node-ca-t62nx node/ip-10-0-132-253.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:37:55.349 E ns/openshift-cluster-node-tuning-operator pod/tuned-cmnxb node/ip-10-0-132-253.us-east-2.compute.internal container=tuned container exited with code 255 (Error): deployment-upgrade-5kdk6/dp-57cc5d77b4-jvtdw) labels changed node wide: true\nI0414 10:35:27.673824   47124 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:27.675523   47124 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:27.950407   47124 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:35:29.680017   47124 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-6f75d557cc-292w7) labels changed node wide: true\nI0414 10:35:32.673268   47124 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:32.674651   47124 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:32.785077   47124 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:35:32.856906   47124 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-5fcd6467d-fc6gv) labels changed node wide: true\nI0414 10:35:37.673271   47124 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:37.674903   47124 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:37.792123   47124 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:35:39.767737   47124 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-jkx8s) labels changed node wide: true\nI0414 10:35:42.673282   47124 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:42.674686   47124 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:42.786988   47124 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:36:05.436690   47124 openshift-tuned.go:435] Pod (openshift-console/downloads-7f545c6fcb-kv685) labels changed node wide: true\n
Apr 14 10:37:56.716 E ns/openshift-apiserver pod/apiserver-t4lwh node/ip-10-0-141-120.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error):      1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0414 10:35:59.167227       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:35:59.167335       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:35:59.167242       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:35:59.182902       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:36:08.366086       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0414 10:36:08.366320       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0414 10:36:08.366603       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0414 10:36:08.366629       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0414 10:36:08.366937       1 serving.go:88] Shutting down DynamicLoader\nI0414 10:36:08.369756       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.370026       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.370111       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.371979       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.372125       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.372238       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:36:08.372287       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 14 10:37:58.319 E ns/openshift-cluster-node-tuning-operator pod/tuned-pwp68 node/ip-10-0-141-120.us-east-2.compute.internal container=tuned container exited with code 255 (Error): 0:35:32.294553   60062 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:32.412176   60062 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:35:33.592741   60062 openshift-tuned.go:435] Pod (openshift-cluster-version/cluster-version-operator-847ff6c77-89495) labels changed node wide: true\nI0414 10:35:37.293080   60062 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:37.294494   60062 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:37.423747   60062 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:35:37.595165   60062 openshift-tuned.go:435] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-7bc65dc577-zskk8) labels changed node wide: true\nI0414 10:35:42.293077   60062 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:42.294507   60062 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:42.476886   60062 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:35:42.988311   60062 openshift-tuned.go:435] Pod (openshift-marketplace/marketplace-operator-7594f87bf7-bwxrp) labels changed node wide: true\nI0414 10:35:47.293084   60062 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:35:47.294556   60062 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:35:47.427845   60062 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0414 10:36:07.455415   60062 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-c6dbc5c44-wfrbm) labels changed node wide: true\nI0414 10:36:08.421353   60062 openshift-tuned.go:126] Received signal: terminated\n
Apr 14 10:37:58.913 E ns/openshift-dns pod/dns-default-cwvb9 node/ip-10-0-141-120.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 14 10:37:58.913 E ns/openshift-dns pod/dns-default-cwvb9 node/ip-10-0-141-120.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:23:58.986Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:23:58.986Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:23:58.986Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0414 10:35:27.453014       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 28567 (35981)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:37:59.267 E ns/openshift-dns pod/dns-default-72rf7 node/ip-10-0-132-253.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 14 10:37:59.267 E ns/openshift-dns pod/dns-default-72rf7 node/ip-10-0-132-253.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:24:11.779Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:24:11.779Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:24:11.779Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0414 10:32:59.049217       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 28567 (33784)\nE0414 10:33:17.245753       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=34186&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0414 10:33:17.247888       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=25263&timeoutSeconds=454&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0414 10:35:27.237086       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 33784 (35943)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:37:59.310 E ns/openshift-sdn pod/sdn-controller-5mzch node/ip-10-0-141-120.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0414 10:25:30.736837       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 14 10:38:00.540 E ns/openshift-sdn pod/sdn-f2cpw node/ip-10-0-132-253.us-east-2.compute.internal container=sdn container exited with code 255 (Error):  65842 roundrobin.go:240] Delete endpoint 10.0.135.38:2379 for service "openshift-etcd/etcd:etcd"\nI0414 10:36:05.686698   65842 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:36:05.686724   65842 proxier.go:346] userspace syncProxyRules took 53.428963ms\nI0414 10:36:05.916845   65842 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.135.38:10257 10.0.141.120:10257 10.0.146.77:10257]\nI0414 10:36:05.916900   65842 roundrobin.go:240] Delete endpoint 10.0.135.38:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\ninterrupt: Gracefully shutting down ...\nE0414 10:36:06.074594   65842 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0414 10:36:06.074678   65842 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nE0414 10:36:06.173024   65842 proxier.go:1350] Failed to execute iptables-restore: signal: terminated ()\nI0414 10:36:06.173077   65842 proxier.go:1352] Closing local ports after iptables-restore failure\nI0414 10:36:06.179842   65842 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:36:06.283215   65842 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:36:06.379725   65842 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:36:06.385046   65842 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:36:06.385070   65842 proxier.go:346] userspace syncProxyRules took 211.968278ms\nI0414 10:36:06.480910   65842 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 14 10:38:00.711 E ns/openshift-machine-config-operator pod/machine-config-server-6qjzc node/ip-10-0-141-120.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 14 10:38:00.923 E ns/openshift-sdn pod/ovs-6vm84 node/ip-10-0-132-253.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): s in the last 0 s (4 deletes)\n2020-04-14T10:35:28.370Z|00157|bridge|INFO|bridge br0: deleted interface vethf51bd34f on port 11\n2020-04-14T10:35:28.427Z|00158|connmgr|INFO|br0<->unix#238: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:35:28.489Z|00159|connmgr|INFO|br0<->unix#241: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:35:28.535Z|00160|bridge|INFO|bridge br0: deleted interface vethb83794ad on port 13\n2020-04-14T10:35:28.613Z|00161|connmgr|INFO|br0<->unix#244: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:35:28.659Z|00162|bridge|INFO|bridge br0: deleted interface veth6fd7e29e on port 8\n2020-04-14T10:35:45.953Z|00163|bridge|INFO|bridge br0: added interface vethd0fed4c2 on port 18\n2020-04-14T10:35:45.982Z|00164|connmgr|INFO|br0<->unix#250: 5 flow_mods in the last 0 s (5 adds)\n2020-04-14T10:35:46.020Z|00165|connmgr|INFO|br0<->unix#253: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:35:45.998Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 549 seconds (most recently, 549 seconds ago) due to excessive rate\n2020-04-14T10:35:45.998Z|00018|jsonrpc|WARN|unix#186: receive error: Connection reset by peer\n2020-04-14T10:35:45.998Z|00019|reconnect|WARN|unix#186: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-14T10:35:56.686Z|00166|connmgr|INFO|br0<->unix#256: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:35:56.714Z|00167|bridge|INFO|bridge br0: deleted interface veth43a03cb3 on port 9\n2020-04-14T10:35:57.064Z|00168|connmgr|INFO|br0<->unix#259: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:35:57.089Z|00169|bridge|INFO|bridge br0: deleted interface veth98fbfaa1 on port 6\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:35:56.706Z|00020|jsonrpc|WARN|unix#190: receive error: Connection reset by peer\n2020-04-14T10:35:56.706Z|00021|reconnect|WARN|unix#190: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 14 10:38:01.322 E ns/openshift-multus pod/multus-nsgvx node/ip-10-0-132-253.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:38:01.512 E ns/openshift-monitoring pod/node-exporter-9tv82 node/ip-10-0-141-120.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:38:01.512 E ns/openshift-monitoring pod/node-exporter-9tv82 node/ip-10-0-141-120.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:38:01.912 E ns/openshift-controller-manager pod/controller-manager-zfx58 node/ip-10-0-141-120.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 14 10:38:02.521 E ns/openshift-machine-config-operator pod/machine-config-daemon-gb24p node/ip-10-0-132-253.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 14 10:38:03.120 E ns/openshift-operator-lifecycle-manager pod/olm-operators-jkx8s node/ip-10-0-132-253.us-east-2.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 14 10:38:05.712 E ns/openshift-multus pod/multus-kdpr5 node/ip-10-0-141-120.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:38:06.114 E ns/openshift-etcd pod/etcd-member-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:37:50.479102 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:37:50.481892 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:37:50.482464 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:37:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:52 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:55 Failed to dial etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.141.120:9978: connect: connection refused\n
Apr 14 10:38:08.112 E ns/openshift-image-registry pod/node-ca-x6n88 node/ip-10-0-141-120.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:38:22.784 E ns/openshift-console pod/console-5c7d8775d-bqzbc node/ip-10-0-146-77.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/14 10:15:34 cmd/main: cookies are secure!\n2020/04/14 10:15:34 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/14 10:15:34 cmd/main: using TLS\n
Apr 14 10:38:23.055 E ns/openshift-ingress pod/router-default-794c47cb65-78m8p node/ip-10-0-128-162.us-east-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:38:25.784 E ns/openshift-machine-config-operator pod/machine-config-operator-74b5dcc77c-7hgfx node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 14 10:38:26.384 E ns/openshift-console-operator pod/console-operator-84c6db7f5-jzwpk node/ip-10-0-146-77.us-east-2.compute.internal container=console-operator container exited with code 255 (Error):  \n\n"\ntime="2020-04-14T10:38:13Z" level=info msg="started syncing operator \"cluster\" (2020-04-14 10:38:13.797508168 +0000 UTC m=+1459.671610204)"\ntime="2020-04-14T10:38:13Z" level=info msg="console is in a managed state."\ntime="2020-04-14T10:38:13Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-14T10:38:13Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-14T10:38:13Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-14T10:38:13Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-14T10:38:13Z" level=info msg=-----------------------\ntime="2020-04-14T10:38:13Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-14T10:38:13Z" level=info msg=-----------------------\ntime="2020-04-14T10:38:13Z" level=info msg="deployment is available, ready replicas: 1 \n"\ntime="2020-04-14T10:38:13Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-14T10:38:13Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-14T10:38:13Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-14T10:38:13Z" level=info msg="finished syncing operator \"cluster\" (46.312µs) \n\n"\nI0414 10:38:18.329336       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nI0414 10:38:18.329555       1 status_controller.go:200] Shutting down StatusSyncer-console\nI0414 10:38:18.329629       1 unsupportedconfigoverrides_controller.go:161] Shutting down UnsupportedConfigOverridesController\nI0414 10:38:18.329689       1 controller.go:71] Shutting down Console\nI0414 10:38:18.329693       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\nF0414 10:38:18.329707       1 builder.go:217] server exited\n
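The other recurring exit path in these logs is the one the console-operator shows above: the pod is deleted during the rollout, the process logs "Received SIGTERM or SIGINT signal, shutting down controller.", stops its controllers, and exits. A condensed sketch of that signal handling in Go (the shutdown work is a hypothetical placeholder, not the operator's cmd.go):

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel the root context when SIGTERM or SIGINT arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	go runController(ctx) // hypothetical controller loop

	<-ctx.Done()
	log.Print("Received SIGTERM or SIGINT signal, shutting down controller.")

	// Give in-flight work a bounded grace period before exiting.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	shutdown(shutdownCtx)
	os.Exit(0)
}

func runController(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(time.Second):
			// sync loop iteration would go here
		}
	}
}

func shutdown(ctx context.Context) {
	// Placeholder for stopping informers, closing listeners, flushing status.
	select {
	case <-ctx.Done():
	case <-time.After(100 * time.Millisecond):
	}
}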
Apr 14 10:38:26.987 E ns/openshift-authentication pod/oauth-openshift-888bd5b45-zg9kj node/ip-10-0-146-77.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:38:28.582 E ns/openshift-machine-api pod/machine-api-operator-f54d856d-pqdnk node/ip-10-0-146-77.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 14 10:38:29.783 E ns/openshift-cluster-machine-approver pod/machine-approver-6d4858f9c-wjkkv node/ip-10-0-146-77.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 0:12:59.945228       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0414 10:12:59.945370       1 main.go:183] Starting Machine Approver\nI0414 10:13:00.046008       1 main.go:107] CSR csr-h9xfd added\nI0414 10:13:00.046199       1 main.go:110] CSR csr-h9xfd is already approved\nI0414 10:13:00.046279       1 main.go:107] CSR csr-lks4q added\nI0414 10:13:00.046342       1 main.go:110] CSR csr-lks4q is already approved\nI0414 10:13:00.046428       1 main.go:107] CSR csr-vgvkr added\nI0414 10:13:00.046480       1 main.go:110] CSR csr-vgvkr is already approved\nI0414 10:13:00.046527       1 main.go:107] CSR csr-88kcm added\nI0414 10:13:00.046599       1 main.go:110] CSR csr-88kcm is already approved\nI0414 10:13:00.046650       1 main.go:107] CSR csr-9f6z2 added\nI0414 10:13:00.046694       1 main.go:110] CSR csr-9f6z2 is already approved\nI0414 10:13:00.046760       1 main.go:107] CSR csr-bb8fb added\nI0414 10:13:00.046829       1 main.go:110] CSR csr-bb8fb is already approved\nI0414 10:13:00.046882       1 main.go:107] CSR csr-lwbd7 added\nI0414 10:13:00.046926       1 main.go:110] CSR csr-lwbd7 is already approved\nI0414 10:13:00.046997       1 main.go:107] CSR csr-qz852 added\nI0414 10:13:00.047044       1 main.go:110] CSR csr-qz852 is already approved\nI0414 10:13:00.047094       1 main.go:107] CSR csr-29z9n added\nI0414 10:13:00.047179       1 main.go:110] CSR csr-29z9n is already approved\nI0414 10:13:00.047232       1 main.go:107] CSR csr-5l5nv added\nI0414 10:13:00.047275       1 main.go:110] CSR csr-5l5nv is already approved\nI0414 10:13:00.047334       1 main.go:107] CSR csr-82c67 added\nI0414 10:13:00.047392       1 main.go:110] CSR csr-82c67 is already approved\nI0414 10:13:00.047440       1 main.go:107] CSR csr-xb6jc added\nI0414 10:13:00.047482       1 main.go:110] CSR csr-xb6jc is already approved\nW0414 10:38:13.345264       1 reflector.go:341] github.com/openshift/cluster-machine-approver/main.go:185: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 18006 (38716)\n
Apr 14 10:38:32.182 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-8595697d78-t2h55 node/ip-10-0-146-77.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 00996       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17118 (35949)\\nW0414 10:35:27.466684       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35981)\\nI0414 10:35:27.640498       1 trace.go:76] Trace[388167971]: \\\"Scheduling openshift-console/downloads-7f545c6fcb-hmmr5\\\" (started: 2020-04-14 10:35:27.489002103 +0000 UTC m=+1399.159064729) (total time: 151.453535ms):\\nTrace[388167971]: [143.520803ms] [143.47674ms] Prioritizing\\nI0414 10:35:27.953325       1 trace.go:76] Trace[1222864583]: \\\"Scheduling openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-8595697d78-t2h55\\\" (started: 2020-04-14 10:35:27.847312922 +0000 UTC m=+1399.517375427) (total time: 105.970099ms):\\nTrace[1222864583]: [105.962771ms] [105.608536ms] Selecting host\\nE0414 10:36:08.474547       1 server.go:259] lost master\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-141-120.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-141-120.us-east-2.compute.internal container=\"scheduler\" is not ready"\nW0414 10:38:13.352214       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 18010 (38716)\nW0414 10:38:13.378251       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 24873 (38716)\nW0414 10:38:13.644403       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 18010 (38775)\nW0414 10:38:13.765129       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17996 (38785)\nI0414 10:38:17.378876       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:38:17.379103       1 builder.go:217] server exited\n
Apr 14 10:38:35.984 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-774c779dc4-75k4j node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ompute.internal container=\"kube-controller-manager-cert-syncer-5\" is terminated: \"Error\" - \"I0414 10:11:24.512724       1 observer_polling.go:106] Starting file observer\\nI0414 10:11:24.513263       1 certsync_controller.go:269] Starting CertSyncer\\nW0414 10:17:44.532783       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28002)\\nW0414 10:25:43.537995       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28133 (31037)\\nW0414 10:31:12.543071       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31260 (33048)\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-141-120.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-141-120.us-east-2.compute.internal container=\"kube-controller-manager-5\" is not ready"\nW0414 10:38:13.349673       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 24873 (38716)\nW0414 10:38:13.349815       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 18728 (38716)\nW0414 10:38:13.349886       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 18010 (38716)\nW0414 10:38:13.636373       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 18010 (38775)\nW0414 10:38:13.763345       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17996 (38785)\nI0414 10:38:19.777248       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:38:19.777320       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:38:40.982 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-57ffq86cp node/ip-10-0-146-77.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:38:41.583 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5466659df6-w5vxh node/ip-10-0-146-77.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ratorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503" to "Available: v1.apps.openshift.io is not ready: 503\nAvailable: v1.image.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503"\nI0414 10:37:33.507576       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"860bab96-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.apps.openshift.io is not ready: 503\nAvailable: v1.image.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503" to "Available: v1.image.openshift.io is not ready: 503\nAvailable: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.route.openshift.io is not ready: 503"\nI0414 10:37:33.887638       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"860bab96-7e35-11ea-b22d-02e7f4354ac0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0414 10:38:14.563233       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0414 10:38:14.563390       1 leaderelection.go:65] leaderelection lost\n
Apr 14 10:38:42.182 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-544768b675-zhfhb node/ip-10-0-146-77.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:38:43.182 E ns/openshift-machine-config-operator pod/machine-config-controller-754b969c7b-wzz2b node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 14 10:38:44.183 E ns/openshift-service-ca-operator pod/service-ca-operator-59986bbb4d-jvnnz node/ip-10-0-146-77.us-east-2.compute.internal container=operator container exited with code 2 (Error): 
Apr 14 10:38:45.182 E ns/openshift-service-ca pod/apiservice-cabundle-injector-685cd447d4-hdvfz node/ip-10-0-146-77.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 14 10:38:46.783 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-5d86d9c955-66z2d node/ip-10-0-146-77.us-east-2.compute.internal container=operator container exited with code 2 (Error): -catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:37:48.795390       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0414 10:37:48.796684       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32723 (37927)\nI0414 10:37:49.796931       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0414 10:37:52.771048       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:38:02.780413       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:38:07.409137       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Secret total 0 items received\nI0414 10:38:12.791602       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0414 10:38:13.695507       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nW0414 10:38:13.758736       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17996 (38785)\nI0414 10:38:14.415637       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.DaemonSet total 0 items received\nW0414 10:38:14.446585       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.DaemonSet ended with: too old resource version: 32131 (33018)\nI0414 10:38:14.762198       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\nI0414 10:38:15.467049       1 reflector.go:169] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:132\n
Apr 14 10:38:47.983 E ns/openshift-service-ca pod/service-serving-cert-signer-774bbff6b6-dl9vc node/ip-10-0-146-77.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 14 10:38:59.264 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
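The duration-stamped entries such as the one above ("10:38:59.264 - 14s ... OpenShift API is not responding to GET requests") are produced by the test monitor polling the API at a fixed interval and reporting how long each run of consecutive failures lasted. A minimal sketch of that polling pattern with a hypothetical URL and interval (not the openshift-tests monitor itself):

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Hypothetical endpoint; the real monitor hits the cluster's API servers.
	const url = "https://api.example.test:6443/healthz"
	client := &http.Client{Timeout: 3 * time.Second}

	var downSince time.Time // zero value means "currently up"
	for range time.Tick(2 * time.Second) {
		resp, err := client.Get(url)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		switch {
		case !healthy && downSince.IsZero():
			downSince = time.Now() // start of an outage window
		case healthy && !downSince.IsZero():
			// End of the outage: report its duration, like the "- 14s" entries above.
			log.Printf("API was not responding for %s", time.Since(downSince).Round(time.Second))
			downSince = time.Time{}
		}
	}
}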
Apr 14 10:39:23.711 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): 9:52:39 +0000 UTC to 2021-04-14 09:52:40 +0000 UTC (now=2020-04-14 10:12:08.885217843 +0000 UTC))\nI0414 10:12:08.885246       1 secure_serving.go:136] Serving securely on [::]:10259\nI0414 10:12:08.892441       1 serving.go:77] Starting DynamicLoader\nI0414 10:12:09.787021       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0414 10:12:09.887230       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0414 10:12:09.887262       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0414 10:13:42.309212       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0414 10:35:27.223079       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17114 (35936)\nW0414 10:35:27.240719       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17114 (35940)\nW0414 10:35:27.300996       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17118 (35949)\nW0414 10:35:27.466684       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35981)\nI0414 10:35:27.640498       1 trace.go:76] Trace[388167971]: "Scheduling openshift-console/downloads-7f545c6fcb-hmmr5" (started: 2020-04-14 10:35:27.489002103 +0000 UTC m=+1399.159064729) (total time: 151.453535ms):\nTrace[388167971]: [143.520803ms] [143.47674ms] Prioritizing\nI0414 10:35:27.953325       1 trace.go:76] Trace[1222864583]: "Scheduling openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-8595697d78-t2h55" (started: 2020-04-14 10:35:27.847312922 +0000 UTC m=+1399.517375427) (total time: 105.970099ms):\nTrace[1222864583]: [105.962771ms] [105.608536ms] Selecting host\nE0414 10:36:08.474547       1 server.go:259] lost master\n
Apr 14 10:39:24.936 E ns/openshift-etcd pod/etcd-member-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-14 10:35:35.989703 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:35:35.991286 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:35:35.992246 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:35:35 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-14 10:35:37.005824 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 14 10:39:24.936 E ns/openshift-etcd pod/etcd-member-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): 532e13531728c06a (writer)\n2020-04-14 10:36:08.894302 I | rafthttp: stopped HTTP pipelining with peer 532e13531728c06a\n2020-04-14 10:36:08.894406 W | rafthttp: lost the TCP streaming connection with peer 532e13531728c06a (stream MsgApp v2 reader)\n2020-04-14 10:36:08.894434 I | rafthttp: stopped streaming with peer 532e13531728c06a (stream MsgApp v2 reader)\n2020-04-14 10:36:08.894512 W | rafthttp: lost the TCP streaming connection with peer 532e13531728c06a (stream Message reader)\n2020-04-14 10:36:08.894535 I | rafthttp: stopped streaming with peer 532e13531728c06a (stream Message reader)\n2020-04-14 10:36:08.894543 I | rafthttp: stopped peer 532e13531728c06a\n2020-04-14 10:36:08.894551 I | rafthttp: stopping peer d8f9f81ea76f358f...\n2020-04-14 10:36:08.894957 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 writer)\n2020-04-14 10:36:08.894974 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:36:08.895361 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream Message writer)\n2020-04-14 10:36:08.895377 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:36:08.895475 I | rafthttp: stopped HTTP pipelining with peer d8f9f81ea76f358f\n2020-04-14 10:36:08.895589 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:36:08.895609 E | rafthttp: failed to read d8f9f81ea76f358f on stream MsgApp v2 (context canceled)\n2020-04-14 10:36:08.895616 I | rafthttp: peer d8f9f81ea76f358f became inactive (message send to peer failed)\n2020-04-14 10:36:08.895624 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:36:08.895705 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:36:08.895728 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:36:08.895736 I | rafthttp: stopped peer d8f9f81ea76f358f\n
Apr 14 10:39:25.514 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0414 10:11:24.512724       1 observer_polling.go:106] Starting file observer\nI0414 10:11:24.513263       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:17:44.532783       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28002)\nW0414 10:25:43.537995       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28133 (31037)\nW0414 10:31:12.543071       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31260 (33048)\n
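Editor's note: the "watch of *v1.ConfigMap ended with: too old resource version" warnings in the entry above (and in most operator logs in this run) come from client-go reflectors whose watch fell behind etcd compaction; the reflector re-lists and the informer cache converges again, so the warning alone does not terminate the process. As a rough illustration only — not the cert-syncer's actual code — a minimal Go sketch of a namespaced ConfigMap informer of that shape might look like this (kubeconfig loading, namespace choice, and resync period are assumptions):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace-scoped ConfigMap informer, mirroring the kind of watch the
	// cert-syncer logs above. Namespace and resync period are illustrative.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithNamespace("openshift-kube-apiserver"),
	)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			klog.Infof("observed configmap %s/%s", cm.Namespace, cm.Name)
		},
	})

	ctx := context.Background()
	factory.Start(ctx.Done())
	// If the underlying watch expires ("too old resource version"), the
	// reflector re-lists on its own; the process keeps running. A real
	// component would wire ctx to SIGTERM instead of blocking forever.
	factory.WaitForCacheSync(ctx.Done())
	<-ctx.Done()
}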
Apr 14 10:39:25.514 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): Path:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-665f6cdc69 to 1\nI0414 10:36:00.456116       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-665f6cdc69, need 1, creating 1\nI0414 10:36:00.466099       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0414 10:36:00.470987       1 service_controller.go:734] Service has been deleted openshift-marketplace/redhat-operators. Attempting to cleanup load balancer resources\nI0414 10:36:00.490480       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-665f6cdc69", UID:"bba46524-7e3b-11ea-8f05-0207acdf813e", APIVersion:"apps/v1", ResourceVersion:"37086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-665f6cdc69-8xbzr\nI0414 10:36:01.796223       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/cluster-monitoring-operator: Operation cannot be fulfilled on deployments.apps "cluster-monitoring-operator": the object has been modified; please apply your changes to the latest version and try again\nE0414 10:36:08.381158       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0414 10:36:08.381208       1 reflector.go:237] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0414 10:36:08.421832       1 controllermanager.go:282] leaderelection lost\nI0414 10:36:08.421866       1 serving.go:88] Shutting down DynamicLoader\n
Apr 14 10:39:25.913 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0414 10:10:44.378839       1 observer_polling.go:106] Starting file observer\nI0414 10:10:44.379067       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:19:51.739662       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (28761)\nW0414 10:26:09.746185       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28932 (31237)\nW0414 10:33:09.753613       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31437 (33563)\nE0414 10:36:08.519107       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?resourceVersion=17114&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0414 10:36:08.627220       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?resourceVersion=34252&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Apr 14 10:39:25.913 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369372       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369476       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369489       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369606       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369619       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369726       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369737       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369902       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.369914       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6067, ErrCode=NO_ERROR, debug=""\nI0414 10:36:08.395856       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 14 10:39:29.712 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): 9:52:39 +0000 UTC to 2021-04-14 09:52:40 +0000 UTC (now=2020-04-14 10:12:08.885217843 +0000 UTC))\nI0414 10:12:08.885246       1 secure_serving.go:136] Serving securely on [::]:10259\nI0414 10:12:08.892441       1 serving.go:77] Starting DynamicLoader\nI0414 10:12:09.787021       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0414 10:12:09.887230       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0414 10:12:09.887262       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0414 10:13:42.309212       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0414 10:35:27.223079       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17114 (35936)\nW0414 10:35:27.240719       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17114 (35940)\nW0414 10:35:27.300996       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17118 (35949)\nW0414 10:35:27.466684       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35981)\nI0414 10:35:27.640498       1 trace.go:76] Trace[388167971]: "Scheduling openshift-console/downloads-7f545c6fcb-hmmr5" (started: 2020-04-14 10:35:27.489002103 +0000 UTC m=+1399.159064729) (total time: 151.453535ms):\nTrace[388167971]: [143.520803ms] [143.47674ms] Prioritizing\nI0414 10:35:27.953325       1 trace.go:76] Trace[1222864583]: "Scheduling openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-8595697d78-t2h55" (started: 2020-04-14 10:35:27.847312922 +0000 UTC m=+1399.517375427) (total time: 105.970099ms):\nTrace[1222864583]: [105.962771ms] [105.608536ms] Selecting host\nE0414 10:36:08.474547       1 server.go:259] lost master\n
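Editor's note: several of the control-plane exits recorded in this run ("leaderelection lost", "lost master", exit code 255) are the expected fail-fast behavior of client-go leader election: when the lease cannot be renewed, OnStoppedLeading fires and the process terminates so another replica can take over; klog's Fatal exits with status 255, which matches the exit codes above. The following is a minimal, hypothetical sketch, not the operator's code — the lock namespace/name mirror the scheduler log, while the lock type, durations, and exact resourcelock.New signature (which varies across client-go releases) are assumptions:

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease lock named after the scheduler's lease in the log above.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"openshift-kube-scheduler", "kube-scheduler",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the controller loops until the context is cancelled.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// This fatal is what produces "leaderelection lost" and the
				// non-zero container exit seen in the events above.
				klog.Fatal("leaderelection lost")
			},
		},
	})
}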
Apr 14 10:39:31.336 E ns/openshift-etcd pod/etcd-member-ip-10-0-141-120.us-east-2.compute.internal node/ip-10-0-141-120.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:37:50.479102 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:37:50.481892 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:37:50.482464 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:37:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:52 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.141.120:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:37:55 Failed to dial etcd-0.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.141.120:9978: connect: connection refused\n
Apr 14 10:40:05.895 E ns/openshift-authentication pod/oauth-openshift-84b4b64b57-7sdkv node/ip-10-0-141-120.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 14 10:40:29.264 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 14 10:40:34.237 E ns/openshift-apiserver pod/apiserver-vhgln node/ip-10-0-146-77.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): hift-etcd.svc:2379 <nil>}]\nI0414 10:38:44.988333       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0414 10:38:45.011949       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0414 10:38:54.381482       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0414 10:38:54.381800       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0414 10:38:54.381966       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0414 10:38:54.381993       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0414 10:38:54.382001       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0414 10:38:54.382013       1 serving.go:88] Shutting down DynamicLoader\nI0414 10:38:54.382033       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.382317       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.382708       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0414 10:38:54.382831       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0414 10:38:54.382976       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.383324       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.383447       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.383970       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0414 10:38:54.384112       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 14 10:40:34.256 E ns/openshift-monitoring pod/node-exporter-5jn89 node/ip-10-0-146-77.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:40:34.256 E ns/openshift-monitoring pod/node-exporter-5jn89 node/ip-10-0-146-77.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:40:34.468 E ns/openshift-image-registry pod/node-ca-f2wgz node/ip-10-0-146-77.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:40:42.571 E ns/openshift-controller-manager pod/controller-manager-wpjnr node/ip-10-0-146-77.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 14 10:40:42.970 E ns/openshift-sdn pod/ovs-k4jgs node/ip-10-0-146-77.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): 4T10:38:21.466Z|00279|connmgr|INFO|br0<->unix#474: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:21.543Z|00280|connmgr|INFO|br0<->unix#477: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:21.605Z|00281|bridge|INFO|bridge br0: deleted interface vethfc6cd8b0 on port 31\n2020-04-14T10:38:21.880Z|00282|connmgr|INFO|br0<->unix#480: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:21.913Z|00283|bridge|INFO|bridge br0: deleted interface vethc7a279ac on port 4\n2020-04-14T10:38:21.969Z|00284|connmgr|INFO|br0<->unix#483: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:22.040Z|00285|connmgr|INFO|br0<->unix#486: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:22.078Z|00286|bridge|INFO|bridge br0: deleted interface vethf1c51e25 on port 27\n2020-04-14T10:38:22.140Z|00287|connmgr|INFO|br0<->unix#489: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:22.210Z|00288|connmgr|INFO|br0<->unix#492: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:22.235Z|00289|bridge|INFO|bridge br0: deleted interface vethc05e7053 on port 36\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:38:21.896Z|00034|jsonrpc|WARN|unix#371: send error: Broken pipe\n2020-04-14T10:38:21.896Z|00035|reconnect|WARN|unix#371: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-14T10:38:23.962Z|00290|connmgr|INFO|br0<->unix#495: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:23.995Z|00291|connmgr|INFO|br0<->unix#498: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:24.019Z|00292|bridge|INFO|bridge br0: deleted interface veth37beebc8 on port 29\nTerminated\n2020-04-14T10:38:54Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/ovs-vswitchd.71787.ctl: receive error: Connection reset by peer\n2020-04-14T10:38:54Z|00002|unixctl|WARN|error communicating with unix:/var/run/openvswitch/ovs-vswitchd.71787.ctl: Connection reset by peer\novs-appctl: /var/run/openvswitch/ovs-vswitchd.71787.ctl: transaction error (Connection reset by peer)\novsdb-server is not running.\n
Apr 14 10:40:43.371 E ns/openshift-sdn pod/sdn-rs9zx node/ip-10-0-146-77.us-east-2.compute.internal container=sdn container exited with code 255 (Error): r default/kubernetes:https to [10.0.135.38:6443 10.0.141.120:6443]\nI0414 10:38:54.426024   72098 roundrobin.go:240] Delete endpoint 10.0.146.77:6443 for service "default/kubernetes:https"\nE0414 10:38:54.506200   72098 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0414 10:38:54.506340   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:54.607476   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:54.706650   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nE0414 10:38:54.714652   72098 proxier.go:356] Failed to ensure iptables: error creating chain "KUBE-PORTALS-CONTAINER": signal: terminated: \nI0414 10:38:54.714682   72098 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:38:54.714696   72098 proxier.go:346] userspace syncProxyRules took 9.953142ms\nI0414 10:38:54.808181   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:54.910275   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:55.006920   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:55.106751   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:38:55.206679   72098 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
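Editor's note: the sdn container's shutdown loop above repeatedly logs "SDN healthcheck unable to reconnect to OVS server" because the openvswitch container on the same node has already been terminated, so /var/run/openvswitch/db.sock no longer exists. A minimal, hypothetical Go sketch of that kind of unix-socket probe (the socket path is taken from the log; the retry count and interval are assumptions, not the component's real cadence):

package main

import (
	"log"
	"net"
	"time"
)

// probe dials the OVS database unix socket and reports whether it is reachable.
func probe(path string) error {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const sock = "/var/run/openvswitch/db.sock"
	for i := 0; i < 5; i++ {
		if err := probe(sock); err != nil {
			log.Printf("healthcheck unable to reach OVS server: %v", err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		log.Print("OVS server reachable")
		return
	}
	log.Fatal("giving up: OVS server unreachable")
}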
Apr 14 10:40:44.050 E ns/openshift-monitoring pod/node-exporter-fdtmw node/ip-10-0-128-162.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 14 10:40:44.050 E ns/openshift-monitoring pod/node-exporter-fdtmw node/ip-10-0-128-162.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 14 10:40:44.068 E ns/openshift-image-registry pod/node-ca-djqxf node/ip-10-0-128-162.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 14 10:40:44.285 E ns/openshift-dns pod/dns-default-mg6nn node/ip-10-0-128-162.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:25:11.810Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:25:11.810Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:25:11.810Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0414 10:35:27.238430       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 28567 (35943)\nW0414 10:38:13.760890       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17996 (38785)\nW0414 10:38:54.921996       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 37103 (38265)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:40:44.285 E ns/openshift-dns pod/dns-default-mg6nn node/ip-10-0-128-162.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (115) - No such process\n
Apr 14 10:40:44.770 E ns/openshift-sdn pod/sdn-controller-dhgrr node/ip-10-0-146-77.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): che.go:141] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:34:55.264642       1 memcache.go:141] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:36:43.427434       1 memcache.go:141] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:36:46.498177       1 memcache.go:141] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:36:49.572424       1 memcache.go:141] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:36:52.641041       1 memcache.go:141] couldn't get resource list for user.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:37:23.361263       1 memcache.go:141] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:37:26.433334       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:37:29.504631       1 memcache.go:141] couldn't get resource list for image.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:37:32.577885       1 memcache.go:141] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request\nE0414 10:37:35.648792       1 memcache.go:141] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request\nW0414 10:38:13.536146       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 24844 (38750)\nW0414 10:38:13.762220       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17996 (38785)\n
Apr 14 10:40:45.969 E ns/openshift-machine-config-operator pod/machine-config-server-qssqf node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 14 10:40:48.168 E ns/openshift-sdn pod/sdn-q4228 node/ip-10-0-128-162.us-east-2.compute.internal container=sdn container exited with code 255 (Error): nshift-ingress/router-internal-default:metrics"\nI0414 10:38:50.176848   59618 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:38:50.176873   59618 proxier.go:346] userspace syncProxyRules took 53.04345ms\nI0414 10:38:50.345451   59618 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:38:50.345474   59618 proxier.go:346] userspace syncProxyRules took 60.461522ms\nI0414 10:38:54.426751   59618 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.135.38:6443 10.0.141.120:6443]\nI0414 10:38:54.426790   59618 roundrobin.go:240] Delete endpoint 10.0.146.77:6443 for service "default/kubernetes:https"\nI0414 10:38:54.585302   59618 proxier.go:367] userspace proxy: processing 0 service events\nI0414 10:38:54.585327   59618 proxier.go:346] userspace syncProxyRules took 50.590305ms\ninterrupt: Gracefully shutting down ...\nE0414 10:39:02.142828   59618 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0414 10:39:02.142942   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:39:02.243231   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:39:02.346497   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:39:02.448466   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:39:02.544396   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0414 10:39:02.644371   59618 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 14 10:40:48.546 E ns/openshift-multus pod/multus-8vkqw node/ip-10-0-128-162.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:40:48.574 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:40:38.393544 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:40:38.397321 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:40:38.398414 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:40:38 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:39 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:41 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:43 Failed to dial etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.146.77:9978: connect: connection refused\n
Apr 14 10:40:48.910 E ns/openshift-sdn pod/ovs-tzjfp node/ip-10-0-128-162.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error):  (2 deletes)\n2020-04-14T10:38:21.537Z|00161|connmgr|INFO|br0<->unix#251: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:21.573Z|00162|bridge|INFO|bridge br0: deleted interface veth06fc218e on port 12\n2020-04-14T10:38:21.634Z|00163|connmgr|INFO|br0<->unix#254: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:21.694Z|00164|connmgr|INFO|br0<->unix#257: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:21.729Z|00165|bridge|INFO|bridge br0: deleted interface veth60898d8e on port 17\n2020-04-14T10:38:21.790Z|00166|connmgr|INFO|br0<->unix#260: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:21.856Z|00167|connmgr|INFO|br0<->unix#263: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:21.897Z|00168|bridge|INFO|bridge br0: deleted interface veth903f09e0 on port 16\n2020-04-14T10:38:21.954Z|00169|connmgr|INFO|br0<->unix#266: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:22.014Z|00170|connmgr|INFO|br0<->unix#269: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:22.046Z|00171|bridge|INFO|bridge br0: deleted interface veth8c61cd88 on port 14\n2020-04-14T10:38:22.097Z|00172|connmgr|INFO|br0<->unix#272: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-14T10:38:22.138Z|00173|connmgr|INFO|br0<->unix#275: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:22.173Z|00174|bridge|INFO|bridge br0: deleted interface veth7a45e5cf on port 18\n2020-04-14T10:38:50.354Z|00175|connmgr|INFO|br0<->unix#281: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-14T10:38:50.374Z|00176|bridge|INFO|bridge br0: deleted interface vethbb308604 on port 6\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-14T10:38:50.328Z|00020|jsonrpc|WARN|Dropped 5 log messages in last 657 seconds (most recently, 656 seconds ago) due to excessive rate\n2020-04-14T10:38:50.328Z|00021|jsonrpc|WARN|unix#212: receive error: Connection reset by peer\n2020-04-14T10:38:50.328Z|00022|reconnect|WARN|unix#212: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 14 10:40:49.278 E ns/openshift-machine-config-operator pod/machine-config-daemon-cb7sh node/ip-10-0-128-162.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 14 10:40:49.643 E ns/openshift-cluster-node-tuning-operator pod/tuned-62qbl node/ip-10-0-128-162.us-east-2.compute.internal container=tuned container exited with code 255 (Error): ned.go:435] Pod (openshift-marketplace/community-operators-5f5f86cb8f-w7jd7) labels changed node wide: true\nI0414 10:37:12.679340   79278 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:37:12.681298   79278 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:37:12.790507   79278 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:38:20.069924   79278 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0414 10:38:22.679352   79278 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:38:22.680787   79278 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:38:22.802330   79278 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:38:25.851513   79278 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-865cd4cbb5-x4ppt) labels changed node wide: true\nI0414 10:38:27.679480   79278 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:38:27.680970   79278 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:38:27.790053   79278 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:38:28.059556   79278 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0414 10:38:32.679328   79278 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0414 10:38:32.680739   79278 openshift-tuned.go:326] Getting recommended profile...\nI0414 10:38:32.788646   79278 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0414 10:39:01.414130   79278 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-qrfj5/foo-26swg) labels changed node wide: true\n
Apr 14 10:40:53.369 E ns/openshift-machine-config-operator pod/machine-config-daemon-5krb2 node/ip-10-0-146-77.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 14 10:40:53.970 E ns/openshift-dns pod/dns-default-66rx9 node/ip-10-0-146-77.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 14 10:40:53.970 E ns/openshift-dns pod/dns-default-66rx9 node/ip-10-0-146-77.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-14T10:24:54.791Z [INFO] CoreDNS-1.3.1\n2020-04-14T10:24:54.791Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-14T10:24:54.791Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0414 10:32:59.044964       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 28567 (33784)\nW0414 10:35:27.242345       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 33784 (35943)\nW0414 10:38:13.761451       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 25263 (38785)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 14 10:40:54.371 E ns/openshift-multus pod/multus-vrr5p node/ip-10-0-146-77.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 14 10:41:02.773 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error):  (stream MsgApp v2 reader)\n2020-04-14 10:38:54.874778 E | rafthttp: failed to read 866e24bff86ef2f4 on stream MsgApp v2 (context canceled)\n2020-04-14 10:38:54.874786 I | rafthttp: peer 866e24bff86ef2f4 became inactive (message send to peer failed)\n2020-04-14 10:38:54.874795 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream MsgApp v2 reader)\n2020-04-14 10:38:54.874854 W | rafthttp: lost the TCP streaming connection with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:38:54.874866 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:38:54.874874 I | rafthttp: stopped peer 866e24bff86ef2f4\n2020-04-14 10:38:54.874883 I | rafthttp: stopping peer d8f9f81ea76f358f...\n2020-04-14 10:38:54.875321 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 writer)\n2020-04-14 10:38:54.875333 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:38:54.875726 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream Message writer)\n2020-04-14 10:38:54.875737 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:38:54.875845 I | rafthttp: stopped HTTP pipelining with peer d8f9f81ea76f358f\n2020-04-14 10:38:54.875909 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:38:54.875927 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:38:54.875978 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:38:54.875988 E | rafthttp: failed to read d8f9f81ea76f358f on stream Message (context canceled)\n2020-04-14 10:38:54.875996 I | rafthttp: peer d8f9f81ea76f358f became inactive (message send to peer failed)\n2020-04-14 10:38:54.876003 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:38:54.876012 I | rafthttp: stopped peer d8f9f81ea76f358f\n
Apr 14 10:41:02.773 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-14 10:38:26.599712 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:38:26.600899 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:38:26.601844 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:38:26 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-14 10:38:27.615417 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 14 10:41:03.170 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.384722       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.384932       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385088       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385110       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385229       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385254       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.402008       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0414 10:38:54.412447       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.135.38 10.0.141.120]\nI0414 10:38:54.472431       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.473513       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.475941       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.479130       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.483567       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\n
Apr 14 10:41:03.170 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0414 10:12:27.019822       1 observer_polling.go:106] Starting file observer\nI0414 10:12:27.020034       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:22:25.650560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (29500)\nW0414 10:28:06.656314       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29627 (31946)\nW0414 10:34:09.664935       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32192 (34635)\n
Apr 14 10:41:03.574 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs": unable to monitor quota for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines"]\nE0414 10:38:54.373165       1 controllermanager.go:282] leaderelection lost\nI0414 10:38:54.373208       1 serving.go:88] Shutting down DynamicLoader\n
Apr 14 10:41:03.574 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0414 10:12:28.751583       1 certsync_controller.go:269] Starting CertSyncer\nI0414 10:12:28.751738       1 observer_polling.go:106] Starting file observer\nE0414 10:12:31.743441       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:12:31.743553       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0414 10:19:05.764421       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28391)\nW0414 10:28:03.769752       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28630 (31936)\nW0414 10:34:10.776300       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32182 (34637)\n
Apr 14 10:41:03.971 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): ow=2020-04-14 10:13:29.66587127 +0000 UTC))\nI0414 10:13:29.665947       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1586857960" [] issuer="<self>" (2020-04-14 09:52:39 +0000 UTC to 2021-04-14 09:52:40 +0000 UTC (now=2020-04-14 10:13:29.665922243 +0000 UTC))\nI0414 10:13:29.665967       1 secure_serving.go:136] Serving securely on [::]:10259\nI0414 10:13:29.666058       1 serving.go:77] Starting DynamicLoader\nI0414 10:13:30.568677       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0414 10:13:30.669010       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0414 10:13:30.669044       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0414 10:32:59.050744       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 18003 (33784)\nW0414 10:35:27.242727       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35943)\nW0414 10:35:27.242889       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18012 (35943)\nW0414 10:35:27.242992       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17994 (35943)\nI0414 10:36:26.707952       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0414 10:38:13.379338       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17994 (38716)\nE0414 10:38:54.673168       1 server.go:259] lost master\nI0414 10:38:54.675907       1 serving.go:88] Shutting down DynamicLoader\nI0414 10:38:54.676167       1 secure_serving.go:180] Stopped listening on [::]:10251\n
Apr 14 10:41:15.772 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs": unable to monitor quota for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines"]\nE0414 10:38:54.373165       1 controllermanager.go:282] leaderelection lost\nI0414 10:38:54.373208       1 serving.go:88] Shutting down DynamicLoader\n
Apr 14 10:41:15.772 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0414 10:12:28.751583       1 certsync_controller.go:269] Starting CertSyncer\nI0414 10:12:28.751738       1 observer_polling.go:106] Starting file observer\nE0414 10:12:31.743441       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:12:31.743553       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0414 10:19:05.764421       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28391)\nW0414 10:28:03.769752       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28630 (31936)\nW0414 10:34:10.776300       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32182 (34637)\n
Apr 14 10:41:17.173 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.384722       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.384932       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385088       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385110       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385229       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.385254       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6027, ErrCode=NO_ERROR, debug=""\nI0414 10:38:54.402008       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0414 10:38:54.412447       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.135.38 10.0.141.120]\nI0414 10:38:54.472431       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.473513       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.475941       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.479130       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0414 10:38:54.483567       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\n
Apr 14 10:41:17.173 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0414 10:12:27.019822       1 observer_polling.go:106] Starting file observer\nI0414 10:12:27.020034       1 certsync_controller.go:269] Starting CertSyncer\nW0414 10:22:25.650560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26480 (29500)\nW0414 10:28:06.656314       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29627 (31946)\nW0414 10:34:09.664935       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32192 (34635)\n
Apr 14 10:41:17.970 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): ow=2020-04-14 10:13:29.66587127 +0000 UTC))\nI0414 10:13:29.665947       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1586857960" [] issuer="<self>" (2020-04-14 09:52:39 +0000 UTC to 2021-04-14 09:52:40 +0000 UTC (now=2020-04-14 10:13:29.665922243 +0000 UTC))\nI0414 10:13:29.665967       1 secure_serving.go:136] Serving securely on [::]:10259\nI0414 10:13:29.666058       1 serving.go:77] Starting DynamicLoader\nI0414 10:13:30.568677       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0414 10:13:30.669010       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0414 10:13:30.669044       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0414 10:32:59.050744       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 18003 (33784)\nW0414 10:35:27.242727       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 28567 (35943)\nW0414 10:35:27.242889       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18012 (35943)\nW0414 10:35:27.242992       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17994 (35943)\nI0414 10:36:26.707952       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0414 10:38:13.379338       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17994 (38716)\nE0414 10:38:54.673168       1 server.go:259] lost master\nI0414 10:38:54.675907       1 serving.go:88] Shutting down DynamicLoader\nI0414 10:38:54.676167       1 secure_serving.go:180] Stopped listening on [::]:10251\n
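The scheduler log shows the client-go leader-election cycle end to end: it waits on the openshift-kube-scheduler/kube-scheduler lease, acquires it at 10:36:26, and exits with "lost master" once it can no longer renew against the terminating apiserver. A sketch of that pattern with client-go's leaderelection package, assuming a Lease-based lock and illustrative timings (the namespace and name come from the log; the lock type and durations are assumptions, not the scheduler's real configuration):

package main

import (
    "context"
    "log"
    "os"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    id, _ := os.Hostname()
    lock := &resourcelock.LeaseLock{
        LeaseMeta: metav1.ObjectMeta{
            Namespace: "openshift-kube-scheduler",
            Name:      "kube-scheduler",
        },
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    }

    leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                log.Println("acquired lease, starting work")
                <-ctx.Done()
            },
            // Roughly where a component logs "lost master" / "leaderelection lost"
            // and exits so its static pod gets restarted.
            OnStoppedLeading: func() {
                log.Fatal("lost lease, exiting")
            },
        },
    })
}

Exiting on a lost lease is deliberate: it is why these events surface as containers terminating with code 255 and then being restarted by the kubelet.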
Apr 14 10:41:18.372 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs": unable to monitor quota for resource "machineconfiguration.openshift.io/v1, Resource=mcoconfigs", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines"]\nE0414 10:38:54.373165       1 controllermanager.go:282] leaderelection lost\nI0414 10:38:54.373208       1 serving.go:88] Shutting down DynamicLoader\n
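The controller-manager's "couldn't start monitor for resource ..." burst happens when its quota and garbage-collection machinery re-discovers API resources while aggregated and CRD-backed groups are briefly unreachable during the rollout. A rough illustration of that discovery step, which tolerates partial failure in the same way (a sketch assuming in-cluster credentials; it is not the quota controller itself):

package main

import (
    "log"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // ServerPreferredResources returns a partial list plus an error when some
    // API groups (aggregated or CRD-backed) cannot be reached -- the same
    // situation behind the "unable to monitor quota for resource" messages.
    lists, err := dc.ServerPreferredResources()
    if err != nil {
        log.Printf("partial discovery failure: %v", err)
    }
    for _, l := range lists {
        log.Printf("group/version %s exposes %d resources", l.GroupVersion, len(l.APIResources))
    }
}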
Apr 14 10:41:18.372 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0414 10:12:28.751583       1 certsync_controller.go:269] Starting CertSyncer\nI0414 10:12:28.751738       1 observer_polling.go:106] Starting file observer\nE0414 10:12:31.743441       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0414 10:12:31.743553       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0414 10:19:05.764421       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18113 (28391)\nW0414 10:28:03.769752       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28630 (31936)\nW0414 10:34:10.776300       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32182 (34637)\n
Apr 14 10:41:18.772 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error):  (stream MsgApp v2 reader)\n2020-04-14 10:38:54.874778 E | rafthttp: failed to read 866e24bff86ef2f4 on stream MsgApp v2 (context canceled)\n2020-04-14 10:38:54.874786 I | rafthttp: peer 866e24bff86ef2f4 became inactive (message send to peer failed)\n2020-04-14 10:38:54.874795 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream MsgApp v2 reader)\n2020-04-14 10:38:54.874854 W | rafthttp: lost the TCP streaming connection with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:38:54.874866 I | rafthttp: stopped streaming with peer 866e24bff86ef2f4 (stream Message reader)\n2020-04-14 10:38:54.874874 I | rafthttp: stopped peer 866e24bff86ef2f4\n2020-04-14 10:38:54.874883 I | rafthttp: stopping peer d8f9f81ea76f358f...\n2020-04-14 10:38:54.875321 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 writer)\n2020-04-14 10:38:54.875333 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:38:54.875726 I | rafthttp: closed the TCP streaming connection with peer d8f9f81ea76f358f (stream Message writer)\n2020-04-14 10:38:54.875737 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (writer)\n2020-04-14 10:38:54.875845 I | rafthttp: stopped HTTP pipelining with peer d8f9f81ea76f358f\n2020-04-14 10:38:54.875909 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:38:54.875927 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream MsgApp v2 reader)\n2020-04-14 10:38:54.875978 W | rafthttp: lost the TCP streaming connection with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:38:54.875988 E | rafthttp: failed to read d8f9f81ea76f358f on stream Message (context canceled)\n2020-04-14 10:38:54.875996 I | rafthttp: peer d8f9f81ea76f358f became inactive (message send to peer failed)\n2020-04-14 10:38:54.876003 I | rafthttp: stopped streaming with peer d8f9f81ea76f358f (stream Message reader)\n2020-04-14 10:38:54.876012 I | rafthttp: stopped peer d8f9f81ea76f358f\n
Apr 14 10:41:18.772 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-14 10:38:26.599712 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:38:26.600899 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:38:26.601844 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:38:26 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-14 10:38:27.615417 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 14 10:41:20.795 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-77.us-east-2.compute.internal node/ip-10-0-146-77.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-14 10:40:38.393544 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-14 10:40:38.397321 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-14 10:40:38.398414 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/14 10:40:38 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:39 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:41 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.77:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/14 10:40:43 Failed to dial etcd-1.ci-op-dcmxx0nc-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\ndial tcp 10.0.146.77:9978: connect: connection refused\n
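The two etcd-metrics entries are the grpc-proxy retrying against 10.0.146.77:9978 while the member on that node is down, so every dial ends in "connection refused". A hedged example of probing the same endpoint with the etcd v3 client over TLS (the endpoint is taken from the log; the certificate paths and timeouts are placeholders):

package main

import (
    "context"
    "log"
    "time"

    "go.etcd.io/etcd/clientv3"
    "go.etcd.io/etcd/pkg/transport"
)

func main() {
    // Placeholder cert paths; the real static pod mounts per-member certs under /etc/ssl/etcd.
    tlsInfo := transport.TLSInfo{
        CertFile:      "/etc/ssl/etcd/client.crt",
        KeyFile:       "/etc/ssl/etcd/client.key",
        TrustedCAFile: "/etc/ssl/etcd/ca.crt",
    }
    tlsCfg, err := tlsInfo.ClientConfig()
    if err != nil {
        log.Fatal(err)
    }

    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"https://10.0.146.77:9978"},
        DialTimeout: 5 * time.Second,
        TLS:         tlsCfg,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    st, err := cli.Status(ctx, "https://10.0.146.77:9978")
    if err != nil {
        log.Fatal(err) // e.g. "connect: connection refused" while the member is down
    }
    log.Printf("member %x healthy at revision %d", st.Header.MemberId, st.Header.Revision)
}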