Result: FAILURE
Tests: 3 failed / 19 succeeded
Started: 2020-04-03 12:08
Elapsed: 1h24m
Work namespace: ci-op-0jlm1jrd
Refs: release-4.1:514189df, 812:8d0c3f82
pod: d9b71582-75a3-11ea-ae76-0a58ac104072
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


openshift-tests: Monitor cluster while tests execute (47m28s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
217 error-level events were detected during this test run:

Apr 03 12:43:42.223 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-c6f9fbbd-4gt5p node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ctory.go:132: watch of *v1.ClusterRoleBinding ended with: too old resource version: 10798 (12870)\nW0403 12:38:18.877103       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 5469 (12866)\nW0403 12:38:18.877217       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 4862 (12870)\nW0403 12:38:18.877298       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10931 (12866)\nW0403 12:38:18.877359       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 5650 (13864)\nW0403 12:38:19.038766       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 4982 (14039)\nW0403 12:38:19.063434       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 4982 (14039)\nW0403 12:38:19.206165       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 4982 (14039)\nW0403 12:38:19.219011       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 13889 (14039)\nW0403 12:38:19.222903       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 12916 (14039)\nI0403 12:43:41.569093       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 12:43:41.569159       1 builder.go:217] server exited\nI0403 12:43:41.593844       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\n
Apr 03 12:45:11.452 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7f9995fc6-mmg8q node/ip-10-0-152-202.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ed with: too old resource version: 8734 (13861)\nW0403 12:38:18.940735       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.DaemonSet ended with: too old resource version: 11953 (12870)\nW0403 12:38:18.940803       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 8866 (13864)\nW0403 12:38:18.985715       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 11120 (12866)\nW0403 12:38:19.190074       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 5250 (14039)\nW0403 12:38:19.204788       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 5228 (14039)\nW0403 12:38:19.250680       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 13889 (14039)\nW0403 12:38:19.377430       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 8926 (14041)\nW0403 12:43:41.201932       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14046 (16091)\nW0403 12:44:51.401638       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14057 (16565)\nW0403 12:45:04.867724       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14046 (16626)\nI0403 12:45:10.622258       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 12:45:10.622329       1 leaderelection.go:65] leaderelection lost\n
Apr 03 12:47:39.465 E ns/openshift-machine-api pod/machine-api-controllers-85cfb7887f-pkzfv node/ip-10-0-139-84.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 12:47:39.465 E ns/openshift-machine-api pod/machine-api-controllers-85cfb7887f-pkzfv node/ip-10-0-139-84.us-west-1.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 12:47:43.714 E ns/openshift-cluster-machine-approver pod/machine-approver-584d8459fc-m96fc node/ip-10-0-152-202.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): /apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=15131&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 12:47:23.486711       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 12:47:28.651294       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found]\n
Apr 03 12:47:46.378 E ns/openshift-ingress-operator pod/ingress-operator-56c894d94b-htl2f node/ip-10-0-130-16.us-west-1.compute.internal container=ingress-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:47:54.774 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator monitoring is still updating\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (107 of 350)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (94 of 350)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (162 of 350)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (185 of 350)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (199 of 350)\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (133 of 350)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (122 of 350)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (282 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (290 of 350)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (209 of 350)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (217 of 350)
Apr 03 12:48:26.638 E ns/openshift-monitoring pod/prometheus-operator-7968cbc4d9-4x68b node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:48:27.211 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-58b554984c-gctpj node/ip-10-0-152-202.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:48:44.474 E ns/openshift-operator-lifecycle-manager pod/packageserver-687dd4b8cf-hcd6q node/ip-10-0-139-84.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:48:44.653 E ns/openshift-monitoring pod/prometheus-adapter-7dfb57b97f-vgshd node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 12:48:55.383 E ns/openshift-controller-manager pod/controller-manager-qvqqg node/ip-10-0-139-84.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 12:48:55.429 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 12:48:57.574 E ns/openshift-console pod/downloads-745668bcd8-hmhqp node/ip-10-0-152-202.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 12:49:00.387 E ns/openshift-console-operator pod/console-operator-669476b85-pdjlw node/ip-10-0-139-84.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): 20-04-03T12:47:43Z" level=info msg="finished syncing operator \"cluster\" (43.326µs) \n\n"\nW0403 12:47:56.354839       1 reflector.go:270] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: The resourceVersion for the provided watch is too old.\ntime="2020-04-03T12:47:57Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 12:47:57.373321817 +0000 UTC m=+922.345756816)"\ntime="2020-04-03T12:47:57Z" level=info msg="console is in a managed state."\ntime="2020-04-03T12:47:57Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T12:47:57Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T12:47:57Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T12:47:57Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T12:47:57Z" level=info msg=-----------------------\ntime="2020-04-03T12:47:57Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T12:47:57Z" level=info msg=-----------------------\ntime="2020-04-03T12:47:57Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T12:47:57Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T12:47:57Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T12:47:57Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T12:47:57Z" level=info msg="finished syncing operator \"cluster\" (231.704µs) \n\n"\nI0403 12:48:59.375753       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 12:48:59.375854       1 builder.go:217] server exited\nI0403 12:48:59.377921       1 controller.go:71] Shutting down Console\n
Apr 03 12:49:02.806 E ns/openshift-monitoring pod/telemeter-client-587546fb9-9kzvq node/ip-10-0-154-233.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 12:49:02.806 E ns/openshift-monitoring pod/telemeter-client-587546fb9-9kzvq node/ip-10-0-154-233.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 12:49:07.470 E ns/openshift-authentication-operator pod/authentication-operator-686464b77f-kcwxf node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 255 (Error):  1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909370       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909380       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909478       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909537       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909548       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:45:24.909559       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0403 12:45:25.120178       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14932 (16732)\nW0403 12:45:25.149389       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15106 (16732)\nW0403 12:45:25.149864       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15106 (16732)\nW0403 12:45:25.149999       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15106 (16732)\nW0403 12:45:25.190252       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 15109 (15783)\nW0403 12:45:25.293123       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 15110 (16985)\nI0403 12:48:24.689999       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 12:48:24.690095       1 leaderelection.go:65] leaderelection lost\n
Apr 03 12:49:07.517 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-67ddpt4bg node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 2 (Error): factory.go:132: Watch close - *v1.Service total 0 items received\nI0403 12:47:22.514282       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:47:22.514451       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ServiceAccount total 0 items received\nI0403 12:47:22.514868       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:47:22.515101       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 25 items received\nI0403 12:47:22.515681       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 12:47:22.527410       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Deployment total 0 items received\nW0403 12:47:22.628947       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 15161 (17567)\nW0403 12:47:22.640431       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16994 (17692)\nI0403 12:47:23.498014       1 wrap.go:47] GET /metrics: (5.555918ms) 200 [Prometheus/2.7.2 10.129.2.7:37218]\nI0403 12:47:23.499902       1 wrap.go:47] GET /metrics: (6.349775ms) 200 [Prometheus/2.7.2 10.131.0.9:58860]\nI0403 12:47:23.629272       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0403 12:47:23.640769       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 12:47:53.498102       1 wrap.go:47] GET /metrics: (4.603923ms) 200 [Prometheus/2.7.2 10.131.0.9:58860]\nI0403 12:47:53.499544       1 wrap.go:47] GET /metrics: (7.064622ms) 200 [Prometheus/2.7.2 10.129.2.7:37218]\n
Apr 03 12:49:11.856 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-54fbf64767-47wgf node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 2 (Error): ewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:03.695339       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:09.571970       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.DaemonSet total 0 items received\nI0403 12:47:13.704505       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:23.712534       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:33.726521       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:43.734447       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:53.743509       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:47:57.576226       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Secret total 0 items received\nI0403 12:48:03.753725       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:48:14.031333       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 12:48:15.501832       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 12:48:24.072915       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 12:49:20.655 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-697978bd7-lg78s node/ip-10-0-130-16.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): t\nI0403 12:38:19.991216       1 status.go:26] syncOperatorStatus()\nI0403 12:38:20.000683       1 tuned_controller.go:187] syncServiceAccount()\nI0403 12:38:20.010595       1 tuned_controller.go:215] syncClusterRole()\nI0403 12:38:20.151786       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 12:38:20.274699       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:38:20.295837       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:38:20.313089       1 tuned_controller.go:315] syncDaemonSet()\nI0403 12:38:20.351810       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 12:38:20.351834       1 status.go:26] syncOperatorStatus()\nI0403 12:38:20.372653       1 tuned_controller.go:187] syncServiceAccount()\nI0403 12:38:20.372848       1 tuned_controller.go:215] syncClusterRole()\nI0403 12:38:20.459885       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 12:38:20.512712       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:38:20.516773       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:38:20.521298       1 tuned_controller.go:315] syncDaemonSet()\nI0403 12:42:12.339259       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 12:42:12.339299       1 status.go:26] syncOperatorStatus()\nI0403 12:42:12.344510       1 tuned_controller.go:187] syncServiceAccount()\nI0403 12:42:12.344646       1 tuned_controller.go:215] syncClusterRole()\nI0403 12:42:12.370803       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 12:42:12.398358       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:42:12.401832       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 12:42:12.404993       1 tuned_controller.go:315] syncDaemonSet()\nW0403 12:44:23.778534       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 14040 (16388)\nF0403 12:48:24.498529       1 main.go:85] <nil>\n
Apr 03 12:49:23.992 E ns/openshift-marketplace pod/certified-operators-94bc6c4bd-4rtgg node/ip-10-0-139-218.us-west-1.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:49:24.124 E ns/openshift-marketplace pod/community-operators-8487588589-2m52r node/ip-10-0-139-218.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 12:49:26.800 E ns/openshift-cluster-node-tuning-operator pod/tuned-xgtxs node/ip-10-0-139-218.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:47.671564    2254 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-vrzdn) labels changed node wide: true\nI0403 12:48:50.942524    2254 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:50.946353    2254 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:51.116598    2254 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:51.674382    2254 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0403 12:48:55.942501    2254 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:55.943919    2254 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:56.067932    2254 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:49:01.044431    2254 openshift-tuned.go:435] Pod (openshift-ingress/router-default-7cb8f46446-mtdcz) labels changed node wide: true\nI0403 12:49:05.942523    2254 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:05.943947    2254 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:06.065595    2254 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:49:22.525931    2254 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-94bc6c4bd-4rtgg) labels changed node wide: true\nI0403 12:49:25.942524    2254 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:25.943958    2254 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:26.061543    2254 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 12:49:32.771 E ns/openshift-service-ca pod/configmap-cabundle-injector-5d99f498d8-6vqvw node/ip-10-0-139-84.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 12:49:36.568 E ns/openshift-monitoring pod/node-exporter-gh4z7 node/ip-10-0-154-233.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 12:49:37.268 E ns/openshift-controller-manager pod/controller-manager-2gdnr node/ip-10-0-152-202.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 12:49:42.234 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-104.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 12:49:43.827 E ns/openshift-cluster-node-tuning-operator pod/tuned-zt4qb node/ip-10-0-154-233.us-west-1.compute.internal container=tuned container exited with code 143 (Error): -tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:48.377887    2272 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-59789cf6b-4k475) labels changed node wide: true\nI0403 12:48:50.926128    2272 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:50.927746    2272 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:51.039159    2272 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:54.554008    2272 openshift-tuned.go:435] Pod (openshift-ingress/router-default-64cdf664c9-lq79g) labels changed node wide: true\nI0403 12:48:55.926098    2272 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:55.927652    2272 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:56.040272    2272 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:56.917873    2272 openshift-tuned.go:691] Lowering resyncPeriod to 53\nI0403 12:49:02.424484    2272 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-7dfb57b97f-qz87s) labels changed node wide: true\nI0403 12:49:05.926103    2272 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:05.927662    2272 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:06.095153    2272 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0403 12:49:06.303959    2272 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""\nE0403 12:49:06.312480    2272 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 12:49:06.312498    2272 openshift-tuned.go:722] Increasing resyncPeriod to 106\n
Apr 03 12:49:47.836 E ns/openshift-marketplace pod/redhat-operators-68bbc58694-dxcb8 node/ip-10-0-142-104.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 12:49:48.299 E ns/openshift-monitoring pod/node-exporter-ljj27 node/ip-10-0-152-202.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 12:49:52.626 E ns/openshift-cluster-node-tuning-operator pod/tuned-74mjx node/ip-10-0-142-104.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  labels: 118 [s]\nI0403 12:41:57.288782    2265 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/tuned-74mjx) labels changed node wide: true\nI0403 12:42:02.287011    2265 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:42:02.288319    2265 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 12:42:02.289364    2265 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:42:02.400285    2265 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:43:55.278919    2265 openshift-tuned.go:691] Lowering resyncPeriod to 59\nI0403 12:48:26.897031    2265 openshift-tuned.go:435] Pod (openshift-ingress/router-default-7cb8f46446-8cg85) labels changed node wide: true\nI0403 12:48:27.287011    2265 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:27.288309    2265 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:27.397081    2265 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 12:48:29.519475    2265 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-68546b74-sp6d9) labels changed node wide: true\nI0403 12:48:32.287029    2265 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:32.288300    2265 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:32.396866    2265 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0403 12:49:06.303719    2265 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=15, ErrCode=NO_ERROR, debug=""\nE0403 12:49:06.305130    2265 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 12:49:06.305149    2265 openshift-tuned.go:722] Increasing resyncPeriod to 118\n
Apr 03 12:49:58.900 E ns/openshift-cluster-node-tuning-operator pod/tuned-ddlph node/ip-10-0-130-16.us-west-1.compute.internal container=tuned container exited with code 143 (Error): mmended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:49:17.252692   14564 openshift-tuned.go:435] Pod (openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-67ddpt4bg) labels changed node wide: true\nI0403 12:49:21.933111   14564 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:21.934557   14564 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:22.162846   14564 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:49:33.629304   14564 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-57bd9fc5b4-c9d6x) labels changed node wide: true\nI0403 12:49:36.933198   14564 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:36.934709   14564 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:37.085605   14564 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:49:37.085672   14564 openshift-tuned.go:435] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-130-16.us-west-1.compute.internal) labels changed node wide: false\nI0403 12:49:37.833989   14564 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-130-16.us-west-1.compute.internal) labels changed node wide: false\nI0403 12:49:40.670343   14564 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-56485748dc-t57mh) labels changed node wide: true\nI0403 12:49:41.939981   14564 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:41.945448   14564 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:42.127965   14564 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Apr 03 12:50:03.894 E ns/openshift-monitoring pod/node-exporter-w7qjz node/ip-10-0-139-218.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 12:50:04.336 E ns/openshift-cluster-node-tuning-operator pod/tuned-rvh5k node/ip-10-0-152-202.us-west-1.compute.internal container=tuned container exited with code 143 (Error): -tuned.go:435] Pod (openshift-kube-controller-manager/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal) labels changed node wide: true\nI0403 12:46:23.283873   16051 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:46:23.285978   16051 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:46:23.403222   16051 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:47:22.486558   16051 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 12:47:22.490479   16051 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 12:47:22.490499   16051 openshift-tuned.go:722] Increasing resyncPeriod to 134\nI0403 12:49:36.490732   16051 openshift-tuned.go:187] Extracting tuned profiles\nI0403 12:49:36.492803   16051 openshift-tuned.go:623] Resync period to pull node/pod labels: 134 [s]\nI0403 12:49:36.504794   16051 openshift-tuned.go:435] Pod (openshift-sdn/ovs-w6flh) labels changed node wide: true\nI0403 12:49:41.502085   16051 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:41.503716   16051 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 12:49:41.504952   16051 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:41.656554   16051 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:49:55.561617   16051 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-ljj27) labels changed node wide: true\nI0403 12:49:56.502090   16051 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:49:56.503984   16051 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:49:56.636211   16051 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Apr 03 12:50:11.648 E ns/openshift-marketplace pod/certified-operators-8584899975-jk6x2 node/ip-10-0-154-233.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 12:50:11.912 E ns/openshift-monitoring pod/node-exporter-jlklx node/ip-10-0-139-84.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 12:50:16.965 E ns/openshift-controller-manager pod/controller-manager-mr8gw node/ip-10-0-130-16.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 12:50:19.937 E ns/openshift-cluster-node-tuning-operator pod/tuned-wdp7x node/ip-10-0-139-84.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  13742 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-56c857d77c-2ssxz) labels changed node wide: true\nI0403 12:48:37.260091   13742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:37.261933   13742 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:37.380755   13742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:48:42.863958   13742 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-687dd4b8cf-hcd6q) labels changed node wide: true\nI0403 12:48:47.260021   13742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:47.261520   13742 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:47.379427   13742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:48:53.468684   13742 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-687dd4b8cf-hcd6q) labels changed node wide: true\nI0403 12:48:57.260017   13742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 12:48:57.261735   13742 openshift-tuned.go:326] Getting recommended profile...\nI0403 12:48:57.401244   13742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 12:49:03.466487   13742 openshift-tuned.go:435] Pod (openshift-console-operator/console-operator-669476b85-pdjlw) labels changed node wide: true\nE0403 12:49:06.299455   13742 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""\nE0403 12:49:06.312880   13742 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 12:49:06.312968   13742 openshift-tuned.go:722] Increasing resyncPeriod to 108\n
Apr 03 12:50:42.016 E ns/openshift-console pod/console-67476856cf-r4hh5 node/ip-10-0-139-84.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 12:36:52 cmd/main: cookies are secure!\n2020/04/3 12:36:52 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/04/3 12:37:02 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 12:37:12 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 12:37:12 cmd/main: using TLS\n
Apr 03 12:51:45.200 E ns/openshift-controller-manager pod/controller-manager-468pq node/ip-10-0-139-84.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 12:59:14.082 E ns/openshift-sdn pod/sdn-controller-5lkz2 node/ip-10-0-139-84.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 12:27:51.192817       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 12:59:15.654 E ns/openshift-sdn pod/ovs-tq48h node/ip-10-0-154-233.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): ef on port 19\n2020-04-03T12:49:11.423Z|00123|connmgr|INFO|br0<->unix#311: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:49:11.459Z|00124|connmgr|INFO|br0<->unix#314: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:49:30.328Z|00125|bridge|INFO|bridge br0: added interface vethc544a083 on port 20\n2020-04-03T12:49:30.340Z|00126|rconn|INFO|br0<->unix#315: connection timed out\n2020-04-03T12:49:30.375Z|00127|connmgr|INFO|br0<->unix#318: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:49:30.400Z|00128|bridge|INFO|bridge br0: deleted interface vethc544a083 on port 20\n2020-04-03T12:49:37.391Z|00129|bridge|INFO|bridge br0: added interface veth579dab5f on port 21\n2020-04-03T12:49:37.430Z|00130|connmgr|INFO|br0<->unix#324: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:49:37.466Z|00131|connmgr|INFO|br0<->unix#327: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:11.052Z|00132|connmgr|INFO|br0<->unix#333: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:11.107Z|00133|connmgr|INFO|br0<->unix#336: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:50:11.170Z|00134|bridge|INFO|bridge br0: deleted interface veth55403d57 on port 7\n2020-04-03T12:50:17.168Z|00135|connmgr|INFO|br0<->unix#339: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:17.205Z|00136|connmgr|INFO|br0<->unix#342: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:50:17.229Z|00137|bridge|INFO|bridge br0: deleted interface veth69ef2113 on port 6\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T12:50:17.223Z|00020|jsonrpc|WARN|unix#231: receive error: Connection reset by peer\n2020-04-03T12:50:17.223Z|00021|reconnect|WARN|unix#231: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T12:50:25.020Z|00138|bridge|INFO|bridge br0: added interface veth450a4d0d on port 22\n2020-04-03T12:50:25.049Z|00139|connmgr|INFO|br0<->unix#345: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:50:25.085Z|00140|connmgr|INFO|br0<->unix#348: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 12:59:20.670 E ns/openshift-sdn pod/sdn-kmpq8 node/ip-10-0-154-233.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.328483    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.428434    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.528419    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.628432    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.728423    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.828457    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:19.928430    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:20.028429    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:20.128525    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:20.228430    2221 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 12:59:20.333494    2221 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 12:59:20.333555    2221 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 12:59:51.680 E ns/openshift-sdn pod/ovs-ssm2v node/ip-10-0-139-84.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): T12:50:06.498Z|00339|bridge|INFO|bridge br0: added interface vethdc4fb668 on port 57\n2020-04-03T12:50:06.540Z|00340|connmgr|INFO|br0<->unix#796: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:50:06.597Z|00341|connmgr|INFO|br0<->unix#799: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:12.415Z|00342|bridge|INFO|bridge br0: added interface vethe9000617 on port 58\n2020-04-03T12:50:12.448Z|00343|connmgr|INFO|br0<->unix#802: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:50:12.486Z|00344|connmgr|INFO|br0<->unix#805: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:41.550Z|00345|connmgr|INFO|br0<->unix#812: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:50:41.599Z|00346|connmgr|INFO|br0<->unix#815: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:50:41.625Z|00347|bridge|INFO|bridge br0: deleted interface veth2cec9f60 on port 25\n2020-04-03T12:51:44.825Z|00348|connmgr|INFO|br0<->unix#824: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:51:44.855Z|00349|connmgr|INFO|br0<->unix#827: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:51:44.877Z|00350|bridge|INFO|bridge br0: deleted interface veth8ee0eb68 on port 55\n2020-04-03T12:52:00.344Z|00351|bridge|INFO|bridge br0: added interface vethed49041a on port 59\n2020-04-03T12:52:00.375Z|00352|connmgr|INFO|br0<->unix#833: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:52:00.415Z|00353|connmgr|INFO|br0<->unix#836: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:59:09.293Z|00354|connmgr|INFO|br0<->unix#885: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:59:09.328Z|00355|connmgr|INFO|br0<->unix#888: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:59:09.358Z|00356|bridge|INFO|bridge br0: deleted interface veth6e1506ec on port 3\n2020-04-03T12:59:19.187Z|00357|bridge|INFO|bridge br0: added interface veth9a561c62 on port 60\n2020-04-03T12:59:19.219Z|00358|connmgr|INFO|br0<->unix#891: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:59:19.256Z|00359|connmgr|INFO|br0<->unix#894: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 12:59:56.654 E ns/openshift-sdn pod/sdn-controller-8x92l node/ip-10-0-130-16.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 12:27:51.029774       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 13:00:00.656 E ns/openshift-multus pod/multus-xkpnb node/ip-10-0-154-233.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 13:00:47.139 E ns/openshift-sdn pod/sdn-4cptv node/ip-10-0-152-202.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:45.620469   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:45.720417   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:45.820457   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:45.920443   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.020510   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.120420   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.220477   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.320537   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.420519   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.520499   76096 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:00:46.631305   76096 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 13:00:46.631372   76096 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 13:00:55.595 E ns/openshift-multus pod/multus-z4mdw node/ip-10-0-142-104.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 13:01:17.644 E ns/openshift-sdn pod/ovs-d2vmr node/ip-10-0-142-104.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): \n2020-04-03T12:49:57.668Z|00142|bridge|INFO|bridge br0: added interface veth639a0dab on port 21\n2020-04-03T12:49:57.696Z|00143|connmgr|INFO|br0<->unix#347: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:49:57.733Z|00144|connmgr|INFO|br0<->unix#350: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:59:46.178Z|00145|connmgr|INFO|br0<->unix#421: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T12:59:46.257Z|00146|connmgr|INFO|br0<->unix#427: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T12:59:46.279Z|00147|connmgr|INFO|br0<->unix#430: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T12:59:46.589Z|00148|connmgr|INFO|br0<->unix#433: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:46.617Z|00149|connmgr|INFO|br0<->unix#436: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:46.647Z|00150|connmgr|INFO|br0<->unix#439: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:46.674Z|00151|connmgr|INFO|br0<->unix#442: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:46.697Z|00152|connmgr|INFO|br0<->unix#445: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:46.719Z|00153|connmgr|INFO|br0<->unix#448: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:46.742Z|00154|connmgr|INFO|br0<->unix#451: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:46.766Z|00155|connmgr|INFO|br0<->unix#454: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:46.789Z|00156|connmgr|INFO|br0<->unix#457: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:46.817Z|00157|connmgr|INFO|br0<->unix#460: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:59.983Z|00158|connmgr|INFO|br0<->unix#463: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:00:00.008Z|00159|bridge|INFO|bridge br0: deleted interface veth5a34f94e on port 3\n2020-04-03T13:00:07.909Z|00160|bridge|INFO|bridge br0: added interface vethb53cb2cf on port 22\n2020-04-03T13:00:07.940Z|00161|connmgr|INFO|br0<->unix#466: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T13:00:07.975Z|00162|connmgr|INFO|br0<->unix#469: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 13:01:24.168 E ns/openshift-service-ca pod/apiservice-cabundle-injector-696469c575-t6gvv node/ip-10-0-152-202.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 03 13:01:27.212 E ns/openshift-service-ca pod/service-serving-cert-signer-6bd7844499-4xj4n node/ip-10-0-152-202.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 03 13:01:28.661 E ns/openshift-sdn pod/sdn-dfc6h node/ip-10-0-142-104.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:26.703968   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:26.804008   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:26.903963   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.003985   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.104000   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.204011   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.303974   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.403953   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.503994   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.603958   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:01:27.604047   49185 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0403 13:01:27.604066   49185 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 03 13:01:45.223 E ns/openshift-multus pod/multus-drg24 node/ip-10-0-152-202.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 13:02:03.314 E ns/openshift-sdn pod/ovs-q9bx8 node/ip-10-0-139-218.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): -03T12:51:03.454Z|00171|connmgr|INFO|br0<->unix#423: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:58:49.779Z|00172|connmgr|INFO|br0<->unix#478: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:58:49.814Z|00173|connmgr|INFO|br0<->unix#481: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T12:58:49.845Z|00174|bridge|INFO|bridge br0: deleted interface vethc082ee49 on port 3\n2020-04-03T12:59:03.382Z|00175|bridge|INFO|bridge br0: added interface veth4b15f45c on port 29\n2020-04-03T12:59:03.419Z|00176|connmgr|INFO|br0<->unix#484: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T12:59:03.459Z|00177|connmgr|INFO|br0<->unix#487: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T12:59:22.450Z|00178|connmgr|INFO|br0<->unix#499: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T12:59:22.537Z|00179|connmgr|INFO|br0<->unix#505: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T12:59:22.561Z|00180|connmgr|INFO|br0<->unix#508: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T12:59:22.586Z|00181|connmgr|INFO|br0<->unix#511: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T12:59:22.866Z|00182|connmgr|INFO|br0<->unix#514: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:22.894Z|00183|connmgr|INFO|br0<->unix#517: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:22.917Z|00184|connmgr|INFO|br0<->unix#520: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:22.941Z|00185|connmgr|INFO|br0<->unix#523: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:22.976Z|00186|connmgr|INFO|br0<->unix#526: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:23.009Z|00187|connmgr|INFO|br0<->unix#529: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:23.035Z|00188|connmgr|INFO|br0<->unix#532: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:23.059Z|00189|connmgr|INFO|br0<->unix#535: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T12:59:23.089Z|00190|connmgr|INFO|br0<->unix#538: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T12:59:23.113Z|00191|connmgr|INFO|br0<->unix#541: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 13:02:08.325 E ns/openshift-sdn pod/sdn-5b497 node/ip-10-0-139-218.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:06.878203   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:06.978182   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.078186   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.178180   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.278224   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.378226   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.478227   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.578207   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.678200   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.778162   62234 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:07.883107   62234 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 13:02:07.883178   62234 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 13:02:29.368 E ns/openshift-multus pod/multus-w6xhm node/ip-10-0-139-218.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 13:02:39.097 E ns/openshift-sdn pod/ovs-h4jcf node/ip-10-0-130-16.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 0381|connmgr|INFO|br0<->unix#941: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:00:25.653Z|00382|connmgr|INFO|br0<->unix#960: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T13:00:25.761Z|00383|connmgr|INFO|br0<->unix#966: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.787Z|00384|connmgr|INFO|br0<->unix#969: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.814Z|00385|connmgr|INFO|br0<->unix#972: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.837Z|00386|connmgr|INFO|br0<->unix#975: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.861Z|00387|connmgr|INFO|br0<->unix#978: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.886Z|00388|connmgr|INFO|br0<->unix#981: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.917Z|00389|connmgr|INFO|br0<->unix#984: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.944Z|00390|connmgr|INFO|br0<->unix#987: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:25.973Z|00391|connmgr|INFO|br0<->unix#990: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T13:00:26.147Z|00392|connmgr|INFO|br0<->unix#993: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T13:00:26.179Z|00393|connmgr|INFO|br0<->unix#996: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T13:00:26.203Z|00394|connmgr|INFO|br0<->unix#999: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T13:00:26.230Z|00395|connmgr|INFO|br0<->unix#1002: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T13:00:26.254Z|00396|connmgr|INFO|br0<->unix#1005: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T13:00:26.284Z|00397|connmgr|INFO|br0<->unix#1008: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T13:00:26.309Z|00398|connmgr|INFO|br0<->unix#1011: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T13:00:26.338Z|00399|connmgr|INFO|br0<->unix#1014: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T13:00:26.362Z|00400|connmgr|INFO|br0<->unix#1017: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T13:00:26.386Z|00401|connmgr|INFO|br0<->unix#1020: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 13:02:43.130 E ns/openshift-sdn pod/sdn-vnjbx node/ip-10-0-130-16.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.370651   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.470625   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.570618   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.670628   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.770655   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.870610   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:41.970616   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:42.070665   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:42.170742   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:42.270660   70278 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 13:02:42.376513   70278 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 13:02:42.376586   70278 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 13:02:54.152 E ns/openshift-machine-api pod/cluster-autoscaler-operator-c9d576747-tlkxv node/ip-10-0-130-16.us-west-1.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 03 13:03:20.231 E ns/openshift-multus pod/multus-xd67g node/ip-10-0-130-16.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 13:03:57.568 E ns/openshift-machine-config-operator pod/machine-config-operator-d698d4f79-wdw4c node/ip-10-0-152-202.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 13:06:00.811 E ns/openshift-machine-config-operator pod/machine-config-daemon-6f6sc node/ip-10-0-139-84.us-west-1.compute.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:07:05.883 E ns/openshift-machine-config-operator pod/machine-config-controller-6b655cbd6d-srvnp node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 13:09:02.295 E ns/openshift-machine-config-operator pod/machine-config-server-ks9cs node/ip-10-0-139-84.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 13:09:12.892 E ns/openshift-marketplace pod/community-operators-84dc7fcc59-x9z2t node/ip-10-0-154-233.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 13:09:13.895 E ns/openshift-monitoring pod/prometheus-operator-98bf494b8-v924j node/ip-10-0-154-233.us-west-1.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:09:15.190 E ns/openshift-console pod/console-755cc78d69-qmrg8 node/ip-10-0-152-202.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 12:50:22 cmd/main: cookies are secure!\n2020/04/3 12:50:22 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 12:50:22 cmd/main: using TLS\n
Apr 03 13:09:15.998 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-768fgftqm node/ip-10-0-152-202.us-west-1.compute.internal container=operator container exited with code 2 (Error): 0 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:05:48.928579       1 wrap.go:47] GET /metrics: (6.314466ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:05:53.503988       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 56 items received\nI0403 13:06:18.928449       1 wrap.go:47] GET /metrics: (6.71625ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:06:18.928452       1 wrap.go:47] GET /metrics: (6.177396ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:06:48.928676       1 wrap.go:47] GET /metrics: (6.820958ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:06:48.930164       1 wrap.go:47] GET /metrics: (7.938094ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:07:18.929026       1 wrap.go:47] GET /metrics: (7.270707ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:07:18.929029       1 wrap.go:47] GET /metrics: (6.833645ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:07:48.929878       1 wrap.go:47] GET /metrics: (8.092181ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:07:48.930801       1 wrap.go:47] GET /metrics: (8.725014ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:08:14.496767       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nI0403 13:08:18.928488       1 wrap.go:47] GET /metrics: (6.656872ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:08:18.930262       1 wrap.go:47] GET /metrics: (8.156141ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\nI0403 13:08:37.495188       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 13:08:38.494657       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ServiceAccount total 0 items received\nI0403 13:08:48.929113       1 wrap.go:47] GET /metrics: (7.349129ms) 200 [Prometheus/2.7.2 10.129.2.16:40812]\nI0403 13:08:48.929113       1 wrap.go:47] GET /metrics: (7.070776ms) 200 [Prometheus/2.7.2 10.131.0.16:43590]\n
Apr 03 13:09:17.189 E ns/openshift-console-operator pod/console-operator-6b544b75f4-sq9nv node/ip-10-0-152-202.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): hift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: The resourceVersion for the provided watch is too old.\ntime="2020-04-03T13:08:59Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 13:08:59.869904048 +0000 UTC m=+1202.282119040)"\ntime="2020-04-03T13:08:59Z" level=info msg="console is in a managed state."\ntime="2020-04-03T13:08:59Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T13:08:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T13:08:59Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T13:08:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T13:08:59Z" level=info msg=-----------------------\ntime="2020-04-03T13:08:59Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T13:08:59Z" level=info msg=-----------------------\ntime="2020-04-03T13:08:59Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T13:08:59Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T13:08:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T13:08:59Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T13:08:59Z" level=info msg="finished syncing operator \"cluster\" (48.042µs) \n\n"\nW0403 13:09:11.510948       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0403 13:09:12.257971       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 13:09:12.258041       1 leaderelection.go:65] leaderelection lost\n
Apr 03 13:09:19.588 E ns/openshift-service-ca pod/apiservice-cabundle-injector-696469c575-t6gvv node/ip-10-0-152-202.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 13:09:24.790 E ns/openshift-machine-config-operator pod/machine-config-operator-7f9b99675c-kfvgv node/ip-10-0-152-202.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 13:09:27.790 E ns/openshift-service-ca-operator pod/service-ca-operator-f8f9f7d69-bzc6z node/ip-10-0-152-202.us-west-1.compute.internal container=operator container exited with code 2 (Error): 
Apr 03 13:09:28.988 E ns/openshift-service-ca pod/service-serving-cert-signer-6bd7844499-4xj4n node/ip-10-0-152-202.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 03 13:09:36.188 E ns/openshift-machine-config-operator pod/machine-config-server-gsdhq node/ip-10-0-152-202.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 13:09:43.988 E ns/openshift-console pod/downloads-76bbd74bcf-zqgmj node/ip-10-0-152-202.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 13:10:02.282 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 13:10:25.407 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus-proxy container exited with code 1 (Error): 
Apr 03 13:10:29.701 E ns/openshift-machine-config-operator pod/machine-config-server-vnj2l node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 13:11:36.852 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 12:47:24.348806       1 observer_polling.go:106] Starting file observer\nI0403 12:47:24.350180       1 certsync_controller.go:269] Starting CertSyncer\nW0403 12:56:33.597644       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22432 (25098)\nW0403 13:03:36.603262       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25229 (27892)\n
Apr 03 13:11:36.852 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): a/ca-bundle.crt\nI0403 13:10:55.340380       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:10:55.340388       1 clientca.go:69] Shutting down DynamicCA: /etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nI0403 13:10:55.340416       1 clientca.go:69] Shutting down DynamicCA: /etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 13:10:55.341815       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.341897       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: Get https://[::1]:6443/apis/packages.operators.coreos.com/v1?timeout=32s: unexpected EOF\nI0403 13:10:55.342009       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.342436       1 memcache.go:141] couldn't get resource list for config.openshift.io/v1: Get https://[::1]:6443/apis/config.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0403 13:10:55.342963       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.343135       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.343630       1 memcache.go:141] couldn't get resource list for operator.openshift.io/v1: Get https://[::1]:6443/apis/operator.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nE0403 13:10:55.347484       1 memcache.go:141] couldn't get resource list for autoscaling.openshift.io/v1: Get https://[::1]:6443/apis/autoscaling.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0403 13:10:55.347835       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.350791       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.350867       1 secure_serving.go:180] Stopped listening on [::]:6443\n
Apr 03 13:11:42.093 E ns/openshift-monitoring pod/node-exporter-dw5xg node/ip-10-0-154-233.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:11:42.093 E ns/openshift-monitoring pod/node-exporter-dw5xg node/ip-10-0-154-233.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:11:42.118 E ns/openshift-cluster-node-tuning-operator pod/tuned-d9npg node/ip-10-0-154-233.us-west-1.compute.internal container=tuned container exited with code 255 (Error):  openshift-tuned.go:435] Pod (openshift-multus/multus-xkpnb) labels changed node wide: true\nI0403 13:00:14.261078   37493 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:00:14.263526   37493 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:00:14.373644   37493 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:05:54.379114   37493 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-fpz6t) labels changed node wide: true\nI0403 13:05:59.260978   37493 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:05:59.262632   37493 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:05:59.429514   37493 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:09:11.716771   37493 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0403 13:09:14.261130   37493 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:09:14.262663   37493 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:09:14.373766   37493 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:09:22.009084   37493 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 13:09:24.261084   37493 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:09:24.263520   37493 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:09:24.372719   37493 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:10:02.006845   37493 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-84dc7fcc59-x9z2t) labels changed node wide: true\n
Apr 03 13:11:42.323 E ns/openshift-image-registry pod/node-ca-g6m5p node/ip-10-0-154-233.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:11:43.532 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): 12:12:27 +0000 UTC to 2021-04-03 12:12:27 +0000 UTC (now=2020-04-03 12:47:24.579571701 +0000 UTC))\nI0403 12:47:24.579590       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 12:12:26 +0000 UTC to 2021-04-03 12:12:26 +0000 UTC (now=2020-04-03 12:47:24.579583944 +0000 UTC))\nI0403 12:47:24.589460       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 12:47:24.592776       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585916932" (2020-04-03 12:29:10 +0000 UTC to 2022-04-03 12:29:11 +0000 UTC (now=2020-04-03 12:47:24.592747952 +0000 UTC))\nI0403 12:47:24.592877       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:47:24.592856692 +0000 UTC))\nI0403 12:47:24.592934       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 12:47:24.593755       1 serving.go:77] Starting DynamicLoader\nI0403 12:47:24.596878       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 12:47:28.646242       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 13:09:45.264217       1 controllermanager.go:282] leaderelection lost\nI0403 13:09:45.264302       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:11:43.532 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:47:24.403100       1 observer_polling.go:106] Starting file observer\nI0403 12:47:24.403314       1 certsync_controller.go:269] Starting CertSyncer\nE0403 12:47:28.555782       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 12:47:28.594667       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 12:56:35.560985       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (25107)\nW0403 13:05:40.566657       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25235 (28521)\n
Apr 03 13:11:45.531 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): 47919       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:48:15.747901262 +0000 UTC))\nI0403 12:48:15.747966       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 12:48:15.748214       1 serving.go:77] Starting DynamicLoader\nI0403 12:48:16.656620       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 12:48:16.756989       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 12:48:16.757119       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 13:09:12.560150       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17894 (29976)\nW0403 13:09:12.629874       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (29976)\nW0403 13:09:12.640626       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17894 (29976)\nW0403 13:09:12.640694       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 26441 (29976)\nW0403 13:09:12.649339       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 26442 (29976)\nW0403 13:09:12.669502       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17902 (29976)\nE0403 13:09:45.358557       1 server.go:259] lost master\nI0403 13:09:45.359779       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:09:45.361517       1 secure_serving.go:180] Stopped listening on [::]:10251\n
Apr 03 13:11:46.139 E ns/openshift-sdn pod/ovs-ccxk8 node/ip-10-0-154-233.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error):  0 s (5 adds)\n2020-04-03T12:59:48.420Z|00118|connmgr|INFO|br0<->unix#97: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-03T12:59:48.423Z|00119|connmgr|INFO|br0<->unix#99: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:09:12.194Z|00120|connmgr|INFO|br0<->unix#163: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.229Z|00121|bridge|INFO|bridge br0: deleted interface vethd977bbf4 on port 11\n2020-04-03T13:09:12.359Z|00122|connmgr|INFO|br0<->unix#166: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.413Z|00123|bridge|INFO|bridge br0: deleted interface veth579dab5f on port 7\n2020-04-03T13:09:12.495Z|00124|connmgr|INFO|br0<->unix#169: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.553Z|00125|bridge|INFO|bridge br0: deleted interface vethd6329e9a on port 8\n2020-04-03T13:09:12.600Z|00126|connmgr|INFO|br0<->unix#172: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.655Z|00127|bridge|INFO|bridge br0: deleted interface vethf2c54f99 on port 10\n2020-04-03T13:09:12.709Z|00128|connmgr|INFO|br0<->unix#175: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.745Z|00129|bridge|INFO|bridge br0: deleted interface veth68c3ffdc on port 6\n2020-04-03T13:09:12.801Z|00130|connmgr|INFO|br0<->unix#178: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.834Z|00131|bridge|INFO|bridge br0: deleted interface veth8367d6ef on port 5\n2020-04-03T13:09:12.883Z|00132|connmgr|INFO|br0<->unix#181: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:09:12.916Z|00133|bridge|INFO|bridge br0: deleted interface vethdcc41a94 on port 9\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:09:12.826Z|00023|jsonrpc|WARN|Dropped 6 log messages in last 579 seconds (most recently, 579 seconds ago) due to excessive rate\n2020-04-03T13:09:12.826Z|00024|jsonrpc|WARN|unix#137: receive error: Connection reset by peer\n2020-04-03T13:09:12.826Z|00025|reconnect|WARN|unix#137: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 13:11:46.332 E ns/openshift-apiserver pod/apiserver-xw7c2 node/ip-10-0-152-202.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): rWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.656020       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.656051       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.673558       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.679678       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 13:09:29.679710       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.679957       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:29.703709       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:09:45.262019       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:09:45.262223       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 13:09:45.262243       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 13:09:45.262257       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 13:09:45.262270       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 13:09:45.262472       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0403 13:09:45.262947       1 watch.go:233] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc00000f8e8), encoder:(*versioning.codec)(0xc004441cb0), buf:(*bytes.Buffer)(0xc0022febd0)})\nI0403 13:09:45.263377       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 13:11:47.432 E ns/openshift-sdn pod/sdn-lqmr4 node/ip-10-0-154-233.us-west-1.compute.internal container=sdn container exited with code 255 (Error): rRR: Setting endpoints for default/kubernetes:https to [10.0.130.16:6443 10.0.139.84:6443]\nI0403 13:09:45.361714   52748 roundrobin.go:240] Delete endpoint 10.0.152.202:6443 for service "default/kubernetes:https"\nI0403 13:09:45.523583   52748 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:09:45.523605   52748 proxier.go:346] userspace syncProxyRules took 52.665113ms\nI0403 13:09:53.351105   52748 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.129.0.60:8443 10.130.0.60:8443]\nI0403 13:09:53.351137   52748 roundrobin.go:240] Delete endpoint 10.130.0.60:8443 for service "openshift-console/console:https"\nI0403 13:09:53.515247   52748 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:09:53.515267   52748 proxier.go:346] userspace syncProxyRules took 54.288971ms\nE0403 13:10:02.723805   52748 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 13:10:02.723918   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 13:10:02.824600   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:10:02.930336   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:10:03.024215   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:10:03.124215   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:10:03.224218   52748 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 13:11:47.534 E ns/openshift-monitoring pod/node-exporter-srgc6 node/ip-10-0-152-202.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:11:47.534 E ns/openshift-monitoring pod/node-exporter-srgc6 node/ip-10-0-152-202.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:11:47.820 E ns/openshift-dns pod/dns-default-k2bt2 node/ip-10-0-154-233.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T12:59:55.009Z [INFO] CoreDNS-1.3.1\n2020-04-03T12:59:55.009Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T12:59:55.009Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:09:12.103699       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21837 (29894)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:11:47.820 E ns/openshift-dns pod/dns-default-k2bt2 node/ip-10-0-154-233.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (95) - No such process\n
Apr 03 13:11:48.218 E ns/openshift-multus pod/multus-6wc2p node/ip-10-0-154-233.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:11:48.617 E ns/openshift-machine-config-operator pod/machine-config-daemon-5zjgf node/ip-10-0-154-233.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 13:11:48.931 E ns/openshift-image-registry pod/node-ca-5glvw node/ip-10-0-152-202.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:11:51.731 E ns/openshift-controller-manager pod/controller-manager-2l2kk node/ip-10-0-152-202.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 13:11:52.132 E ns/openshift-dns pod/dns-default-whg86 node/ip-10-0-152-202.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 13:11:52.132 E ns/openshift-dns pod/dns-default-whg86 node/ip-10-0-152-202.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T13:00:29.366Z [INFO] CoreDNS-1.3.1\n2020-04-03T13:00:29.367Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T13:00:29.367Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:09:12.622793       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 22199 (29976)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:11:59.331 E ns/openshift-multus pod/multus-kpx8g node/ip-10-0-152-202.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:12:03.965 E ns/openshift-cluster-machine-approver pod/machine-approver-b5cd9f694-7pfp7 node/ip-10-0-139-84.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): 2:48:09.166236       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0403 12:48:09.166294       1 main.go:183] Starting Machine Approver\nI0403 12:48:09.267038       1 main.go:107] CSR csr-xxc95 added\nI0403 12:48:09.267148       1 main.go:110] CSR csr-xxc95 is already approved\nI0403 12:48:09.267211       1 main.go:107] CSR csr-6bfzl added\nI0403 12:48:09.267261       1 main.go:110] CSR csr-6bfzl is already approved\nI0403 12:48:09.267311       1 main.go:107] CSR csr-6x8ls added\nI0403 12:48:09.267357       1 main.go:110] CSR csr-6x8ls is already approved\nI0403 12:48:09.267406       1 main.go:107] CSR csr-dggz2 added\nI0403 12:48:09.267451       1 main.go:110] CSR csr-dggz2 is already approved\nI0403 12:48:09.267506       1 main.go:107] CSR csr-jqvgv added\nI0403 12:48:09.267550       1 main.go:110] CSR csr-jqvgv is already approved\nI0403 12:48:09.267600       1 main.go:107] CSR csr-lprqz added\nI0403 12:48:09.267644       1 main.go:110] CSR csr-lprqz is already approved\nI0403 12:48:09.267693       1 main.go:107] CSR csr-r8w6d added\nI0403 12:48:09.267734       1 main.go:110] CSR csr-r8w6d is already approved\nI0403 12:48:09.267778       1 main.go:107] CSR csr-cc2kv added\nI0403 12:48:09.267823       1 main.go:110] CSR csr-cc2kv is already approved\nI0403 12:48:09.267876       1 main.go:107] CSR csr-hbs42 added\nI0403 12:48:09.267923       1 main.go:110] CSR csr-hbs42 is already approved\nI0403 12:48:09.267967       1 main.go:107] CSR csr-q6tx2 added\nI0403 12:48:09.268007       1 main.go:110] CSR csr-q6tx2 is already approved\nI0403 12:48:09.268082       1 main.go:107] CSR csr-t5nhw added\nI0403 12:48:09.268164       1 main.go:110] CSR csr-t5nhw is already approved\nI0403 12:48:09.268214       1 main.go:107] CSR csr-xhfdp added\nI0403 12:48:09.268256       1 main.go:110] CSR csr-xhfdp is already approved\nW0403 13:09:12.407867       1 reflector.go:341] github.com/openshift/cluster-machine-approver/main.go:185: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 17036 (29964)\n
Apr 03 13:12:08.404 E ns/openshift-operator-lifecycle-manager pod/olm-operators-vrzdn node/ip-10-0-139-218.us-west-1.compute.internal container=configmap-registry-server container exited with code 2 (Error): 
Apr 03 13:12:09.204 E ns/openshift-marketplace pod/certified-operators-575b7db4dc-xjmcp node/ip-10-0-139-218.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 13:12:09.803 E ns/openshift-marketplace pod/redhat-operators-55bd4fc595-w8d2k node/ip-10-0-139-218.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 13:12:10.403 E ns/openshift-monitoring pod/prometheus-adapter-68546b74-7qcx8 node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 13:12:12.807 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-218.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 13:12:18.363 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-98dcc7848-wxl7t node/ip-10-0-139-84.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): om the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=821, ErrCode=NO_ERROR, debug=""\nW0403 13:10:55.534363       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 27609 (30782)\nI0403 13:11:48.990807       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 13:11:48.990967       1 status.go:26] syncOperatorStatus()\nI0403 13:11:48.996544       1 tuned_controller.go:187] syncServiceAccount()\nI0403 13:11:48.996681       1 tuned_controller.go:215] syncClusterRole()\nI0403 13:11:49.022578       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 13:11:49.047454       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 13:11:49.050514       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 13:11:49.054675       1 tuned_controller.go:315] syncDaemonSet()\nI0403 13:11:50.521454       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 13:11:50.521489       1 status.go:26] syncOperatorStatus()\nI0403 13:11:50.526201       1 tuned_controller.go:187] syncServiceAccount()\nI0403 13:11:50.526341       1 tuned_controller.go:215] syncClusterRole()\nI0403 13:11:50.547851       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 13:11:50.572393       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 13:11:50.575540       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 13:11:50.578601       1 tuned_controller.go:315] syncDaemonSet()\nW0403 13:12:01.192253       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ServiceAccount ended with: too old resource version: 29976 (31776)\nW0403 13:12:01.662736       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ClusterRoleBinding ended with: too old resource version: 29976 (31858)\nF0403 13:12:06.351691       1 main.go:85] <nil>\n
Apr 03 13:12:34.530 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): 47919       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:48:15.747901262 +0000 UTC))\nI0403 12:48:15.747966       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 12:48:15.748214       1 serving.go:77] Starting DynamicLoader\nI0403 12:48:16.656620       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 12:48:16.756989       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 12:48:16.757119       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 13:09:12.560150       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17894 (29976)\nW0403 13:09:12.629874       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (29976)\nW0403 13:09:12.640626       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17894 (29976)\nW0403 13:09:12.640694       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 26441 (29976)\nW0403 13:09:12.649339       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 26442 (29976)\nW0403 13:09:12.669502       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17902 (29976)\nE0403 13:09:45.358557       1 server.go:259] lost master\nI0403 13:09:45.359779       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:09:45.361517       1 secure_serving.go:180] Stopped listening on [::]:10251\n
Apr 03 13:12:34.933 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 12:47:24.348806       1 observer_polling.go:106] Starting file observer\nI0403 12:47:24.350180       1 certsync_controller.go:269] Starting CertSyncer\nW0403 12:56:33.597644       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22432 (25098)\nW0403 13:03:36.603262       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25229 (27892)\n
Apr 03 13:12:34.933 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): a/ca-bundle.crt\nI0403 13:10:55.340380       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:10:55.340388       1 clientca.go:69] Shutting down DynamicCA: /etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nI0403 13:10:55.340416       1 clientca.go:69] Shutting down DynamicCA: /etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 13:10:55.341815       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.341897       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: Get https://[::1]:6443/apis/packages.operators.coreos.com/v1?timeout=32s: unexpected EOF\nI0403 13:10:55.342009       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.342436       1 memcache.go:141] couldn't get resource list for config.openshift.io/v1: Get https://[::1]:6443/apis/config.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0403 13:10:55.342963       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.343135       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0403 13:10:55.343630       1 memcache.go:141] couldn't get resource list for operator.openshift.io/v1: Get https://[::1]:6443/apis/operator.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nE0403 13:10:55.347484       1 memcache.go:141] couldn't get resource list for autoscaling.openshift.io/v1: Get https://[::1]:6443/apis/autoscaling.openshift.io/v1?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0403 13:10:55.347835       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.350791       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:10:55.350867       1 secure_serving.go:180] Stopped listening on [::]:6443\n
Apr 03 13:12:35.339 E ns/openshift-etcd pod/etcd-member-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 13:09:39.619480 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:09:39.622617 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:09:39.623378 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:09:39 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.152.202:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 13:09:40.636828 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 13:12:35.339 E ns/openshift-etcd pod/etcd-member-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): 44211824cbd821\n2020-04-03 13:09:45.319197 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.319228 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.319302 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:09:45.319328 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:09:45.319340 I | rafthttp: stopped peer 6c44211824cbd821\n2020-04-03 13:09:45.319347 I | rafthttp: stopping peer fe47439cb7b70816...\n2020-04-03 13:09:45.319730 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 writer)\n2020-04-03 13:09:45.319746 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:09:45.320526 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream Message writer)\n2020-04-03 13:09:45.320541 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:09:45.320561 I | rafthttp: stopped HTTP pipelining with peer fe47439cb7b70816\n2020-04-03 13:09:45.320634 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.320651 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.320714 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:09:45.320728 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:09:45.320737 I | rafthttp: stopped peer fe47439cb7b70816\n2020-04-03 13:09:45.344573 E | rafthttp: failed to find member fe47439cb7b70816 in cluster 9890e19cc136be50\n2020-04-03 13:09:45.346062 E | rafthttp: failed to find member 6c44211824cbd821 in cluster 9890e19cc136be50\n2020-04-03 13:09:45.357929 E | rafthttp: failed to find member 6c44211824cbd821 in cluster 9890e19cc136be50\n
Apr 03 13:12:35.486 E ns/openshift-console pod/downloads-76bbd74bcf-q97nd node/ip-10-0-139-218.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 13:12:36.317 E ns/openshift-operator-lifecycle-manager pod/packageserver-5b974ddd76-p8mmm node/ip-10-0-139-84.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:12:37.133 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): 12:12:27 +0000 UTC to 2021-04-03 12:12:27 +0000 UTC (now=2020-04-03 12:47:24.579571701 +0000 UTC))\nI0403 12:47:24.579590       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 12:12:26 +0000 UTC to 2021-04-03 12:12:26 +0000 UTC (now=2020-04-03 12:47:24.579583944 +0000 UTC))\nI0403 12:47:24.589460       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 12:47:24.592776       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585916932" (2020-04-03 12:29:10 +0000 UTC to 2022-04-03 12:29:11 +0000 UTC (now=2020-04-03 12:47:24.592747952 +0000 UTC))\nI0403 12:47:24.592877       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:47:24.592856692 +0000 UTC))\nI0403 12:47:24.592934       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 12:47:24.593755       1 serving.go:77] Starting DynamicLoader\nI0403 12:47:24.596878       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 12:47:28.646242       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 13:09:45.264217       1 controllermanager.go:282] leaderelection lost\nI0403 13:09:45.264302       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:12:37.133 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:47:24.403100       1 observer_polling.go:106] Starting file observer\nI0403 12:47:24.403314       1 certsync_controller.go:269] Starting CertSyncer\nE0403 12:47:28.555782       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 12:47:28.594667       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 12:56:35.560985       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (25107)\nW0403 13:05:40.566657       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25235 (28521)\n
Apr 03 13:12:40.734 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): 12:12:27 +0000 UTC to 2021-04-03 12:12:27 +0000 UTC (now=2020-04-03 12:47:24.579571701 +0000 UTC))\nI0403 12:47:24.579590       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 12:12:26 +0000 UTC to 2021-04-03 12:12:26 +0000 UTC (now=2020-04-03 12:47:24.579583944 +0000 UTC))\nI0403 12:47:24.589460       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 12:47:24.592776       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585916932" (2020-04-03 12:29:10 +0000 UTC to 2022-04-03 12:29:11 +0000 UTC (now=2020-04-03 12:47:24.592747952 +0000 UTC))\nI0403 12:47:24.592877       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:47:24.592856692 +0000 UTC))\nI0403 12:47:24.592934       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 12:47:24.593755       1 serving.go:77] Starting DynamicLoader\nI0403 12:47:24.596878       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 12:47:28.646242       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 13:09:45.264217       1 controllermanager.go:282] leaderelection lost\nI0403 13:09:45.264302       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:12:40.734 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:47:24.403100       1 observer_polling.go:106] Starting file observer\nI0403 12:47:24.403314       1 certsync_controller.go:269] Starting CertSyncer\nE0403 12:47:28.555782       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 12:47:28.594667       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 12:56:35.560985       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (25107)\nW0403 13:05:40.566657       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25235 (28521)\n
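The controller-manager exits above end with client-go leader election giving up: once the process can no longer retrieve or renew its lock (here the kube-system/kube-controller-manager configmap), OnStoppedLeading fires and the binary terminates so another static pod can take over. Below is a minimal sketch of that pattern using client-go's leaderelection package; the Lease-based lock, lock name, identity, and timings are illustrative assumptions rather than the operator's actual configuration (the log shows a ConfigMap-based lock).

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
        "k8s.io/klog/v2"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
        if err != nil {
            klog.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        hostname, _ := os.Hostname()

        // Illustrative Lease lock; the controller manager in this log used a
        // ConfigMap lock named kube-system/kube-controller-manager.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-controller"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // run controllers only while the lock is held
                    <-ctx.Done()
                },
                OnStoppedLeading: func() {
                    // mirrors the fatal "leaderelection lost" seen in the log above
                    klog.Fatal("leaderelection lost")
                },
            },
        })
    }

Exiting from OnStoppedLeading is deliberate in this pattern: it is safer to restart the process than to keep controllers running without the lock.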
Apr 03 13:12:41.134 E ns/openshift-etcd pod/etcd-member-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 13:09:39.619480 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:09:39.622617 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:09:39.623378 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:09:39 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.152.202:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 13:09:40.636828 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 13:12:41.134 E ns/openshift-etcd pod/etcd-member-ip-10-0-152-202.us-west-1.compute.internal node/ip-10-0-152-202.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): 44211824cbd821\n2020-04-03 13:09:45.319197 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.319228 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.319302 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:09:45.319328 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:09:45.319340 I | rafthttp: stopped peer 6c44211824cbd821\n2020-04-03 13:09:45.319347 I | rafthttp: stopping peer fe47439cb7b70816...\n2020-04-03 13:09:45.319730 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 writer)\n2020-04-03 13:09:45.319746 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:09:45.320526 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream Message writer)\n2020-04-03 13:09:45.320541 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:09:45.320561 I | rafthttp: stopped HTTP pipelining with peer fe47439cb7b70816\n2020-04-03 13:09:45.320634 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.320651 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:09:45.320714 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:09:45.320728 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:09:45.320737 I | rafthttp: stopped peer fe47439cb7b70816\n2020-04-03 13:09:45.344573 E | rafthttp: failed to find member fe47439cb7b70816 in cluster 9890e19cc136be50\n2020-04-03 13:09:45.346062 E | rafthttp: failed to find member 6c44211824cbd821 in cluster 9890e19cc136be50\n2020-04-03 13:09:45.357929 E | rafthttp: failed to find member 6c44211824cbd821 in cluster 9890e19cc136be50\n
Apr 03 13:13:40.238 E ns/openshift-marketplace pod/community-operators-84dc7fcc59-lgzcc node/ip-10-0-142-104.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:14:09.530 E ns/openshift-operator-lifecycle-manager pod/packageserver-5b974ddd76-b5hcg node/ip-10-0-130-16.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): e=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T13:13:53Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T13:14:01Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:01Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\nI0403 13:14:07.303335       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-04-03T13:14:07Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T13:14:07Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T13:14:07Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
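The packageserver log above shows OLM tearing down and re-establishing gRPC connections to catalog-source registries as they resync. A sketch of opening such a connection and running the standard gRPC health check against it follows; the service address and port are assumptions for illustration, not taken from this cluster's configuration.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Assumed in-cluster address of a catalog-source registry service.
        addr := "community-operators.openshift-marketplace.svc:50051"

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            log.Fatalf("dial %s: %v", addr, err)
        }
        defer conn.Close()

        // Standard gRPC health protocol; reports whether the registry is serving.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            log.Fatalf("health check: %v", err)
        }
        log.Printf("catalog source serving status: %s", resp.Status)
    }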
Apr 03 13:14:18.902 E ns/openshift-cluster-node-tuning-operator pod/tuned-sq2m9 node/ip-10-0-152-202.us-west-1.compute.internal container=tuned container exited with code 143 (Error): tting recommended profile...\nI0403 13:13:48.725343    4962 openshift-tuned.go:520] Active profile () != recommended profile (openshift-control-plane)\nI0403 13:13:48.725370    4962 openshift-tuned.go:226] Reloading tuned...\n2020-04-03 13:13:48,891 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-03 13:13:48,911 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-03 13:13:48,912 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-03 13:13:48,914 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-03 13:13:48,915 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-03 13:13:48,955 WARNING  tuned.daemon.application: Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config\n2020-04-03 13:13:48,955 INFO     tuned.daemon.controller: starting controller\n2020-04-03 13:13:48,955 INFO     tuned.daemon.daemon: starting tuning\n2020-04-03 13:13:48,960 INFO     tuned.daemon.controller: terminating controller\n2020-04-03 13:13:48,965 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-03 13:13:48,966 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-03 13:13:48,967 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-03 13:13:48,971 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-03 13:13:48,974 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-04-03 13:13:48,975 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-03 13:13:49,111 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 13:13:49,136 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n2020-04-03 13:13:49,145 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\n
Apr 03 13:14:18.959 E ns/openshift-cluster-node-tuning-operator pod/tuned-d9npg node/ip-10-0-154-233.us-west-1.compute.internal container=tuned container exited with code 143 (Error): g in automatic mode, checking what profile is recommended for your configuration.\n2020-04-03 13:13:42,072 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-03 13:13:42,073 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-03 13:13:42,121 WARNING  tuned.daemon.application: Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config\n2020-04-03 13:13:42,121 INFO     tuned.daemon.controller: starting controller\n2020-04-03 13:13:42,121 INFO     tuned.daemon.daemon: starting tuning\n2020-04-03 13:13:42,127 INFO     tuned.daemon.controller: terminating controller\n2020-04-03 13:13:42,127 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-03 13:13:42,136 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-03 13:13:42,141 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-03 13:13:42,145 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-03 13:13:42,148 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-04-03 13:13:42,150 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-03 13:13:42,281 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 13:13:42,306 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-03 13:13:42,315 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0403 13:13:51.449070    2338 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-55bd4fc595-twzbn) labels changed node wide: true\nI0403 13:13:56.307633    2338 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:13:56.309187    2338 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:13:56.419888    2338 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 13:14:19.313 E ns/openshift-cluster-node-tuning-operator pod/tuned-q87tb node/ip-10-0-142-104.us-west-1.compute.internal container=tuned container exited with code 143 (Error): node) match.  Label changes will not trigger profile reload.\nI0403 13:12:32.058713   37956 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-cb79d49d6-2wmqg) labels changed node wide: true\nI0403 13:12:33.108394   37956 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:33.109723   37956 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:33.235636   37956 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:12:33.235696   37956 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-5d99d8c6db-528dw) labels changed node wide: true\nI0403 13:12:38.108379   37956 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:38.109777   37956 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:38.220825   37956 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:13:41.198815   37956 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-cb79d49d6-2wmqg) labels changed node wide: true\nI0403 13:13:43.108375   37956 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:13:43.109661   37956 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:13:43.230329   37956 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:13:48.822050   37956 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-84dc7fcc59-lgzcc) labels changed node wide: true\nI0403 13:13:53.108305   37956 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:13:53.109601   37956 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:13:53.229384   37956 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 13:14:25.285 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=scheduler container exited with code 255 (Error):   1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585916932" (2020-04-03 12:29:09 +0000 UTC to 2022-04-03 12:29:10 +0000 UTC (now=2020-04-03 12:47:00.040360684 +0000 UTC))\nI0403 12:47:00.040411       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:47:00.040395568 +0000 UTC))\nI0403 12:47:00.040442       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 12:47:00.040528       1 serving.go:77] Starting DynamicLoader\nI0403 12:47:00.942274       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 12:47:01.042504       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 12:47:01.042538       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 13:09:12.400563       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22866 (29964)\nW0403 13:12:01.477155       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (31811)\nW0403 13:12:01.566351       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17033 (31838)\nW0403 13:12:01.590722       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17036 (31842)\nE0403 13:12:43.816537       1 server.go:259] lost master\nI0403 13:12:43.816802       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:14:30.731 E ns/openshift-cluster-node-tuning-operator pod/tuned-47z9p node/ip-10-0-139-218.us-west-1.compute.internal container=tuned container exited with code 255 (Error):  (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0403 13:12:06.878862   38888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:06.890869   38888 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:07.123223   38888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:12:08.580778   38888 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-vrzdn) labels changed node wide: true\nI0403 13:12:11.878223   38888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:11.880546   38888 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:11.997119   38888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:12:12.374808   38888 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-d885bd477-jjxdc) labels changed node wide: true\nI0403 13:12:16.878249   38888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:16.879800   38888 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:16.998833   38888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:12:21.052754   38888 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-v4t9b) labels changed node wide: true\nI0403 13:12:21.878325   38888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:21.879712   38888 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:21.999281   38888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:12:47.016955   38888 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-tmhr8/foo-mf2w2) labels changed node wide: true\n
Apr 03 13:14:30.778 E ns/openshift-monitoring pod/node-exporter-ztbzg node/ip-10-0-139-218.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:14:30.778 E ns/openshift-monitoring pod/node-exporter-ztbzg node/ip-10-0-139-218.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:14:30.972 E ns/openshift-image-registry pod/node-ca-lj5t9 node/ip-10-0-139-218.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:14:34.771 E ns/openshift-dns pod/dns-default-dsdhx node/ip-10-0-139-218.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T12:59:07.877Z [INFO] CoreDNS-1.3.1\n2020-04-03T12:59:07.877Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T12:59:07.877Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:12:01.478604       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 22199 (31811)\nW0403 13:12:44.231646       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17033 (29894)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:14:34.771 E ns/openshift-dns pod/dns-default-dsdhx node/ip-10-0-139-218.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (117) - No such process\n
Apr 03 13:14:35.094 E ns/openshift-apiserver pod/apiserver-64z4d node/ip-10-0-139-84.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): mwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 13:12:44.194071       1 reflector.go:237] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://172.30.0.1:443/api/v1/services?resourceVersion=33100&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 13:12:44.194130       1 reflector.go:237] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Role: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/roles?resourceVersion=17036&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 13:12:44.194181       1 reflector.go:237] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ClusterRoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?resourceVersion=31858&timeout=6m2s&timeoutSeconds=362&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 13:12:44.194221       1 reflector.go:237] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://172.30.0.1:443/apis/quota.openshift.io/v1/clusterresourcequotas?resourceVersion=17074&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0403 13:12:44.239381       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 13:12:44.239604       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:12:44.239985       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 13:12:44.240016       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 13:12:44.240153       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:12:44.241215       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 13:14:35.889 E ns/openshift-machine-config-operator pod/machine-config-daemon-ndc9z node/ip-10-0-139-84.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 13:14:37.153 E ns/openshift-sdn pod/sdn-5b497 node/ip-10-0-139-218.us-west-1.compute.internal container=sdn container exited with code 255 (Error): .0.152.202:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0403 13:12:43.271186   68904 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:12:43.271216   68904 proxier.go:346] userspace syncProxyRules took 58.604437ms\nI0403 13:12:43.500232   68904 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.130.16:10259 10.0.139.84:10259 10.0.152.202:10259]\nI0403 13:12:43.500267   68904 roundrobin.go:240] Delete endpoint 10.0.152.202:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0403 13:12:43.673519   68904 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:12:43.673546   68904 proxier.go:346] userspace syncProxyRules took 58.94406ms\nI0403 13:12:44.076896   68904 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.130.16:6443 10.0.152.202:6443]\nI0403 13:12:44.076932   68904 roundrobin.go:240] Delete endpoint 10.0.139.84:6443 for service "default/kubernetes:https"\nI0403 13:12:44.255337   68904 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:12:44.255366   68904 proxier.go:346] userspace syncProxyRules took 59.842457ms\ninterrupt: Gracefully shutting down ...\nE0403 13:12:48.100770   68904 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 13:12:48.100893   68904 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:48.208285   68904 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:48.303703   68904 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:48.401240   68904 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 13:14:37.552 E ns/openshift-sdn pod/ovs-zvgfq node/ip-10-0-139-218.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): e last 0 s (4 deletes)\n2020-04-03T13:12:06.554Z|00171|bridge|INFO|bridge br0: deleted interface vethe26698f3 on port 6\n2020-04-03T13:12:06.640Z|00172|connmgr|INFO|br0<->unix#241: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:06.693Z|00173|connmgr|INFO|br0<->unix#244: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:06.738Z|00174|bridge|INFO|bridge br0: deleted interface veth09db6d1d on port 17\n2020-04-03T13:12:06.812Z|00175|connmgr|INFO|br0<->unix#247: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:06.857Z|00176|bridge|INFO|bridge br0: deleted interface veth15dfbf44 on port 8\n2020-04-03T13:12:06.914Z|00177|connmgr|INFO|br0<->unix#250: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:06.980Z|00178|connmgr|INFO|br0<->unix#253: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:07.053Z|00179|bridge|INFO|bridge br0: deleted interface veth38c34959 on port 20\n2020-04-03T13:12:27.223Z|00180|bridge|INFO|bridge br0: added interface vethddf34151 on port 21\n2020-04-03T13:12:27.253Z|00181|connmgr|INFO|br0<->unix#260: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T13:12:27.292Z|00182|connmgr|INFO|br0<->unix#263: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:34.823Z|00183|connmgr|INFO|br0<->unix#266: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:34.861Z|00184|bridge|INFO|bridge br0: deleted interface vetha9b4d54f on port 12\n2020-04-03T13:12:34.931Z|00185|connmgr|INFO|br0<->unix#269: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:34.955Z|00186|bridge|INFO|bridge br0: deleted interface veth5b533d6d on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:12:34.942Z|00020|jsonrpc|WARN|Dropped 5 log messages in last 626 seconds (most recently, 625 seconds ago) due to excessive rate\n2020-04-03T13:12:34.942Z|00021|jsonrpc|WARN|unix#228: receive error: Connection reset by peer\n2020-04-03T13:12:34.942Z|00022|reconnect|WARN|unix#228: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
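The SDN healthcheck lines above show the sdn container repeatedly dialing the OVS database's unix socket after openvswitch was terminated on the node. A sketch of that kind of probe, using the socket path from the log and an illustrative retry loop (the real healthcheck's cadence and retry count are not reproduced here):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/openvswitch/db.sock"

        for i := 0; i < 5; i++ {
            conn, err := net.DialTimeout("unix", sock, time.Second)
            if err != nil {
                // corresponds to "unable to reconnect to OVS server: dial unix ..." above
                log.Printf("OVS healthcheck failed: %v", err)
                time.Sleep(100 * time.Millisecond)
                continue
            }
            conn.Close()
            log.Print("OVS db socket reachable")
            return
        }
        log.Fatal("OVS db socket unreachable after retries")
    }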
Apr 03 13:14:37.951 E ns/openshift-multus pod/multus-vm87q node/ip-10-0-139-218.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:14:38.351 E ns/openshift-machine-config-operator pod/machine-config-daemon-n4ppk node/ip-10-0-139-218.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 13:14:38.750 E ns/openshift-operator-lifecycle-manager pod/olm-operators-v4t9b node/ip-10-0-139-218.us-west-1.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 03 13:14:39.088 E ns/openshift-image-registry pod/node-ca-kx6fh node/ip-10-0-139-84.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:14:39.490 E ns/openshift-cluster-node-tuning-operator pod/tuned-659d4 node/ip-10-0-139-84.us-west-1.compute.internal container=tuned container exited with code 255 (Error): ned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:13.208278   54688 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:13.446330   54688 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:12:13.449108   54688 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-777b4b9c84-4krgb) labels changed node wide: true\nI0403 13:12:18.204563   54688 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:18.205782   54688 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:18.341616   54688 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:12:18.533319   54688 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-98dcc7848-wxl7t) labels changed node wide: true\nI0403 13:12:23.204565   54688 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:23.206079   54688 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:23.358889   54688 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:12:25.133161   54688 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5b6f477cbc-9spmk) labels changed node wide: true\nI0403 13:12:28.204557   54688 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:12:28.206127   54688 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:12:28.339480   54688 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:12:43.462254   54688 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-5b974ddd76-p8mmm) labels changed node wide: true\n
Apr 03 13:14:40.089 E ns/openshift-dns pod/dns-default-tmgb7 node/ip-10-0-139-84.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T12:59:24.218Z [INFO] CoreDNS-1.3.1\n2020-04-03T12:59:24.218Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T12:59:24.218Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:09:12.100621       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21837 (29894)\nW0403 13:12:01.799407       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 22199 (31868)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:14:40.089 E ns/openshift-dns pod/dns-default-tmgb7 node/ip-10-0-139-84.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (117) - No such process\n
Apr 03 13:14:40.489 E ns/openshift-sdn pod/sdn-4dmd7 node/ip-10-0-139-84.us-west-1.compute.internal container=sdn container exited with code 255 (Error): yRules took 54.909882ms\nI0403 13:12:43.098427   68838 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.130.16:6443 10.0.139.84:6443 10.0.152.202:6443]\nI0403 13:12:43.098462   68838 roundrobin.go:240] Delete endpoint 10.0.152.202:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0403 13:12:43.268645   68838 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:12:43.268673   68838 proxier.go:346] userspace syncProxyRules took 55.112139ms\nI0403 13:12:43.498504   68838 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.130.16:10259 10.0.139.84:10259 10.0.152.202:10259]\nI0403 13:12:43.498545   68838 roundrobin.go:240] Delete endpoint 10.0.152.202:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0403 13:12:43.678792   68838 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:12:43.678817   68838 proxier.go:346] userspace syncProxyRules took 55.134875ms\ninterrupt: Gracefully shutting down ...\nE0403 13:12:44.132666   68838 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 13:12:44.132796   68838 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:44.236448   68838 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:44.333996   68838 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:44.433086   68838 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:12:44.533067   68838 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 13:14:45.488 E ns/openshift-monitoring pod/node-exporter-jcwwf node/ip-10-0-139-84.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:14:45.488 E ns/openshift-monitoring pod/node-exporter-jcwwf node/ip-10-0-139-84.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:14:46.089 E ns/openshift-sdn pod/ovs-mmdlx node/ip-10-0-139-84.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error):  deleted interface veth1f6c7cb2 on port 9\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:12:07.134Z|00034|reconnect|WARN|unix#243: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T13:12:23.401Z|00142|bridge|INFO|bridge br0: added interface veth466d0aef on port 25\n2020-04-03T13:12:23.434Z|00143|connmgr|INFO|br0<->unix#300: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T13:12:23.493Z|00144|connmgr|INFO|br0<->unix#304: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:23.496Z|00145|connmgr|INFO|br0<->unix#306: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-03T13:12:26.410Z|00146|connmgr|INFO|br0<->unix#309: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:26.439Z|00147|connmgr|INFO|br0<->unix#312: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:26.463Z|00148|bridge|INFO|bridge br0: deleted interface veth466d0aef on port 25\n2020-04-03T13:12:33.137Z|00149|connmgr|INFO|br0<->unix#315: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:12:33.179Z|00150|connmgr|INFO|br0<->unix#318: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:33.222Z|00151|bridge|INFO|bridge br0: deleted interface veth54141a51 on port 23\n2020-04-03T13:12:35.073Z|00152|connmgr|INFO|br0<->unix#321: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:12:35.100Z|00153|bridge|INFO|bridge br0: deleted interface veth60bbf350 on port 10\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:12:35.088Z|00035|jsonrpc|WARN|Dropped 2 log messages in last 30 seconds (most recently, 28 seconds ago) due to excessive rate\n2020-04-03T13:12:35.088Z|00036|jsonrpc|WARN|unix#270: receive error: Connection reset by peer\n2020-04-03T13:12:35.088Z|00037|reconnect|WARN|unix#270: connection dropped (Connection reset by peer)\n2020-04-03T13:12:35.093Z|00038|jsonrpc|WARN|unix#271: receive error: Connection reset by peer\n2020-04-03T13:12:35.093Z|00039|reconnect|WARN|unix#271: connection dropped (Connection reset by peer)\nTerminated\novsdb-server is not running.\n
Apr 03 13:14:46.888 E ns/openshift-controller-manager pod/controller-manager-5kz5t node/ip-10-0-139-84.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 13:14:47.288 E ns/openshift-multus pod/multus-4br5k node/ip-10-0-139-84.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:14:47.688 E ns/openshift-sdn pod/sdn-controller-42jqc node/ip-10-0-139-84.us-west-1.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 12:59:24.906127       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 13:14:48.089 E ns/openshift-machine-config-operator pod/machine-config-server-tzt7s node/ip-10-0-139-84.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 13:14:56.880 E ns/openshift-etcd pod/etcd-member-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=etcd-metrics container exited with code 1 (Error): int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:14:30.276106 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:14:30.276943 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:14:30 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:31 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:33 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:35 Failed to dial etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\nWARNING: 2020/04/03 13:14:35 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\ndial tcp 10.0.139.84:9978: connect: connection refused\n
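The etcd-metrics container above is a grpc-proxy that keeps dialing the local member's client port until it answers; the repeated "connection refused" warnings are those dial attempts while the member is down. A hedged sketch of checking an etcd endpoint with the v3 client; the endpoint name, port, and missing TLS configuration are placeholder assumptions (the member in this log serves TLS on :9978, so a real probe would also load the client certificates).

    package main

    import (
        "context"
        "log"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        // Placeholder endpoint, not the cluster's real DNS name.
        endpoint := "https://etcd-0.example.com:9978"

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{endpoint},
            DialTimeout: 2 * time.Second,
        })
        if err != nil {
            log.Fatalf("create client: %v", err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // A down member surfaces here much like the dial warnings above.
        status, err := cli.Status(ctx, endpoint)
        if err != nil {
            log.Fatalf("endpoint %s not reachable: %v", endpoint, err)
        }
        log.Printf("member %x healthy, db size %d bytes", status.Header.MemberId, status.DbSize)
    }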
Apr 03 13:15:00.050 E ns/openshift-ingress pod/router-default-64cdf664c9-nmb64 node/ip-10-0-142-104.us-west-1.compute.internal container=router container exited with code 2 (Error): \nE0403 13:13:14.357671       1 reflector.go:322] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0403 13:13:25.049364       1 reflector.go:205] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nI0403 13:13:38.399047       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:13:43.388640       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:13:48.508498       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:13:53.505864       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:18.517413       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:23.509283       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:37.158066       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:42.091865       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:47.095000       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 13:14:52.094823       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 13:15:00.443 E ns/openshift-marketplace pod/redhat-operators-5d99d8c6db-528dw node/ip-10-0-142-104.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 13:15:00.544 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 03 13:15:02.412 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-67f9cf77f4-pzhnv node/ip-10-0-130-16.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): t-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31110 (32349)\nW0403 13:12:44.540499       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29695 (32349)\nW0403 13:12:44.643312       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 31006 (33378)\nW0403 13:12:44.665295       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 19217 (33378)\nW0403 13:12:44.674161       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 29962 (33378)\nI0403 13:14:21.747104       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6f8c27bd-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("Available: v1.apps.openshift.io is not ready: 0\nAvailable: v1.authorization.openshift.io is not ready: 0\nAvailable: v1.build.openshift.io is not ready: 0")\nI0403 13:14:22.065721       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6f8c27bd-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0403 13:14:57.795856       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 13:14:57.795932       1 leaderelection.go:65] leaderelection lost\n
Apr 03 13:15:02.647 E ns/openshift-marketplace pod/community-operators-5c567fb9b7-znjp6 node/ip-10-0-142-104.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:15:08.810 E ns/openshift-console pod/console-755cc78d69-s874d node/ip-10-0-130-16.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 12:50:37 cmd/main: cookies are secure!\n2020/04/3 12:50:37 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 12:50:37 cmd/main: using TLS\n
Apr 03 13:15:11.010 E ns/openshift-authentication-operator pod/authentication-operator-7648cc6d65-rpssf node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 255 (Error): 12:38:33Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T13:12:41Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T12:39:03Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T12:32:12Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 13:13:47.814256       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"7756c022-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)" to ""\nW0403 13:14:50.782292       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 24761 (34762)\nW0403 13:14:50.788578       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 24761 (34762)\nW0403 13:14:50.922094       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 24761 (34765)\nW0403 13:14:50.922097       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 24761 (34765)\nI0403 13:14:57.783412       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 13:14:57.783472       1 leaderelection.go:65] leaderelection lost\n
Apr 03 13:15:12.410 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-796c44b8b7-kkdcl node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): operator/kube-apiserver changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-139-84.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-139-84.us-west-1.compute.internal container=\"kube-apiserver-7\" is not ready"\nW0403 13:14:50.791326       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 29976 (34762)\nW0403 13:14:50.917221       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 24761 (34765)\nW0403 13:14:50.917479       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 29976 (34765)\nI0403 13:14:57.242300       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"6f4bf71b-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-139-84.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-139-84.us-west-1.compute.internal container=\"kube-apiserver-7\" is not ready" to ""\nI0403 13:14:59.762670       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"6f4bf71b-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-130-16.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nI0403 13:15:02.452625       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 13:15:02.452830       1 leaderelection.go:65] leaderelection lost\n
Apr 03 13:15:15.017 E ns/openshift-monitoring pod/cluster-monitoring-operator-c4484fdfb-hmnjc node/ip-10-0-130-16.us-west-1.compute.internal container=cluster-monitoring-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:15:15.811 E ns/openshift-machine-config-operator pod/machine-config-controller-67d4bc5577-n8r4f node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 13:15:19.810 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-768fv8mr4 node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 2 (Error): 47] GET /metrics: (8.020348ms) 200 [Prometheus/2.7.2 10.128.2.32:53174]\nI0403 13:11:30.574675       1 wrap.go:47] GET /metrics: (6.756526ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:11:30.574893       1 wrap.go:47] GET /metrics: (5.888071ms) 200 [Prometheus/2.7.2 10.128.2.32:53174]\nI0403 13:12:00.734872       1 wrap.go:47] GET /metrics: (165.925453ms) 200 [Prometheus/2.7.2 10.128.2.32:53174]\nI0403 13:12:00.738465       1 wrap.go:47] GET /metrics: (170.468118ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:12:01.780893       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nW0403 13:12:01.813025       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (31868)\nI0403 13:12:02.818308       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0403 13:12:30.574074       1 wrap.go:47] GET /metrics: (6.096697ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:13:00.577025       1 wrap.go:47] GET /metrics: (9.104867ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:13:00.580057       1 wrap.go:47] GET /metrics: (2.245735ms) 200 [Prometheus/2.7.2 10.131.0.38:41026]\nI0403 13:13:30.575351       1 wrap.go:47] GET /metrics: (7.38653ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:13:30.575351       1 wrap.go:47] GET /metrics: (7.706592ms) 200 [Prometheus/2.7.2 10.131.0.38:41026]\nI0403 13:14:00.576275       1 wrap.go:47] GET /metrics: (8.785427ms) 200 [Prometheus/2.7.2 10.131.0.38:41026]\nI0403 13:14:00.578305       1 wrap.go:47] GET /metrics: (10.458733ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:14:30.574340       1 wrap.go:47] GET /metrics: (6.455657ms) 200 [Prometheus/2.7.2 10.129.2.16:35890]\nI0403 13:14:30.574955       1 wrap.go:47] GET /metrics: (7.420276ms) 200 [Prometheus/2.7.2 10.131.0.38:41026]\nI0403 13:15:00.576623       1 wrap.go:47] GET /metrics: (8.520523ms) 200 [Prometheus/2.7.2 10.131.0.38:41026]\n
Apr 03 13:15:20.416 E ns/openshift-machine-config-operator pod/machine-config-operator-7f9b99675c-gxvt6 node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 13:15:21.012 E ns/openshift-machine-api pod/machine-api-controllers-848876b59b-wwl8f node/ip-10-0-130-16.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 13:15:21.012 E ns/openshift-machine-api pod/machine-api-controllers-848876b59b-wwl8f node/ip-10-0-130-16.us-west-1.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 13:15:22.813 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-6c85d46867-wvqth node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 2 (Error): 1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:13:18.057399       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:13:28.068955       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:13:38.093068       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:13:48.104936       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:13:58.117914       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:08.129648       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:18.142501       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:28.154631       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:38.166093       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:48.186691       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 13:14:58.197984       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 13:15:23.413 E ns/openshift-service-ca pod/service-serving-cert-signer-6bd7844499-w76zp node/ip-10-0-130-16.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 03 13:15:24.009 E ns/openshift-cluster-version pod/cluster-version-operator-67bf87677f-dkgnv node/ip-10-0-130-16.us-west-1.compute.internal container=cluster-version-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:15:24.609 E ns/openshift-service-ca pod/configmap-cabundle-injector-bd5874f49-4jjt4 node/ip-10-0-130-16.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:15:27.761 E ns/openshift-console pod/downloads-76bbd74bcf-ch2fg node/ip-10-0-142-104.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 13:15:50.898 E ns/openshift-etcd pod/etcd-member-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 13:12:13.471046 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:12:13.472302 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:12:13.473066 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:12:13 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 13:12:14.486592 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 13:15:50.898 E ns/openshift-etcd pod/etcd-member-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): cess raft message (raft: stopped)\n2020-04-03 13:12:44.410615 I | rafthttp: stopped HTTP pipelining with peer 98e841a3fd36360\n2020-04-03 13:12:44.410723 W | rafthttp: lost the TCP streaming connection with peer 98e841a3fd36360 (stream MsgApp v2 reader)\n2020-04-03 13:12:44.410787 I | rafthttp: stopped streaming with peer 98e841a3fd36360 (stream MsgApp v2 reader)\n2020-04-03 13:12:44.410885 W | rafthttp: lost the TCP streaming connection with peer 98e841a3fd36360 (stream Message reader)\n2020-04-03 13:12:44.410939 I | rafthttp: stopped streaming with peer 98e841a3fd36360 (stream Message reader)\n2020-04-03 13:12:44.410978 I | rafthttp: stopped peer 98e841a3fd36360\n2020-04-03 13:12:44.411034 I | rafthttp: stopping peer 6c44211824cbd821...\n2020-04-03 13:12:44.411458 I | rafthttp: closed the TCP streaming connection with peer 6c44211824cbd821 (stream MsgApp v2 writer)\n2020-04-03 13:12:44.411521 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (writer)\n2020-04-03 13:12:44.411941 I | rafthttp: closed the TCP streaming connection with peer 6c44211824cbd821 (stream Message writer)\n2020-04-03 13:12:44.411988 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (writer)\n2020-04-03 13:12:44.412143 I | rafthttp: stopped HTTP pipelining with peer 6c44211824cbd821\n2020-04-03 13:12:44.412251 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:12:44.412366 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream MsgApp v2 reader)\n2020-04-03 13:12:44.412466 W | rafthttp: lost the TCP streaming connection with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:12:44.412516 E | rafthttp: failed to read 6c44211824cbd821 on stream Message (context canceled)\n2020-04-03 13:12:44.412554 I | rafthttp: peer 6c44211824cbd821 became inactive (message send to peer failed)\n2020-04-03 13:12:44.412591 I | rafthttp: stopped streaming with peer 6c44211824cbd821 (stream Message reader)\n2020-04-03 13:12:44.412639 I | rafthttp: stopped peer 6c44211824cbd821\n
Apr 03 13:15:51.227 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 12:45:26.504632       1 observer_polling.go:106] Starting file observer\nI0403 12:45:26.505372       1 certsync_controller.go:269] Starting CertSyncer\nW0403 12:53:31.225516       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22432 (24252)\nW0403 13:03:28.232828       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24399 (27849)\nW0403 13:10:22.237955       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28008 (30590)\n
Apr 03 13:15:51.227 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): ble_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0403 13:12:24.028103       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0403 13:12:34.759800       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 13:12:43.819620       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.819731       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.820439       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.820861       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.821481       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.822328       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.822725       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.823467       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:43.823785       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0403 13:12:44.052742       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 13:12:44.071450       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.130.16 10.0.152.202]\n
Apr 03 13:15:51.626 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=scheduler container exited with code 255 (Error):   1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585916932" (2020-04-03 12:29:09 +0000 UTC to 2022-04-03 12:29:10 +0000 UTC (now=2020-04-03 12:47:00.040360684 +0000 UTC))\nI0403 12:47:00.040411       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585916932" [] issuer="<self>" (2020-04-03 12:28:51 +0000 UTC to 2021-04-03 12:28:52 +0000 UTC (now=2020-04-03 12:47:00.040395568 +0000 UTC))\nI0403 12:47:00.040442       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 12:47:00.040528       1 serving.go:77] Starting DynamicLoader\nI0403 12:47:00.942274       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 12:47:01.042504       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 12:47:01.042538       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 13:09:12.400563       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22866 (29964)\nW0403 13:12:01.477155       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (31811)\nW0403 13:12:01.566351       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17033 (31838)\nW0403 13:12:01.590722       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17036 (31842)\nE0403 13:12:43.816537       1 server.go:259] lost master\nI0403 13:12:43.816802       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:15:52.035 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:45:43.643995       1 observer_polling.go:106] Starting file observer\nI0403 12:45:43.646068       1 certsync_controller.go:269] Starting CertSyncer\nW0403 12:54:09.670880       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (24441)\nW0403 13:01:08.675910       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24575 (26974)\nW0403 13:07:52.683169       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27157 (29254)\n
Apr 03 13:15:52.035 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): d-75ac-11ea-a189-061aa894bf79", APIVersion:"apps/v1", ResourceVersion:"33070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: certified-operators-56b4c867d5-6h258\nI0403 13:12:33.405251       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/community-operators-5c567fb9b7, need 1, creating 1\nI0403 13:12:33.406109       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"community-operators", UID:"3f4c137f-75a7-11ea-8cec-02f7bc014c77", APIVersion:"apps/v1", ResourceVersion:"33088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set community-operators-5c567fb9b7 to 1\nI0403 13:12:33.413435       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0403 13:12:33.423395       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/community-operators: Operation cannot be fulfilled on deployments.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 13:12:33.450618       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-5c567fb9b7", UID:"c7bae7d3-75ac-11ea-a189-061aa894bf79", APIVersion:"apps/v1", ResourceVersion:"33090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-5c567fb9b7-znjp6\nI0403 13:12:35.621068       1 event.go:221] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"openshift-monitoring", Name:"alertmanager-main", UID:"5c3a8fe7-75a7-11ea-8cec-02f7bc014c77", APIVersion:"apps/v1", ResourceVersion:"32122", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod alertmanager-main-2 in StatefulSet alertmanager-main successful\nE0403 13:12:43.793574       1 controllermanager.go:282] leaderelection lost\nI0403 13:12:43.793692       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:15:57.260 E ns/openshift-etcd pod/etcd-member-ip-10-0-139-84.us-west-1.compute.internal node/ip-10-0-139-84.us-west-1.compute.internal container=etcd-metrics container exited with code 1 (Error): int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:14:30.276106 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:14:30.276943 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:14:30 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:31 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:33 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.139.84:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 13:14:35 Failed to dial etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978: context canceled; please retry.\nWARNING: 2020/04/03 13:14:35 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {etcd-0.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\ndial tcp 10.0.139.84:9978: connect: connection refused\n
Apr 03 13:16:43.263 E ns/openshift-authentication pod/oauth-openshift-779854c999-mbkqg node/ip-10-0-139-84.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 13:16:46.876 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 13:17:17.449 E ns/openshift-image-registry pod/node-ca-54bfs node/ip-10-0-142-104.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:17:17.474 E ns/openshift-monitoring pod/node-exporter-vwd8t node/ip-10-0-142-104.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:17:17.474 E ns/openshift-monitoring pod/node-exporter-vwd8t node/ip-10-0-142-104.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:17:17.685 E ns/openshift-sdn pod/sdn-dfc6h node/ip-10-0-142-104.us-west-1.compute.internal container=sdn container exited with code 255 (Error):  13:15:38.059448   51960 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:15:38.059472   51960 proxier.go:346] userspace syncProxyRules took 51.862099ms\nI0403 13:15:38.218905   51960 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:15:38.218938   51960 proxier.go:346] userspace syncProxyRules took 55.44677ms\nI0403 13:15:38.798571   51960 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.130.16:10259 10.0.139.84:10259 10.0.152.202:10259]\nI0403 13:15:38.798609   51960 roundrobin.go:240] Delete endpoint 10.0.139.84:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0403 13:15:38.958008   51960 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:15:38.958033   51960 proxier.go:346] userspace syncProxyRules took 51.581385ms\nE0403 13:15:39.078551   51960 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 13:15:39.078644   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 13:15:39.181009   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:39.284146   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:39.381177   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:39.478944   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:39.579161   51960 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 13:17:21.606 E ns/openshift-dns pod/dns-default-hwq9w node/ip-10-0-142-104.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 13:17:21.606 E ns/openshift-dns pod/dns-default-hwq9w node/ip-10-0-142-104.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T13:00:12.185Z [INFO] CoreDNS-1.3.1\n2020-04-03T13:00:12.185Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T13:00:12.185Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:09:12.103393       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21837 (29894)\nW0403 13:12:01.809475       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 22199 (31868)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:17:22.157 E ns/openshift-multus pod/multus-pxn4g node/ip-10-0-142-104.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:17:22.530 E ns/openshift-sdn pod/ovs-2bbtj node/ip-10-0-142-104.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): t 0 s (2 deletes)\n2020-04-03T13:14:58.442Z|00163|connmgr|INFO|br0<->unix#273: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:14:58.471Z|00164|bridge|INFO|bridge br0: deleted interface vethc7aec625 on port 15\n2020-04-03T13:14:58.523Z|00165|connmgr|INFO|br0<->unix#276: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:14:58.560Z|00166|bridge|INFO|bridge br0: deleted interface veth229c15bf on port 4\n2020-04-03T13:14:58.604Z|00167|connmgr|INFO|br0<->unix#279: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:14:58.653Z|00168|connmgr|INFO|br0<->unix#282: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:14:58.688Z|00169|bridge|INFO|bridge br0: deleted interface vethf0a081a0 on port 19\n2020-04-03T13:14:58.747Z|00170|connmgr|INFO|br0<->unix#285: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:14:58.794Z|00171|bridge|INFO|bridge br0: deleted interface vethd252167f on port 7\n2020-04-03T13:14:58.847Z|00172|connmgr|INFO|br0<->unix#288: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:14:58.902Z|00173|bridge|INFO|bridge br0: deleted interface vethc868e90c on port 5\n2020-04-03T13:15:27.293Z|00174|connmgr|INFO|br0<->unix#294: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:27.319Z|00175|bridge|INFO|bridge br0: deleted interface vethd1784dab on port 10\n2020-04-03T13:15:27.456Z|00176|connmgr|INFO|br0<->unix#297: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:15:27.487Z|00177|connmgr|INFO|br0<->unix#300: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:27.508Z|00178|bridge|INFO|bridge br0: deleted interface veth43487be0 on port 16\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:15:27.312Z|00021|jsonrpc|WARN|Dropped 6 log messages in last 835 seconds (most recently, 835 seconds ago) due to excessive rate\n2020-04-03T13:15:27.312Z|00022|jsonrpc|WARN|unix#222: receive error: Connection reset by peer\n2020-04-03T13:15:27.313Z|00023|reconnect|WARN|unix#222: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 13:17:22.894 E ns/openshift-machine-config-operator pod/machine-config-daemon-7b8fj node/ip-10-0-142-104.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 13:17:23.265 E ns/openshift-cluster-node-tuning-operator pod/tuned-zjl2m node/ip-10-0-142-104.us-west-1.compute.internal container=tuned container exited with code 255 (Error): 20-04-03 13:14:26,233 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 13:14:26,234 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-03 13:14:26,243 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0403 13:14:28.822150   73581 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/tuned-q87tb) labels changed node wide: false\nI0403 13:14:57.050913   73581 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-deployment-upgrade-c2g24/dp-57cc5d77b4-jsbhd) labels changed node wide: true\nI0403 13:15:00.816014   73581 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:00.821933   73581 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:00.959906   73581 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:15:01.818097   73581 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-5d99d8c6db-528dw) labels changed node wide: true\nI0403 13:15:05.816019   73581 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:05.819194   73581 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:05.946632   73581 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:15:08.826836   73581 openshift-tuned.go:435] Pod (openshift-ingress/router-default-64cdf664c9-nmb64) labels changed node wide: true\nI0403 13:15:10.815997   73581 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:10.817536   73581 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:10.927659   73581 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 13:15:38.823144   73581 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-tmhr8/foo-4xpmw) labels changed node wide: true\n
Apr 03 13:17:23.528 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): :127] Shutting down service account controller\nI0403 13:15:43.952494       1 clusterroleaggregation_controller.go:160] Shutting down ClusterRoleAggregator\nI0403 13:15:43.952514       1 expand_controller.go:165] Shutting down expand controller\nI0403 13:15:43.952528       1 node_lifecycle_controller.go:467] Shutting down node controller\nI0403 13:15:43.952542       1 service_controller.go:197] Shutting down service controller\nI0403 13:15:43.952558       1 namespace_controller.go:198] Shutting down namespace controller\nI0403 13:15:43.952574       1 certificate_controller.go:125] Shutting down certificate controller\nI0403 13:15:43.952587       1 certificate_controller.go:125] Shutting down certificate controller\nI0403 13:15:43.952602       1 pv_protection_controller.go:93] Shutting down PV protection controller\nI0403 13:15:43.952695       1 graph_builder.go:336] stopped 109 of 109 monitors\nI0403 13:15:43.952707       1 graph_builder.go:337] GraphBuilder stopping\nI0403 13:15:43.952920       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952942       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952959       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952976       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952992       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.957671       1 horizontal.go:200] horizontal pod autoscaler controller worker shutting down\nI0403 13:15:43.957771       1 pv_controller_base.go:341] volume worker queue shutting down\nI0403 13:15:43.958612       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0403 13:15:43.958631       1 cronjob_controller.go:96] Shutting down CronJob Manager\nI0403 13:15:43.958641       1 tokens_controller.go:189] Shutting down\nI0403 13:15:43.958738       1 secure_serving.go:180] Stopped listening on [::]:10257\n
Apr 03 13:17:23.528 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:49:09.295570       1 certsync_controller.go:269] Starting CertSyncer\nI0403 12:49:09.305111       1 observer_polling.go:106] Starting file observer\nE0403 12:49:13.519060       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 12:49:13.519165       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 12:57:32.526134       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (25357)\nW0403 13:05:56.536866       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25485 (28594)\nW0403 13:15:20.542340       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28744 (34561)\n
Apr 03 13:17:31.074 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): /v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.336304       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.340414       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.340590       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:13.535860       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nW0403 13:12:01.821566       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (31868)\nW0403 13:14:50.793105       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 21837 (34762)\nW0403 13:14:50.793239       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21837 (34762)\nW0403 13:14:50.794283       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21845 (34762)\nE0403 13:15:43.530929       1 server.go:259] lost master\nI0403 13:15:43.531245       1 secure_serving.go:180] Stopped listening on [::]:10251\nI0403 13:15:43.531320       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 13:17:33.273 E ns/openshift-image-registry pod/node-ca-4v259 node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 13:17:33.688 E ns/openshift-apiserver pod/apiserver-sbqg8 node/ip-10-0-130-16.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): cd.svc:2379 <nil>}]\nI0403 13:15:43.562785       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.562844       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 13:15:43.562860       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 13:15:43.799028       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 13:15:43.799348       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.799483       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.799490       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.799626       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.869498       1 serving.go:88] Shutting down DynamicLoader\nI0403 13:15:43.869786       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 13:15:43.870208       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 13:15:43.870239       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 13:15:43.870249       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 13:15:43.871567       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.871671       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.871987       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 13:15:43.873817       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 13:17:34.076 E ns/openshift-dns pod/dns-default-rt945 node/ip-10-0-130-16.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T12:58:44.762Z [INFO] CoreDNS-1.3.1\n2020-04-03T12:58:44.762Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T12:58:44.762Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 13:09:12.107903       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21837 (29894)\nW0403 13:12:01.819780       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 22199 (31868)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 13:17:34.076 E ns/openshift-dns pod/dns-default-rt945 node/ip-10-0-130-16.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]\n
Apr 03 13:17:34.475 E ns/openshift-sdn pod/ovs-44b49 node/ip-10-0-130-16.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): x#483: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:06.586Z|00285|bridge|INFO|bridge br0: deleted interface vethde62b9be on port 20\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:15:05.910Z|00036|reconnect|WARN|unix#360: connection dropped (Broken pipe)\n2020-04-03T13:15:05.919Z|00037|reconnect|WARN|unix#361: connection dropped (Broken pipe)\n2020-04-03T13:15:06.367Z|00038|reconnect|WARN|unix#364: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T13:15:06.713Z|00286|connmgr|INFO|br0<->unix#486: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:15:06.760Z|00287|connmgr|INFO|br0<->unix#489: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:06.793Z|00288|bridge|INFO|bridge br0: deleted interface veth3bf6e3d8 on port 27\n2020-04-03T13:15:06.867Z|00289|connmgr|INFO|br0<->unix#492: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:15:06.898Z|00290|connmgr|INFO|br0<->unix#495: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:06.939Z|00291|bridge|INFO|bridge br0: deleted interface vethf8543a97 on port 24\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T13:15:06.874Z|00039|reconnect|WARN|unix#379: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T13:15:08.077Z|00292|bridge|INFO|bridge br0: added interface veth4bb11511 on port 37\n2020-04-03T13:15:08.110Z|00293|connmgr|INFO|br0<->unix#498: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T13:15:08.163Z|00294|connmgr|INFO|br0<->unix#504: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-03T13:15:08.166Z|00295|connmgr|INFO|br0<->unix#502: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:15:10.663Z|00296|connmgr|INFO|br0<->unix#507: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T13:15:10.693Z|00297|connmgr|INFO|br0<->unix#510: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T13:15:10.716Z|00298|bridge|INFO|bridge br0: deleted interface veth4bb11511 on port 37\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 13:17:36.086 E ns/openshift-machine-config-operator pod/machine-config-daemon-tx4l7 node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 13:17:36.673 E ns/openshift-machine-config-operator pod/machine-config-server-grm4g node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 13:17:37.274 E ns/openshift-cluster-node-tuning-operator pod/tuned-pgcbq node/ip-10-0-130-16.us-west-1.compute.internal container=tuned container exited with code 255 (Error): .070660   95773 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:15:11.203264   95773 openshift-tuned.go:435] Pod (openshift-authentication-operator/authentication-operator-7648cc6d65-rpssf) labels changed node wide: true\nI0403 13:15:15.932120   95773 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:15.933614   95773 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:16.109903   95773 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:15:16.382641   95773 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-controller-67d4bc5577-n8r4f) labels changed node wide: true\nI0403 13:15:20.932115   95773 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:20.933365   95773 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:21.065295   95773 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:15:21.189172   95773 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-848876b59b-wwl8f) labels changed node wide: true\nI0403 13:15:25.932147   95773 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 13:15:25.933340   95773 openshift-tuned.go:326] Getting recommended profile...\nI0403 13:15:26.060847   95773 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 13:15:43.207161   95773 openshift-tuned.go:435] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-130-16.us-west-1.compute.internal) labels changed node wide: false\nI0403 13:15:43.390884   95773 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5b6f477cbc-dzttd) labels changed node wide: true\n
Apr 03 13:17:38.474 E ns/openshift-sdn pod/sdn-vnjbx node/ip-10-0-130-16.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ernetes:https to [10.0.139.84:6443 10.0.152.202:6443]\nI0403 13:15:43.576854   73534 roundrobin.go:240] Delete endpoint 10.0.130.16:6443 for service "default/kubernetes:https"\nI0403 13:15:43.626166   73534 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.74:8443 10.130.0.81:8443]\nI0403 13:15:43.626935   73534 roundrobin.go:240] Delete endpoint 10.130.0.81:8443 for service "openshift-console/console:https"\ninterrupt: Gracefully shutting down ...\nE0403 13:15:43.851166   73534 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 13:15:43.851474   73534 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:43.957655   73534 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:44.056361   73534 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nE0403 13:15:44.103623   73534 proxier.go:1350] Failed to execute iptables-restore: signal: terminated ()\nI0403 13:15:44.103792   73534 proxier.go:1352] Closing local ports after iptables-restore failure\nE0403 13:15:44.116878   73534 iptables.go:70] Syncing openshift iptables failed: failed to ensure rule [-m mark --mark 0x1/0x1 -j RETURN] exists: error checking rule: signal: terminated: \nI0403 13:15:44.151967   73534 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 13:15:44.222199   73534 proxier.go:367] userspace proxy: processing 0 service events\nI0403 13:15:44.222318   73534 proxier.go:346] userspace syncProxyRules took 118.452738ms\nI0403 13:15:44.251982   73534 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 13:17:38.874 E ns/openshift-multus pod/multus-c79zh node/ip-10-0-130-16.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 13:17:41.273 E ns/openshift-controller-manager pod/controller-manager-7pcr7 node/ip-10-0-130-16.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 13:17:45.273 E ns/openshift-sdn pod/sdn-controller-wrgff node/ip-10-0-130-16.us-west-1.compute.internal container=sdn-controller container exited with code 255 (Error): 1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-sdn", Name:"openshift-network-controller", UID:"8892a061-75a6-11ea-a3cc-066eec24a8ff", APIVersion:"v1", ResourceVersion:"31015", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-130-16 became leader\nI0403 13:10:50.649499       1 master.go:57] Initializing SDN master of type "redhat/openshift-ovs-networkpolicy"\nI0403 13:10:50.654807       1 network_controller.go:49] Started OpenShift Network Controller\nW0403 13:12:01.471007       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 20896 (31812)\nW0403 13:12:01.699471       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 20893 (31863)\nW0403 13:12:44.483256       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17033 (31443)\nW0403 13:12:44.538232       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 31863 (33378)\nW0403 13:12:44.561003       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 31812 (33378)\nE0403 13:13:23.372949       1 memcache.go:141] couldn't get resource list for project.openshift.io/v1: Get https://api-int.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/apis/project.openshift.io/v1?timeout=32s: context deadline exceeded\nE0403 13:13:55.384815       1 memcache.go:141] couldn't get resource list for security.openshift.io/v1: Get https://api-int.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/apis/security.openshift.io/v1?timeout=32s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n
Apr 03 13:17:46.274 E ns/openshift-monitoring pod/node-exporter-tgp4q node/ip-10-0-130-16.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 13:17:46.274 E ns/openshift-monitoring pod/node-exporter-tgp4q node/ip-10-0-130-16.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 13:18:00.675 E ns/openshift-etcd pod/etcd-member-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 13:15:11.752353 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 13:15:11.753564 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 13:15:11.754518 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 13:15:11 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.16:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-0jlm1jrd-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 13:15:12.768868 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 13:18:00.675 E ns/openshift-etcd pod/etcd-member-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): ith peer 98e841a3fd36360 (writer)\n2020-04-03 13:15:44.064264 I | rafthttp: stopped HTTP pipelining with peer 98e841a3fd36360\n2020-04-03 13:15:44.064361 W | rafthttp: lost the TCP streaming connection with peer 98e841a3fd36360 (stream MsgApp v2 reader)\n2020-04-03 13:15:44.064378 E | rafthttp: failed to read 98e841a3fd36360 on stream MsgApp v2 (context canceled)\n2020-04-03 13:15:44.064387 I | rafthttp: peer 98e841a3fd36360 became inactive (message send to peer failed)\n2020-04-03 13:15:44.064398 I | rafthttp: stopped streaming with peer 98e841a3fd36360 (stream MsgApp v2 reader)\n2020-04-03 13:15:44.064477 W | rafthttp: lost the TCP streaming connection with peer 98e841a3fd36360 (stream Message reader)\n2020-04-03 13:15:44.064496 I | rafthttp: stopped streaming with peer 98e841a3fd36360 (stream Message reader)\n2020-04-03 13:15:44.064511 I | rafthttp: stopped peer 98e841a3fd36360\n2020-04-03 13:15:44.064522 I | rafthttp: stopping peer fe47439cb7b70816...\n2020-04-03 13:15:44.065032 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 writer)\n2020-04-03 13:15:44.065049 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:15:44.065567 I | rafthttp: closed the TCP streaming connection with peer fe47439cb7b70816 (stream Message writer)\n2020-04-03 13:15:44.065591 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (writer)\n2020-04-03 13:15:44.067623 I | rafthttp: stopped HTTP pipelining with peer fe47439cb7b70816\n2020-04-03 13:15:44.067759 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:15:44.067784 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream MsgApp v2 reader)\n2020-04-03 13:15:44.067857 W | rafthttp: lost the TCP streaming connection with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:15:44.067951 I | rafthttp: stopped streaming with peer fe47439cb7b70816 (stream Message reader)\n2020-04-03 13:15:44.067962 I | rafthttp: stopped peer fe47439cb7b70816\n
Apr 03 13:18:01.075 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): :127] Shutting down service account controller\nI0403 13:15:43.952494       1 clusterroleaggregation_controller.go:160] Shutting down ClusterRoleAggregator\nI0403 13:15:43.952514       1 expand_controller.go:165] Shutting down expand controller\nI0403 13:15:43.952528       1 node_lifecycle_controller.go:467] Shutting down node controller\nI0403 13:15:43.952542       1 service_controller.go:197] Shutting down service controller\nI0403 13:15:43.952558       1 namespace_controller.go:198] Shutting down namespace controller\nI0403 13:15:43.952574       1 certificate_controller.go:125] Shutting down certificate controller\nI0403 13:15:43.952587       1 certificate_controller.go:125] Shutting down certificate controller\nI0403 13:15:43.952602       1 pv_protection_controller.go:93] Shutting down PV protection controller\nI0403 13:15:43.952695       1 graph_builder.go:336] stopped 109 of 109 monitors\nI0403 13:15:43.952707       1 graph_builder.go:337] GraphBuilder stopping\nI0403 13:15:43.952920       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952942       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952959       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952976       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.952992       1 resource_quota_controller.go:266] resource quota controller worker shutting down\nI0403 13:15:43.957671       1 horizontal.go:200] horizontal pod autoscaler controller worker shutting down\nI0403 13:15:43.957771       1 pv_controller_base.go:341] volume worker queue shutting down\nI0403 13:15:43.958612       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0403 13:15:43.958631       1 cronjob_controller.go:96] Shutting down CronJob Manager\nI0403 13:15:43.958641       1 tokens_controller.go:189] Shutting down\nI0403 13:15:43.958738       1 secure_serving.go:180] Stopped listening on [::]:10257\n
Apr 03 13:18:01.075 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 12:49:09.295570       1 certsync_controller.go:269] Starting CertSyncer\nI0403 12:49:09.305111       1 observer_polling.go:106] Starting file observer\nE0403 12:49:13.519060       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 12:49:13.519165       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 12:57:32.526134       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22433 (25357)\nW0403 13:05:56.536866       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25485 (28594)\nW0403 13:15:20.542340       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28744 (34561)\n
Apr 03 13:18:01.874 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 5.559309       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0403 13:15:37.143825       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0403 13:15:39.283958       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0403 13:15:40.984082       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0403 13:15:43.537792       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 13:15:43.563359       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.139.84 10.0.152.202]\nI0403 13:15:43.625348       1 healthz.go:184] [-]terminating failed: reason withheld\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/kube-apiserver-requestheader-reload ok\n[+]poststarthook/kube-apiserver-clientCA-reload ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]crd-informer-synced ok\n[+]crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n
Apr 03 13:18:01.874 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 12:49:09.615190       1 certsync_controller.go:269] Starting CertSyncer\nI0403 12:49:09.615232       1 observer_polling.go:106] Starting file observer\nW0403 12:56:59.542229       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22432 (25213)\nW0403 13:05:47.547472       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25343 (28552)\nW0403 13:15:08.553303       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28684 (34475)\n
Apr 03 13:18:02.674 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): /v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.336304       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.340414       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:08.340590       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 12:49:13.535860       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nW0403 13:12:01.821566       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22199 (31868)\nW0403 13:14:50.793105       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 21837 (34762)\nW0403 13:14:50.793239       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21837 (34762)\nW0403 13:14:50.794283       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21845 (34762)\nE0403 13:15:43.530929       1 server.go:259] lost master\nI0403 13:15:43.531245       1 secure_serving.go:180] Stopped listening on [::]:10251\nI0403 13:15:43.531320       1 serving.go:88] Shutting down DynamicLoader\n