Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-04-03 21:37
Elapsed: 1h13m
Work namespace: ci-op-ir14c5q6
Refs: release-4.1:514189df
      812:8d0c3f82
Pod: 2f72878b-75f3-11ea-b63e-0a58ac1057a1
Repo: openshift/cluster-kube-apiserver-operator
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (36m25s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
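
As a sanity check, a minimal Go sketch (illustrative only, not part of the job output) confirming that the ginkgo focus regex in the command above selects the failed test by name; the regex and test name are copied from this report, and Go's regexp package treats \s as whitespace and \- as a literal hyphen:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus regex taken verbatim from the reproduction command above.
	focus := regexp.MustCompile(`openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$`)

	// Name of the failed test from this report.
	name := "openshift-tests Monitor cluster while tests execute"

	fmt.Println(focus.MatchString(name)) // prints: true
}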
208 error level events were detected during this test run:

Apr 03 22:10:56.940 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7497f89f84-vf4gn node/ip-10-0-130-109.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): r-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0403 22:07:44.622200       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0403 22:07:44.647745       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-156-6.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-156-6.us-east-2.compute.internal container=\"kube-apiserver-6\" is not ready" to ""\nI0403 22:07:45.793844       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-156-6.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nI0403 22:10:56.163635       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:10:56.163783       1 builder.go:217] server exited\n
Apr 03 22:12:32.269 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6fbc44d869-5kb2f node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): amespace ended with: too old resource version: 9553 (12179)\nW0403 22:07:30.937671       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 4961 (13423)\nW0403 22:07:30.946348       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 4949 (13435)\nW0403 22:07:30.962231       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 4632 (12179)\nW0403 22:07:30.973859       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12731 (12767)\nW0403 22:07:30.974021       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 10464 (13965)\nW0403 22:07:31.025666       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10195 (12179)\nW0403 22:07:31.025838       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13475 (13965)\nW0403 22:07:31.025954       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 4806 (13965)\nW0403 22:07:31.035791       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 4961 (13423)\nW0403 22:07:31.293124       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 12367 (14134)\nI0403 22:12:31.325793       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:12:31.325976       1 builder.go:217] server exited\n
Apr 03 22:14:15.801 E ns/openshift-machine-api pod/machine-api-operator-785888557f-8bdtj node/ip-10-0-130-109.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 22:14:15.850 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-76649885bd-n2wtc node/ip-10-0-130-109.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ConfigMap ended with: too old resource version: 13426 (13965)\nW0403 22:07:31.011575       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 9553 (12179)\nW0403 22:07:31.011784       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13426 (13965)\nW0403 22:07:31.139540       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10508 (12179)\nW0403 22:07:31.148050       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 8605 (13423)\nW0403 22:07:31.220310       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 11753 (14134)\nW0403 22:07:31.248132       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 11800 (14134)\nW0403 22:07:31.330603       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 11765 (14135)\nW0403 22:12:39.825254       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12861 (12864)\nW0403 22:13:05.972807       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12802 (12867)\nW0403 22:13:21.038443       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14137 (16442)\nI0403 22:14:14.846767       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:14:14.846855       1 leaderelection.go:65] leaderelection lost\n
Apr 03 22:14:34.732 E ns/openshift-machine-api pod/machine-api-controllers-b5498bc94-6psq9 node/ip-10-0-129-70.us-east-2.compute.internal container=machine-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:14:34.732 E ns/openshift-machine-api pod/machine-api-controllers-b5498bc94-6psq9 node/ip-10-0-129-70.us-east-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:14:34.732 E ns/openshift-machine-api pod/machine-api-controllers-b5498bc94-6psq9 node/ip-10-0-129-70.us-east-2.compute.internal container=nodelink-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:16:11.353 E ns/openshift-cluster-machine-approver pod/machine-approver-8677c48dd-dprz2 node/ip-10-0-130-109.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]\nE0403 22:14:13.610855       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 22:14:13.615745       1 reflector.go:322] github.com/openshift/cluster-machine-approver/main.go:185: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=13453&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 22:14:14.616401       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 03 22:16:34.203 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-55f9mptt2 node/ip-10-0-129-70.us-east-2.compute.internal container=operator container exited with code 2 (Error): : Watch close - *v1.Deployment total 0 items received\nW0403 22:13:44.800519       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 13202 (13896)\nI0403 22:13:45.800751       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0403 22:14:10.792798       1 wrap.go:47] GET /metrics: (5.070089ms) 200 [Prometheus/2.7.2 10.131.0.9:53606]\nI0403 22:14:10.794316       1 wrap.go:47] GET /metrics: (2.643981ms) 200 [Prometheus/2.7.2 10.129.2.9:36592]\nI0403 22:14:40.793199       1 wrap.go:47] GET /metrics: (5.501741ms) 200 [Prometheus/2.7.2 10.131.0.9:53606]\nI0403 22:14:40.793928       1 wrap.go:47] GET /metrics: (2.294479ms) 200 [Prometheus/2.7.2 10.129.2.9:36592]\nI0403 22:15:10.792614       1 wrap.go:47] GET /metrics: (4.871056ms) 200 [Prometheus/2.7.2 10.131.0.9:53606]\nI0403 22:15:10.795956       1 wrap.go:47] GET /metrics: (2.779212ms) 200 [Prometheus/2.7.2 10.129.2.9:36592]\nI0403 22:15:40.793546       1 wrap.go:47] GET /metrics: (5.826398ms) 200 [Prometheus/2.7.2 10.131.0.9:53606]\nI0403 22:15:40.793776       1 wrap.go:47] GET /metrics: (2.19065ms) 200 [Prometheus/2.7.2 10.129.2.9:36592]\nI0403 22:15:42.753878       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.784847       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.785036       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.875499       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 22:15:42.903459       1 reflector.go:215] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0403 22:16:10.795874       1 wrap.go:47] GET /metrics: (5.413386ms) 200 [Prometheus/2.7.2 10.131.0.9:53606]\nI0403 22:16:10.797244       1 wrap.go:47] GET /metrics: (5.64966ms) 200 [Prometheus/2.7.2 10.129.2.9:36592]\n
Apr 03 22:16:36.815 E ns/openshift-image-registry pod/cluster-image-registry-operator-76b9bd9fd-zznc2 node/ip-10-0-156-6.us-east-2.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:16:39.197 E ns/openshift-monitoring pod/kube-state-metrics-65589774b8-hgttz node/ip-10-0-138-122.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 03 22:16:41.314 E ns/openshift-authentication-operator pod/authentication-operator-57c8f964f5-7dr92 node/ip-10-0-156-6.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:16:45.269 E ns/openshift-ingress pod/router-default-cf5b95c9b-nstc8 node/ip-10-0-138-122.us-east-2.compute.internal container=router container exited with code 2 (Error): 22:14:49.244930       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:15:14.221743       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:15:19.208158       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:15:35.740638       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:15:58.319969       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:03.314467       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:09.508346       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:14.506307       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:19.516007       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:24.511285       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:29.509759       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:34.523914       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:16:39.510654       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 22:16:46.688 E ns/openshift-monitoring pod/prometheus-adapter-867d86b847-dm8x4 node/ip-10-0-149-170.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 22:16:47.890 E ns/openshift-monitoring pod/telemeter-client-77c6d77b4-h2rfx node/ip-10-0-149-170.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 22:16:47.890 E ns/openshift-monitoring pod/telemeter-client-77c6d77b4-h2rfx node/ip-10-0-149-170.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 22:16:50.812 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6df98f46c9-z52n7 node/ip-10-0-130-109.us-east-2.compute.internal container=operator container exited with code 2 (Error): v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0403 22:16:23.355918       1 wrap.go:47] GET /metrics: (9.624881ms) 200 [Prometheus/2.7.2 10.131.0.9:36748]\nI0403 22:16:23.357043       1 wrap.go:47] GET /metrics: (2.452721ms) 200 [Prometheus/2.7.2 10.129.2.9:60316]\nI0403 22:16:39.161803       1 request.go:530] Throttling request took 166.563259ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 22:16:39.361829       1 request.go:530] Throttling request took 197.257634ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 22:16:39.391759       1 status_controller.go:160] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T21:57:36Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T22:16:39Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T21:58:16Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T21:57:35Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0403 22:16:39.397384       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"fb969009-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from True to False ("")\n
Apr 03 22:16:52.442 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator authentication is still updating: upgrading oauth-openshift from 0.0.1-2020-04-03-213718_openshift to 0.0.1-2020-04-03-213851_openshift\n* Cluster operator cluster-autoscaler is still updating\n* Cluster operator image-registry is still updating\n* Cluster operator marketplace is still updating\n* Cluster operator monitoring is still updating\n* Cluster operator node-tuning is still updating\n* Cluster operator openshift-samples is still updating\n* Cluster operator service-ca is still updating\n* Cluster operator service-catalog-controller-manager is still updating\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (209 of 350)
Apr 03 22:16:54.477 E ns/openshift-monitoring pod/node-exporter-sqls7 node/ip-10-0-138-122.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 22:17:02.688 E ns/openshift-console pod/downloads-65b4f746df-m4mlb node/ip-10-0-149-170.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 22:17:08.473 E ns/openshift-marketplace pod/certified-operators-55676cddcb-sn867 node/ip-10-0-149-170.us-east-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:17:08.509 E ns/openshift-console pod/downloads-65b4f746df-zpjdp node/ip-10-0-138-122.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 22:17:08.572 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-138-122.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 22:17:16.535 E ns/openshift-ingress pod/router-default-cf5b95c9b-9sd76 node/ip-10-0-149-170.us-east-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:17:16.905 E ns/openshift-cluster-node-tuning-operator pod/tuned-2ql64 node/ip-10-0-130-109.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ue\nI0403 22:16:32.645662   25419 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:32.647263   25419 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:32.814923   25419 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:16:34.223833   25419 openshift-tuned.go:435] Pod (openshift-service-ca-operator/service-ca-operator-59c4ff5d7d-7gn88) labels changed node wide: true\nI0403 22:16:37.645677   25419 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:37.647095   25419 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:37.810100   25419 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:16:53.819039   25419 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operator-778ccc4c8d-htpcx) labels changed node wide: true\nI0403 22:16:57.645648   25419 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:57.647403   25419 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:57.806248   25419 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:17:07.471233   25419 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-6cf84d9b46-rx2kh) labels changed node wide: true\nI0403 22:17:07.645664   25419 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:17:07.647141   25419 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:17:07.765941   25419 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:17:14.347340   25419 openshift-tuned.go:435] Pod (openshift-console-operator/console-operator-56898f6769-5dcdw) labels changed node wide: true\n
Apr 03 22:17:27.653 E ns/openshift-console-operator pod/console-operator-788db6876c-j5cqf node/ip-10-0-156-6.us-east-2.compute.internal container=console-operator container exited with code 255 (Error):  and admitted, host: console-openshift-console.apps.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T22:15:58Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T22:15:58Z" level=info msg="finished syncing operator \"cluster\" (45.42µs) \n\n"\ntime="2020-04-03T22:15:59Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 22:15:59.411318936 +0000 UTC m=+862.470412835)"\ntime="2020-04-03T22:15:59Z" level=info msg="console is in a managed state."\ntime="2020-04-03T22:15:59Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T22:15:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T22:15:59Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T22:15:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T22:15:59Z" level=info msg=-----------------------\ntime="2020-04-03T22:15:59Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T22:15:59Z" level=info msg=-----------------------\ntime="2020-04-03T22:15:59Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T22:15:59Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T22:15:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T22:15:59Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T22:15:59Z" level=info msg="finished syncing operator \"cluster\" (41.795µs) \n\n"\nI0403 22:17:26.970314       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:17:26.970372       1 leaderelection.go:65] leaderelection lost\nI0403 22:17:26.972256       1 status_controller.go:200] Shutting down StatusSyncer-console\n
Apr 03 22:17:37.575 E ns/openshift-marketplace pod/redhat-operators-7ffdd95b6c-jv7kj node/ip-10-0-138-122.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 22:17:50.997 E ns/openshift-operator-lifecycle-manager pod/packageserver-8978db678-m8x2k node/ip-10-0-156-6.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:17:52.596 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-170.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 22:17:54.339 E ns/openshift-cluster-node-tuning-operator pod/tuned-rrhrf node/ip-10-0-129-70.us-east-2.compute.internal container=tuned container exited with code 143 (Error): compute.internal) labels changed node wide: false\nI0403 22:16:35.270746   24721 openshift-tuned.go:435] Pod (openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-55f9mptt2) labels changed node wide: true\nI0403 22:16:39.514322   24721 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:39.516190   24721 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:39.640748   24721 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:16:42.742343   24721 openshift-tuned.go:435] Pod (openshift-kube-apiserver/kube-apiserver-ip-10-0-129-70.us-east-2.compute.internal) labels changed node wide: true\nI0403 22:16:44.513997   24721 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:44.515551   24721 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:44.649392   24721 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:16:48.906888   24721 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal) labels changed node wide: true\nI0403 22:16:49.513935   24721 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:16:49.515518   24721 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:16:49.682067   24721 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:17:10.708698   24721 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-7-ip-10-0-129-70.us-east-2.compute.internal) labels changed node wide: false\nI0403 22:17:20.664358   24721 openshift-tuned.go:435] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-129-70.us-east-2.compute.internal) labels changed node wide: true\n
Apr 03 22:17:54.520 E ns/openshift-monitoring pod/node-exporter-tnf7b node/ip-10-0-129-70.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 22:17:57.905 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-5d67b58c5c-g2cjs node/ip-10-0-129-70.us-east-2.compute.internal container=operator container exited with code 2 (Error): : forcing resync\nI0403 22:15:42.764065       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.767811       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.798942       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.801252       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.802621       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:15:42.874320       1 reflector.go:215] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0403 22:15:42.877891       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 22:15:44.518918       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:15:54.530130       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:16:04.541269       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:16:14.552328       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:16:24.566921       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:16:34.579354       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:16:44.589437       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 22:18:01.502 E ns/openshift-controller-manager pod/controller-manager-t7rm8 node/ip-10-0-129-70.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 22:18:09.102 E ns/openshift-service-ca pod/configmap-cabundle-injector-7f575f654c-9p9r9 node/ip-10-0-129-70.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:18:10.302 E ns/openshift-service-ca pod/service-serving-cert-signer-7c9578d765-7vznx node/ip-10-0-129-70.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:18:23.969 E ns/openshift-monitoring pod/node-exporter-tgzkw node/ip-10-0-136-48.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 22:18:27.672 E ns/openshift-cluster-node-tuning-operator pod/tuned-p4vj8 node/ip-10-0-138-122.us-east-2.compute.internal container=tuned container exited with code 143 (Error): d.go:326] Getting recommended profile...\nI0403 22:17:02.746139   10166 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:17:08.334342   10166 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-7fd97949f6-lt4jk) labels changed node wide: true\nI0403 22:17:12.616690   10166 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:17:12.618721   10166 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:17:12.727403   10166 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:17:15.547124   10166 openshift-tuned.go:435] Pod (openshift-ingress/router-default-76f8bd8689-7hrcq) labels changed node wide: true\nI0403 22:17:17.616679   10166 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:17:17.618045   10166 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:17:17.728296   10166 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:17:38.583170   10166 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-7ffdd95b6c-jv7kj) labels changed node wide: true\nI0403 22:17:41.612674   10166 openshift-tuned.go:691] Lowering resyncPeriod to 52\nI0403 22:17:42.616674   10166 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:17:42.618025   10166 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:17:42.743218   10166 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:17:52.704183   10166 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 22:17:52.709645   10166 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 22:17:52.709664   10166 openshift-tuned.go:722] Increasing resyncPeriod to 104\n
Apr 03 22:18:54.751 E ns/openshift-monitoring pod/node-exporter-rs249 node/ip-10-0-149-170.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 22:19:01.481 E ns/openshift-controller-manager pod/controller-manager-9r5sc node/ip-10-0-130-109.us-east-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:19:06.934 E ns/openshift-console pod/console-7666ff6dbb-kv5xq node/ip-10-0-156-6.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 22:06:34 cmd/main: cookies are secure!\n2020/04/3 22:06:34 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 22:06:34 cmd/main: using TLS\n
Apr 03 22:19:18.548 E ns/openshift-console pod/console-7666ff6dbb-wv4ck node/ip-10-0-129-70.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 22:05:26 cmd/main: cookies are secure!\n2020/04/3 22:05:26 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 22:05:36 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 22:05:46 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 22:05:56 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 22:06:06 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 22:06:16 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 22:06:16 cmd/main: using TLS\n
Apr 03 22:21:04.815 E ns/openshift-sdn pod/sdn-controller-h6bfb node/ip-10-0-129-70.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 21:57:01.650152       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 22:21:06.314 E ns/openshift-sdn pod/ovs-2nkgh node/ip-10-0-156-6.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 7:27.274Z|00341|bridge|INFO|bridge br0: deleted interface veth4aaea8da on port 16\n2020-04-03T22:17:57.875Z|00342|connmgr|INFO|br0<->unix#796: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:17:57.922Z|00343|connmgr|INFO|br0<->unix#799: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:17:57.952Z|00344|bridge|INFO|bridge br0: deleted interface veth9a64fafd on port 52\n2020-04-03T22:18:04.417Z|00345|bridge|INFO|bridge br0: added interface vethe7edef43 on port 57\n2020-04-03T22:18:04.451Z|00346|connmgr|INFO|br0<->unix#805: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:18:04.491Z|00347|connmgr|INFO|br0<->unix#808: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:19:06.364Z|00348|connmgr|INFO|br0<->unix#818: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:19:06.409Z|00349|connmgr|INFO|br0<->unix#821: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:19:06.441Z|00350|bridge|INFO|bridge br0: deleted interface veth93ab8f60 on port 32\n2020-04-03T22:19:12.387Z|00351|bridge|INFO|bridge br0: added interface veth9472fcd5 on port 58\n2020-04-03T22:19:12.420Z|00352|connmgr|INFO|br0<->unix#824: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:19:12.457Z|00353|connmgr|INFO|br0<->unix#827: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:19:48.855Z|00354|connmgr|INFO|br0<->unix#833: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:19:48.886Z|00355|connmgr|INFO|br0<->unix#836: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:19:48.909Z|00356|bridge|INFO|bridge br0: deleted interface vethd16d4907 on port 27\n2020-04-03T22:19:59.870Z|00357|bridge|INFO|bridge br0: added interface veth9941b044 on port 59\n2020-04-03T22:19:59.899Z|00358|connmgr|INFO|br0<->unix#839: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:19:59.939Z|00359|connmgr|INFO|br0<->unix#842: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:21:03.422Z|00360|bridge|INFO|bridge br0: deleted interface vethe33dbb47 on port 3\n2020-04-03T22:21:03.427Z|00361|bridge|WARN|could not open network device vethe33dbb47 (No such device)\n
Apr 03 22:21:47.178 E ns/openshift-sdn pod/ovs-p87wf node/ip-10-0-138-122.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 4-03T22:17:21.713Z|00138|bridge|INFO|bridge br0: added interface vethd15c7fd7 on port 20\n2020-04-03T22:17:21.756Z|00139|connmgr|INFO|br0<->unix#323: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:17:21.806Z|00140|connmgr|INFO|br0<->unix#326: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:17:36.802Z|00141|connmgr|INFO|br0<->unix#329: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:17:36.850Z|00142|connmgr|INFO|br0<->unix#332: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:17:36.880Z|00143|bridge|INFO|bridge br0: deleted interface vethded877e1 on port 8\n2020-04-03T22:19:13.029Z|00144|connmgr|INFO|br0<->unix#344: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:19:13.056Z|00145|connmgr|INFO|br0<->unix#347: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:19:13.079Z|00146|bridge|INFO|bridge br0: deleted interface vethaa826bbf on port 5\n2020-04-03T22:19:26.882Z|00147|bridge|INFO|bridge br0: added interface vethb4599535 on port 21\n2020-04-03T22:19:26.910Z|00148|connmgr|INFO|br0<->unix#354: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:19:26.947Z|00149|connmgr|INFO|br0<->unix#357: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:20:42.542Z|00150|connmgr|INFO|br0<->unix#366: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:20:42.574Z|00151|connmgr|INFO|br0<->unix#369: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:20:42.595Z|00152|bridge|INFO|bridge br0: deleted interface vethf86f581e on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T22:20:42.589Z|00025|jsonrpc|WARN|unix#244: receive error: Connection reset by peer\n2020-04-03T22:20:42.589Z|00026|reconnect|WARN|unix#244: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T22:20:56.875Z|00153|bridge|INFO|bridge br0: added interface veth77ef40c9 on port 22\n2020-04-03T22:20:56.912Z|00154|connmgr|INFO|br0<->unix#375: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:20:56.948Z|00155|connmgr|INFO|br0<->unix#378: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 22:21:53.194 E ns/openshift-sdn pod/sdn-crjft node/ip-10-0-138-122.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 10.0.138.122:9101 10.0.149.170:9101 10.0.156.6:9101]\nI0403 22:21:52.156839    2227 roundrobin.go:240] Delete endpoint 10.0.149.170:9101 for service "openshift-sdn/sdn:metrics"\nI0403 22:21:52.175491    2227 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:21:52.182528    2227 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.129.70:9101 10.0.130.109:9101 10.0.136.48:9101 10.0.149.170:9101 10.0.156.6:9101]\nI0403 22:21:52.182558    2227 roundrobin.go:240] Delete endpoint 10.0.138.122:9101 for service "openshift-sdn/sdn:metrics"\ninterrupt: Gracefully shutting down ...\nI0403 22:21:52.275459    2227 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:21:52.347691    2227 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:21:52.347721    2227 proxier.go:346] userspace syncProxyRules took 57.613384ms\nI0403 22:21:52.375449    2227 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:21:52.476879    2227 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:21:52.566403    2227 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:21:52.566428    2227 proxier.go:346] userspace syncProxyRules took 80.748669ms\nI0403 22:21:52.575454    2227 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:21:52.680939    2227 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 22:21:52.680996    2227 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 22:22:24.142 E ns/openshift-sdn pod/ovs-pjvzw node/ip-10-0-149-170.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 9:09.545Z|00156|connmgr|INFO|br0<->unix#386: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:21:27.698Z|00157|connmgr|INFO|br0<->unix#406: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:21:27.730Z|00158|connmgr|INFO|br0<->unix#409: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:21:27.753Z|00159|bridge|INFO|bridge br0: deleted interface veth8e0da829 on port 3\n2020-04-03T22:21:44.429Z|00160|connmgr|INFO|br0<->unix#418: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T22:21:44.510Z|00161|connmgr|INFO|br0<->unix#424: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:44.534Z|00162|connmgr|INFO|br0<->unix#427: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:44.858Z|00163|connmgr|INFO|br0<->unix#430: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:44.888Z|00164|connmgr|INFO|br0<->unix#433: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:44.910Z|00165|connmgr|INFO|br0<->unix#436: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:44.935Z|00166|connmgr|INFO|br0<->unix#439: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:44.958Z|00167|connmgr|INFO|br0<->unix#442: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:44.981Z|00168|connmgr|INFO|br0<->unix#445: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:45.004Z|00169|connmgr|INFO|br0<->unix#448: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:45.039Z|00170|connmgr|INFO|br0<->unix#451: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:45.099Z|00171|bridge|INFO|bridge br0: added interface vethf01bea30 on port 26\n2020-04-03T22:21:45.109Z|00172|connmgr|INFO|br0<->unix#454: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:45.141Z|00173|connmgr|INFO|br0<->unix#459: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:45.142Z|00174|connmgr|INFO|br0<->unix#460: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:21:45.181Z|00175|connmgr|INFO|br0<->unix#463: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:21:45.557Z|00176|connmgr|INFO|br0<->unix#466: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n
Apr 03 22:22:32.157 E ns/openshift-multus pod/multus-bdvwt node/ip-10-0-149-170.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 22:22:33.261 E ns/openshift-sdn pod/sdn-v2t5g node/ip-10-0-149-170.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:31.930608   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.030674   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.130630   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.230611   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.330605   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.430629   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.530624   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.630595   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.730616   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.830629   46652 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:22:32.934957   46652 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 22:22:32.935015   46652 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 22:23:04.176 E ns/openshift-sdn pod/ovs-sjzcl node/ip-10-0-129-70.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): :02.848Z|00345|connmgr|INFO|br0<->unix#819: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:22:02.893Z|00346|connmgr|INFO|br0<->unix#822: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:22:19.470Z|00347|connmgr|INFO|br0<->unix#834: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T22:22:19.570Z|00348|connmgr|INFO|br0<->unix#840: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.596Z|00349|connmgr|INFO|br0<->unix#843: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.623Z|00350|connmgr|INFO|br0<->unix#846: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.648Z|00351|connmgr|INFO|br0<->unix#849: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.675Z|00352|connmgr|INFO|br0<->unix#852: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.704Z|00353|connmgr|INFO|br0<->unix#855: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.737Z|00354|connmgr|INFO|br0<->unix#858: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.768Z|00355|connmgr|INFO|br0<->unix#861: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:22:19.909Z|00356|connmgr|INFO|br0<->unix#864: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:22:19.936Z|00357|connmgr|INFO|br0<->unix#867: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:22:19.962Z|00358|connmgr|INFO|br0<->unix#870: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:22:19.986Z|00359|connmgr|INFO|br0<->unix#873: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:22:20.017Z|00360|connmgr|INFO|br0<->unix#876: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:22:20.045Z|00361|connmgr|INFO|br0<->unix#879: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:22:20.093Z|00362|connmgr|INFO|br0<->unix#882: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:22:20.128Z|00363|connmgr|INFO|br0<->unix#885: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:22:20.160Z|00364|connmgr|INFO|br0<->unix#888: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:22:20.187Z|00365|connmgr|INFO|br0<->unix#891: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 22:23:07.207 E ns/openshift-sdn pod/sdn-hkqk5 node/ip-10-0-129-70.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.175115   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.272811   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.372821   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.472794   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.572823   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.672870   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.772880   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.873038   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:05.972919   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:06.072847   67055 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:06.188294   67055 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 22:23:06.188470   67055 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 22:23:22.231 E ns/openshift-multus pod/multus-r6m4j node/ip-10-0-129-70.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 22:23:36.690 E ns/openshift-sdn pod/ovs-gntrj node/ip-10-0-130-109.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 29.119Z|00407|connmgr|INFO|br0<->unix#963: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.155Z|00408|connmgr|INFO|br0<->unix#966: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.184Z|00409|connmgr|INFO|br0<->unix#969: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.214Z|00410|connmgr|INFO|br0<->unix#972: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.243Z|00411|connmgr|INFO|br0<->unix#975: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.274Z|00412|connmgr|INFO|br0<->unix#978: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:21:29.373Z|00413|connmgr|INFO|br0<->unix#981: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:29.404Z|00414|connmgr|INFO|br0<->unix#984: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:29.428Z|00415|connmgr|INFO|br0<->unix#987: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:29.456Z|00416|connmgr|INFO|br0<->unix#990: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:29.486Z|00417|connmgr|INFO|br0<->unix#993: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:29.516Z|00418|connmgr|INFO|br0<->unix#996: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:29.545Z|00419|connmgr|INFO|br0<->unix#999: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:29.580Z|00420|connmgr|INFO|br0<->unix#1002: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:21:29.624Z|00421|connmgr|INFO|br0<->unix#1005: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:21:29.650Z|00422|connmgr|INFO|br0<->unix#1008: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:22:09.486Z|00423|connmgr|INFO|br0<->unix#1014: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:22:09.524Z|00424|bridge|INFO|bridge br0: deleted interface veth3132aa1a on port 14\n2020-04-03T22:22:23.906Z|00425|bridge|INFO|bridge br0: added interface vethe84c7f09 on port 66\n2020-04-03T22:22:23.944Z|00426|connmgr|INFO|br0<->unix#1017: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:22:23.987Z|00427|connmgr|INFO|br0<->unix#1020: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 22:23:39.707 E ns/openshift-sdn pod/sdn-kwcrk node/ip-10-0-130-109.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:37.685028   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:37.784988   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:37.885002   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:37.985051   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.085007   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.185009   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.285048   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.384944   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.485031   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.585051   63923 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:23:38.693686   63923 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 22:23:38.693843   63923 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 22:23:47.334 E ns/openshift-operator-lifecycle-manager pod/packageserver-8978db678-kjctq node/ip-10-0-156-6.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:23:48.907 E ns/openshift-operator-lifecycle-manager pod/packageserver-c684d8477-7wwnx node/ip-10-0-129-70.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:23:49.743 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7f447d9b5b-76qv6 node/ip-10-0-130-109.us-east-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 03 22:24:01.641 E ns/openshift-multus pod/multus-6sb4v node/ip-10-0-136-48.us-east-2.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:24:09.630 E ns/openshift-sdn pod/ovs-8p5sr node/ip-10-0-136-48.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): -03T22:20:28.921Z|00158|connmgr|INFO|br0<->unix#405: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:20:28.948Z|00159|connmgr|INFO|br0<->unix#408: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:20:28.970Z|00160|bridge|INFO|bridge br0: deleted interface veth32bd4a47 on port 3\n2020-04-03T22:20:36.605Z|00161|bridge|INFO|bridge br0: added interface veth8971b0eb on port 26\n2020-04-03T22:20:36.650Z|00162|connmgr|INFO|br0<->unix#411: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:20:36.691Z|00163|connmgr|INFO|br0<->unix#414: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:20:46.071Z|00164|connmgr|INFO|br0<->unix#423: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T22:20:46.167Z|00165|connmgr|INFO|br0<->unix#429: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:20:46.192Z|00166|connmgr|INFO|br0<->unix#432: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:20:46.216Z|00167|connmgr|INFO|br0<->unix#435: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:20:46.244Z|00168|connmgr|INFO|br0<->unix#438: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T22:20:46.482Z|00169|connmgr|INFO|br0<->unix#441: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:20:46.507Z|00170|connmgr|INFO|br0<->unix#444: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:20:46.530Z|00171|connmgr|INFO|br0<->unix#447: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:20:46.553Z|00172|connmgr|INFO|br0<->unix#450: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:20:46.585Z|00173|connmgr|INFO|br0<->unix#453: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:20:46.614Z|00174|connmgr|INFO|br0<->unix#456: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:20:46.643Z|00175|connmgr|INFO|br0<->unix#459: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:20:46.671Z|00176|connmgr|INFO|br0<->unix#462: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T22:20:46.694Z|00177|connmgr|INFO|br0<->unix#465: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T22:20:46.721Z|00178|connmgr|INFO|br0<->unix#468: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 22:24:20.706 E ns/openshift-sdn pod/sdn-7swhb node/ip-10-0-136-48.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:18.725869   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:18.825875   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:18.925851   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.025850   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.125878   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.225902   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.325889   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.425913   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.525897   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.625917   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 22:24:19.626010   40773 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0403 22:24:19.626021   40773 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
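The two SDN excerpts above (and the ones from 22:31 further down) repeat the same pattern: while the openvswitch pod on the node is being replaced, the sdn container keeps probing the ovsdb-server unix socket at /var/run/openvswitch/db.sock, logs "unable to reconnect" on every failed dial, and finally exits so the kubelet restarts it. The following is a minimal sketch of that kind of unix-socket health probe in plain Go; the socket path and log wording come from the events above, while the interval, budget, and helper name are illustrative assumptions, not the actual openshift-sdn code.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeOVS is a hypothetical helper: it dials the ovsdb unix socket once and
// reports whether the server accepted the connection within the timeout.
func probeOVS(socketPath string, timeout time.Duration) error {
	conn, err := net.DialTimeout("unix", socketPath, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const socketPath = "/var/run/openvswitch/db.sock" // path seen in the log lines above

	// Retry roughly every 100ms (the cadence visible in the log timestamps)
	// within a fixed budget; both numbers here are assumptions for the sketch.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if err := probeOVS(socketPath, time.Second); err != nil {
			fmt.Printf("SDN healthcheck unable to reconnect to OVS server: %v\n", err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		fmt.Println("OVS server is healthy again")
		return
	}
	// Exiting non-zero lets the kubelet restart the container, which is why
	// these events are recorded as "container exited with code 255".
	fmt.Println("SDN healthcheck detected unhealthy OVS server, restarting")
	os.Exit(255)
}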
Apr 03 22:24:35.944 E ns/openshift-operator-lifecycle-manager pod/packageserver-9796bb987-gnxxb node/ip-10-0-130-109.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): emote error: tls: bad certificate\nI0403 22:23:58.858203       1 log.go:172] http: TLS handshake error from 10.130.0.1:51174: remote error: tls: bad certificate\nI0403 22:23:59.657327       1 log.go:172] http: TLS handshake error from 10.130.0.1:51180: remote error: tls: bad certificate\nI0403 22:24:00.056372       1 log.go:172] http: TLS handshake error from 10.130.0.1:51188: remote error: tls: bad certificate\nI0403 22:24:01.000057       1 wrap.go:47] GET /: (262.424µs) 200 [Go-http-client/2.0 10.128.0.1:59014]\nI0403 22:24:01.000325       1 wrap.go:47] GET /: (507.029µs) 200 [Go-http-client/2.0 10.128.0.1:59014]\nI0403 22:24:01.409457       1 wrap.go:47] GET /: (324.321µs) 200 [Go-http-client/2.0 10.130.0.1:46214]\nI0403 22:24:02.038480       1 wrap.go:47] GET /healthz: (2.193806ms) 200 [kube-probe/1.13+ 10.129.0.1:46670]\nI0403 22:24:02.350718       1 wrap.go:47] GET /healthz: (129.52µs) 200 [kube-probe/1.13+ 10.129.0.1:46674]\nI0403 22:24:02.456653       1 log.go:172] http: TLS handshake error from 10.130.0.1:51220: remote error: tls: bad certificate\nI0403 22:24:03.605715       1 log.go:172] http: TLS handshake error from 10.129.0.1:46684: remote error: tls: bad certificate\nI0403 22:24:04.056527       1 log.go:172] http: TLS handshake error from 10.130.0.1:51236: remote error: tls: bad certificate\nI0403 22:24:04.456666       1 log.go:172] http: TLS handshake error from 10.130.0.1:51240: remote error: tls: bad certificate\nI0403 22:24:04.743981       1 wrap.go:47] GET /: (232.605µs) 200 [Go-http-client/2.0 10.129.0.1:40240]\nI0403 22:24:04.744013       1 wrap.go:47] GET /: (126.05µs) 200 [Go-http-client/2.0 10.130.0.1:46214]\nI0403 22:24:04.744190       1 wrap.go:47] GET /: (108.907µs) 200 [Go-http-client/2.0 10.128.0.1:59014]\nI0403 22:24:04.744314       1 wrap.go:47] GET /: (101.954µs) 200 [Go-http-client/2.0 10.129.0.1:40240]\nI0403 22:24:04.744385       1 wrap.go:47] GET /: (649.181µs) 200 [Go-http-client/2.0 10.130.0.1:46214]\nI0403 22:24:04.800688       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 03 22:24:55.032 E ns/openshift-multus pod/multus-q6v4v node/ip-10-0-130-109.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 22:24:55.474 E ns/openshift-operator-lifecycle-manager pod/packageserver-9796bb987-rlm29 node/ip-10-0-129-70.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): I0403 22:24:21.357918       1 wrap.go:47] GET /: (314.164µs) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:21.359383       1 wrap.go:47] GET /: (196.89µs) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:22.054774       1 wrap.go:47] GET /: (188.727µs) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:22.057631       1 wrap.go:47] GET /: (2.056467ms) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:22.057804       1 wrap.go:47] GET /: (2.066238ms) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:22.057998       1 wrap.go:47] GET /: (2.167399ms) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:22.456345       1 log.go:172] http: TLS handshake error from 10.130.0.1:48658: remote error: tls: bad certificate\nI0403 22:24:23.256375       1 log.go:172] http: TLS handshake error from 10.130.0.1:48666: remote error: tls: bad certificate\nI0403 22:24:23.711871       1 wrap.go:47] GET /healthz: (110.936µs) 200 [kube-probe/1.13+ 10.128.0.1:50680]\nI0403 22:24:23.792027       1 log.go:172] http: TLS handshake error from 10.129.0.1:34400: remote error: tls: bad certificate\nI0403 22:24:24.460672       1 wrap.go:47] GET /: (193.889µs) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:24.461035       1 wrap.go:47] GET /: (111.286µs) 200 [Go-http-client/2.0 10.129.0.1:55984]\nI0403 22:24:24.461334       1 wrap.go:47] GET /: (634.087µs) 200 [Go-http-client/2.0 10.130.0.1:43400]\nI0403 22:24:24.461948       1 wrap.go:47] GET /: (356.439µs) 200 [Go-http-client/2.0 10.130.0.1:43400]\nI0403 22:24:24.462665       1 wrap.go:47] GET /: (180.823µs) 200 [Go-http-client/2.0 10.128.0.1:47042]\nI0403 22:24:24.462927       1 wrap.go:47] GET /: (128.855µs) 200 [Go-http-client/2.0 10.128.0.1:47042]\nI0403 22:24:24.462932       1 wrap.go:47] GET /: (154.401µs) 200 [Go-http-client/2.0 10.128.0.1:47042]\nI0403 22:24:24.463066       1 log.go:172] http: TLS handshake error from 10.130.0.1:48674: remote error: tls: bad certificate\nI0403 22:24:24.510453       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 03 22:25:15.098 E ns/openshift-machine-config-operator pod/machine-config-operator-6ccd96db54-9l5mc node/ip-10-0-130-109.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 22:28:23.654 E ns/openshift-machine-config-operator pod/machine-config-controller-65c5f87ddc-vxbc7 node/ip-10-0-156-6.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 22:30:19.943 E ns/openshift-machine-config-operator pod/machine-config-server-fx9f9 node/ip-10-0-156-6.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 22:30:27.342 E ns/openshift-machine-config-operator pod/machine-config-server-dn4dk node/ip-10-0-129-70.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 22:30:31.986 E ns/openshift-ingress pod/router-default-76f8bd8689-xfww6 node/ip-10-0-136-48.us-east-2.compute.internal container=router container exited with code 2 (Error): pt(s).\nI0403 22:23:07.244114       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:23:13.531415       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:23:39.744600       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:23:46.118860       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:23:51.094502       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:24:04.781656       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:24:09.775872       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:24:20.756949       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:24:25.463127       1 logs.go:49] http: TLS handshake error from 10.128.2.1:37742: EOF\nI0403 22:24:25.759005       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 22:24:30.731941       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0403 22:29:43.093809       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0403 22:30:29.311931       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 22:30:34.581 E ns/openshift-image-registry pod/image-registry-b76944cc7-s6ghp node/ip-10-0-136-48.us-east-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:30:35.116 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-d757795b6-qvgcz node/ip-10-0-156-6.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): duler-operator", Name:"openshift-kube-scheduler-operator", UID:"fb41d5c6-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 7"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" to "Available: 3 nodes are active; 3 nodes are at revision 7"\nI0403 22:18:52.534636       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"fb41d5c6-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-scheduler: cause by changes in data.status\nI0403 22:18:58.339834       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"fb41d5c6-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-129-70.us-east-2.compute.internal -n openshift-kube-scheduler because it was missing\nW0403 22:22:49.910623       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17810 (23242)\nW0403 22:22:52.116973       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17816 (23251)\nW0403 22:27:56.581691       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23490 (25226)\nI0403 22:30:30.244039       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:30:30.244186       1 leaderelection.go:65] leaderelection lost\n
Apr 03 22:30:37.515 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-54875f9667-gcp22 node/ip-10-0-156-6.us-east-2.compute.internal container=operator container exited with code 2 (Error): tion.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 22:29:26.925112       1 request.go:530] Throttling request took 196.652204ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 22:29:43.288845       1 wrap.go:47] GET /metrics: (8.332454ms) 200 [Prometheus/2.7.2 10.131.0.16:50570]\nI0403 22:29:43.289903       1 wrap.go:47] GET /metrics: (6.437459ms) 200 [Prometheus/2.7.2 10.129.2.23:56258]\nI0403 22:29:46.725084       1 request.go:530] Throttling request took 162.719027ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 22:29:46.925105       1 request.go:530] Throttling request took 196.218832ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 22:30:06.725151       1 request.go:530] Throttling request took 159.984961ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 22:30:06.925181       1 request.go:530] Throttling request took 195.997382ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 22:30:13.286406       1 wrap.go:47] GET /metrics: (6.08042ms) 200 [Prometheus/2.7.2 10.131.0.16:50570]\nI0403 22:30:13.286981       1 wrap.go:47] GET /metrics: (3.765929ms) 200 [Prometheus/2.7.2 10.129.2.23:56258]\nI0403 22:30:26.725152       1 request.go:530] Throttling request took 161.342541ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 22:30:26.925163       1 request.go:530] Throttling request took 194.391123ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\n
Apr 03 22:30:39.113 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-59cdh786v node/ip-10-0-156-6.us-east-2.compute.internal container=operator container exited with code 2 (Error): 377       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nI0403 22:27:34.787664       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:27:34.787663       1 reflector.go:215] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0403 22:27:34.789492       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:27:34.789552       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 22:27:34.792614       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 22:27:43.595308       1 wrap.go:47] GET /metrics: (6.745459ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:27:43.597587       1 wrap.go:47] GET /metrics: (6.275489ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\nI0403 22:28:13.595685       1 wrap.go:47] GET /metrics: (7.182762ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:28:13.595759       1 wrap.go:47] GET /metrics: (4.3902ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\nI0403 22:28:43.598783       1 wrap.go:47] GET /metrics: (7.517163ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\nI0403 22:28:43.600000       1 wrap.go:47] GET /metrics: (11.497221ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:29:13.594987       1 wrap.go:47] GET /metrics: (3.772278ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\nI0403 22:29:13.595115       1 wrap.go:47] GET /metrics: (6.627742ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:29:43.595274       1 wrap.go:47] GET /metrics: (6.75071ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:29:43.595479       1 wrap.go:47] GET /metrics: (4.263956ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\nI0403 22:30:13.595517       1 wrap.go:47] GET /metrics: (7.007387ms) 200 [Prometheus/2.7.2 10.131.0.16:50922]\nI0403 22:30:13.595975       1 wrap.go:47] GET /metrics: (4.653413ms) 200 [Prometheus/2.7.2 10.129.2.23:60032]\n
Apr 03 22:30:41.116 E ns/openshift-machine-api pod/machine-api-controllers-5fbbd5c95c-cdkgx node/ip-10-0-156-6.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 22:30:41.116 E ns/openshift-machine-api pod/machine-api-controllers-5fbbd5c95c-cdkgx node/ip-10-0-156-6.us-east-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 22:30:42.112 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6b869cfd6c-wxqx2 node/ip-10-0-156-6.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): apiserver-operator", Name:"kube-apiserver-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-129-70.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-129-70.us-east-2.compute.internal container=\"kube-apiserver-7\" is not ready"\nI0403 22:18:38.721594       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f8b26e14-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-129-70.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-129-70.us-east-2.compute.internal container=\"kube-apiserver-7\" is not ready" to ""\nW0403 22:23:25.904469       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17810 (23500)\nW0403 22:24:16.300258       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17816 (23814)\nW0403 22:25:00.301606       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21198 (24304)\nW0403 22:25:24.249007       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17771 (24433)\nW0403 22:30:06.306988       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24159 (25901)\nI0403 22:30:30.213141       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:30:30.213299       1 leaderelection.go:65] leaderelection lost\n
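The reflector warnings in this and the neighbouring operator excerpts ("watch of *v1.ConfigMap ended with: too old resource version: X (Y)") are emitted by client-go informers: when a watch falls behind what the apiserver still retains (expected while the apiservers roll out), the reflector logs the warning, re-lists, and carries on. Below is a minimal sketch of such an informer, assuming in-cluster credentials and a reasonably recent client-go; it illustrates the mechanism only and is not the operators' actual code.

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumes the process runs in-cluster, like the operators in these events.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The shared informer's reflector is what prints the "too old resource
	// version" warnings: when its watch expires it re-lists and resumes.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			cm := newObj.(*corev1.ConfigMap)
			log.Printf("observed update to configmap %s/%s", cm.Namespace, cm.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, cmInformer.HasSynced)

	// Shut down cleanly on SIGTERM, the same signal the operator logs show.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, os.Interrupt)
	<-sigCh
	close(stop)
}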
Apr 03 22:30:44.113 E ns/openshift-cluster-machine-approver pod/machine-approver-84cdd4575-dtgq4 node/ip-10-0-156-6.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0403 22:16:18.831297       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0403 22:16:18.831367       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0403 22:16:18.831480       1 main.go:183] Starting Machine Approver\nI0403 22:16:18.932311       1 main.go:107] CSR csr-qng78 added\nI0403 22:16:18.932343       1 main.go:110] CSR csr-qng78 is already approved\nI0403 22:16:18.932360       1 main.go:107] CSR csr-cw4jz added\nI0403 22:16:18.932368       1 main.go:110] CSR csr-cw4jz is already approved\nI0403 22:16:18.932408       1 main.go:107] CSR csr-fjlm9 added\nI0403 22:16:18.932419       1 main.go:110] CSR csr-fjlm9 is already approved\nI0403 22:16:18.932432       1 main.go:107] CSR csr-j7nkn added\nI0403 22:16:18.932441       1 main.go:110] CSR csr-j7nkn is already approved\nI0403 22:16:18.932459       1 main.go:107] CSR csr-p2l22 added\nI0403 22:16:18.932468       1 main.go:110] CSR csr-p2l22 is already approved\nI0403 22:16:18.932479       1 main.go:107] CSR csr-pkt6g added\nI0403 22:16:18.932488       1 main.go:110] CSR csr-pkt6g is already approved\nI0403 22:16:18.932506       1 main.go:107] CSR csr-q6m5d added\nI0403 22:16:18.932516       1 main.go:110] CSR csr-q6m5d is already approved\nI0403 22:16:18.932528       1 main.go:107] CSR csr-wvk9z added\nI0403 22:16:18.932536       1 main.go:110] CSR csr-wvk9z is already approved\nI0403 22:16:18.932548       1 main.go:107] CSR csr-xw2ph added\nI0403 22:16:18.932557       1 main.go:110] CSR csr-xw2ph is already approved\nI0403 22:16:18.932568       1 main.go:107] CSR csr-bcp8k added\nI0403 22:16:18.932577       1 main.go:110] CSR csr-bcp8k is already approved\nI0403 22:16:18.932589       1 main.go:107] CSR csr-dgt9k added\nI0403 22:16:18.932599       1 main.go:110] CSR csr-dgt9k is already approved\nI0403 22:16:18.932610       1 main.go:107] CSR csr-zxc7c added\nI0403 22:16:18.932619       1 main.go:110] CSR csr-zxc7c is already approved\n
Apr 03 22:31:00.898 E ns/openshift-console pod/downloads-6d55689f4b-tj9qp node/ip-10-0-136-48.us-east-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:31:06.113 E ns/openshift-operator-lifecycle-manager pod/packageserver-57dd4779b4-7b7hz node/ip-10-0-156-6.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): ving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T22:30:44Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:30:44Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:30:44Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:44Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:45Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:45Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:45Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:45Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:45Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:30:45Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:30:46Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T22:30:46Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\n
Apr 03 22:31:15.036 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 22:31:32.610 E ns/openshift-operator-lifecycle-manager pod/packageserver-57dd4779b4-fv8gr node/ip-10-0-129-70.us-east-2.compute.internal container=packageserver container exited with code 137 (Error):  GET /: (965.722µs) 200 [Go-http-client/2.0 10.128.0.1:53086]\nI0403 22:30:58.041951       1 log.go:172] http: TLS handshake error from 10.129.0.1:57630: remote error: tls: bad certificate\nI0403 22:30:58.593183       1 log.go:172] http: TLS handshake error from 10.128.0.1:53170: remote error: tls: bad certificate\nI0403 22:30:58.991112       1 log.go:172] http: TLS handshake error from 10.128.0.1:53172: remote error: tls: bad certificate\nI0403 22:30:59.791246       1 log.go:172] http: TLS handshake error from 10.128.0.1:53194: remote error: tls: bad certificate\nI0403 22:31:00.191078       1 log.go:172] http: TLS handshake error from 10.128.0.1:53196: remote error: tls: bad certificate\nI0403 22:31:01.004716       1 wrap.go:47] GET /: (186.404µs) 200 [Go-http-client/2.0 10.128.0.1:53086]\nI0403 22:31:01.005066       1 wrap.go:47] GET /: (701.65µs) 200 [Go-http-client/2.0 10.128.0.1:53086]\nI0403 22:31:01.560555       1 wrap.go:47] GET /: (475.988µs) 200 [Go-http-client/2.0 10.130.0.1:52742]\nI0403 22:31:01.560762       1 wrap.go:47] GET /: (775.678µs) 200 [Go-http-client/2.0 10.130.0.1:52742]\nI0403 22:31:01.562698       1 wrap.go:47] GET /: (180.216µs) 200 [Go-http-client/2.0 10.130.0.1:52742]\nI0403 22:31:01.843983       1 wrap.go:47] GET /: (335.811µs) 200 [Go-http-client/2.0 10.128.0.1:53086]\nI0403 22:31:01.843996       1 wrap.go:47] GET /: (340.249µs) 200 [Go-http-client/2.0 10.128.0.1:53086]\nI0403 22:31:01.845984       1 wrap.go:47] GET /: (101.76µs) 200 [Go-http-client/2.0 10.129.0.1:57546]\nI0403 22:31:01.846562       1 wrap.go:47] GET /: (103.889µs) 200 [Go-http-client/2.0 10.129.0.1:57546]\nI0403 22:31:01.901940       1 secure_serving.go:156] Stopped listening on [::]:5443\nI0403 22:31:07.533776       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:31:07.540944       1 reflector.go:337] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: Watch close - *v1alpha1.CatalogSource total 4 items received\n
Apr 03 22:31:42.674 E ns/openshift-operator-lifecycle-manager pod/packageserver-57dd4779b4-fkb6m node/ip-10-0-130-109.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): p: TLS handshake error from 10.128.0.1:40310: remote error: tls: bad certificate\nI0403 22:31:05.992349       1 log.go:172] http: TLS handshake error from 10.128.0.1:40314: remote error: tls: bad certificate\nI0403 22:31:06.030625       1 log.go:172] http: TLS handshake error from 10.128.0.1:40316: remote error: tls: bad certificate\nI0403 22:31:06.392335       1 log.go:172] http: TLS handshake error from 10.128.0.1:40320: remote error: tls: bad certificate\nI0403 22:31:06.793457       1 log.go:172] http: TLS handshake error from 10.128.0.1:40322: remote error: tls: bad certificate\nI0403 22:31:07.106168       1 log.go:172] http: TLS handshake error from 10.129.0.1:57464: remote error: tls: bad certificate\nI0403 22:31:07.149615       1 log.go:172] http: TLS handshake error from 10.129.0.1:57466: remote error: tls: bad certificate\nI0403 22:31:07.654391       1 log.go:172] http: TLS handshake error from 10.128.0.1:40464: remote error: tls: bad certificate\nI0403 22:31:08.392084       1 log.go:172] http: TLS handshake error from 10.128.0.1:40570: remote error: tls: bad certificate\nI0403 22:31:10.547735       1 wrap.go:47] GET /: (2.989465ms) 200 [Go-http-client/2.0 10.129.0.1:51214]\nI0403 22:31:10.547770       1 wrap.go:47] GET /: (2.602239ms) 200 [Go-http-client/2.0 10.128.0.1:36668]\nI0403 22:31:10.548711       1 wrap.go:47] GET /: (3.564837ms) 200 [Go-http-client/2.0 10.128.0.1:36668]\nI0403 22:31:10.792645       1 log.go:172] http: TLS handshake error from 10.128.0.1:40708: remote error: tls: bad certificate\nI0403 22:31:11.192980       1 log.go:172] http: TLS handshake error from 10.128.0.1:40738: remote error: tls: bad certificate\nI0403 22:31:12.053025       1 wrap.go:47] GET /: (569.204µs) 200 [Go-http-client/2.0 10.129.0.1:51214]\nI0403 22:31:12.053376       1 wrap.go:47] GET /: (967.511µs) 200 [Go-http-client/2.0 10.129.0.1:51214]\nI0403 22:31:12.053517       1 wrap.go:47] GET /: (874.06µs) 200 [Go-http-client/2.0 10.129.0.1:51214]\nI0403 22:31:12.119702       1 secure_serving.go:156] Stopped listening on [::]:5443\n
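The packageserver excerpts in this run are dominated by "http: TLS handshake error ... remote error: tls: bad certificate". Go's net/http server prints that line when the connecting peer aborts the handshake, which is what callers still trusting the previous serving CA do while an aggregated API server's certificate is rotated mid-upgrade. The sketch below shows the general shape of serving on :5443 with a swappable certificate through tls.Config.GetCertificate; the file paths, types, and reload strategy are assumptions for illustration, not the packageserver's implementation.

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"sync"
)

// certStore is a hypothetical holder for a serving certificate that can be
// swapped at runtime (for example when an operator rotates the secret on disk).
type certStore struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

func (s *certStore) load(certFile, keyFile string) error {
	c, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	s.mu.Lock()
	s.cert = &c
	s.mu.Unlock()
	return nil
}

// get satisfies tls.Config.GetCertificate, so every new handshake is answered
// with the most recently loaded certificate.
func (s *certStore) get(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.cert, nil
}

func main() {
	store := &certStore{}
	// Paths are assumptions for the sketch; a real server would reload them
	// on a file-watch event or timer.
	if err := store.load("/var/run/serving-cert/tls.crt", "/var/run/serving-cert/tls.key"); err != nil {
		log.Fatal(err)
	}

	srv := &http.Server{
		Addr:      ":5443", // the port the packageserver logs show it listening on
		Handler:   http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) }),
		TLSConfig: &tls.Config{GetCertificate: store.get},
	}
	// Peers that reject the presented certificate (for example because they
	// still trust only the old CA) surface in the server log as:
	//   http: TLS handshake error from <addr>: remote error: tls: bad certificate
	log.Fatal(srv.ListenAndServeTLS("", ""))
}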
Apr 03 22:31:48.198 E clusteroperator/authentication changed Degraded to True: MultipleConditionsMatching: RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nRouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout
Apr 03 22:32:07.823 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
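Both clusteroperator events above record a Degraded=True condition caused by route lookups failing while the apiservers restart. Conditions like these live under .status.conditions of the clusteroperators.config.openshift.io resource; the sketch below reads them with client-go's dynamic client, assuming a kubeconfig at the default location and a recent client-go whose Get takes a context. It is an illustration for inspecting such conditions, not part of the test harness.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig in the default location; in-cluster config works too.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "config.openshift.io",
		Version:  "v1",
		Resource: "clusteroperators",
	}
	co, err := dyn.Resource(gvr).Get(context.TODO(), "authentication", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	conditions, _, err := unstructured.NestedSlice(co.Object, "status", "conditions")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range conditions {
		cond := c.(map[string]interface{})
		// Prints entries of the same shape as the events above,
		// e.g. Degraded=True: <reason>: <message>
		fmt.Printf("%v=%v: %v: %v\n", cond["type"], cond["status"], cond["reason"], cond["message"])
	}
}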
Apr 03 22:32:47.218 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): :16:01.430869       1 authentication.go:55] Authentication is disabled\nI0403 22:16:01.430929       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:16:01.432672       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:16:01.432630884 +0000 UTC))\nI0403 22:16:01.432821       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:16:01.432791239 +0000 UTC))\nI0403 22:16:01.432897       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:16:01.433052       1 serving.go:77] Starting DynamicLoader\nI0403 22:16:02.442693       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:16:02.543058       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:16:02.543202       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 22:17:38.714842       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 22:30:30.403983       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 17831 (26383)\nW0403 22:30:30.412884       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17839 (26384)\nE0403 22:31:07.185029       1 server.go:259] lost master\nI0403 22:31:07.185602       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 22:32:51.472 E ns/openshift-image-registry pod/node-ca-hltx2 node/ip-10-0-136-48.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:32:51.484 E ns/openshift-cluster-node-tuning-operator pod/tuned-8trqn node/ip-10-0-136-48.us-east-2.compute.internal container=tuned container exited with code 255 (Error): 089 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:35.623996   34089 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:30:35.780187   34089 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-5bc5546859-rgdqr) labels changed node wide: true\nI0403 22:30:40.507124   34089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:40.508722   34089 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:40.641478   34089 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:30:41.550268   34089 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-b76944cc7-s6ghp) labels changed node wide: true\nI0403 22:30:45.507089   34089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:45.508510   34089 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:45.619825   34089 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:31:01.941389   34089 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-hz2fx/foo-s7q4f) labels changed node wide: false\nI0403 22:31:02.003552   34089 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-hz2fx/foo-szlhc) labels changed node wide: true\nI0403 22:31:05.507112   34089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:31:05.508511   34089 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:31:05.618458   34089 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:31:11.536348   34089 openshift-tuned.go:435] Pod (openshift-console/downloads-6d55689f4b-tj9qp) labels changed node wide: true\nI0403 22:31:12.574921   34089 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 22:32:51.708 E ns/openshift-monitoring pod/node-exporter-tcktk node/ip-10-0-136-48.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:32:51.708 E ns/openshift-monitoring pod/node-exporter-tcktk node/ip-10-0-136-48.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:32:55.711 E ns/openshift-dns pod/dns-default-92vmj node/ip-10-0-136-48.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 22:32:55.711 E ns/openshift-dns pod/dns-default-92vmj node/ip-10-0-136-48.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:20:41.276Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:20:41.276Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:20:41.276Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.343996       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23899 (26382)\nW0403 22:30:30.348108       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 20804 (26382)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:32:56.082 E ns/openshift-sdn pod/sdn-7swhb node/ip-10-0-136-48.us-east-2.compute.internal container=sdn container exited with code 255 (Error): cle-manager/v1-packages-operators-coreos-com:"\nI0403 22:31:12.226018   45445 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:31:12.226036   45445 proxier.go:346] userspace syncProxyRules took 54.743186ms\nI0403 22:31:12.340594   45445 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.129.70:6443 10.0.130.109:6443]\nI0403 22:31:12.340617   45445 roundrobin.go:240] Delete endpoint 10.0.156.6:6443 for service "default/kubernetes:https"\nI0403 22:31:12.391400   45445 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:31:12.391418   45445 proxier.go:346] userspace syncProxyRules took 54.161657ms\nE0403 22:31:12.617170   45445 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 22:31:12.617271   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:12.654044   45445 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:31:12.654080   45445 proxier.go:346] userspace syncProxyRules took 154.795454ms\ninterrupt: Gracefully shutting down ...\nI0403 22:31:12.719780   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:12.817566   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:12.924428   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:13.017527   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:13.117546   45445 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 22:32:56.448 E ns/openshift-multus pod/multus-nv7zv node/ip-10-0-136-48.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:32:56.819 E ns/openshift-sdn pod/ovs-2d9v7 node/ip-10-0-136-48.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): itchd.log <==\n2020-04-03T22:30:30.023Z|00133|connmgr|INFO|br0<->unix#153: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.062Z|00134|bridge|INFO|bridge br0: deleted interface vetheb9b7178 on port 14\n2020-04-03T22:30:30.123Z|00135|connmgr|INFO|br0<->unix#156: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.172Z|00136|bridge|INFO|bridge br0: deleted interface vethb1473262 on port 6\n2020-04-03T22:30:30.236Z|00137|connmgr|INFO|br0<->unix#159: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.279Z|00138|bridge|INFO|bridge br0: deleted interface veth0ffe5793 on port 11\n2020-04-03T22:30:30.347Z|00139|connmgr|INFO|br0<->unix#162: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.405Z|00140|bridge|INFO|bridge br0: deleted interface vetha3988a26 on port 15\n2020-04-03T22:30:30.464Z|00141|connmgr|INFO|br0<->unix#165: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.504Z|00142|bridge|INFO|bridge br0: deleted interface veth9d449bad on port 3\n2020-04-03T22:30:30.584Z|00143|connmgr|INFO|br0<->unix#168: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.650Z|00144|bridge|INFO|bridge br0: deleted interface veth013ea0b4 on port 13\n2020-04-03T22:30:30.711Z|00145|connmgr|INFO|br0<->unix#171: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:30.765Z|00146|bridge|INFO|bridge br0: deleted interface vethd47d043e on port 5\n2020-04-03T22:30:59.869Z|00147|connmgr|INFO|br0<->unix#177: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:59.899Z|00148|bridge|INFO|bridge br0: deleted interface vethbfb6b8be on port 10\n2020-04-03T22:30:59.947Z|00149|connmgr|INFO|br0<->unix#180: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:59.978Z|00150|bridge|INFO|bridge br0: deleted interface veth66252e62 on port 7\n2020-04-03T22:31:00.022Z|00151|connmgr|INFO|br0<->unix#183: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:31:00.048Z|00152|bridge|INFO|bridge br0: deleted interface vethfa6393a8 on port 8\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 22:32:56.842 E ns/openshift-monitoring pod/node-exporter-pzjjl node/ip-10-0-156-6.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:32:56.842 E ns/openshift-monitoring pod/node-exporter-pzjjl node/ip-10-0-156-6.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:32:57.248 E ns/openshift-apiserver pod/apiserver-pg6fz node/ip-10-0-156-6.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): .svc:2379 <nil>}]\nI0403 22:30:55.220655       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:30:55.220745       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:30:55.220847       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:30:55.234194       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:30:55.535194       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 22:30:55.535325       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:30:55.535417       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:30:55.535471       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:30:55.549905       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 22:30:58.043065       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 22:31:07.208233       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 22:31:07.208432       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 22:31:07.208563       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 22:31:07.208589       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 22:31:07.208607       1 serving.go:88] Shutting down DynamicLoader\nI0403 22:31:07.210389       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 22:32:57.644 E ns/openshift-controller-manager pod/controller-manager-k4h8h node/ip-10-0-156-6.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 22:32:58.043 E ns/openshift-cluster-node-tuning-operator pod/tuned-p2n9x node/ip-10-0-156-6.us-east-2.compute.internal container=tuned container exited with code 255 (Error): 099021   53172 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:36.100936   53172 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:36.238402   53172 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:30:37.105296   53172 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-846fcf55fd-22zff) labels changed node wide: true\nI0403 22:30:41.099065   53172 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:41.100574   53172 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:41.219368   53172 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:30:41.342276   53172 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-5fbbd5c95c-cdkgx) labels changed node wide: true\nI0403 22:30:46.099067   53172 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:46.100996   53172 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:46.276432   53172 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:30:46.277444   53172 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5b845db78d-dvg9w) labels changed node wide: true\nI0403 22:30:51.099109   53172 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:30:51.100421   53172 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:30:51.212960   53172 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:31:06.707288   53172 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-57dd4779b4-7b7hz) labels changed node wide: true\n
Apr 03 22:32:58.445 E ns/openshift-sdn pod/ovs-lnzrq node/ip-10-0-156-6.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): ction reset by peer\n2020-04-03T22:30:34.106Z|00025|reconnect|WARN|unix#211: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T22:30:34.821Z|00192|connmgr|INFO|br0<->unix#256: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:34.853Z|00193|bridge|INFO|bridge br0: deleted interface veth9472fcd5 on port 4\n2020-04-03T22:30:35.209Z|00194|connmgr|INFO|br0<->unix#259: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:35.246Z|00195|bridge|INFO|bridge br0: deleted interface veth4adc61be on port 9\n2020-04-03T22:30:35.577Z|00196|connmgr|INFO|br0<->unix#262: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:30:35.603Z|00197|bridge|INFO|bridge br0: deleted interface vethe7edef43 on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T22:30:35.589Z|00026|jsonrpc|WARN|unix#225: receive error: Connection reset by peer\n2020-04-03T22:30:35.590Z|00027|reconnect|WARN|unix#225: connection dropped (Connection reset by peer)\n2020-04-03T22:30:47.481Z|00028|jsonrpc|WARN|unix#229: receive error: Connection reset by peer\n2020-04-03T22:30:47.481Z|00029|reconnect|WARN|unix#229: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T22:31:04.569Z|00198|connmgr|INFO|br0<->unix#268: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:31:04.603Z|00199|connmgr|INFO|br0<->unix#271: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:31:04.625Z|00200|bridge|INFO|bridge br0: deleted interface veth61806a28 on port 22\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T22:31:04.614Z|00030|jsonrpc|WARN|unix#232: receive error: Connection reset by peer\n2020-04-03T22:31:04.614Z|00031|reconnect|WARN|unix#232: connection dropped (Connection reset by peer)\n2020-04-03T22:31:04.618Z|00032|jsonrpc|WARN|unix#233: receive error: Connection reset by peer\n2020-04-03T22:31:04.618Z|00033|reconnect|WARN|unix#233: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 22:32:58.478 E ns/openshift-machine-config-operator pod/machine-config-daemon-l8zjr node/ip-10-0-136-48.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 22:32:58.846 E ns/openshift-multus pod/multus-6zlj6 node/ip-10-0-156-6.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:33:02.242 E ns/openshift-image-registry pod/node-ca-t5nbd node/ip-10-0-156-6.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:33:05.644 E ns/openshift-dns pod/dns-default-llhg6 node/ip-10-0-156-6.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 22:33:05.644 E ns/openshift-dns pod/dns-default-llhg6 node/ip-10-0-156-6.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:21:25.184Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:21:25.184Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:21:25.184Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.467626       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17822 (26385)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:33:10.847 E ns/openshift-sdn pod/sdn-v99c7 node/ip-10-0-156-6.us-east-2.compute.internal container=sdn container exited with code 255 (Error): .go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.66:5443 10.129.0.75:5443]\nI0403 22:31:01.899482   57929 roundrobin.go:240] Delete endpoint 10.128.0.62:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0403 22:31:02.068143   57929 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:31:02.068168   57929 proxier.go:346] userspace syncProxyRules took 59.672583ms\nI0403 22:31:02.252953   57929 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:31:02.252977   57929 proxier.go:346] userspace syncProxyRules took 60.585563ms\nE0403 22:31:07.162593   57929 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 22:31:07.162840   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:07.263211   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 22:31:07.363197   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:07.466001   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:07.581752   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:07.666213   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:31:07.766187   57929 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 22:33:18.248 E ns/openshift-monitoring pod/telemeter-client-5bc5546859-ksqbt node/ip-10-0-138-122.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 22:33:18.248 E ns/openshift-monitoring pod/telemeter-client-5bc5546859-ksqbt node/ip-10-0-138-122.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 22:33:20.845 E ns/openshift-marketplace pod/community-operators-5c966566f6-96xjq node/ip-10-0-138-122.us-east-2.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:33:23.329 E ns/openshift-image-registry pod/cluster-image-registry-operator-b658799f6-zvwtg node/ip-10-0-129-70.us-east-2.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:33:23.928 E ns/openshift-service-ca pod/apiservice-cabundle-injector-fbfb589bb-2r75r node/ip-10-0-129-70.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 22:33:26.747 E ns/openshift-console pod/console-8458fd9948-9s774 node/ip-10-0-129-70.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 22:30:45 cmd/main: cookies are secure!\n2020/04/3 22:30:46 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 22:30:46 cmd/main: using TLS\n
Apr 03 22:33:29.727 E ns/openshift-machine-config-operator pod/machine-config-controller-5dcbc96994-jvsdx node/ip-10-0-129-70.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 22:33:45.000 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-136-48.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 22:33:48.730 E ns/openshift-console pod/downloads-6d55689f4b-stfpk node/ip-10-0-129-70.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 22:33:58.244 E ns/openshift-etcd pod/etcd-member-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): 026d01c61dd (stream MsgApp v2 reader)\n2020-04-03 22:31:07.725269 E | rafthttp: failed to read 90c2d026d01c61dd on stream MsgApp v2 (context canceled)\n2020-04-03 22:31:07.725277 I | rafthttp: peer 90c2d026d01c61dd became inactive (message send to peer failed)\n2020-04-03 22:31:07.725288 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (stream MsgApp v2 reader)\n2020-04-03 22:31:07.725350 W | rafthttp: lost the TCP streaming connection with peer 90c2d026d01c61dd (stream Message reader)\n2020-04-03 22:31:07.725364 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (stream Message reader)\n2020-04-03 22:31:07.725372 I | rafthttp: stopped peer 90c2d026d01c61dd\n2020-04-03 22:31:07.725380 I | rafthttp: stopping peer d96c9ef6e049a8b...\n2020-04-03 22:31:07.726157 I | rafthttp: closed the TCP streaming connection with peer d96c9ef6e049a8b (stream MsgApp v2 writer)\n2020-04-03 22:31:07.726179 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (writer)\n2020-04-03 22:31:07.732053 I | rafthttp: closed the TCP streaming connection with peer d96c9ef6e049a8b (stream Message writer)\n2020-04-03 22:31:07.732074 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (writer)\n2020-04-03 22:31:07.732123 I | rafthttp: stopped HTTP pipelining with peer d96c9ef6e049a8b\n2020-04-03 22:31:07.732215 W | rafthttp: lost the TCP streaming connection with peer d96c9ef6e049a8b (stream MsgApp v2 reader)\n2020-04-03 22:31:07.732232 E | rafthttp: failed to read d96c9ef6e049a8b on stream MsgApp v2 (context canceled)\n2020-04-03 22:31:07.732239 I | rafthttp: peer d96c9ef6e049a8b became inactive (message send to peer failed)\n2020-04-03 22:31:07.732248 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (stream MsgApp v2 reader)\n2020-04-03 22:31:07.732317 W | rafthttp: lost the TCP streaming connection with peer d96c9ef6e049a8b (stream Message reader)\n2020-04-03 22:31:07.732332 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (stream Message reader)\n2020-04-03 22:31:07.732341 I | rafthttp: stopped peer d96c9ef6e049a8b\n
Apr 03 22:33:58.244 E ns/openshift-etcd pod/etcd-member-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 22:30:38.074436 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 22:30:38.075305 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 22:30:38.075966 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 22:30:38 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.156.6:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 22:30:39.089250 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 22:33:58.683 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 22:15:56.732572       1 certsync_controller.go:269] Starting CertSyncer\nI0403 22:15:56.733914       1 observer_polling.go:106] Starting file observer\nW0403 22:24:07.397009       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21198 (23738)\n
Apr 03 22:33:58.683 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): RROR, debug=""\nI0403 22:31:07.209789       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=891, ErrCode=NO_ERROR, debug=""\nI0403 22:31:07.209806       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=891, ErrCode=NO_ERROR, debug=""\nI0403 22:31:07.209852       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=891, ErrCode=NO_ERROR, debug=""\nI0403 22:31:07.209871       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=891, ErrCode=NO_ERROR, debug=""\nE0403 22:31:07.212652       1 reflector.go:237] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nI0403 22:31:07.222038       1 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick\nI0403 22:31:07.222077       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:31:07.222443       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nI0403 22:31:07.222488       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nI0403 22:31:07.265386       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 22:31:07.501462       1 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\n
Apr 03 22:33:59.081 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): ficate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 21:39:22 +0000 UTC to 2021-04-03 21:39:22 +0000 UTC (now=2020-04-03 22:15:57.23098063 +0000 UTC))\nI0403 22:15:57.231012       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 21:39:21 +0000 UTC to 2021-04-03 21:39:21 +0000 UTC (now=2020-04-03 22:15:57.231001092 +0000 UTC))\nI0403 22:15:57.240185       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 22:15:57.242290       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:53 +0000 UTC to 2022-04-03 21:57:54 +0000 UTC (now=2020-04-03 22:15:57.242257876 +0000 UTC))\nI0403 22:15:57.242383       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:15:57.242363635 +0000 UTC))\nI0403 22:15:57.242445       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 22:15:57.243312       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0403 22:15:57.244671       1 serving.go:77] Starting DynamicLoader\nE0403 22:16:01.322461       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 22:31:07.232877       1 controllermanager.go:282] leaderelection lost\n
Apr 03 22:33:59.081 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): role.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]\nE0403 22:16:01.398931       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager": RBAC: [clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]\nW0403 22:22:02.398811       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21151 (22803)\nW0403 22:29:04.403809       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23105 (25633)\n
Apr 03 22:33:59.869 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-6.us-east-2.compute.internal node/ip-10-0-156-6.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): :16:01.430869       1 authentication.go:55] Authentication is disabled\nI0403 22:16:01.430929       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:16:01.432672       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:16:01.432630884 +0000 UTC))\nI0403 22:16:01.432821       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:16:01.432791239 +0000 UTC))\nI0403 22:16:01.432897       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:16:01.433052       1 serving.go:77] Starting DynamicLoader\nI0403 22:16:02.442693       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:16:02.543058       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:16:02.543202       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 22:17:38.714842       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 22:30:30.403983       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 17831 (26383)\nW0403 22:30:30.412884       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17839 (26384)\nE0403 22:31:07.185029       1 server.go:259] lost master\nI0403 22:31:07.185602       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 22:34:13.094 E ns/openshift-marketplace pod/redhat-operators-7fd97949f6-m8r25 node/ip-10-0-136-48.us-east-2.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:34:20.540 E ns/openshift-marketplace pod/certified-operators-95b85c99f-h5vvj node/ip-10-0-149-170.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 22:34:51.162 E ns/openshift-marketplace pod/community-operators-5c966566f6-l2577 node/ip-10-0-136-48.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 22:35:28.881 E ns/openshift-monitoring pod/node-exporter-hf87l node/ip-10-0-138-122.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:35:28.881 E ns/openshift-monitoring pod/node-exporter-hf87l node/ip-10-0-138-122.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:35:28.901 E ns/openshift-cluster-node-tuning-operator pod/tuned-zkm89 node/ip-10-0-138-122.us-east-2.compute.internal container=tuned container exited with code 255 (Error):  true\nI0403 22:33:15.766714   39715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:15.768120   39715 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:16.098206   39715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:33:18.850359   39715 openshift-tuned.go:435] Pod (openshift-ingress/router-default-76f8bd8689-7hrcq) labels changed node wide: true\nI0403 22:33:20.766404   39715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:20.768866   39715 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:20.880061   39715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:33:21.045300   39715 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-5c966566f6-96xjq) labels changed node wide: true\nI0403 22:33:25.766402   39715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:25.767860   39715 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:25.879336   39715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:33:47.058323   39715 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-hz2fx/foo-xwcw5) labels changed node wide: false\nI0403 22:33:50.582936   39715 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-hz2fx/foo-5lqzj) labels changed node wide: true\nI0403 22:33:50.766414   39715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:50.768115   39715 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:50.880450   39715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:33:51.174451   39715 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 22:35:29.105 E ns/openshift-image-registry pod/node-ca-97fsv node/ip-10-0-138-122.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:35:32.315 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 22:17:56.612005       1 certsync_controller.go:269] Starting CertSyncer\nI0403 22:17:56.612341       1 observer_polling.go:106] Starting file observer\nE0403 22:18:00.940756       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 22:18:00.940994       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 22:23:48.955315       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21151 (23630)\nW0403 22:29:00.960677       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23944 (25619)\n
Apr 03 22:35:32.315 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): icate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 21:39:22 +0000 UTC to 2021-04-03 21:39:22 +0000 UTC (now=2020-04-03 22:17:56.571281866 +0000 UTC))\nI0403 22:17:56.571389       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 21:39:21 +0000 UTC to 2021-04-03 21:39:21 +0000 UTC (now=2020-04-03 22:17:56.571365718 +0000 UTC))\nI0403 22:17:56.580783       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 22:17:56.582576       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:53 +0000 UTC to 2022-04-03 21:57:54 +0000 UTC (now=2020-04-03 22:17:56.582550917 +0000 UTC))\nI0403 22:17:56.582610       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:17:56.582594517 +0000 UTC))\nI0403 22:17:56.582654       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 22:17:56.582830       1 serving.go:77] Starting DynamicLoader\nI0403 22:17:56.586599       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 22:18:00.942701       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 22:33:50.204084       1 controllermanager.go:282] leaderelection lost\n
Apr 03 22:35:32.924 E ns/openshift-dns pod/dns-default-8c985 node/ip-10-0-138-122.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:21:00.569Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:21:00.569Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:21:00.569Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.390680       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17013 (26382)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:35:32.924 E ns/openshift-dns pod/dns-default-8c985 node/ip-10-0-138-122.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (111) - No such process\n
Apr 03 22:35:32.943 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): 00.988353       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:18:00.989685       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:18:00.989653771 +0000 UTC))\nI0403 22:18:00.991662       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:18:00.991639387 +0000 UTC))\nI0403 22:18:00.991734       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:18:00.991919       1 serving.go:77] Starting DynamicLoader\nI0403 22:18:01.909622       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:18:02.022242       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:18:02.026510       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 22:30:30.339733       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23899 (26382)\nI0403 22:31:24.322734       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 22:33:12.271148       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 20834 (28414)\nW0403 22:33:12.329969       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 20804 (28417)\nE0403 22:33:50.194370       1 server.go:259] lost master\n
Apr 03 22:35:33.901 E ns/openshift-multus pod/multus-rqpgw node/ip-10-0-138-122.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:35:35.101 E ns/openshift-sdn pod/ovs-9hmgc node/ip-10-0-138-122.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): rt 11\n2020-04-03T22:33:16.310Z|00145|connmgr|INFO|br0<->unix#227: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:33:16.353Z|00146|connmgr|INFO|br0<->unix#230: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.390Z|00147|bridge|INFO|bridge br0: deleted interface veth46e6dfb4 on port 17\n2020-04-03T22:33:16.453Z|00148|connmgr|INFO|br0<->unix#233: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.499Z|00149|bridge|INFO|bridge br0: deleted interface veth09662e7e on port 9\n2020-04-03T22:33:16.546Z|00150|connmgr|INFO|br0<->unix#236: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:33:16.589Z|00151|connmgr|INFO|br0<->unix#239: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.617Z|00152|bridge|INFO|bridge br0: deleted interface veth4da53708 on port 14\n2020-04-03T22:33:16.679Z|00153|connmgr|INFO|br0<->unix#242: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.724Z|00154|bridge|INFO|bridge br0: deleted interface vethc63bfbb3 on port 5\n2020-04-03T22:33:16.779Z|00155|connmgr|INFO|br0<->unix#245: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.815Z|00156|bridge|INFO|bridge br0: deleted interface vethdaf196d2 on port 8\n2020-04-03T22:33:16.874Z|00157|connmgr|INFO|br0<->unix#248: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:16.921Z|00158|bridge|INFO|bridge br0: deleted interface veth7823faaa on port 10\n2020-04-03T22:33:45.364Z|00159|connmgr|INFO|br0<->unix#254: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:33:45.391Z|00160|connmgr|INFO|br0<->unix#257: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:45.411Z|00161|bridge|INFO|bridge br0: deleted interface veth4d5d9348 on port 13\n2020-04-03T22:33:45.443Z|00162|connmgr|INFO|br0<->unix#260: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:33:45.483Z|00163|connmgr|INFO|br0<->unix#263: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:33:45.504Z|00164|bridge|INFO|bridge br0: deleted interface veth73b83595 on port 12\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 22:35:35.500 E ns/openshift-sdn pod/sdn-s594m node/ip-10-0-138-122.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ed with: very short watch: k8s.io/client-go/informers/factory.go:132: Unexpected watch close - watch lasted less than a second and no items received\nW0403 22:33:50.490688   48101 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: very short watch: github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nW0403 22:33:50.490732   48101 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: very short watch: k8s.io/client-go/informers/factory.go:132: Unexpected watch close - watch lasted less than a second and no items received\nW0403 22:33:50.490744   48101 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.NetworkPolicy ended with: very short watch: k8s.io/client-go/informers/factory.go:132: Unexpected watch close - watch lasted less than a second and no items received\ninterrupt: Gracefully shutting down ...\nE0403 22:33:51.175478   48101 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 22:33:51.175644   48101 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:33:51.277911   48101 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:33:51.375948   48101 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:33:51.478247   48101 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:33:51.575965   48101 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 22:35:36.498 E ns/openshift-machine-config-operator pod/machine-config-daemon-52dqb node/ip-10-0-138-122.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 22:35:37.500 E ns/openshift-multus pod/multus-rqpgw node/ip-10-0-138-122.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 03 22:35:42.933 E ns/openshift-cluster-node-tuning-operator pod/tuned-tx8mk node/ip-10-0-129-70.us-east-2.compute.internal container=tuned container exited with code 255 (Error): els to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:23.565268   57583 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:23.745969   57583 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:33:24.127377   57583 openshift-tuned.go:435] Pod (openshift-service-ca/apiservice-cabundle-injector-fbfb589bb-2r75r) labels changed node wide: true\nI0403 22:33:28.562633   57583 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:28.564061   57583 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:28.684397   57583 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:33:29.321463   57583 openshift-tuned.go:435] Pod (openshift-marketplace/marketplace-operator-c8dd865f9-gm65q) labels changed node wide: true\nI0403 22:33:33.562710   57583 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:33.564048   57583 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:33.681469   57583 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:33:37.925628   57583 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5b845db78d-hhtcq) labels changed node wide: true\nI0403 22:33:38.562633   57583 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:33:38.563895   57583 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:33:38.721676   57583 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:33:48.921901   57583 openshift-tuned.go:435] Pod (openshift-console/downloads-6d55689f4b-stfpk) labels changed node wide: true\nI0403 22:33:50.180417   57583 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 22:35:45.331 E ns/openshift-multus pod/multus-2kjlz node/ip-10-0-129-70.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:35:45.731 E ns/openshift-image-registry pod/node-ca-gdjln node/ip-10-0-129-70.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:35:46.358 E ns/openshift-sdn pod/sdn-controller-bpnc9 node/ip-10-0-129-70.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 22:21:17.444509       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nI0403 22:32:27.765019       1 leaderelection.go:214] successfully acquired lease openshift-sdn/openshift-network-controller\nI0403 22:32:27.765254       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-sdn", Name:"openshift-network-controller", UID:"0a28215c-75f6-11ea-8a56-02df5a075760", APIVersion:"v1", ResourceVersion:"27827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-129-70 became leader\nI0403 22:32:27.960407       1 master.go:57] Initializing SDN master of type "redhat/openshift-ovs-networkpolicy"\nI0403 22:32:27.964703       1 network_controller.go:49] Started OpenShift Network Controller\nE0403 22:33:28.314431       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
Apr 03 22:35:46.932 E ns/openshift-dns pod/dns-default-mqjlm node/ip-10-0-129-70.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 22:35:46.932 E ns/openshift-dns pod/dns-default-mqjlm node/ip-10-0-129-70.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:22:06.394Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:22:06.394Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:22:06.394Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.466272       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17822 (26385)\nE0403 22:31:07.534890       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=26385&timeoutSeconds=454&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 22:31:07.534995       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=27225&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:35:47.730 E ns/openshift-machine-config-operator pod/machine-config-server-ddflp node/ip-10-0-129-70.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 22:35:50.930 E ns/openshift-controller-manager pod/controller-manager-vqj2x node/ip-10-0-129-70.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 22:35:53.939 E ns/openshift-monitoring pod/node-exporter-cs9fd node/ip-10-0-129-70.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:35:53.939 E ns/openshift-monitoring pod/node-exporter-cs9fd node/ip-10-0-129-70.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:35:54.331 E ns/openshift-apiserver pod/apiserver-4gsc7 node/ip-10-0-129-70.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): 0.163306       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:33:50.163319       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:33:50.163331       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.248202       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 22:33:50.248260       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 22:33:50.248309       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 22:33:50.248330       1 controller.go:87] Shutting down OpenAPI AggregationController\nE0403 22:33:50.248653       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 22:33:50.248760       1 serving.go:88] Shutting down DynamicLoader\nE0403 22:33:50.249713       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 22:33:50.249898       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.250180       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.250383       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.250463       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.252337       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.252504       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:33:50.252655       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 22:35:57.623 E ns/openshift-ingress pod/router-default-76f8bd8689-vkfl2 node/ip-10-0-149-170.us-east-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:36:01.036 E kube-apiserver Kube API started failing: Get https://api.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 03 22:36:17.663 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-797897c6f9-gk244 node/ip-10-0-130-109.us-east-2.compute.internal container=operator container exited with code 2 (Error): 1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:34:26.432784       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:34:36.442018       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:34:46.450344       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:34:56.458477       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:06.467416       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:16.476342       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:26.485933       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:36.500404       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:46.508636       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:35:56.529762       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 22:36:06.549386       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 22:36:18.251 E ns/openshift-cluster-machine-approver pod/machine-approver-84cdd4575-gps9j node/ip-10-0-130-109.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 2:30:39.516815       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0403 22:30:39.517634       1 main.go:183] Starting Machine Approver\nI0403 22:30:39.617991       1 main.go:107] CSR csr-cw4jz added\nI0403 22:30:39.618027       1 main.go:110] CSR csr-cw4jz is already approved\nI0403 22:30:39.618049       1 main.go:107] CSR csr-j7nkn added\nI0403 22:30:39.618133       1 main.go:110] CSR csr-j7nkn is already approved\nI0403 22:30:39.618157       1 main.go:107] CSR csr-p2l22 added\nI0403 22:30:39.618166       1 main.go:110] CSR csr-p2l22 is already approved\nI0403 22:30:39.618178       1 main.go:107] CSR csr-pkt6g added\nI0403 22:30:39.618186       1 main.go:110] CSR csr-pkt6g is already approved\nI0403 22:30:39.618197       1 main.go:107] CSR csr-wvk9z added\nI0403 22:30:39.618205       1 main.go:110] CSR csr-wvk9z is already approved\nI0403 22:30:39.618350       1 main.go:107] CSR csr-bcp8k added\nI0403 22:30:39.618376       1 main.go:110] CSR csr-bcp8k is already approved\nI0403 22:30:39.618402       1 main.go:107] CSR csr-dgt9k added\nI0403 22:30:39.618412       1 main.go:110] CSR csr-dgt9k is already approved\nI0403 22:30:39.618423       1 main.go:107] CSR csr-fjlm9 added\nI0403 22:30:39.618431       1 main.go:110] CSR csr-fjlm9 is already approved\nI0403 22:30:39.618441       1 main.go:107] CSR csr-q6m5d added\nI0403 22:30:39.618449       1 main.go:110] CSR csr-q6m5d is already approved\nI0403 22:30:39.618459       1 main.go:107] CSR csr-qng78 added\nI0403 22:30:39.618525       1 main.go:110] CSR csr-qng78 is already approved\nI0403 22:30:39.618539       1 main.go:107] CSR csr-xw2ph added\nI0403 22:30:39.618547       1 main.go:110] CSR csr-xw2ph is already approved\nI0403 22:30:39.618563       1 main.go:107] CSR csr-zxc7c added\nI0403 22:30:39.618572       1 main.go:110] CSR csr-zxc7c is already approved\nW0403 22:33:12.825757       1 reflector.go:341] github.com/openshift/cluster-machine-approver/main.go:185: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 17015 (28506)\n
Apr 03 22:36:20.853 E ns/openshift-operator-lifecycle-manager pod/olm-operators-z282r node/ip-10-0-130-109.us-east-2.compute.internal container=configmap-registry-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:36:24.165 E ns/openshift-console pod/downloads-6d55689f4b-dpdsr node/ip-10-0-149-170.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 22:36:25.852 E ns/openshift-console pod/console-8458fd9948-j7pxw node/ip-10-0-130-109.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 22:18:59 cmd/main: cookies are secure!\n2020/04/3 22:18:59 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 22:18:59 cmd/main: using TLS\n
Apr 03 22:36:26.453 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-59dd7db6b-lfczg node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): es/ip-10-0-129-70.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal container=\"kube-controller-manager-5\" is not ready"\nI0403 22:36:04.957235       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"fb4c5353-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-129-70.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal container=\"kube-controller-manager-5\" is not ready" to ""\nI0403 22:36:05.002684       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"fb4c5353-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-129-70.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal container=\"kube-controller-manager-5\" is not ready" to ""\nI0403 22:36:05.113155       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"fb4c5353-75f5-11ea-8a56-02df5a075760", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-130-109.us-east-2.compute.internal -n openshift-kube-controller-manager because it was missing\nI0403 22:36:06.944125       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 22:36:06.944351       1 leaderelection.go:65] leaderelection lost\n
Apr 03 22:36:31.452 E ns/openshift-operator-lifecycle-manager pod/packageserver-f55b46f49-w92gv node/ip-10-0-130-109.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): "update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:21Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:22Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:22Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:22Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:36:22Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:36:23Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:23Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:24Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:24Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:26Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:36:26Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Apr 03 22:36:38.890 E kube-apiserver failed contacting the API: Get https://api.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&resourceVersion=32251&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp 3.22.103.127:6443: connect: connection refused
Apr 03 22:36:45.036 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 22:36:46.503 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): : ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.250830       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251194       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251211       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251535       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251551       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251709       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251725       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251868       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nI0403 22:33:50.251885       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=571, ErrCode=NO_ERROR, debug=""\nW0403 22:33:50.410151       1 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\n
Apr 03 22:36:46.503 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 22:17:56.677519       1 observer_polling.go:106] Starting file observer\nI0403 22:17:56.678443       1 certsync_controller.go:269] Starting CertSyncer\nW0403 22:27:32.985884       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21198 (25081)\nW0403 22:33:34.991409       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25270 (28243)\n
Apr 03 22:36:46.909 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 22:17:56.612005       1 certsync_controller.go:269] Starting CertSyncer\nI0403 22:17:56.612341       1 observer_polling.go:106] Starting file observer\nE0403 22:18:00.940756       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 22:18:00.940994       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 22:23:48.955315       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21151 (23630)\nW0403 22:29:00.960677       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23944 (25619)\n
Apr 03 22:36:46.909 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): icate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 21:39:22 +0000 UTC to 2021-04-03 21:39:22 +0000 UTC (now=2020-04-03 22:17:56.571281866 +0000 UTC))\nI0403 22:17:56.571389       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 21:39:21 +0000 UTC to 2021-04-03 21:39:21 +0000 UTC (now=2020-04-03 22:17:56.571365718 +0000 UTC))\nI0403 22:17:56.580783       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 22:17:56.582576       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:53 +0000 UTC to 2022-04-03 21:57:54 +0000 UTC (now=2020-04-03 22:17:56.582550917 +0000 UTC))\nI0403 22:17:56.582610       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:17:56.582594517 +0000 UTC))\nI0403 22:17:56.582654       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 22:17:56.582830       1 serving.go:77] Starting DynamicLoader\nI0403 22:17:56.586599       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 22:18:00.942701       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 22:33:50.204084       1 controllermanager.go:282] leaderelection lost\n
Apr 03 22:36:47.304 E ns/openshift-etcd pod/etcd-member-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 22:33:22.009584 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 22:33:22.010999 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 22:33:22.012098 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 22:33:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.129.70:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 22:33:23.028471 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 22:36:47.304 E ns/openshift-etcd pod/etcd-member-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): ed to read 4fdda73a84b182bd on stream Message (context canceled)\n2020-04-03 22:33:50.664957 I | rafthttp: peer 4fdda73a84b182bd became inactive (message send to peer failed)\n2020-04-03 22:33:50.665029 I | rafthttp: stopped streaming with peer 4fdda73a84b182bd (stream Message reader)\n2020-04-03 22:33:50.665103 I | rafthttp: stopped peer 4fdda73a84b182bd\n2020-04-03 22:33:50.665194 I | rafthttp: stopping peer d96c9ef6e049a8b...\n2020-04-03 22:33:50.665684 I | rafthttp: closed the TCP streaming connection with peer d96c9ef6e049a8b (stream MsgApp v2 writer)\n2020-04-03 22:33:50.675923 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (writer)\n2020-04-03 22:33:50.676342 I | rafthttp: closed the TCP streaming connection with peer d96c9ef6e049a8b (stream Message writer)\n2020-04-03 22:33:50.676455 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (writer)\n2020-04-03 22:33:50.676487 I | rafthttp: stopped HTTP pipelining with peer d96c9ef6e049a8b\n2020-04-03 22:33:50.676547 W | rafthttp: lost the TCP streaming connection with peer d96c9ef6e049a8b (stream MsgApp v2 reader)\n2020-04-03 22:33:50.676556 E | rafthttp: failed to read d96c9ef6e049a8b on stream MsgApp v2 (context canceled)\n2020-04-03 22:33:50.676564 I | rafthttp: peer d96c9ef6e049a8b became inactive (message send to peer failed)\n2020-04-03 22:33:50.676573 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (stream MsgApp v2 reader)\n2020-04-03 22:33:50.676622 W | rafthttp: lost the TCP streaming connection with peer d96c9ef6e049a8b (stream Message reader)\n2020-04-03 22:33:50.676649 I | rafthttp: stopped streaming with peer d96c9ef6e049a8b (stream Message reader)\n2020-04-03 22:33:50.676655 I | rafthttp: stopped peer d96c9ef6e049a8b\n2020-04-03 22:33:50.682755 E | rafthttp: failed to find member d96c9ef6e049a8b in cluster 75a46b2cb03d5a9b\n2020-04-03 22:33:50.686187 E | rafthttp: failed to find member 4fdda73a84b182bd in cluster 75a46b2cb03d5a9b\n2020-04-03 22:33:50.687893 E | rafthttp: failed to find member 4fdda73a84b182bd in cluster 75a46b2cb03d5a9b\n
Apr 03 22:36:47.703 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-70.us-east-2.compute.internal node/ip-10-0-129-70.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): 00.988353       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:18:00.989685       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:18:00.989653771 +0000 UTC))\nI0403 22:18:00.991662       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:18:00.991639387 +0000 UTC))\nI0403 22:18:00.991734       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:18:00.991919       1 serving.go:77] Starting DynamicLoader\nI0403 22:18:01.909622       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:18:02.022242       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:18:02.026510       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 22:30:30.339733       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23899 (26382)\nI0403 22:31:24.322734       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 22:33:12.271148       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 20834 (28414)\nW0403 22:33:12.329969       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 20804 (28417)\nE0403 22:33:50.194370       1 server.go:259] lost master\n
Apr 03 22:37:57.603 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 03 22:37:58.013 E ns/openshift-authentication pod/oauth-openshift-7bfc4d7b4c-zvk2m node/ip-10-0-129-70.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:38:10.211 E ns/openshift-authentication pod/oauth-openshift-7bfc4d7b4c-rtv99 node/ip-10-0-156-6.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:38:12.195 E ns/openshift-operator-lifecycle-manager pod/packageserver-f55b46f49-26g6c node/ip-10-0-156-6.us-east-2.compute.internal container=packageserver container exited with code 137 (Error):        1 log.go:172] http: TLS handshake error from 10.128.0.1:39216: remote error: tls: bad certificate\nI0403 22:37:37.467406       1 log.go:172] http: TLS handshake error from 10.128.0.1:39240: remote error: tls: bad certificate\nI0403 22:37:37.580868       1 wrap.go:47] GET /healthz: (1.450669ms) 200 [kube-probe/1.13+ 10.130.0.1:33948]\nI0403 22:37:38.667010       1 log.go:172] http: TLS handshake error from 10.128.0.1:39308: remote error: tls: bad certificate\nI0403 22:37:39.069906       1 log.go:172] http: TLS handshake error from 10.128.0.1:39348: remote error: tls: bad certificate\nI0403 22:37:39.867021       1 log.go:172] http: TLS handshake error from 10.128.0.1:39366: remote error: tls: bad certificate\nI0403 22:37:40.267360       1 log.go:172] http: TLS handshake error from 10.128.0.1:39368: remote error: tls: bad certificate\nI0403 22:37:40.668124       1 log.go:172] http: TLS handshake error from 10.128.0.1:39370: remote error: tls: bad certificate\nI0403 22:37:41.084766       1 wrap.go:47] GET /: (30.922959ms) 200 [Go-http-client/2.0 10.128.0.1:33882]\nI0403 22:37:41.085807       1 wrap.go:47] GET /: (14.070139ms) 200 [Go-http-client/2.0 10.130.0.1:48280]\nI0403 22:37:41.100428       1 wrap.go:47] GET /: (18.326449ms) 200 [Go-http-client/2.0 10.130.0.1:48280]\nI0403 22:37:41.150340       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T22:37:44Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:37:44Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:37:56Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:37:56Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 03 22:38:13.218 E ns/openshift-cluster-node-tuning-operator pod/tuned-kcnm2 node/ip-10-0-149-170.us-east-2.compute.internal container=tuned container exited with code 255 (Error): sig-apps-deployment-upgrade-2pggj/dp-57cc5d77b4-t5gj2) labels changed node wide: true\nI0403 22:35:54.623784   34917 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:35:54.646745   34917 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:35:54.880080   34917 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:35:56.588215   34917 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-69cbb8cc55-nxmdj) labels changed node wide: true\nI0403 22:36:04.520617   34917 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:04.522022   34917 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:04.643257   34917 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:36:04.889516   34917 openshift-tuned.go:435] Pod (openshift-monitoring/grafana-66f44fcdcf-nk5lw) labels changed node wide: true\nI0403 22:36:09.622159   34917 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:09.623612   34917 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:09.730142   34917 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:36:21.477779   34917 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-nvvs9) labels changed node wide: true\nI0403 22:36:24.622263   34917 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:24.623715   34917 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:24.739843   34917 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 22:36:32.066027   34917 openshift-tuned.go:435] Pod (openshift-console/downloads-6d55689f4b-dpdsr) labels changed node wide: true\n
Apr 03 22:38:13.236 E ns/openshift-image-registry pod/node-ca-vswnp node/ip-10-0-149-170.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:38:13.437 E ns/openshift-monitoring pod/node-exporter-88z6p node/ip-10-0-149-170.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:38:13.437 E ns/openshift-monitoring pod/node-exporter-88z6p node/ip-10-0-149-170.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:38:18.034 E ns/openshift-dns pod/dns-default-w9zph node/ip-10-0-149-170.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 22:38:18.034 E ns/openshift-dns pod/dns-default-w9zph node/ip-10-0-149-170.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:21:47.981Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:21:47.981Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:21:47.981Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.467725       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17822 (26385)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:38:18.835 E ns/openshift-sdn pod/sdn-v2t5g node/ip-10-0-149-170.us-east-2.compute.internal container=sdn container exited with code 255 (Error): obin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:dns-tcp to [10.128.0.73:5353 10.128.2.26:5353 10.129.0.65:5353 10.129.2.25:5353 10.130.0.68:5353 10.131.0.28:5353]\nI0403 22:36:30.306279   48459 roundrobin.go:240] Delete endpoint 10.128.0.73:5353 for service "openshift-dns/dns-default:dns-tcp"\nI0403 22:36:30.306288   48459 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:metrics to [10.128.0.73:9153 10.128.2.26:9153 10.129.0.65:9153 10.129.2.25:9153 10.130.0.68:9153 10.131.0.28:9153]\nI0403 22:36:30.306296   48459 roundrobin.go:240] Delete endpoint 10.128.0.73:9153 for service "openshift-dns/dns-default:metrics"\nI0403 22:36:30.467332   48459 proxier.go:367] userspace proxy: processing 0 service events\nI0403 22:36:30.467354   48459 proxier.go:346] userspace syncProxyRules took 51.2743ms\nE0403 22:36:32.473441   48459 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 22:36:32.473580   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 22:36:32.582781   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:32.691707   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:32.774000   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:32.875611   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:32.974941   48459 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 22:38:19.632 E ns/openshift-sdn pod/ovs-n7qdx node/ip-10-0-149-170.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): 496e0e10 on port 7\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T22:35:55.131Z|00021|jsonrpc|WARN|Dropped 6 log messages in last 801 seconds (most recently, 801 seconds ago) due to excessive rate\n2020-04-03T22:35:55.131Z|00022|jsonrpc|WARN|unix#195: receive error: Connection reset by peer\n2020-04-03T22:35:55.131Z|00023|reconnect|WARN|unix#195: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T22:35:55.218Z|00162|connmgr|INFO|br0<->unix#255: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:35:55.250Z|00163|bridge|INFO|bridge br0: deleted interface veth9822b040 on port 4\n2020-04-03T22:35:55.306Z|00164|connmgr|INFO|br0<->unix#258: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:35:55.344Z|00165|connmgr|INFO|br0<->unix#261: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:35:55.370Z|00166|bridge|INFO|bridge br0: deleted interface veth54d4035e on port 15\n2020-04-03T22:35:55.426Z|00167|connmgr|INFO|br0<->unix#264: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:35:55.458Z|00168|bridge|INFO|bridge br0: deleted interface vethb103da6a on port 6\n2020-04-03T22:36:23.703Z|00169|connmgr|INFO|br0<->unix#270: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T22:36:23.722Z|00170|bridge|INFO|bridge br0: deleted interface veth9d994c7f on port 8\n2020-04-03T22:36:27.617Z|00171|bridge|INFO|bridge br0: added interface veth6af0d647 on port 19\n2020-04-03T22:36:27.646Z|00172|connmgr|INFO|br0<->unix#273: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T22:36:27.694Z|00173|connmgr|INFO|br0<->unix#277: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T22:36:27.701Z|00174|connmgr|INFO|br0<->unix#279: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T22:36:27.624Z|00024|jsonrpc|WARN|unix#221: receive error: Connection reset by peer\n2020-04-03T22:36:27.625Z|00025|reconnect|WARN|unix#221: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 22:38:20.033 E ns/openshift-multus pod/multus-gmjw7 node/ip-10-0-149-170.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:38:20.437 E ns/openshift-machine-config-operator pod/machine-config-daemon-h6mtk node/ip-10-0-149-170.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 22:38:22.200 E ns/openshift-apiserver pod/apiserver-4vds7 node/ip-10-0-130-109.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): per.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:36:21.084205       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 22:36:21.084278       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:36:21.084393       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:36:21.084426       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:36:21.084442       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 22:36:21.107433       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 22:36:29.314871       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 22:36:38.719571       1 serving.go:88] Shutting down DynamicLoader\nI0403 22:36:38.719750       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nE0403 22:36:38.720192       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 22:36:38.720204       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 22:36:38.720211       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 22:36:38.720220       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 22:36:38.721144       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:36:38.721449       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 22:36:38.721580       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 22:38:22.234 E ns/openshift-cluster-node-tuning-operator pod/tuned-nhnvq node/ip-10-0-130-109.us-east-2.compute.internal container=tuned container exited with code 255 (Error): p-pod-labels.cfg\nI0403 22:36:20.221130   53840 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:20.348610   53840 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:36:20.349664   53840 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-7bfc4d7b4c-df8gr) labels changed node wide: true\nI0403 22:36:25.219594   53840 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:25.221217   53840 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:25.338084   53840 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:36:26.057970   53840 openshift-tuned.go:435] Pod (openshift-console/console-8458fd9948-j7pxw) labels changed node wide: true\nI0403 22:36:30.219507   53840 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:30.221045   53840 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:30.390254   53840 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:36:31.049869   53840 openshift-tuned.go:435] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-54875f9667-8z49k) labels changed node wide: true\nI0403 22:36:35.219610   53840 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 22:36:35.221611   53840 openshift-tuned.go:326] Getting recommended profile...\nI0403 22:36:35.350823   53840 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 22:36:38.049455   53840 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5b845db78d-g8pv4) labels changed node wide: true\nI0403 22:36:38.780605   53840 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 22:38:22.390 E ns/openshift-image-registry pod/node-ca-5trg9 node/ip-10-0-130-109.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 22:38:22.634 E ns/openshift-operator-lifecycle-manager pod/olm-operators-nvvs9 node/ip-10-0-149-170.us-east-2.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 03 22:38:30.147 E ns/openshift-monitoring pod/node-exporter-5qdlw node/ip-10-0-130-109.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 22:38:30.147 E ns/openshift-monitoring pod/node-exporter-5qdlw node/ip-10-0-130-109.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 22:38:31.346 E ns/openshift-controller-manager pod/controller-manager-2wvn7 node/ip-10-0-130-109.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 22:38:31.749 E ns/openshift-sdn pod/sdn-kwcrk node/ip-10-0-130-109.us-east-2.compute.internal container=sdn container exited with code 255 (Error): I0403 22:36:36.240860   68690 proxier.go:346] userspace syncProxyRules took 56.648339ms\nI0403 22:36:38.815676   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.816231   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.816384   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.817176   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.817309   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.817355   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.817511   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.817650   68690 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\ninterrupt: Gracefully shutting down ...\nE0403 22:36:39.010601   68690 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 22:36:39.010835   68690 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:39.111245   68690 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:39.211512   68690 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:39.313606   68690 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 22:36:39.411216   68690 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 22:38:32.348 E ns/openshift-sdn pod/sdn-controller-2scq5 node/ip-10-0-130-109.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 22:21:52.007802       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0403 22:31:09.892992       1 leaderelection.go:270] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.140.58:6443: connect: connection refused\nI0403 22:35:07.960363       1 leaderelection.go:214] successfully acquired lease openshift-sdn/openshift-network-controller\nI0403 22:35:07.960921       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-sdn", Name:"openshift-network-controller", UID:"0a28215c-75f6-11ea-8a56-02df5a075760", APIVersion:"v1", ResourceVersion:"30513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-130-109 became leader\nI0403 22:35:08.105491       1 master.go:57] Initializing SDN master of type "redhat/openshift-ovs-networkpolicy"\nI0403 22:35:08.113426       1 network_controller.go:49] Started OpenShift Network Controller\nW0403 22:36:04.531866       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 20440 (31354)\nE0403 22:36:08.489912       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 22:36:38.604363       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 22:36:38.834290       1 reflector.go:237] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Namespace: Get https://api-int.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?resourceVersion=26382&timeout=6m10s&timeoutSeconds=370&watch=true: dial tcp 10.0.158.24:6443: connect: connection refused\n
Apr 03 22:38:33.748 E ns/openshift-dns pod/dns-default-gz7pb node/ip-10-0-130-109.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T22:22:27.066Z [INFO] CoreDNS-1.3.1\n2020-04-03T22:22:27.066Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T22:22:27.066Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 22:30:30.385342       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17013 (26382)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 22:38:33.748 E ns/openshift-dns pod/dns-default-gz7pb node/ip-10-0-130-109.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]\n
Apr 03 22:38:35.551 E ns/openshift-etcd pod/etcd-member-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-03 22:38:26.157446 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 22:38:26.161396 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 22:38:26.162312 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 22:38:26 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 22:38:27 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 22:38:28 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\ndial tcp 10.0.130.109:9978: connect: connection refused\n
Apr 03 22:38:40.747 E ns/openshift-multus pod/multus-l4xsg node/ip-10-0-130-109.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 22:38:44.747 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error):        1 available_controller.go:400] v1beta1.metrics.k8s.io failed with: no response from https://10.131.0.38:6443: Get https://10.131.0.38:6443: dial tcp 10.131.0.38:6443: connect: connection refused\nI0403 22:36:14.676308       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0403 22:36:14.691021       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0403 22:36:14.691046       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0403 22:36:20.734831       1 controller.go:608] quota admission added evaluator for: statefulsets.apps\nI0403 22:36:20.734939       1 controller.go:608] quota admission added evaluator for: statefulsets.apps\nI0403 22:36:23.063545       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nE0403 22:36:23.066289       1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")'\nTrying to reach: 'https://10.130.0.77:5443/openapi/v2', Header: map[]\nI0403 22:36:23.066314       1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.\nI0403 22:36:23.068833       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nE0403 22:36:26.975354       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 22:36:34.688094       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0403 22:36:38.682626       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 03 22:38:44.747 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 22:14:15.721044       1 certsync_controller.go:269] Starting CertSyncer\nI0403 22:14:15.721308       1 observer_polling.go:106] Starting file observer\nW0403 22:21:09.484145       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21198 (22325)\nW0403 22:28:01.492538       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22583 (25258)\nW0403 22:33:03.498943       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25444 (27853)\n
Apr 03 22:38:45.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): /kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?resourceVersion=28506&timeoutSeconds=397&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 22:36:38.908874       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?resourceVersion=31352&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828217       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 22:36:38.909284       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/config.openshift.io/v1/ingresses?resourceVersion=28506&timeoutSeconds=530&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828226       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828236       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828245       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828254       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828263       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 22:36:38.911185       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/operator.openshift.io/v1/authentications?resourceVersion=30271&timeoutSeconds=463&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828293       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 03 22:38:45.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 22:14:24.850783       1 observer_polling.go:106] Starting file observer\nI0403 22:14:24.851788       1 certsync_controller.go:269] Starting CertSyncer\nW0403 22:23:12.887018       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21151 (23425)\nW0403 22:31:32.893881       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23612 (27084)\n
Apr 03 22:38:45.948 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): ion is disabled\nW0403 22:14:26.772083       1 authentication.go:55] Authentication is disabled\nI0403 22:14:26.772097       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:14:26.774079       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:14:26.774056982 +0000 UTC))\nI0403 22:14:26.774130       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:14:26.774104875 +0000 UTC))\nI0403 22:14:26.774163       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:14:26.774292       1 serving.go:77] Starting DynamicLoader\nI0403 22:14:27.678480       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:14:27.778747       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:14:27.778795       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 22:30:30.213314       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17013 (26358)\nW0403 22:33:12.852791       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17013 (28506)\nW0403 22:36:04.491431       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17018 (31354)\nE0403 22:36:38.760714       1 server.go:259] lost master\n
Apr 03 22:38:47.548 E ns/openshift-etcd pod/etcd-member-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): 82bd (stream MsgApp v2 reader)\n2020-04-03 22:36:39.237405 W | rafthttp: lost the TCP streaming connection with peer 4fdda73a84b182bd (stream Message reader)\n2020-04-03 22:36:39.237419 E | rafthttp: failed to read 4fdda73a84b182bd on stream Message (context canceled)\n2020-04-03 22:36:39.237428 I | rafthttp: peer 4fdda73a84b182bd became inactive (message send to peer failed)\n2020-04-03 22:36:39.237436 I | rafthttp: stopped streaming with peer 4fdda73a84b182bd (stream Message reader)\n2020-04-03 22:36:39.237447 I | rafthttp: stopped peer 4fdda73a84b182bd\n2020-04-03 22:36:39.237455 I | rafthttp: stopping peer 90c2d026d01c61dd...\n2020-04-03 22:36:39.237901 I | rafthttp: closed the TCP streaming connection with peer 90c2d026d01c61dd (stream MsgApp v2 writer)\n2020-04-03 22:36:39.237914 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (writer)\n2020-04-03 22:36:39.238349 I | rafthttp: closed the TCP streaming connection with peer 90c2d026d01c61dd (stream Message writer)\n2020-04-03 22:36:39.238363 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (writer)\n2020-04-03 22:36:39.238517 I | rafthttp: stopped HTTP pipelining with peer 90c2d026d01c61dd\n2020-04-03 22:36:39.238597 W | rafthttp: lost the TCP streaming connection with peer 90c2d026d01c61dd (stream MsgApp v2 reader)\n2020-04-03 22:36:39.238610 E | rafthttp: failed to read 90c2d026d01c61dd on stream MsgApp v2 (context canceled)\n2020-04-03 22:36:39.238618 I | rafthttp: peer 90c2d026d01c61dd became inactive (message send to peer failed)\n2020-04-03 22:36:39.238627 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (stream MsgApp v2 reader)\n2020-04-03 22:36:39.238704 W | rafthttp: lost the TCP streaming connection with peer 90c2d026d01c61dd (stream Message reader)\n2020-04-03 22:36:39.238719 I | rafthttp: stopped streaming with peer 90c2d026d01c61dd (stream Message reader)\n2020-04-03 22:36:39.238729 I | rafthttp: stopped peer 90c2d026d01c61dd\n2020-04-03 22:36:39.269332 E | rafthttp: failed to find member 90c2d026d01c61dd in cluster 75a46b2cb03d5a9b\n
Apr 03 22:38:57.947 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): /kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?resourceVersion=28506&timeoutSeconds=397&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 22:36:38.908874       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?resourceVersion=31352&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828217       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 22:36:38.909284       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/config.openshift.io/v1/ingresses?resourceVersion=28506&timeoutSeconds=530&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828226       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828236       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828245       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828254       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 22:36:38.828263       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 22:36:38.911185       1 reflector.go:237] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://localhost:6443/apis/operator.openshift.io/v1/authentications?resourceVersion=30271&timeoutSeconds=463&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0403 22:36:38.828293       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 03 22:38:57.947 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 22:14:24.850783       1 observer_polling.go:106] Starting file observer\nI0403 22:14:24.851788       1 certsync_controller.go:269] Starting CertSyncer\nW0403 22:23:12.887018       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21151 (23425)\nW0403 22:31:32.893881       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23612 (27084)\n
Apr 03 22:38:58.347 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error):        1 available_controller.go:400] v1beta1.metrics.k8s.io failed with: no response from https://10.131.0.38:6443: Get https://10.131.0.38:6443: dial tcp 10.131.0.38:6443: connect: connection refused\nI0403 22:36:14.676308       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0403 22:36:14.691021       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0403 22:36:14.691046       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0403 22:36:20.734831       1 controller.go:608] quota admission added evaluator for: statefulsets.apps\nI0403 22:36:20.734939       1 controller.go:608] quota admission added evaluator for: statefulsets.apps\nI0403 22:36:23.063545       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nE0403 22:36:23.066289       1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")'\nTrying to reach: 'https://10.130.0.77:5443/openapi/v2', Header: map[]\nI0403 22:36:23.066314       1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.\nI0403 22:36:23.068833       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nE0403 22:36:26.975354       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 22:36:34.688094       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0403 22:36:38.682626       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 03 22:38:58.347 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 22:14:15.721044       1 certsync_controller.go:269] Starting CertSyncer\nI0403 22:14:15.721308       1 observer_polling.go:106] Starting file observer\nW0403 22:21:09.484145       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21198 (22325)\nW0403 22:28:01.492538       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22583 (25258)\nW0403 22:33:03.498943       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25444 (27853)\n
Apr 03 22:38:59.147 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): ion is disabled\nW0403 22:14:26.772083       1 authentication.go:55] Authentication is disabled\nI0403 22:14:26.772097       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 22:14:26.774079       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585951057" (2020-04-03 21:57:55 +0000 UTC to 2022-04-03 21:57:56 +0000 UTC (now=2020-04-03 22:14:26.774056982 +0000 UTC))\nI0403 22:14:26.774130       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585951057" [] issuer="<self>" (2020-04-03 21:57:37 +0000 UTC to 2021-04-03 21:57:38 +0000 UTC (now=2020-04-03 22:14:26.774104875 +0000 UTC))\nI0403 22:14:26.774163       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 22:14:26.774292       1 serving.go:77] Starting DynamicLoader\nI0403 22:14:27.678480       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 22:14:27.778747       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 22:14:27.778795       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 22:30:30.213314       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17013 (26358)\nW0403 22:33:12.852791       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17013 (28506)\nW0403 22:36:04.491431       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17018 (31354)\nE0403 22:36:38.760714       1 server.go:259] lost master\n
Apr 03 22:39:00.773 E ns/openshift-etcd pod/etcd-member-ip-10-0-130-109.us-east-2.compute.internal node/ip-10-0-130-109.us-east-2.compute.internal container=etcd-metrics container exited with code 1 (Error): 2020-04-03 22:38:26.157446 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 22:38:26.161396 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 22:38:26.162312 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 22:38:26 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 22:38:27 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\nWARNING: 2020/04/03 22:38:28 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.130.109:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ir14c5q6-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\ndial tcp 10.0.130.109:9978: connect: connection refused\n
Apr 03 22:39:08.219 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-5b845db78d-xdgqm node/ip-10-0-129-70.us-east-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 22:41:53.680 E ns/openshift-operator-lifecycle-manager pod/packageserver-778d757bbd-8t79m node/ip-10-0-129-70.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): ficate\nI0403 22:41:20.668143       1 log.go:172] http: TLS handshake error from 10.128.0.1:40136: remote error: tls: bad certificate\nI0403 22:41:22.265230       1 log.go:172] http: TLS handshake error from 10.128.0.1:40154: remote error: tls: bad certificate\nI0403 22:41:22.589520       1 wrap.go:47] GET /: (216.389µs) 200 [Go-http-client/2.0 10.129.0.1:50936]\nI0403 22:41:22.591470       1 wrap.go:47] GET /: (279.19µs) 200 [Go-http-client/2.0 10.130.0.1:34856]\nI0403 22:41:22.629920       1 secure_serving.go:156] Stopped listening on [::]:5443\nI0403 22:41:38.258322       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-04-03T22:41:38Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T22:41:38Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T22:41:38Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:41:38Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T22:41:38Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:41:38Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T22:41:38Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T22:41:38Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 22:42:08.983 E ns/openshift-operator-lifecycle-manager pod/packageserver-778d757bbd-2vch8 node/ip-10-0-156-6.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): 22:41:32.267460       1 log.go:172] http: TLS handshake error from 10.128.0.1:57810: remote error: tls: bad certificate\nI0403 22:41:32.667222       1 log.go:172] http: TLS handshake error from 10.128.0.1:57826: remote error: tls: bad certificate\nI0403 22:41:33.066984       1 log.go:172] http: TLS handshake error from 10.128.0.1:57832: remote error: tls: bad certificate\nI0403 22:41:33.867644       1 log.go:172] http: TLS handshake error from 10.128.0.1:57840: remote error: tls: bad certificate\nI0403 22:41:35.067104       1 log.go:172] http: TLS handshake error from 10.128.0.1:57858: remote error: tls: bad certificate\nI0403 22:41:35.467660       1 log.go:172] http: TLS handshake error from 10.128.0.1:57862: remote error: tls: bad certificate\nI0403 22:41:35.970948       1 wrap.go:47] GET /: (173.07µs) 200 [Go-http-client/2.0 10.129.0.1:51268]\nI0403 22:41:36.268353       1 log.go:172] http: TLS handshake error from 10.128.0.1:57868: remote error: tls: bad certificate\nI0403 22:41:36.666981       1 log.go:172] http: TLS handshake error from 10.128.0.1:57870: remote error: tls: bad certificate\nI0403 22:41:37.476831       1 wrap.go:47] GET /healthz: (220.635µs) 200 [kube-probe/1.13+ 10.130.0.1:34912]\nI0403 22:41:37.867251       1 log.go:172] http: TLS handshake error from 10.128.0.1:57880: remote error: tls: bad certificate\nI0403 22:41:38.006312       1 wrap.go:47] GET /: (253.641µs) 200 [Go-http-client/2.0 10.129.0.1:51268]\nI0403 22:41:38.006979       1 wrap.go:47] GET /: (747.146µs) 200 [Go-http-client/2.0 10.129.0.1:51268]\nI0403 22:41:38.007160       1 wrap.go:47] GET /: (119.316µs) 200 [Go-http-client/2.0 10.130.0.1:48522]\nI0403 22:41:38.007249       1 wrap.go:47] GET /: (1.224044ms) 200 [Go-http-client/2.0 10.128.0.1:49290]\nI0403 22:41:38.007022       1 wrap.go:47] GET /: (338.256µs) 200 [Go-http-client/2.0 10.130.0.1:48522]\nI0403 22:41:38.006664       1 wrap.go:47] GET /: (149.773µs) 200 [Go-http-client/2.0 10.130.0.1:48522]\nI0403 22:41:38.070755       1 secure_serving.go:156] Stopped listening on [::]:5443\n