Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-05-04 08:27
Elapsed: 1h16m
Work namespace: ci-op-cvzhtcyw
Refs: release-4.1:514189df, 826:8cbe0949
pod: 0f9605ae-8de1-11ea-a7b0-0a58ac105ca3
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (41m0s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
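
To re-run only this test, the command above can be executed against a live cluster. A minimal sketch, assuming a checkout of the repository that provides hack/e2e.go (openshift/origin for these suites), the matching release-4.1 branch, and a KUBECONFIG pointing at the cluster to monitor (paths below are hypothetical):

    # assumptions: openshift/origin checkout, release-4.1 branch, reachable test cluster
    git checkout release-4.1
    export KUBECONFIG=$HOME/clusters/test-cluster/kubeconfig   # hypothetical kubeconfig path
    go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'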
202 error level events were detected during this test run:

May 04 09:01:17.857 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-759f6f9c8d-fhs4z node/ip-10-0-155-226.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error):  08:57:40.357583       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13968 (14419)\nW0504 08:57:40.923212       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 11995 (13349)\nW0504 08:57:40.929662       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 5000 (14273)\nW0504 08:57:40.929765       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 5002 (14031)\nW0504 08:57:41.038928       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 4998 (13968)\nW0504 08:57:41.039058       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 11973 (13349)\nW0504 08:57:41.051765       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10997 (13352)\nW0504 08:57:41.051942       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 13968 (14234)\nW0504 08:57:41.052036       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 11016 (13354)\nW0504 08:57:41.052126       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12014 (13349)\nI0504 09:01:17.219617       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:01:17.219684       1 leaderelection.go:65] leaderelection lost\nF0504 09:01:17.229951       1 builder.go:217] server exited\n
May 04 09:01:28.892 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-57d5bcd9c6-r2tp8 node/ip-10-0-155-226.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): nt-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 4058 (13354)\nW0504 08:55:51.266681       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12018 (13349)\nW0504 08:55:51.266861       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 4055 (13352)\nW0504 08:55:51.266966       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 9825 (13352)\nW0504 08:55:51.267010       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12039 (13349)\nW0504 08:55:51.267046       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 10781 (13824)\nW0504 08:55:51.269330       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12027 (13349)\nW0504 08:55:51.269675       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 6864 (13354)\nW0504 08:55:51.289623       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 4641 (13968)\nW0504 08:55:51.311749       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeScheduler ended with: too old resource version: 13692 (13968)\nW0504 08:55:51.328368       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 4641 (13968)\nI0504 09:01:27.877048       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:01:27.877115       1 leaderelection.go:65] leaderelection lost\n
May 04 09:02:46.118 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-759874778b-sn5nj node/ip-10-0-155-226.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): 0] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 5044 (14033)\nW0504 08:57:40.269644       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 10289 (13987)\nW0504 08:57:40.269695       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 13170 (13987)\nW0504 08:57:40.277093       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12027 (13987)\nW0504 08:57:40.381586       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 12028 (13987)\nW0504 08:57:40.381609       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 5044 (14033)\nW0504 08:57:40.383661       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13915 (14419)\nW0504 08:57:40.383702       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10163 (13987)\nW0504 08:57:40.383813       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 9129 (14031)\nW0504 08:57:40.948461       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13815 (14420)\nW0504 08:57:40.960438       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 5094 (13987)\nI0504 09:02:45.265646       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:02:45.265785       1 leaderelection.go:65] leaderelection lost\n
May 04 09:02:58.948 E ns/openshift-machine-api pod/machine-api-operator-6956756457-z8tqr node/ip-10-0-155-226.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
May 04 09:05:07.320 E ns/openshift-machine-api pod/machine-api-controllers-7fc9bbf75c-7pmb5 node/ip-10-0-131-63.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
May 04 09:05:07.320 E ns/openshift-machine-api pod/machine-api-controllers-7fc9bbf75c-7pmb5 node/ip-10-0-131-63.us-east-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
May 04 09:05:27.777 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-57598dbd84-wvq6v node/ip-10-0-155-226.us-east-2.compute.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:05:32.769 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator image-registry is still updating\n* Cluster operator monitoring is still updating\n* Cluster operator node-tuning is still updating\n* Cluster operator service-catalog-controller-manager is still updating\n* Cluster operator storage is still updating\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (107 of 350)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (185 of 350)\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (122 of 350)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (282 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (290 of 350)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (209 of 350)
May 04 09:06:20.916 E ns/openshift-console pod/downloads-84499c4bb-w8dcl node/ip-10-0-140-155.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
May 04 09:06:23.735 E ns/openshift-monitoring pod/prometheus-adapter-d776fb4df-j7j4z node/ip-10-0-137-68.us-east-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:06:30.433 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-195.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
May 04 09:06:31.308 E ns/openshift-ingress pod/router-default-54898d55f8-tqqk2 node/ip-10-0-137-68.us-east-2.compute.internal container=router container exited with code 2 (Error): 09:04:59.843126       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:06.056536       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:11.267927       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:16.257331       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:22.409086       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:27.410903       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:32.405327       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:48.915012       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:05:53.904576       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:06:08.367932       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:06:13.352529       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:06:22.138681       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:06:27.131150       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
May 04 09:06:33.588 E ns/openshift-monitoring pod/prometheus-adapter-d776fb4df-d6sg9 node/ip-10-0-151-195.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
May 04 09:06:42.977 E ns/openshift-controller-manager pod/controller-manager-wn2k6 node/ip-10-0-155-226.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
May 04 09:06:47.032 E ns/openshift-authentication-operator pod/authentication-operator-dcf4d5cc7-r4wsq node/ip-10-0-140-155.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:06:47.778 E ns/openshift-monitoring pod/node-exporter-67tpv node/ip-10-0-155-226.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
May 04 09:06:48.576 E ns/openshift-cluster-node-tuning-operator pod/tuned-mrmm9 node/ip-10-0-155-226.us-east-2.compute.internal container=tuned container exited with code 143 (Error): I0504 09:05:16.700569   14388 openshift-tuned.go:187] Extracting tuned profiles\nI0504 09:05:16.703248   14388 openshift-tuned.go:623] Resync period to pull node/pod labels: 136 [s]\nI0504 09:05:16.721966   14388 openshift-tuned.go:435] Pod (openshift-kube-apiserver/revision-pruner-3-ip-10-0-155-226.us-east-2.compute.internal) labels changed node wide: true\nI0504 09:05:21.718752   14388 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:05:21.720414   14388 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0504 09:05:21.721531   14388 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:05:21.902129   14388 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:05:22.573149   14388 openshift-tuned.go:435] Pod (openshift-cluster-machine-approver/machine-approver-79655756dc-kw75k) labels changed node wide: true\nI0504 09:05:26.718891   14388 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:05:26.721757   14388 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:05:26.952786   14388 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:05:27.401673   14388 openshift-tuned.go:435] Pod (openshift-kube-apiserver/kube-apiserver-ip-10-0-155-226.us-east-2.compute.internal) labels changed node wide: true\nI0504 09:05:31.720122   14388 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:05:31.722197   14388 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:05:31.914566   14388 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:05:34.478632   14388 openshift-tuned.go:435] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-155-226.us-east-2.compute.internal) labels changed node wide: false\n
May 04 09:06:53.207 E ns/openshift-marketplace pod/certified-operators-7cb4d5b5b9-jjzrm node/ip-10-0-143-66.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
May 04 09:06:53.977 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-5c7cf94794-kvd4q node/ip-10-0-155-226.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:06:59.376 E ns/openshift-console pod/downloads-84499c4bb-wc8q6 node/ip-10-0-155-226.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
May 04 09:07:00.009 E ns/openshift-operator-lifecycle-manager pod/catalog-operator-66c87b7bff-lcrfj node/ip-10-0-131-63.us-east-2.compute.internal container=catalog-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:01.108 E ns/openshift-operator-lifecycle-manager pod/packageserver-9c97d98c-n6d55 node/ip-10-0-140-155.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:01.260 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-66.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
May 04 09:07:02.599 E ns/openshift-cluster-node-tuning-operator pod/tuned-q88mj node/ip-10-0-140-155.us-east-2.compute.internal container=tuned container exited with code 143 (Error): o /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:05:32.722488   14089 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:05:32.899785   14089 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:05:52.819168   14089 openshift-tuned.go:435] Pod (openshift-marketplace/marketplace-operator-64b4dd44fb-8jtnr) labels changed node wide: true\nI0504 09:05:57.720634   14089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:05:57.722601   14089 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:05:57.854994   14089 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:06:30.673516   14089 openshift-tuned.go:435] Pod (openshift-console/downloads-84499c4bb-w8dcl) labels changed node wide: true\nI0504 09:06:32.720674   14089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:32.722459   14089 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:32.856372   14089 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:06:44.714509   14089 openshift-tuned.go:691] Lowering resyncPeriod to 56\nI0504 09:06:50.666174   14089 openshift-tuned.go:435] Pod (openshift-authentication-operator/authentication-operator-dcf4d5cc7-r4wsq) labels changed node wide: true\nI0504 09:06:52.720541   14089 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:52.722585   14089 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:52.897156   14089 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:06:59.852367   14089 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-9c97d98c-n6d55) labels changed node wide: true\n
May 04 09:07:03.599 E ns/openshift-operator-lifecycle-manager pod/packageserver-7669c497bd-s2ssk node/ip-10-0-140-155.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:16.844 E ns/openshift-monitoring pod/node-exporter-g2rcs node/ip-10-0-140-155.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
May 04 09:07:19.422 E ns/openshift-cluster-node-tuning-operator pod/tuned-dvc55 node/ip-10-0-137-68.us-east-2.compute.internal container=tuned container exited with code 143 (Error): load.\nI0504 09:06:31.238963    1570 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-d776fb4df-j7j4z) labels changed node wide: true\nI0504 09:06:34.740912    1570 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:34.742322    1570 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:34.851578    1570 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:41.324280    1570 openshift-tuned.go:435] Pod (openshift-ingress/router-default-54898d55f8-tqqk2) labels changed node wide: true\nI0504 09:06:44.740944    1570 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:44.742334    1570 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:44.873828    1570 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:45.488899    1570 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0504 09:06:49.740927    1570 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:49.742227    1570 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:49.864874    1570 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:59.548554    1570 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-8lvnh) labels changed node wide: true\nI0504 09:06:59.740919    1570 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:59.742292    1570 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:59.879189    1570 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:07:08.726842    1570 openshift-tuned.go:691] Lowering resyncPeriod to 62\n
May 04 09:07:24.053 E ns/openshift-monitoring pod/node-exporter-ql8kd node/ip-10-0-131-63.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
May 04 09:07:31.387 E ns/openshift-monitoring pod/node-exporter-7zmqv node/ip-10-0-143-66.us-east-2.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:31.387 E ns/openshift-monitoring pod/node-exporter-7zmqv node/ip-10-0-143-66.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:35.375 E ns/openshift-cluster-node-tuning-operator pod/tuned-64wfl node/ip-10-0-143-66.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 1469 openshift-tuned.go:435] Pod (openshift-ingress/router-default-7544b76595-4x4rd) labels changed node wide: true\nI0504 09:06:33.701335    1469 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:33.768764    1469 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:33.887823    1469 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:38.477985    1469 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0504 09:06:38.701341    1469 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:38.703292    1469 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:38.815435    1469 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:46.534465    1469 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0504 09:06:48.701351    1469 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:48.702815    1469 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:48.815662    1469 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:07:06.803686    1469 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-8476b8b489-7dwcz) labels changed node wide: true\nI0504 09:07:08.701436    1469 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:07:08.703529    1469 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:07:08.830932    1469 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:07:34.255168    1469 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-7zmqv) labels changed node wide: true\n
May 04 09:07:36.092 E ns/openshift-controller-manager pod/controller-manager-nhhc7 node/ip-10-0-131-63.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
May 04 09:07:38.462 E ns/openshift-monitoring pod/node-exporter-79ss4 node/ip-10-0-137-68.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
May 04 09:07:39.124 E ns/openshift-service-ca pod/service-serving-cert-signer-77995f8c48-vh4xd node/ip-10-0-131-63.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
May 04 09:07:39.209 E ns/openshift-service-ca pod/apiservice-cabundle-injector-666b4db8f4-xmj2b node/ip-10-0-140-155.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
May 04 09:07:39.687 E ns/openshift-service-ca pod/configmap-cabundle-injector-77fcd8c7f8-9d8v7 node/ip-10-0-155-226.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
May 04 09:07:41.585 E ns/openshift-cluster-node-tuning-operator pod/tuned-r8j8h node/ip-10-0-151-195.us-east-2.compute.internal container=tuned container exited with code 143 (Error): shift-tuned.go:326] Getting recommended profile...\nI0504 09:06:10.851292    1568 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:12.373649    1568 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-7d58596467-pmwz4) labels changed node wide: true\nI0504 09:06:15.702418    1568 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:15.704182    1568 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:15.812971    1568 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:29.400739    1568 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-5b5897c448-962vq) labels changed node wide: true\nI0504 09:06:30.703372    1568 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:30.709266    1568 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:30.851271    1568 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:06:33.649393    1568 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0504 09:06:35.702344    1568 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:06:35.704010    1568 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:06:35.847368    1568 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0504 09:06:37.040947    1568 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""\nE0504 09:06:37.045265    1568 openshift-tuned.go:720] Pod event watch channel closed.\nI0504 09:06:37.045286    1568 openshift-tuned.go:722] Increasing resyncPeriod to 220\n
May 04 09:07:44.532 E ns/openshift-controller-manager pod/controller-manager-xxz5q node/ip-10-0-131-63.us-east-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:07:49.720 E ns/openshift-operator-lifecycle-manager pod/packageserver-79589c66bb-vt6nt node/ip-10-0-155-226.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): icate\nI0504 09:07:17.485009       1 log.go:172] http: TLS handshake error from 10.130.0.1:45434: remote error: tls: bad certificate\nI0504 09:07:18.691982       1 log.go:172] http: TLS handshake error from 10.130.0.1:45446: remote error: tls: bad certificate\nI0504 09:07:19.052567       1 wrap.go:47] GET /: (184.031µs) 200 [Go-http-client/2.0 10.128.0.1:33240]\nI0504 09:07:19.052777       1 wrap.go:47] GET /: (358.204µs) 200 [Go-http-client/2.0 10.130.0.1:41962]\nI0504 09:07:19.109435       1 secure_serving.go:156] Stopped listening on [::]:5443\nI0504 09:07:41.065440       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-05-04T09:07:41Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:41Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:41Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:07:41Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:07:41Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:41Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:41Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:41Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
May 04 09:07:54.034 E ns/openshift-cluster-node-tuning-operator pod/tuned-9q9nf node/ip-10-0-131-63.us-east-2.compute.internal container=tuned container exited with code 143 (Error): anges will not trigger profile reload.\nI0504 09:07:29.454792   13075 openshift-tuned.go:435] Pod (openshift-console/console-7fcd4d55bc-ww4xw) labels changed node wide: true\nI0504 09:07:32.742027   13075 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:07:32.743656   13075 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:07:32.861258   13075 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:07:37.119755   13075 openshift-tuned.go:435] Pod (openshift-controller-manager/controller-manager-nhhc7) labels changed node wide: true\nI0504 09:07:37.742019   13075 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:07:37.743613   13075 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:07:37.860876   13075 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:07:40.733233   13075 openshift-tuned.go:435] Pod (openshift-service-ca/configmap-cabundle-injector-58db8dc9cf-c94fh) labels changed node wide: true\nI0504 09:07:42.742010   13075 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:07:42.743726   13075 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:07:42.903225   13075 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:07:45.125025   13075 openshift-tuned.go:435] Pod (openshift-controller-manager/controller-manager-xxz5q) labels changed node wide: true\nI0504 09:07:47.742085   13075 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:07:47.743705   13075 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:07:47.858532   13075 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
May 04 09:08:00.303 E ns/openshift-console pod/console-7cc86b4b4f-w4dbn node/ip-10-0-140-155.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/05/4 08:54:09 cmd/main: cookies are secure!\n2020/05/4 08:54:09 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/05/4 08:54:19 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/05/4 08:54:29 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/05/4 08:54:39 cmd/main: Binding to 0.0.0.0:8443...\n2020/05/4 08:54:39 cmd/main: using TLS\n
May 04 09:08:01.059 E ns/openshift-operator-lifecycle-manager pod/packageserver-79589c66bb-72fkr node/ip-10-0-131-63.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): 130.0.1:53498: remote error: tls: bad certificate\nI0504 09:07:30.000005       1 wrap.go:47] GET /: (181.507µs) 200 [Go-http-client/2.0 10.128.0.1:53146]\nI0504 09:07:30.000145       1 wrap.go:47] GET /: (106.704µs) 200 [Go-http-client/2.0 10.128.0.1:53146]\nI0504 09:07:30.000012       1 wrap.go:47] GET /: (194.055µs) 200 [Go-http-client/2.0 10.128.0.1:53146]\nI0504 09:07:30.000665       1 wrap.go:47] GET /: (147.059µs) 200 [Go-http-client/2.0 10.129.0.1:38434]\nI0504 09:07:30.046813       1 secure_serving.go:156] Stopped listening on [::]:5443\nI0504 09:07:41.418657       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-05-04T09:07:42Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:07:42Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:07:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:07:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
May 04 09:08:24.825 E ns/openshift-controller-manager pod/controller-manager-fdtvt node/ip-10-0-155-226.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
May 04 09:09:15.510 E ns/openshift-controller-manager pod/controller-manager-czhbs node/ip-10-0-140-155.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
May 04 09:15:49.715 E ns/openshift-network-operator pod/network-operator-75d9dc5dd7-9dp85 node/ip-10-0-140-155.us-east-2.compute.internal container=network-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:16:10.781 E ns/openshift-dns pod/dns-default-8lvtn node/ip-10-0-140-155.us-east-2.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:16:10.781 E ns/openshift-dns pod/dns-default-8lvtn node/ip-10-0-140-155.us-east-2.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:16:36.834 E ns/openshift-multus pod/multus-wsxpm node/ip-10-0-140-155.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:16:57.725 E ns/openshift-sdn pod/sdn-t6bwt node/ip-10-0-131-63.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:55.616375    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:55.716355    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:55.816356    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:55.916507    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.016388    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.116445    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.216500    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.316439    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.416414    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.516455    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:16:56.516649    2401 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0504 09:16:56.516678    2401 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
May 04 09:17:09.618 E ns/openshift-machine-api pod/cluster-autoscaler-operator-8d9844895-6mwgc node/ip-10-0-131-63.us-east-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
May 04 09:17:21.761 E ns/openshift-multus pod/multus-4h8q5 node/ip-10-0-151-195.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:17:29.345 E ns/openshift-sdn pod/sdn-controller-vbcjt node/ip-10-0-155-226.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0504 08:45:49.643641       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
May 04 09:17:31.511 E ns/openshift-sdn pod/ovs-xrlqw node/ip-10-0-143-66.us-east-2.compute.internal container=openvswitch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:17:35.497 E ns/openshift-sdn pod/sdn-249qg node/ip-10-0-143-66.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.187310   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.287273   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.387300   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.487372   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.587329   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.687286   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.787294   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.887216   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:34.987324   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:35.087371   54369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:17:35.191646   54369 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0504 09:17:35.191717   54369 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
May 04 09:18:03.101 E ns/openshift-sdn pod/sdn-controller-mcchm node/ip-10-0-140-155.us-east-2.compute.internal container=sdn-controller container exited with code 137 (Error): h: very short watch: k8s.io/client-go/informers/factory.go:132: Unexpected watch close - watch lasted less than a second and no items received\nW0504 08:57:39.575852       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 10280 (14485)\nW0504 08:57:39.575979       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 6285 (14485)\nI0504 08:57:59.981464       1 vnids.go:115] Allocated netid 7262981 for namespace "e2e-tests-service-upgrade-lsqjg"\nI0504 08:57:59.992892       1 vnids.go:115] Allocated netid 3015808 for namespace "e2e-tests-sig-apps-replicaset-upgrade-qjkvf"\nI0504 08:58:00.002144       1 vnids.go:115] Allocated netid 4723991 for namespace "e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-djrml"\nI0504 08:58:00.020422       1 vnids.go:115] Allocated netid 533451 for namespace "e2e-tests-sig-apps-daemonset-upgrade-jdcqt"\nI0504 08:58:00.029822       1 vnids.go:115] Allocated netid 3157468 for namespace "e2e-tests-sig-apps-job-upgrade-p562f"\nI0504 08:58:00.053168       1 vnids.go:115] Allocated netid 1963342 for namespace "e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-bkc7f"\nI0504 08:58:00.063850       1 vnids.go:115] Allocated netid 2357122 for namespace "e2e-tests-sig-apps-deployment-upgrade-f5fq7"\nW0504 09:06:37.218306       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 14485 (20590)\nW0504 09:06:37.373798       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 14915 (17739)\nW0504 09:06:37.374083       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 14739 (20590)\n
May 04 09:18:06.456 E ns/openshift-sdn pod/ovs-wd2pp node/ip-10-0-155-226.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): |connmgr|INFO|br0<->unix#1063: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:16:44.828Z|00438|connmgr|INFO|br0<->unix#1065: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:44.868Z|00439|connmgr|INFO|br0<->unix#1070: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:16:44.869Z|00440|connmgr|INFO|br0<->unix#1071: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:44.906Z|00441|connmgr|INFO|br0<->unix#1075: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:16:44.931Z|00442|connmgr|INFO|br0<->unix#1078: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:44.956Z|00443|connmgr|INFO|br0<->unix#1081: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:16:44.976Z|00444|connmgr|INFO|br0<->unix#1084: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:44.995Z|00445|connmgr|INFO|br0<->unix#1087: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:16:45.012Z|00446|connmgr|INFO|br0<->unix#1090: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:45.031Z|00447|connmgr|INFO|br0<->unix#1093: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:16:45.049Z|00448|connmgr|INFO|br0<->unix#1096: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:16:45.062Z|00449|connmgr|INFO|br0<->unix#1098: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:16:45.086Z|00450|connmgr|INFO|br0<->unix#1101: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:16:45.115Z|00451|connmgr|INFO|br0<->unix#1104: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:16:45.142Z|00452|connmgr|INFO|br0<->unix#1107: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:25.809Z|00453|connmgr|INFO|br0<->unix#1113: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:17:25.834Z|00454|bridge|INFO|bridge br0: deleted interface veth7191f2d1 on port 13\n2020-05-04T09:17:42.562Z|00455|bridge|INFO|bridge br0: added interface veth562fb082 on port 70\n2020-05-04T09:17:42.600Z|00456|connmgr|INFO|br0<->unix#1116: 5 flow_mods in the last 0 s (5 adds)\n2020-05-04T09:17:42.642Z|00457|connmgr|INFO|br0<->unix#1119: 2 flow_mods in the last 0 s (2 deletes)\n
May 04 09:18:09.781 E ns/openshift-multus pod/multus-glpk7 node/ip-10-0-131-63.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:18:17.500 E ns/openshift-sdn pod/sdn-rzkgx node/ip-10-0-155-226.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:15.536472   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:15.636348   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:15.736319   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:15.836342   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:15.936362   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.036357   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.136401   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.236347   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.336332   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.436362   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:16.436430   70925 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0504 09:18:16.436445   70925 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
May 04 09:18:48.228 E ns/openshift-sdn pod/ovs-cfw4h node/ip-10-0-140-155.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): 2Z|00380|connmgr|INFO|br0<->unix#936: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:17:04.670Z|00381|connmgr|INFO|br0<->unix#948: 2 flow_mods in the last 0 s (2 adds)\n2020-05-04T09:17:04.779Z|00382|connmgr|INFO|br0<->unix#954: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.805Z|00383|connmgr|INFO|br0<->unix#957: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.832Z|00384|connmgr|INFO|br0<->unix#960: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.858Z|00385|connmgr|INFO|br0<->unix#963: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.885Z|00386|connmgr|INFO|br0<->unix#966: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.919Z|00387|connmgr|INFO|br0<->unix#969: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.945Z|00388|connmgr|INFO|br0<->unix#972: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:04.973Z|00389|connmgr|INFO|br0<->unix#975: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:05.013Z|00390|connmgr|INFO|br0<->unix#979: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:05.028Z|00391|connmgr|INFO|br0<->unix#981: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:05.042Z|00392|connmgr|INFO|br0<->unix#984: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:05.065Z|00393|connmgr|INFO|br0<->unix#987: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:05.094Z|00394|connmgr|INFO|br0<->unix#990: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:05.123Z|00395|connmgr|INFO|br0<->unix#993: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:05.151Z|00396|connmgr|INFO|br0<->unix#996: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:05.179Z|00397|connmgr|INFO|br0<->unix#999: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:05.207Z|00398|connmgr|INFO|br0<->unix#1002: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:05.234Z|00399|connmgr|INFO|br0<->unix#1005: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:05.258Z|00400|connmgr|INFO|br0<->unix#1008: 1 flow_mods in the last 0 s (1 adds)\n
May 04 09:18:48.565 E ns/openshift-multus pod/multus-bpzqx node/ip-10-0-155-226.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:18:51.243 E ns/openshift-sdn pod/sdn-jx69g node/ip-10-0-140-155.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.222432   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.322431   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.422448   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.522426   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.622443   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.722418   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.822306   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:49.922464   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:50.022534   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:50.123029   68449 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:18:50.233896   68449 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0504 09:18:50.234072   68449 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
May 04 09:19:20.741 E ns/openshift-sdn pod/ovs-brbgp node/ip-10-0-137-68.us-east-2.compute.internal container=openvswitch container exited with code 137 (Error): -04T09:16:51.959Z|00168|connmgr|INFO|br0<->unix#475: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:16:51.985Z|00169|connmgr|INFO|br0<->unix#478: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:16:52.005Z|00170|bridge|INFO|bridge br0: deleted interface vethbbdb4323 on port 4\n2020-05-04T09:16:59.825Z|00171|bridge|INFO|bridge br0: added interface vethce3e7d4c on port 28\n2020-05-04T09:16:59.854Z|00172|connmgr|INFO|br0<->unix#484: 5 flow_mods in the last 0 s (5 adds)\n2020-05-04T09:16:59.890Z|00173|connmgr|INFO|br0<->unix#487: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:17:15.708Z|00174|connmgr|INFO|br0<->unix#496: 2 flow_mods in the last 0 s (2 adds)\n2020-05-04T09:17:15.808Z|00175|connmgr|INFO|br0<->unix#502: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:15.833Z|00176|connmgr|INFO|br0<->unix#505: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:15.857Z|00177|connmgr|INFO|br0<->unix#508: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:15.880Z|00178|connmgr|INFO|br0<->unix#511: 1 flow_mods in the last 0 s (1 deletes)\n2020-05-04T09:17:16.004Z|00179|connmgr|INFO|br0<->unix#514: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:16.030Z|00180|connmgr|INFO|br0<->unix#517: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:16.052Z|00181|connmgr|INFO|br0<->unix#520: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:16.075Z|00182|connmgr|INFO|br0<->unix#523: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:16.103Z|00183|connmgr|INFO|br0<->unix#526: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:16.134Z|00184|connmgr|INFO|br0<->unix#529: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:16.158Z|00185|connmgr|INFO|br0<->unix#532: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:16.181Z|00186|connmgr|INFO|br0<->unix#535: 1 flow_mods in the last 0 s (1 adds)\n2020-05-04T09:17:16.206Z|00187|connmgr|INFO|br0<->unix#538: 3 flow_mods in the last 0 s (3 adds)\n2020-05-04T09:17:16.230Z|00188|connmgr|INFO|br0<->unix#541: 1 flow_mods in the last 0 s (1 adds)\n
May 04 09:19:22.772 E ns/openshift-sdn pod/sdn-zw2c8 node/ip-10-0-137-68.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:21.633419   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:21.733464   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:21.833444   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:21.933436   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.033412   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.133438   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.236425   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.335381   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.433469   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.533388   65139 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:19:22.637930   65139 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0504 09:19:22.638021   65139 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
May 04 09:19:26.340 E ns/openshift-service-ca pod/service-serving-cert-signer-669c98947c-qcw6x node/ip-10-0-140-155.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
May 04 09:19:26.359 E ns/openshift-service-ca pod/apiservice-cabundle-injector-664d7f9945-lcfhs node/ip-10-0-140-155.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
May 04 09:19:33.740 E ns/openshift-multus pod/multus-h52fs node/ip-10-0-143-66.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:20:03.175 E ns/openshift-sdn pod/sdn-6svrp node/ip-10-0-151-195.us-east-2.compute.internal container=sdn container exited with code 255 (Error): n/openvswitch/db.sock: connect: connection refused\nI0504 09:20:00.768766   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:00.868708   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:00.968675   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.060023   49151 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:20:01.060051   49151 proxier.go:346] userspace syncProxyRules took 32.15213ms\nI0504 09:20:01.068700   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.168754   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.268731   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.368716   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.468750   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.569306   49151 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0504 09:20:01.675020   49151 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0504 09:20:01.675080   49151 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
May 04 09:20:12.844 E ns/openshift-multus pod/multus-7452c node/ip-10-0-137-68.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
May 04 09:20:33.869 E ns/openshift-machine-config-operator pod/machine-config-operator-799fc59f88-r2dcx node/ip-10-0-155-226.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
May 04 09:23:50.700 E ns/openshift-machine-config-operator pod/machine-config-controller-b79d85f58-jsctf node/ip-10-0-131-63.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
May 04 09:25:53.480 E ns/openshift-machine-config-operator pod/machine-config-server-r6svx node/ip-10-0-140-155.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
May 04 09:26:00.752 E kube-apiserver Kube API started failing: Get https://api.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
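This event records the suite's API-availability monitor timing out on a GET against the kube-apiserver with a 3-second deadline. As a rough illustration only (the real monitor lives in openshift-tests and authenticates to the cluster; the URL below is a placeholder and TLS verification is skipped just to keep the sketch self-contained), such a probe looks like:

    package main

    import (
        "context"
        "crypto/tls"
        "log"
        "net/http"
        "time"
    )

    // checkKubeAPI issues a GET with a 3s context deadline; a hung apiserver
    // surfaces as "context deadline exceeded ... while awaiting headers".
    func checkKubeAPI(url string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        client := &http.Client{Transport: &http.Transport{
            // Skipped only to keep the sketch self-contained; a real probe
            // would trust the cluster CA bundle instead.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        log.Printf("kube-apiserver responded: %s", resp.Status)
        return nil
    }

    func main() {
        url := "https://api.example.com:6443/api/v1/namespaces/kube-system?timeout=3s" // placeholder host
        if err := checkKubeAPI(url); err != nil {
            log.Printf("Kube API started failing: %v", err)
        }
    }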
May 04 09:26:09.025 E ns/openshift-marketplace pod/redhat-operators-848f57b965-84ltx node/ip-10-0-137-68.us-east-2.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:26:10.001 E ns/openshift-authentication-operator pod/authentication-operator-ff7cb5cfc-ffgth node/ip-10-0-155-226.us-east-2.compute.internal container=operator container exited with code 255 (Error): 5:55.367748       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22451 (24786)\nW0504 09:16:07.382608       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22451 (24852)\nW0504 09:16:57.459381       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0504 09:17:15.371341       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22451 (25438)\nW0504 09:21:30.375743       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24669 (27269)\nW0504 09:21:36.373778       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25027 (27303)\nW0504 09:23:09.370491       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 22832 (23600)\nW0504 09:23:22.377660       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25785 (27818)\nW0504 09:24:07.388861       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25115 (28085)\nW0504 09:25:15.827519       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0504 09:26:05.157900       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:26:05.157964       1 leaderelection.go:65] leaderelection lost\nI0504 09:26:05.174984       1 unsupportedconfigoverrides_controller.go:161] Shutting down UnsupportedConfigOverridesController\n
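Most of this operator log is client-go's reflector warning that a watch "ended with: too old resource version: X (Y)", i.e. the API server closed the watch with a 410 "Expired" status because the requested resourceVersion has been compacted away; the reflector recovers by re-listing and re-watching. The sketch below only shows how that error category is recognized with apimachinery helpers; the error is constructed by hand and the resource-version numbers are illustrative.

    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    func main() {
        // A 410 "Expired" status like the ones behind the reflector warnings above.
        err := apierrors.NewResourceExpired("too old resource version: 21105 (28899)")

        if apierrors.IsResourceExpired(err) || apierrors.IsGone(err) {
            fmt.Println("watch expired; re-list and re-watch from the latest resourceVersion")
        }
    }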
May 04 09:26:13.688 E ns/openshift-machine-config-operator pod/machine-config-server-fcxn9 node/ip-10-0-131-63.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
May 04 09:26:28.022 E ns/openshift-console pod/downloads-5877796c85-54jhp node/ip-10-0-137-68.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
May 04 09:26:40.185 E ns/openshift-operator-lifecycle-manager pod/packageserver-86c6b9fc48-bbhrv node/ip-10-0-155-226.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): d-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:36Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:36Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:36Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:26:36Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:26:37Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:37Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:37Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:37Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:38Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:38Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:38Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:26:38Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
May 04 09:26:55.752 E openshift-apiserver OpenShift API is not responding to GET requests
May 04 09:27:15.821 E ns/openshift-operator-lifecycle-manager pod/packageserver-86c6b9fc48-2wth7 node/ip-10-0-140-155.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): logsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:26:52Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:11Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:11Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\nI0504 09:27:13.636765       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-05-04T09:27:13Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:27:13Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:27:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:27:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\n
May 04 09:28:10.707 E ns/openshift-cluster-node-tuning-operator pod/tuned-ctw29 node/ip-10-0-137-68.us-east-2.compute.internal container=tuned container exited with code 255 (Error): true\nI0504 09:20:18.472357   46014 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:20:18.482359   46014 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:20:18.591489   46014 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:23:31.233787   46014 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-25z9m) labels changed node wide: true\nI0504 09:23:33.472360   46014 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:23:33.473850   46014 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:23:33.598345   46014 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:25:56.884714   46014 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0504 09:25:58.472356   46014 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:25:58.473674   46014 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:25:58.583498   46014 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:26:09.777956   46014 openshift-tuned.go:435] Pod (openshift-monitoring/kube-state-metrics-6d7fc6cb6d-k2kwb) labels changed node wide: true\nI0504 09:26:13.472353   46014 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:13.474660   46014 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:13.581532   46014 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:26:31.224687   46014 openshift-tuned.go:435] Pod (openshift-console/downloads-5877796c85-54jhp) labels changed node wide: true\nI0504 09:26:31.941400   46014 openshift-tuned.go:126] Received signal: terminated\n
May 04 09:28:10.737 E ns/openshift-monitoring pod/node-exporter-tkfw7 node/ip-10-0-137-68.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
May 04 09:28:10.737 E ns/openshift-monitoring pod/node-exporter-tkfw7 node/ip-10-0-137-68.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
May 04 09:28:10.956 E ns/openshift-image-registry pod/node-ca-xh5lw node/ip-10-0-137-68.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
May 04 09:28:14.856 E ns/openshift-dns pod/dns-default-ksqwd node/ip-10-0-137-68.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
May 04 09:28:14.856 E ns/openshift-dns pod/dns-default-ksqwd node/ip-10-0-137-68.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:17:02.798Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:17:02.798Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:17:02.798Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:26:03.991719       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 16943 (28899)\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:28:15.456 E ns/openshift-sdn pod/sdn-zw2c8 node/ip-10-0-137-68.us-east-2.compute.internal container=sdn container exited with code 255 (Error): luster-version-operator:metrics"\nI0504 09:26:30.188382   69439 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:26:30.188403   69439 proxier.go:346] userspace syncProxyRules took 25.730988ms\nI0504 09:26:30.883469   69439 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.128.0.60:6443 10.130.0.58:6443]\nI0504 09:26:30.883500   69439 roundrobin.go:240] Delete endpoint 10.130.0.58:6443 for service "openshift-authentication/oauth-openshift:https"\nI0504 09:26:30.981471   69439 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:26:30.981494   69439 proxier.go:346] userspace syncProxyRules took 25.8127ms\nI0504 09:26:31.681619   69439 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-service-catalog-apiserver-operator/metrics:https to [10.130.0.60:8443]\nI0504 09:26:31.681657   69439 roundrobin.go:240] Delete endpoint 10.130.0.60:8443 for service "openshift-service-catalog-apiserver-operator/metrics:https"\nI0504 09:26:31.778397   69439 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:26:31.778426   69439 proxier.go:346] userspace syncProxyRules took 25.707592ms\ninterrupt: Gracefully shutting down ...\nE0504 09:26:32.138901   69439 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0504 09:26:32.139011   69439 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:26:32.239264   69439 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:26:32.339355   69439 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:26:32.441956   69439 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
May 04 09:28:15.857 E ns/openshift-sdn pod/ovs-82vq5 node/ip-10-0-137-68.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): |00144|connmgr|INFO|br0<->unix#171: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:04.312Z|00145|bridge|INFO|bridge br0: deleted interface veth5dd5868d on port 9\n2020-05-04T09:26:04.386Z|00146|connmgr|INFO|br0<->unix#174: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:04.413Z|00147|bridge|INFO|bridge br0: deleted interface vethc975bbdb on port 12\n2020-05-04T09:26:04.467Z|00148|connmgr|INFO|br0<->unix#177: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:04.517Z|00149|bridge|INFO|bridge br0: deleted interface veth867e95c1 on port 7\n2020-05-04T09:26:04.583Z|00150|connmgr|INFO|br0<->unix#180: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:04.627Z|00151|bridge|INFO|bridge br0: deleted interface vethc9e38ff2 on port 10\n2020-05-04T09:26:04.681Z|00152|connmgr|INFO|br0<->unix#183: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:04.724Z|00153|bridge|INFO|bridge br0: deleted interface veth1b3d848a on port 3\n2020-05-04T09:26:04.771Z|00154|connmgr|INFO|br0<->unix#186: 4 flow_mods in the last 0 s (4 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:26:04.400Z|00019|jsonrpc|WARN|Dropped 4 log messages in last 401 seconds (most recently, 401 seconds ago) due to excessive rate\n2020-05-04T09:26:04.400Z|00020|jsonrpc|WARN|unix#139: receive error: Connection reset by peer\n2020-05-04T09:26:04.400Z|00021|reconnect|WARN|unix#139: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:26:04.806Z|00155|bridge|INFO|bridge br0: deleted interface vethbab78ea9 on port 15\n2020-05-04T09:26:27.158Z|00156|connmgr|INFO|br0<->unix#192: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:27.184Z|00157|bridge|INFO|bridge br0: deleted interface vethce595486 on port 4\n2020-05-04T09:26:27.243Z|00158|connmgr|INFO|br0<->unix#195: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:26:27.263Z|00159|bridge|INFO|bridge br0: deleted interface veth3acd396b on port 5\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
May 04 09:28:16.257 E ns/openshift-multus pod/multus-n7fln node/ip-10-0-137-68.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:28:16.654 E ns/openshift-machine-config-operator pod/machine-config-daemon-9r9mn node/ip-10-0-137-68.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
May 04 09:28:25.376 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): cheduling openshift-console/downloads-5877796c85-hcvxm: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure; retrying\nI0504 09:26:03.948134       1 scheduler.go:491] Failed to bind pod: openshift-monitoring/prometheus-operator-7d84969cdb-8tgps\nE0504 09:26:03.948193       1 factory.go:1519] Error scheduling openshift-monitoring/prometheus-operator-7d84969cdb-8tgps: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure; retrying\nW0504 09:26:03.972376       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 21105 (28899)\nW0504 09:26:03.972521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21107 (28899)\nE0504 09:26:04.002038       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.002248       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.011678       1 scheduler.go:448] scheduler cache AssumePod failed: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:04.011775       1 factory.go:1519] Error scheduling openshift-console/downloads-5877796c85-hcvxm: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed; retrying\nE0504 09:26:04.043964       1 scheduler.go:573] error assuming pod: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:46.671656       1 server.go:259] lost master\nI0504 09:26:46.672870       1 serving.go:88] Shutting down DynamicLoader\n
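The scheduler entries above show pod-binding calls rejected with "Operation cannot be fulfilled on pods/binding ..." (an API conflict, here triggered by an etcd leader change) and then retried. The sketch below demonstrates the generic client-go retry-on-conflict idiom with a fabricated bind function that fails once and then succeeds; it is not the scheduler's actual binding path.

    package main

    import (
        "errors"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        attempts := 0
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            attempts++
            if attempts == 1 {
                // Simulate the conflict seen in the log; the wrapped message
                // mirrors the etcd error but is fabricated here.
                return apierrors.NewConflict(
                    schema.GroupResource{Resource: "pods"},
                    "example-pod",
                    errors.New("etcdserver: request timed out"))
            }
            return nil // binding accepted on retry
        })
        fmt.Printf("bound after %d attempts, err=%v\n", attempts, err)
    }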
May 04 09:28:36.402 E ns/openshift-apiserver pod/apiserver-gwrmx node/ip-10-0-155-226.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): pdate addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.416609       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.416623       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.428851       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.925481       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0504 09:26:42.925632       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:26:42.925736       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.925863       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:42.941116       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:26:46.647424       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0504 09:26:46.647605       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0504 09:26:46.647711       1 serving.go:88] Shutting down DynamicLoader\nI0504 09:26:46.647795       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0504 09:26:46.647845       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0504 09:26:46.648740       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0504 09:26:46.648940       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0504 09:26:46.649368       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
May 04 09:28:36.796 E ns/openshift-dns pod/dns-default-mcs8c node/ip-10-0-155-226.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
May 04 09:28:36.796 E ns/openshift-dns pod/dns-default-mcs8c node/ip-10-0-155-226.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:17:46.417Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:17:46.417Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:17:46.417Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:26:03.993801       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 16943 (28899)\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:28:37.070 E ns/openshift-monitoring pod/kube-state-metrics-6d7fc6cb6d-wws48 node/ip-10-0-143-66.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
May 04 09:28:37.996 E ns/openshift-multus pod/multus-xwthg node/ip-10-0-155-226.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:28:38.238 E ns/openshift-monitoring pod/prometheus-adapter-7784fdcd8-2tjvb node/ip-10-0-143-66.us-east-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:28:38.395 E ns/openshift-machine-config-operator pod/machine-config-daemon-669hh node/ip-10-0-155-226.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
May 04 09:28:40.795 E ns/openshift-image-registry pod/node-ca-84tkg node/ip-10-0-155-226.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
May 04 09:28:40.839 E ns/openshift-monitoring pod/grafana-d7f95c845-n9xnr node/ip-10-0-143-66.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
May 04 09:28:41.396 E ns/openshift-cluster-node-tuning-operator pod/tuned-wqshb node/ip-10-0-155-226.us-east-2.compute.internal container=tuned container exited with code 255 (Error): var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:15.855811   54738 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:15.983136   54738 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:26:16.199317   54738 openshift-tuned.go:435] Pod (openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-676f7bf55f-qsh7k) labels changed node wide: true\nI0504 09:26:20.854378   54738 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:20.855708   54738 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:20.985770   54738 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:26:21.001144   54738 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-bcdbbd9c7-ln6x6) labels changed node wide: true\nI0504 09:26:25.854371   54738 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:25.855768   54738 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:25.998255   54738 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:26:41.272322   54738 openshift-tuned.go:435] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-155-226.us-east-2.compute.internal) labels changed node wide: true\nI0504 09:26:45.854392   54738 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:45.855769   54738 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:45.972881   54738 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:26:46.152420   54738 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-86c6b9fc48-bbhrv) labels changed node wide: true\n
May 04 09:28:47.352 E ns/openshift-machine-config-operator pod/machine-config-server-5pdfs node/ip-10-0-155-226.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
May 04 09:28:47.799 E ns/openshift-controller-manager pod/controller-manager-thjmp node/ip-10-0-155-226.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
May 04 09:28:48.204 E ns/openshift-sdn pod/sdn-controller-dm4d6 node/ip-10-0-155-226.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0504 09:17:31.901276       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
May 04 09:28:51.058 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-dd4967c6f-6msfm node/ip-10-0-140-155.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 19] Error scheduling openshift-monitoring/prometheus-operator-7d84969cdb-8tgps: Operation cannot be fulfilled on pods/binding \\\"prometheus-operator-7d84969cdb-8tgps\\\": etcdserver: request timed out, possibly due to previous leader failure; retrying\\nW0504 09:26:03.972376       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 21105 (28899)\\nW0504 09:26:03.972521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21107 (28899)\\nE0504 09:26:04.002038       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding \\\"prometheus-operator-7d84969cdb-8tgps\\\": etcdserver: request timed out, possibly due to previous leader failure\\nE0504 09:26:04.002248       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding \\\"downloads-5877796c85-hcvxm\\\": etcdserver: request timed out, possibly due to previous leader failure\\nE0504 09:26:04.011678       1 scheduler.go:448] scheduler cache AssumePod failed: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\\nE0504 09:26:04.011775       1 factory.go:1519] Error scheduling openshift-console/downloads-5877796c85-hcvxm: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed; retrying\\nE0504 09:26:04.043964       1 scheduler.go:573] error assuming pod: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\\nE0504 09:26:46.671656       1 server.go:259] lost master\\nI0504 09:26:46.672870       1 serving.go:88] Shutting down DynamicLoader\\n\""\nW0504 09:28:46.787373       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27667 (30713)\nI0504 09:28:48.272892       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:28:48.273062       1 leaderelection.go:65] leaderelection lost\n
May 04 09:28:51.645 E ns/openshift-cluster-machine-approver pod/machine-approver-54594fb676-qzcgw node/ip-10-0-140-155.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0504 09:05:30.374079       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0504 09:05:30.374278       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0504 09:05:30.374387       1 main.go:183] Starting Machine Approver\nI0504 09:05:30.474676       1 main.go:107] CSR csr-xs6ld added\nI0504 09:05:30.474702       1 main.go:110] CSR csr-xs6ld is already approved\nI0504 09:05:30.474717       1 main.go:107] CSR csr-zf6x9 added\nI0504 09:05:30.474723       1 main.go:110] CSR csr-zf6x9 is already approved\nI0504 09:05:30.474748       1 main.go:107] CSR csr-h6hrk added\nI0504 09:05:30.474756       1 main.go:110] CSR csr-h6hrk is already approved\nI0504 09:05:30.474768       1 main.go:107] CSR csr-nkknq added\nI0504 09:05:30.474776       1 main.go:110] CSR csr-nkknq is already approved\nI0504 09:05:30.474787       1 main.go:107] CSR csr-ssgzg added\nI0504 09:05:30.474796       1 main.go:110] CSR csr-ssgzg is already approved\nI0504 09:05:30.474822       1 main.go:107] CSR csr-x6267 added\nI0504 09:05:30.474832       1 main.go:110] CSR csr-x6267 is already approved\nI0504 09:05:30.474845       1 main.go:107] CSR csr-s78zz added\nI0504 09:05:30.474853       1 main.go:110] CSR csr-s78zz is already approved\nI0504 09:05:30.474863       1 main.go:107] CSR csr-sdhkh added\nI0504 09:05:30.474871       1 main.go:110] CSR csr-sdhkh is already approved\nI0504 09:05:30.474894       1 main.go:107] CSR csr-2dszp added\nI0504 09:05:30.474903       1 main.go:110] CSR csr-2dszp is already approved\nI0504 09:05:30.474914       1 main.go:107] CSR csr-5sv8r added\nI0504 09:05:30.474922       1 main.go:110] CSR csr-5sv8r is already approved\nI0504 09:05:30.474933       1 main.go:107] CSR csr-9c98v added\nI0504 09:05:30.474941       1 main.go:110] CSR csr-9c98v is already approved\nI0504 09:05:30.474952       1 main.go:107] CSR csr-s75cw added\nI0504 09:05:30.474960       1 main.go:110] CSR csr-s75cw is already approved\n
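The machine-approver log above simply reports, for every CSR it resyncs, that the request already carries an Approved condition. The sketch below shows that check using the current certificates/v1 types as an assumption (the 4.1-era approver used certificates/v1beta1); the CSR object is constructed by hand rather than fetched from a cluster.

    package main

    import (
        "fmt"

        certificatesv1 "k8s.io/api/certificates/v1"
    )

    // isApproved reports whether the CSR status already has an Approved condition.
    func isApproved(csr *certificatesv1.CertificateSigningRequest) bool {
        for _, c := range csr.Status.Conditions {
            if c.Type == certificatesv1.CertificateApproved {
                return true
            }
        }
        return false
    }

    func main() {
        csr := &certificatesv1.CertificateSigningRequest{
            Status: certificatesv1.CertificateSigningRequestStatus{
                Conditions: []certificatesv1.CertificateSigningRequestCondition{
                    {Type: certificatesv1.CertificateApproved},
                },
            },
        }
        fmt.Println("CSR csr-example is already approved:", isApproved(csr))
    }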
May 04 09:28:53.643 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-85846bb985-sk2tq node/ip-10-0-140-155.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): 24:59.484595       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25256 (28340)\nW0504 09:26:04.026976       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 19878 (28899)\nW0504 09:26:28.095614       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26832 (28774)\nI0504 09:26:41.403822       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"98a0c157-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-155-226.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0504 09:27:11.415355       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27209 (29843)\nW0504 09:28:00.888828       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26575 (30357)\nI0504 09:28:40.411927       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"98a0c157-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-155-226.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-155-226.us-east-2.compute.internal container=\"kube-apiserver-7\" is not ready"\nI0504 09:28:48.138853       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:28:48.138935       1 leaderelection.go:65] leaderelection lost\n
May 04 09:28:56.641 E ns/openshift-service-ca pod/service-serving-cert-signer-669c98947c-qcw6x node/ip-10-0-140-155.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
May 04 09:29:03.241 E ns/openshift-marketplace pod/marketplace-operator-bf7bb55b6-h6scv node/ip-10-0-140-155.us-east-2.compute.internal container=marketplace-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:29:05.842 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-756d9f9767-z6dfq node/ip-10-0-140-155.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ded with: too old resource version: 26663 (30046)\nW0504 09:28:15.234909       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26323 (30464)\nI0504 09:28:38.808849       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"98998f35-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-155-226.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-155-226.us-east-2.compute.internal container=\"kube-controller-manager-5\" is not ready"\nI0504 09:28:40.589695       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"98998f35-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-155-226.us-east-2.compute.internal -n openshift-kube-controller-manager because it was missing\nW0504 09:28:48.540470       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 16950 (31391)\nW0504 09:28:48.547639       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17704 (31391)\nW0504 09:28:48.599426       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 17704 (31392)\nI0504 09:28:53.344913       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:28:53.344981       1 leaderelection.go:65] leaderelection lost\n
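The tail of this log is the pattern shared by most operator exits in this run: SIGTERM arrives as the pod is rolled during the upgrade, the operator cancels the context driving its controllers and its leader-election loop, and the leader-election helper fatals with "leaderelection lost". The sketch below reproduces only the signal-to-cancel wiring; runControllers is a hypothetical placeholder, and the real operators fatal through klog (hence exit code 255) rather than log.Fatal.

    package main

    import (
        "context"
        "log"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

        go func() {
            <-sigs
            log.Println("Received SIGTERM or SIGINT signal, shutting down controller.")
            cancel() // cancels the controllers and the leader-election loop
        }()

        runControllers(ctx) // hypothetical stand-in for the operator's controller loop

        // In the real binaries this is klog's Fatalf inside client-go's
        // leaderelection helper once the canceled lease can no longer be renewed.
        log.Fatal("leaderelection lost")
    }

    // runControllers is a placeholder that blocks until the context is canceled.
    func runControllers(ctx context.Context) {
        <-ctx.Done()
    }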
May 04 09:29:07.041 E ns/openshift-service-ca pod/apiservice-cabundle-injector-664d7f9945-lcfhs node/ip-10-0-140-155.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
May 04 09:29:07.641 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7f6b59c446-bqgnv node/ip-10-0-140-155.us-east-2.compute.internal container=operator container exited with code 2 (Error): config/informers/externalversions/factory.go:101: Watch close - *v1.Build total 0 items received\nI0504 09:28:48.327719       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Image total 0 items received\nI0504 09:28:48.329960       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nW0504 09:28:48.403889       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Build ended with: too old resource version: 20129 (31379)\nW0504 09:28:48.418245       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17739 (31378)\nW0504 09:28:48.418508       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 18462 (31378)\nI0504 09:28:48.437167       1 reflector.go:357] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.OpenShiftControllerManager total 0 items received\nW0504 09:28:48.446347       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftControllerManager ended with: too old resource version: 23384 (31384)\nI0504 09:28:49.404995       1 reflector.go:169] Listing and watching *v1.Build from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0504 09:28:49.420548       1 reflector.go:169] Listing and watching *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0504 09:28:49.420764       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\nI0504 09:28:49.446682       1 reflector.go:169] Listing and watching *v1.OpenShiftControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\n
May 04 09:29:11.617 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-68.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): 
May 04 09:29:15.152 E ns/openshift-monitoring pod/node-exporter-sp6kx node/ip-10-0-143-66.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 
May 04 09:29:21.202 E ns/openshift-console pod/downloads-5877796c85-hcvxm node/ip-10-0-140-155.us-east-2.compute.internal container=download-server container exited with code 137 (Error): 
May 04 09:29:35.397 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): cheduling openshift-console/downloads-5877796c85-hcvxm: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure; retrying\nI0504 09:26:03.948134       1 scheduler.go:491] Failed to bind pod: openshift-monitoring/prometheus-operator-7d84969cdb-8tgps\nE0504 09:26:03.948193       1 factory.go:1519] Error scheduling openshift-monitoring/prometheus-operator-7d84969cdb-8tgps: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure; retrying\nW0504 09:26:03.972376       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 21105 (28899)\nW0504 09:26:03.972521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21107 (28899)\nE0504 09:26:04.002038       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.002248       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.011678       1 scheduler.go:448] scheduler cache AssumePod failed: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:04.011775       1 factory.go:1519] Error scheduling openshift-console/downloads-5877796c85-hcvxm: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed; retrying\nE0504 09:26:04.043964       1 scheduler.go:573] error assuming pod: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:46.671656       1 server.go:259] lost master\nI0504 09:26:46.672870       1 serving.go:88] Shutting down DynamicLoader\n
May 04 09:29:35.797 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0504 09:06:41.108436       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:06:41.108748       1 observer_polling.go:106] Starting file observer\nE0504 09:06:46.255666       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0504 09:06:46.278888       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0504 09:13:28.266160       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21699 (24130)\nW0504 09:22:47.275667       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24260 (27617)\n
May 04 09:29:35.797 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): icate: "kubelet-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2020-05-05 08:31:09 +0000 UTC (now=2020-05-04 09:06:41.284286434 +0000 UTC))\nI0504 09:06:41.284384       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2021-05-04 08:31:09 +0000 UTC (now=2020-05-04 09:06:41.284368056 +0000 UTC))\nI0504 09:06:41.284512       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2021-05-04 08:31:09 +0000 UTC (now=2020-05-04 09:06:41.284496105 +0000 UTC))\nI0504 09:06:41.292362       1 controllermanager.go:169] Version: v1.13.4+6458880\nI0504 09:06:41.294331       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1588581992" (2020-05-04 08:46:46 +0000 UTC to 2022-05-04 08:46:47 +0000 UTC (now=2020-05-04 09:06:41.294293205 +0000 UTC))\nI0504 09:06:41.294430       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1588581992" [] issuer="<self>" (2020-05-04 08:46:31 +0000 UTC to 2021-05-04 08:46:32 +0000 UTC (now=2020-05-04 09:06:41.294410615 +0000 UTC))\nI0504 09:06:41.294569       1 secure_serving.go:136] Serving securely on [::]:10257\nI0504 09:06:41.294696       1 serving.go:77] Starting DynamicLoader\nI0504 09:06:41.298610       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0504 09:26:46.860978       1 controllermanager.go:282] leaderelection lost\n
May 04 09:29:36.997 E ns/openshift-etcd pod/etcd-member-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-05-04 09:26:06.263917 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-05-04 09:26:06.264807 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-05-04 09:26:06.265425 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/05/04 09:26:06 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.155.226:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-05-04 09:26:07.279630 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
May 04 09:29:36.997 E ns/openshift-etcd pod/etcd-member-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): 90e (stream MsgApp v2 reader)\n2020-05-04 09:26:47.204366 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream MsgApp v2 reader)\n2020-05-04 09:26:47.208809 W | rafthttp: lost the TCP streaming connection with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:26:47.208913 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:26:47.208972 I | rafthttp: stopped peer 485e45819b05390e\n2020-05-04 09:26:47.209022 I | rafthttp: stopping peer 833c64a13478c60f...\n2020-05-04 09:26:47.213523 I | rafthttp: closed the TCP streaming connection with peer 833c64a13478c60f (stream MsgApp v2 writer)\n2020-05-04 09:26:47.213630 I | rafthttp: stopped streaming with peer 833c64a13478c60f (writer)\n2020-05-04 09:26:47.213941 I | rafthttp: closed the TCP streaming connection with peer 833c64a13478c60f (stream Message writer)\n2020-05-04 09:26:47.214057 I | rafthttp: stopped streaming with peer 833c64a13478c60f (writer)\n2020-05-04 09:26:47.214295 I | rafthttp: stopped HTTP pipelining with peer 833c64a13478c60f\n2020-05-04 09:26:47.214466 W | rafthttp: lost the TCP streaming connection with peer 833c64a13478c60f (stream MsgApp v2 reader)\n2020-05-04 09:26:47.214534 E | rafthttp: failed to read 833c64a13478c60f on stream MsgApp v2 (context canceled)\n2020-05-04 09:26:47.214586 I | rafthttp: peer 833c64a13478c60f became inactive (message send to peer failed)\n2020-05-04 09:26:47.214635 I | rafthttp: stopped streaming with peer 833c64a13478c60f (stream MsgApp v2 reader)\n2020-05-04 09:26:47.214752 W | rafthttp: lost the TCP streaming connection with peer 833c64a13478c60f (stream Message reader)\n2020-05-04 09:26:47.214820 I | rafthttp: stopped streaming with peer 833c64a13478c60f (stream Message reader)\n2020-05-04 09:26:47.214902 I | rafthttp: stopped peer 833c64a13478c60f\n2020-05-04 09:26:47.226750 E | rafthttp: failed to find member 485e45819b05390e in cluster 1f9941102e27d0ec\n2020-05-04 09:26:47.243332 E | rafthttp: failed to find member 833c64a13478c60f in cluster 1f9941102e27d0ec\n
May 04 09:29:37.398 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error):  controller.go:176] Shutting down kubernetes service endpoint reconciler\nI0504 09:26:46.683350       1 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick\nI0504 09:26:46.683469       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:26:46.683512       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nI0504 09:26:46.684018       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nW0504 09:26:46.683976       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd-1.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.155.226:2379: connect: connection refused". Reconnecting...\nW0504 09:26:46.684930       1 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\nI0504 09:26:46.687308       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:26:46.687530       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nI0504 09:26:46.687598       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>} {etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:2379 <nil>}]\nW0504 09:26:46.688671       1 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\n
May 04 09:29:37.398 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0504 09:06:41.178059       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:06:41.179399       1 observer_polling.go:106] Starting file observer\nW0504 09:15:27.293805       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21711 (24651)\nW0504 09:24:42.302168       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24788 (28270)\n
May 04 09:29:38.196 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-226.us-east-2.compute.internal node/ip-10-0-155-226.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): cheduling openshift-console/downloads-5877796c85-hcvxm: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure; retrying\nI0504 09:26:03.948134       1 scheduler.go:491] Failed to bind pod: openshift-monitoring/prometheus-operator-7d84969cdb-8tgps\nE0504 09:26:03.948193       1 factory.go:1519] Error scheduling openshift-monitoring/prometheus-operator-7d84969cdb-8tgps: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure; retrying\nW0504 09:26:03.972376       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 21105 (28899)\nW0504 09:26:03.972521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21107 (28899)\nE0504 09:26:04.002038       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "prometheus-operator-7d84969cdb-8tgps": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.002248       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "downloads-5877796c85-hcvxm": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.011678       1 scheduler.go:448] scheduler cache AssumePod failed: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:04.011775       1 factory.go:1519] Error scheduling openshift-console/downloads-5877796c85-hcvxm: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed; retrying\nE0504 09:26:04.043964       1 scheduler.go:573] error assuming pod: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:46.671656       1 server.go:259] lost master\nI0504 09:26:46.672870       1 serving.go:88] Shutting down DynamicLoader\n
May 04 09:29:55.752 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
May 04 09:30:11.023 E clusteroperator/kube-scheduler changed Degraded to True: StaticPodsDegradedError: StaticPodsDegraded: nodes/ip-10-0-155-226.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-155-226.us-east-2.compute.internal container="scheduler" is not ready\nStaticPodsDegraded: nodes/ip-10-0-155-226.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-155-226.us-east-2.compute.internal container="scheduler" is terminated: "Error" - "cheduling openshift-console/downloads-5877796c85-hcvxm: Operation cannot be fulfilled on pods/binding \"downloads-5877796c85-hcvxm\": etcdserver: request timed out, possibly due to previous leader failure; retrying\nI0504 09:26:03.948134       1 scheduler.go:491] Failed to bind pod: openshift-monitoring/prometheus-operator-7d84969cdb-8tgps\nE0504 09:26:03.948193       1 factory.go:1519] Error scheduling openshift-monitoring/prometheus-operator-7d84969cdb-8tgps: Operation cannot be fulfilled on pods/binding \"prometheus-operator-7d84969cdb-8tgps\": etcdserver: request timed out, possibly due to previous leader failure; retrying\nW0504 09:26:03.972376       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 21105 (28899)\nW0504 09:26:03.972521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21107 (28899)\nE0504 09:26:04.002038       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding \"prometheus-operator-7d84969cdb-8tgps\": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.002248       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding \"downloads-5877796c85-hcvxm\": etcdserver: request timed out, possibly due to previous leader failure\nE0504 09:26:04.011678       1 scheduler.go:448] scheduler cache AssumePod failed: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:04.011775       1 factory.go:1519] Error scheduling openshift-console/downloads-5877796c85-hcvxm: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed; retrying\nE0504 09:26:04.043964       1 scheduler.go:573] error assuming pod: pod 4266413a-8de9-11ea-8d13-020f38042d02 is in the cache, so can't be assumed\nE0504 09:26:46.671656       1 server.go:259] lost master\nI0504 09:26:46.672870       1 serving.go:88] Shutting down DynamicLoader\n"
May 04 09:30:55.752 E openshift-apiserver OpenShift API is not responding to GET requests
May 04 09:30:59.179 E ns/openshift-monitoring pod/node-exporter-sp6kx node/ip-10-0-143-66.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
May 04 09:30:59.387 E ns/openshift-cluster-node-tuning-operator pod/tuned-qtxs7 node/ip-10-0-143-66.us-east-2.compute.internal container=tuned container exited with code 255 (Error): 35] Pod (openshift-monitoring/kube-state-metrics-6d7fc6cb6d-wws48) labels changed node wide: true\nI0504 09:26:08.533725   44819 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:26:08.535617   44819 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:26:08.669845   44819 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:28:35.024831   44819 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0504 09:28:38.533682   44819 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:28:38.536327   44819 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:28:38.648158   44819 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:28:38.835413   44819 openshift-tuned.go:435] Pod (openshift-ingress/router-default-7544b76595-4x4rd) labels changed node wide: true\nI0504 09:28:43.533688   44819 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:28:43.536258   44819 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:28:43.648946   44819 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:28:44.431775   44819 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-7784fdcd8-2tjvb) labels changed node wide: true\nI0504 09:28:48.533719   44819 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:28:48.535068   44819 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:28:48.645213   44819 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:29:14.241240   44819 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-p562f/foo-bbgwn) labels changed node wide: true\n
May 04 09:31:03.472 E ns/openshift-sdn pod/sdn-249qg node/ip-10-0-143-66.us-east-2.compute.internal container=sdn container exited with code 255 (Error): check disconnected from OVS server: <nil>\nI0504 09:29:15.108848   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:15.149147   56423 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/node-exporter:https to [10.0.131.63:9100 10.0.137.68:9100 10.0.140.155:9100 10.0.151.195:9100 10.0.155.226:9100]\nI0504 09:29:15.149177   56423 roundrobin.go:240] Delete endpoint 10.0.143.66:9100 for service "openshift-monitoring/node-exporter:https"\ninterrupt: Gracefully shutting down ...\nI0504 09:29:15.223654   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nE0504 09:29:15.262937   56423 proxier.go:1350] Failed to execute iptables-restore: signal: terminated ()\nI0504 09:29:15.262991   56423 proxier.go:1352] Closing local ports after iptables-restore failure\nI0504 09:29:15.312967   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:15.393518   56423 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:29:15.393545   56423 proxier.go:346] userspace syncProxyRules took 130.534041ms\nI0504 09:29:15.410396   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:15.509140   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:15.614225   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:15.710948   56423 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
May 04 09:31:04.073 E ns/openshift-dns pod/dns-default-tsfh7 node/ip-10-0-143-66.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:17:23.477Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:17:23.477Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:17:23.477Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:28:48.419294       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17739 (31378)\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:31:04.073 E ns/openshift-dns pod/dns-default-tsfh7 node/ip-10-0-143-66.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (102) - No such process\n
May 04 09:31:04.606 E ns/openshift-sdn pod/ovs-gc2bp node/ip-10-0-143-66.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): on port 11\n2020-05-04T09:28:35.665Z|00132|connmgr|INFO|br0<->unix#188: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:35.701Z|00133|bridge|INFO|bridge br0: deleted interface vethca2f4559 on port 4\n2020-05-04T09:28:35.794Z|00134|connmgr|INFO|br0<->unix#191: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:35.872Z|00135|bridge|INFO|bridge br0: deleted interface veth0ba9cbe6 on port 8\n2020-05-04T09:28:35.938Z|00136|connmgr|INFO|br0<->unix#194: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:35.980Z|00137|bridge|INFO|bridge br0: deleted interface veth6844cf34 on port 13\n2020-05-04T09:28:36.071Z|00138|connmgr|INFO|br0<->unix#197: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:28:36.113Z|00139|connmgr|INFO|br0<->unix#200: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:36.147Z|00140|bridge|INFO|bridge br0: deleted interface veth0f4fc08a on port 14\n2020-05-04T09:28:36.202Z|00141|connmgr|INFO|br0<->unix#203: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:36.251Z|00142|bridge|INFO|bridge br0: deleted interface vethdbf4762e on port 6\n2020-05-04T09:28:36.301Z|00143|connmgr|INFO|br0<->unix#206: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:28:36.362Z|00144|connmgr|INFO|br0<->unix#209: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:36.407Z|00145|bridge|INFO|bridge br0: deleted interface vethe62bf48d on port 15\n2020-05-04T09:28:36.466Z|00146|connmgr|INFO|br0<->unix#212: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:36.506Z|00147|bridge|INFO|bridge br0: deleted interface vethde0fe2c9 on port 10\n2020-05-04T09:28:36.563Z|00148|connmgr|INFO|br0<->unix#215: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:36.609Z|00149|bridge|INFO|bridge br0: deleted interface vethea8ab891 on port 9\n2020-05-04T09:29:05.195Z|00150|connmgr|INFO|br0<->unix#221: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:29:05.215Z|00151|bridge|INFO|bridge br0: deleted interface vetha3b9f3af on port 5\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
May 04 09:31:04.872 E ns/openshift-multus pod/multus-pbv5m node/ip-10-0-143-66.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:31:05.471 E ns/openshift-machine-config-operator pod/machine-config-daemon-xgxjt node/ip-10-0-143-66.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
May 04 09:31:14.642 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0504 09:04:46.126264       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:04:46.140069       1 observer_polling.go:106] Starting file observer\nE0504 09:04:50.012051       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0504 09:04:50.015291       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0504 09:10:08.020527       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21699 (23196)\nW0504 09:17:02.025484       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23327 (25338)\nW0504 09:26:36.030581       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25644 (29027)\n
May 04 09:31:14.642 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): ta for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks": unable to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks"]\nI0504 09:29:23.015000       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/kube-state-metrics: Operation cannot be fulfilled on deployments.apps "kube-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nI0504 09:29:23.190286       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-operator: Operation cannot be fulfilled on deployments.apps "prometheus-operator": the object has been modified; please apply your changes to the latest version and try again\nI0504 09:29:25.714229       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-bcdbbd9c7, need 3, creating 1\nI0504 09:29:25.733749       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-bcdbbd9c7", UID:"d214012d-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"32144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-bcdbbd9c7-jbb6z\nW0504 09:29:26.637641       1 reflector.go:256] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\nI0504 09:29:27.994954       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nE0504 09:29:30.939187       1 controllermanager.go:282] leaderelection lost\nI0504 09:29:30.939228       1 serving.go:88] Shutting down DynamicLoader\n
May 04 09:31:19.788 E ns/openshift-marketplace pod/certified-operators-7d68f94555-9rkqz node/ip-10-0-151-195.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
May 04 09:31:22.516 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): t-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0504 09:04:50.119180       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0504 09:04:50.126390       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0504 09:04:50.134414       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0504 09:04:50.138318       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0504 09:04:50.158349       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope\nW0504 09:28:48.396565       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17739 (31379)\nW0504 09:28:48.403809       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17743 (31379)\nW0504 09:28:48.423281       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17739 (31379)\nE0504 09:29:31.069752       1 server.go:259] lost master\n
May 04 09:31:24.722 E ns/openshift-apiserver pod/apiserver-s25xv node/ip-10-0-140-155.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): 9 <nil>}]\nI0504 09:29:30.961488       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:29:30.984506       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0504 09:29:30.984625       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0504 09:29:30.984696       1 serving.go:88] Shutting down DynamicLoader\nI0504 09:29:30.984732       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0504 09:29:30.984791       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0504 09:29:30.986627       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0504 09:29:30.986942       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nE0504 09:29:30.987128       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0504 09:29:30.987306       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:29:30.987671       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:29:30.987881       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:29:30.988041       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:29:30.988235       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:29:30.988499       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0504 09:29:30.988581       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0504 09:29:30.988631       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
May 04 09:31:25.121 E ns/openshift-monitoring pod/node-exporter-g8plm node/ip-10-0-140-155.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
May 04 09:31:25.121 E ns/openshift-monitoring pod/node-exporter-g8plm node/ip-10-0-140-155.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
May 04 09:31:25.784 E ns/openshift-ingress pod/router-default-7544b76595-7b5b9 node/ip-10-0-151-195.us-east-2.compute.internal container=router container exited with code 2 (Error): 09:29:15.924014       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:29:20.927505       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:29:25.927666       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:29:30.926007       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:29:35.927560       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:29:40.945532       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:30:22.473746       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:30:27.466446       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:30:58.257322       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:31:03.255543       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:31:08.255477       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:31:13.260725       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0504 09:31:19.279107       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
May 04 09:31:25.918 E ns/openshift-image-registry pod/node-ca-gxzrx node/ip-10-0-140-155.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
May 04 09:31:26.316 E ns/openshift-dns pod/dns-default-z4q76 node/ip-10-0-140-155.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:16:30.703Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:16:30.703Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:16:30.703Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:26:46.799258       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Unexpected watch close - watch lasted less than a second and no items received\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:31:26.316 E ns/openshift-dns pod/dns-default-z4q76 node/ip-10-0-140-155.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (112) - No such process\n
May 04 09:31:26.918 E ns/openshift-sdn pod/sdn-jx69g node/ip-10-0-140-155.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 29:28.589565   71317 roundrobin.go:240] Delete endpoint 10.129.0.87:8080 for service "openshift-monitoring/cluster-monitoring-operator:http"\nI0504 09:29:28.737734   71317 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:29:28.737764   71317 proxier.go:346] userspace syncProxyRules took 53.118601ms\nI0504 09:29:28.988136   71317 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.61:8443 10.129.0.73:8443 10.130.0.52:8443]\nI0504 09:29:28.988183   71317 roundrobin.go:240] Delete endpoint 10.129.0.73:8443 for service "openshift-controller-manager/controller-manager:https"\nI0504 09:29:29.099989   71317 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:29:29.100031   71317 proxier.go:346] userspace syncProxyRules took 27.568408ms\nI0504 09:29:30.590465   71317 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics:https-metrics to [10.129.0.92:8081]\nI0504 09:29:30.590504   71317 roundrobin.go:240] Delete endpoint 10.129.0.92:8081 for service "openshift-operator-lifecycle-manager/catalog-operator-metrics:https-metrics"\nI0504 09:29:30.715108   71317 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:29:30.715133   71317 proxier.go:346] userspace syncProxyRules took 27.653493ms\ninterrupt: Gracefully shutting down ...\nE0504 09:29:31.402873   71317 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0504 09:29:31.403150   71317 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:31.503482   71317 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:29:31.603498   71317 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
May 04 09:31:27.718 E ns/openshift-machine-config-operator pod/machine-config-server-2cblq node/ip-10-0-140-155.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
May 04 09:31:28.916 E ns/openshift-cluster-node-tuning-operator pod/tuned-6qxnx node/ip-10-0-140-155.us-east-2.compute.internal container=tuned container exited with code 255 (Error): hift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:29:02.821624   52464 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:29:02.944624   52464 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:29:02.945403   52464 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5d7458899b-pb8cj) labels changed node wide: true\nI0504 09:29:07.819868   52464 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:29:07.821649   52464 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:29:07.947392   52464 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:29:07.947995   52464 openshift-tuned.go:435] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-7f6b59c446-bqgnv) labels changed node wide: true\nI0504 09:29:12.819883   52464 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:29:12.821260   52464 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:29:12.961124   52464 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:29:27.218344   52464 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-bcdbbd9c7-9hpxw) labels changed node wide: true\nI0504 09:29:27.819972   52464 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:29:27.822072   52464 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:29:27.966657   52464 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:29:30.646845   52464 openshift-tuned.go:435] Pod (openshift-console/downloads-5877796c85-hcvxm) labels changed node wide: true\n
May 04 09:31:29.316 E ns/openshift-multus pod/multus-2dmcc node/ip-10-0-140-155.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:31:30.115 E ns/openshift-machine-config-operator pod/machine-config-daemon-4bf8v node/ip-10-0-140-155.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
May 04 09:31:35.323 E ns/openshift-marketplace pod/community-operators-79b846779f-697n7 node/ip-10-0-143-66.us-east-2.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
May 04 09:31:36.725 E ns/openshift-sdn pod/sdn-controller-9hgp6 node/ip-10-0-140-155.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): mpting to acquire leader lease  openshift-sdn/openshift-network-controller...\nI0504 09:18:11.716043       1 leaderelection.go:214] successfully acquired lease openshift-sdn/openshift-network-controller\nI0504 09:18:11.716356       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-sdn", Name:"openshift-network-controller", UID:"a778ddb6-8de3-11ea-9966-022d4a28c552", APIVersion:"v1", ResourceVersion:"26194", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-140-155 became leader\nI0504 09:18:25.902433       1 master.go:57] Initializing SDN master of type "redhat/openshift-ovs-networkpolicy"\nI0504 09:18:25.926145       1 network_controller.go:49] Started OpenShift Network Controller\nE0504 09:26:28.306135       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0504 09:27:14.570647       1 memcache.go:141] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request\nE0504 09:27:17.642673       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0504 09:27:20.714733       1 memcache.go:141] couldn't get resource list for oauth.openshift.io/v1: the server is currently unable to handle the request\nE0504 09:27:23.786311       1 memcache.go:141] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request\nE0504 09:27:26.858456       1 memcache.go:141] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request\nW0504 09:28:48.395253       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 20590 (31379)\nW0504 09:28:48.419681       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 17739 (31378)\n
May 04 09:31:37.115 E ns/openshift-sdn pod/ovs-bjk8s node/ip-10-0-140-155.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): 020-05-04T09:28:51.968Z|00210|bridge|INFO|bridge br0: deleted interface vethd3389ec2 on port 9\n2020-05-04T09:28:52.279Z|00211|connmgr|INFO|br0<->unix#298: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:52.316Z|00212|bridge|INFO|bridge br0: deleted interface veth52e21038 on port 18\n2020-05-04T09:28:52.434Z|00213|connmgr|INFO|br0<->unix#301: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:28:51.776Z|00028|jsonrpc|WARN|unix#236: send error: Broken pipe\n2020-05-04T09:28:51.776Z|00029|reconnect|WARN|unix#236: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:28:52.467Z|00214|connmgr|INFO|br0<->unix#304: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:52.508Z|00215|bridge|INFO|bridge br0: deleted interface veth976a4652 on port 24\n2020-05-04T09:28:52.841Z|00216|connmgr|INFO|br0<->unix#310: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:52.865Z|00217|bridge|INFO|bridge br0: deleted interface veth0d1fa3bd on port 4\n2020-05-04T09:28:53.216Z|00218|connmgr|INFO|br0<->unix#313: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:53.240Z|00219|bridge|INFO|bridge br0: deleted interface veth5cd96add on port 12\n2020-05-04T09:28:53.601Z|00220|connmgr|INFO|br0<->unix#316: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:28:53.625Z|00221|bridge|INFO|bridge br0: deleted interface vethcf867dcf on port 6\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:28:53.606Z|00030|jsonrpc|WARN|unix#267: receive error: Connection reset by peer\n2020-05-04T09:28:53.607Z|00031|reconnect|WARN|unix#267: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:29:20.798Z|00222|connmgr|INFO|br0<->unix#320: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:29:20.830Z|00223|connmgr|INFO|br0<->unix#323: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:29:20.852Z|00224|bridge|INFO|bridge br0: deleted interface vethbf9f7b35 on port 22\nTerminated\nTerminated\n
May 04 09:31:37.516 E ns/openshift-controller-manager pod/controller-manager-dh6sv node/ip-10-0-140-155.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
May 04 09:31:41.752 E kube-apiserver Kube API started failing: Get https://api.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
May 04 09:31:53.703 E ns/openshift-machine-config-operator pod/machine-config-operator-6896df876c-jf74r node/ip-10-0-131-63.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
May 04 09:31:55.904 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-58945699cd-vzcv8 node/ip-10-0-131-63.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ot ready: 503\nAvailable: v1.route.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503"\nI0504 09:31:01.967677       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"98ba5d7b-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.apps.openshift.io is not ready: 503\nAvailable: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.route.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503" to "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.template.openshift.io is not ready: 503"\nI0504 09:31:05.120598       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"98ba5d7b-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.template.openshift.io is not ready: 503" to "Available: v1.image.openshift.io is not ready: 503"\nI0504 09:31:05.441996       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"98ba5d7b-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0504 09:31:51.908303       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:31:51.908376       1 leaderelection.go:65] leaderelection lost\n
May 04 09:31:59.503 E ns/openshift-console-operator pod/console-operator-6cf674d5b-4zgtl node/ip-10-0-131-63.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): onsole status"\ntime="2020-05-04T09:31:47Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-05-04T09:31:47Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-05-04T09:31:47Z" level=info msg="finished syncing operator \"cluster\" (204.364µs) \n\n"\ntime="2020-05-04T09:31:47Z" level=info msg="started syncing operator \"cluster\" (2020-05-04 09:31:47.911267714 +0000 UTC m=+1526.404092624)"\ntime="2020-05-04T09:31:47Z" level=info msg="console is in a managed state."\ntime="2020-05-04T09:31:47Z" level=info msg="running sync loop 4.0.0"\ntime="2020-05-04T09:31:47Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-05-04T09:31:48Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-05-04T09:31:48Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-05-04T09:31:48Z" level=info msg=-----------------------\ntime="2020-05-04T09:31:48Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-05-04T09:31:48Z" level=info msg=-----------------------\ntime="2020-05-04T09:31:48Z" level=info msg="deployment is available, ready replicas: 1 \n"\ntime="2020-05-04T09:31:48Z" level=info msg="sync_v400: updating console status"\ntime="2020-05-04T09:31:48Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-05-04T09:31:48Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-05-04T09:31:48Z" level=info msg="finished syncing operator \"cluster\" (36.933µs) \n\n"\nI0504 09:31:52.659515       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0504 09:31:52.659584       1 leaderelection.go:65] leaderelection lost\n
May 04 09:32:04.303 E ns/openshift-console pod/console-7fcd4d55bc-ww4xw node/ip-10-0-131-63.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/05/4 09:07:39 cmd/main: cookies are secure!\n2020/05/4 09:07:39 cmd/main: Binding to 0.0.0.0:8443...\n2020/05/4 09:07:39 cmd/main: using TLS\n
May 04 09:32:04.903 E ns/openshift-service-ca pod/configmap-cabundle-injector-58db8dc9cf-c94fh node/ip-10-0-131-63.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
May 04 09:32:26.503 E ns/openshift-operator-lifecycle-manager pod/packageserver-794b8fff88-nprtx node/ip-10-0-131-63.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): g to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:00Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:01Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:01Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:02Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:02Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:04Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:04Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:07Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:07Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:18Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:18Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
May 04 09:32:32.826 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0504 09:04:46.273691       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:04:46.274018       1 observer_polling.go:106] Starting file observer\nW0504 09:10:31.098781       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21711 (23300)\nW0504 09:19:02.103746       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23443 (26339)\nW0504 09:27:46.109568       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26524 (30219)\nE0504 09:29:31.090289       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?resourceVersion=30469&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0504 09:29:31.090607       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?resourceVersion=17739&timeout=7m16s&timeoutSeconds=436&watch=true: dial tcp [::1]:6443: connect: connection refused\n
May 04 09:32:32.826 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): losed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991197       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991212       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991342       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991358       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991487       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991502       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991658       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991680       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991815       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\nI0504 09:29:30.991833       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=6299, ErrCode=NO_ERROR, debug=""\n
May 04 09:32:34.027 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0504 09:04:46.126264       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:04:46.140069       1 observer_polling.go:106] Starting file observer\nE0504 09:04:50.012051       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0504 09:04:50.015291       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0504 09:10:08.020527       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21699 (23196)\nW0504 09:17:02.025484       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23327 (25338)\nW0504 09:26:36.030581       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25644 (29027)\n
May 04 09:32:34.027 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): ta for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks": unable to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks"]\nI0504 09:29:23.015000       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/kube-state-metrics: Operation cannot be fulfilled on deployments.apps "kube-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nI0504 09:29:23.190286       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-operator: Operation cannot be fulfilled on deployments.apps "prometheus-operator": the object has been modified; please apply your changes to the latest version and try again\nI0504 09:29:25.714229       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-bcdbbd9c7, need 3, creating 1\nI0504 09:29:25.733749       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-bcdbbd9c7", UID:"d214012d-8de3-11ea-9966-022d4a28c552", APIVersion:"apps/v1", ResourceVersion:"32144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-bcdbbd9c7-jbb6z\nW0504 09:29:26.637641       1 reflector.go:256] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\nI0504 09:29:27.994954       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nE0504 09:29:30.939187       1 controllermanager.go:282] leaderelection lost\nI0504 09:29:30.939228       1 serving.go:88] Shutting down DynamicLoader\n
May 04 09:32:34.830 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): t-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0504 09:04:50.119180       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0504 09:04:50.126390       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0504 09:04:50.134414       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0504 09:04:50.138318       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0504 09:04:50.158349       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope\nW0504 09:28:48.396565       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 17739 (31379)\nW0504 09:28:48.403809       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 17743 (31379)\nW0504 09:28:48.423281       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 17739 (31379)\nE0504 09:29:31.069752       1 server.go:259] lost master\n
May 04 09:32:35.231 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-05-04 09:28:57.483483 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-05-04 09:28:57.484683 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-05-04 09:28:57.485593 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/05/04 09:28:57 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.140.155:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-05-04 09:28:58.498803 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
May 04 09:32:35.231 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): MsgApp v2 (context canceled)\n2020-05-04 09:29:31.460464 I | rafthttp: peer 485e45819b05390e became inactive (message send to peer failed)\n2020-05-04 09:29:31.460479 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream MsgApp v2 reader)\n2020-05-04 09:29:31.460568 W | rafthttp: lost the TCP streaming connection with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:29:31.460588 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:29:31.460599 I | rafthttp: stopped peer 485e45819b05390e\n2020-05-04 09:29:31.460606 I | rafthttp: stopping peer a77455c5089fbd14...\n2020-05-04 09:29:31.460926 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 writer)\n2020-05-04 09:29:31.460942 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:29:31.461302 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream Message writer)\n2020-05-04 09:29:31.461319 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:29:31.461473 I | rafthttp: stopped HTTP pipelining with peer a77455c5089fbd14\n2020-05-04 09:29:31.461551 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:29:31.461570 E | rafthttp: failed to read a77455c5089fbd14 on stream MsgApp v2 (context canceled)\n2020-05-04 09:29:31.461579 I | rafthttp: peer a77455c5089fbd14 became inactive (message send to peer failed)\n2020-05-04 09:29:31.461587 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:29:31.461659 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:29:31.461697 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:29:31.461708 I | rafthttp: stopped peer a77455c5089fbd14\n2020-05-04 09:29:31.487543 E | rafthttp: failed to find member 485e45819b05390e in cluster 1f9941102e27d0ec\n
May 04 09:32:39.427 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-05-04 09:28:57.483483 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-05-04 09:28:57.484683 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-05-04 09:28:57.485593 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/05/04 09:28:57 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.140.155:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-05-04 09:28:58.498803 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
May 04 09:32:39.427 E ns/openshift-etcd pod/etcd-member-ip-10-0-140-155.us-east-2.compute.internal node/ip-10-0-140-155.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): MsgApp v2 (context canceled)\n2020-05-04 09:29:31.460464 I | rafthttp: peer 485e45819b05390e became inactive (message send to peer failed)\n2020-05-04 09:29:31.460479 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream MsgApp v2 reader)\n2020-05-04 09:29:31.460568 W | rafthttp: lost the TCP streaming connection with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:29:31.460588 I | rafthttp: stopped streaming with peer 485e45819b05390e (stream Message reader)\n2020-05-04 09:29:31.460599 I | rafthttp: stopped peer 485e45819b05390e\n2020-05-04 09:29:31.460606 I | rafthttp: stopping peer a77455c5089fbd14...\n2020-05-04 09:29:31.460926 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 writer)\n2020-05-04 09:29:31.460942 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:29:31.461302 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream Message writer)\n2020-05-04 09:29:31.461319 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:29:31.461473 I | rafthttp: stopped HTTP pipelining with peer a77455c5089fbd14\n2020-05-04 09:29:31.461551 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:29:31.461570 E | rafthttp: failed to read a77455c5089fbd14 on stream MsgApp v2 (context canceled)\n2020-05-04 09:29:31.461579 I | rafthttp: peer a77455c5089fbd14 became inactive (message send to peer failed)\n2020-05-04 09:29:31.461587 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:29:31.461659 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:29:31.461697 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:29:31.461708 I | rafthttp: stopped peer a77455c5089fbd14\n2020-05-04 09:29:31.487543 E | rafthttp: failed to find member 485e45819b05390e in cluster 1f9941102e27d0ec\n
May 04 09:32:49.558 E ns/openshift-operator-lifecycle-manager pod/packageserver-794b8fff88-jfw98 node/ip-10-0-140-155.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): ifecycle-manager\ntime="2020-05-04T09:32:30Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:30Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:30Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:30Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:30Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:30Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:30Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:30Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-05-04T09:32:33Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:33Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:40Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:32:40Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
May 04 09:33:40.524 E ns/openshift-image-registry pod/node-ca-22ktf node/ip-10-0-151-195.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
May 04 09:33:40.557 E ns/openshift-monitoring pod/node-exporter-f9vlk node/ip-10-0-151-195.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
May 04 09:33:40.557 E ns/openshift-monitoring pod/node-exporter-f9vlk node/ip-10-0-151-195.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
May 04 09:33:40.768 E ns/openshift-cluster-node-tuning-operator pod/tuned-c5nnw node/ip-10-0-151-195.us-east-2.compute.internal container=tuned container exited with code 255 (Error): 3406 openshift-tuned.go:722] Increasing resyncPeriod to 114\nI0504 09:31:25.085522   33406 openshift-tuned.go:187] Extracting tuned profiles\nI0504 09:31:25.087261   33406 openshift-tuned.go:623] Resync period to pull node/pod labels: 114 [s]\nI0504 09:31:25.100136   33406 openshift-tuned.go:435] Pod (openshift-marketplace/certified-operators-7d68f94555-9rkqz) labels changed node wide: true\nI0504 09:31:30.096823   33406 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:31:30.098343   33406 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0504 09:31:30.099558   33406 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:31:30.209862   33406 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:31:30.378055   33406 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-79b846779f-vkzsr) labels changed node wide: true\nI0504 09:31:35.096913   33406 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:31:35.098430   33406 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:31:35.206621   33406 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:31:55.444360   33406 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-p562f/foo-bdctz) labels changed node wide: true\nI0504 09:32:00.096861   33406 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:32:00.098352   33406 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:32:00.206396   33406 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0504 09:32:00.781692   33406 openshift-tuned.go:435] Pod (openshift-console/downloads-5877796c85-rf4dz) labels changed node wide: true\nI0504 09:32:01.110979   33406 openshift-tuned.go:126] Received signal: terminated\n
May 04 09:33:45.414 E ns/openshift-sdn pod/sdn-6svrp node/ip-10-0-151-195.us-east-2.compute.internal container=sdn container exited with code 255 (Error): \nI0504 09:31:53.097186   57302 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:31:53.097215   57302 proxier.go:346] userspace syncProxyRules took 26.577376ms\nI0504 09:31:53.215890   57302 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:31:53.215918   57302 proxier.go:346] userspace syncProxyRules took 31.176575ms\nI0504 09:31:57.420389   57302 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/olm-operators:grpc to [10.131.0.30:50051]\nI0504 09:31:57.420425   57302 roundrobin.go:240] Delete endpoint 10.131.0.30:50051 for service "openshift-operator-lifecycle-manager/olm-operators:grpc"\nI0504 09:31:57.519726   57302 proxier.go:367] userspace proxy: processing 0 service events\nI0504 09:31:57.519752   57302 proxier.go:346] userspace syncProxyRules took 26.124416ms\nE0504 09:32:01.007185   57302 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0504 09:32:01.007332   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0504 09:32:01.112879   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:32:01.211151   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:32:01.310467   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:32:01.410284   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0504 09:32:01.509351   57302 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
May 04 09:33:45.815 E ns/openshift-dns pod/dns-default-m2f4m node/ip-10-0-151-195.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:16:50.910Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:16:50.910Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:16:50.910Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:26:46.795272       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: watch of *v1.Endpoints ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Unexpected watch close - watch lasted less than a second and no items received\nW0504 09:26:46.795272       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Unexpected watch close - watch lasted less than a second and no items received\nW0504 09:26:46.858352       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21105 (28899)\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:33:45.815 E ns/openshift-dns pod/dns-default-m2f4m node/ip-10-0-151-195.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (135) - No such process\n
May 04 09:33:46.216 E ns/openshift-multus pod/multus-7nc5f node/ip-10-0-151-195.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:33:46.614 E ns/openshift-sdn pod/ovs-rkvkj node/ip-10-0-151-195.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): 24.801Z|00177|bridge|INFO|bridge br0: deleted interface vethbcda15c9 on port 19\n2020-05-04T09:31:24.873Z|00178|connmgr|INFO|br0<->unix#287: 4 flow_mods in the last 0 s (4 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:31:24.788Z|00023|jsonrpc|WARN|Dropped 8 log messages in last 682 seconds (most recently, 681 seconds ago) due to excessive rate\n2020-05-04T09:31:24.788Z|00024|jsonrpc|WARN|unix#212: receive error: Connection reset by peer\n2020-05-04T09:31:24.788Z|00025|reconnect|WARN|unix#212: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:31:24.909Z|00179|bridge|INFO|bridge br0: deleted interface vethd63d67a7 on port 9\n2020-05-04T09:31:45.718Z|00180|bridge|INFO|bridge br0: added interface veth794f7206 on port 21\n2020-05-04T09:31:45.747Z|00181|connmgr|INFO|br0<->unix#293: 5 flow_mods in the last 0 s (5 adds)\n2020-05-04T09:31:45.785Z|00182|connmgr|INFO|br0<->unix#296: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:31:45.725Z|00026|jsonrpc|WARN|unix#223: receive error: Connection reset by peer\n2020-05-04T09:31:45.725Z|00027|reconnect|WARN|unix#223: connection dropped (Connection reset by peer)\n2020-05-04T09:31:45.762Z|00028|jsonrpc|WARN|unix#226: receive error: Connection reset by peer\n2020-05-04T09:31:45.762Z|00029|reconnect|WARN|unix#226: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:31:53.240Z|00183|connmgr|INFO|br0<->unix#299: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:53.260Z|00184|bridge|INFO|bridge br0: deleted interface vethb0568764 on port 3\n2020-05-04T09:31:54.102Z|00185|connmgr|INFO|br0<->unix#302: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:31:54.129Z|00186|connmgr|INFO|br0<->unix#305: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:54.149Z|00187|bridge|INFO|bridge br0: deleted interface vetheac89f74 on port 14\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
May 04 09:33:47.614 E ns/openshift-machine-config-operator pod/machine-config-daemon-2shpp node/ip-10-0-151-195.us-east-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
May 04 09:33:49.413 E ns/openshift-operator-lifecycle-manager pod/olm-operators-85cj9 node/ip-10-0-151-195.us-east-2.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
May 04 09:33:54.491 E ns/openshift-operator-lifecycle-manager pod/packageserver-794b8fff88-z6db8 node/ip-10-0-155-226.us-east-2.compute.internal container=packageserver container exited with code 137 (Error): 80       1 log.go:172] http: TLS handshake error from 10.128.0.1:60234: remote error: tls: bad certificate\nI0504 09:33:21.652238       1 log.go:172] http: TLS handshake error from 10.128.0.1:60292: remote error: tls: bad certificate\nI0504 09:33:22.051349       1 log.go:172] http: TLS handshake error from 10.128.0.1:60322: remote error: tls: bad certificate\nI0504 09:33:22.451572       1 log.go:172] http: TLS handshake error from 10.128.0.1:60344: remote error: tls: bad certificate\nI0504 09:33:22.987307       1 wrap.go:47] GET /healthz: (134.402µs) 200 [kube-probe/1.13+ 10.129.0.1:40870]\nI0504 09:33:23.407271       1 wrap.go:47] GET /: (246.205µs) 200 [Go-http-client/2.0 10.129.0.1:59406]\nI0504 09:33:23.408131       1 wrap.go:47] GET /: (175.44µs) 200 [Go-http-client/2.0 10.128.0.1:56416]\nI0504 09:33:23.408444       1 wrap.go:47] GET /: (116.989µs) 200 [Go-http-client/2.0 10.128.0.1:56416]\nI0504 09:33:23.454773       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-05-04T09:33:30Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:33:30Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:33:33Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:33:33Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:33:49Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-05-04T09:33:49Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
May 04 09:34:09.175 E ns/openshift-apiserver pod/apiserver-8v5x6 node/ip-10-0-131-63.us-east-2.compute.internal container=openshift-apiserver container exited with code 255 (Error):        1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0504 09:32:16.434946       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:16.435059       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:32:16.435520       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0504 09:32:16.449148       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0504 09:32:26.044547       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0504 09:32:28.014591       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0504 09:32:28.014923       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0504 09:32:28.014993       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0504 09:32:28.015003       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0504 09:32:28.015017       1 serving.go:88] Shutting down DynamicLoader\nI0504 09:32:28.018942       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:28.020206       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:28.020477       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:28.020683       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:28.020856       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0504 09:32:28.021152       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
May 04 09:34:09.202 E ns/openshift-image-registry pod/node-ca-f5v47 node/ip-10-0-131-63.us-east-2.compute.internal container=node-ca container exited with code 255 (Error): 
May 04 09:34:09.394 E ns/openshift-monitoring pod/node-exporter-m86tc node/ip-10-0-131-63.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
May 04 09:34:09.394 E ns/openshift-monitoring pod/node-exporter-m86tc node/ip-10-0-131-63.us-east-2.compute.internal container=node-exporter container exited with code 255 (Error): 
May 04 09:34:18.278 E ns/openshift-controller-manager pod/controller-manager-m8cdp node/ip-10-0-131-63.us-east-2.compute.internal container=controller-manager container exited with code 255 (Error): 
May 04 09:34:19.086 E ns/openshift-cluster-node-tuning-operator pod/tuned-z7tjq node/ip-10-0-131-63.us-east-2.compute.internal container=tuned container exited with code 255 (Error): ofile...\nI0504 09:32:00.345709   52366 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:32:00.346596   52366 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-fb465fbd4-5m6tf) labels changed node wide: true\nI0504 09:32:05.120082   52366 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:32:05.121750   52366 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:32:05.252488   52366 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:32:06.502490   52366 openshift-tuned.go:435] Pod (openshift-service-ca/configmap-cabundle-injector-58db8dc9cf-c94fh) labels changed node wide: true\nI0504 09:32:10.120098   52366 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:32:10.121532   52366 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:32:10.246081   52366 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:32:17.700163   52366 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-bcdbbd9c7-h4czv) labels changed node wide: true\nI0504 09:32:20.120130   52366 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0504 09:32:20.121537   52366 openshift-tuned.go:326] Getting recommended profile...\nI0504 09:32:20.246515   52366 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0504 09:32:25.699592   52366 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-794b8fff88-rz66k) labels changed node wide: false\nI0504 09:32:26.700843   52366 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-794b8fff88-nprtx) labels changed node wide: true\n
May 04 09:34:19.487 E ns/openshift-dns pod/dns-default-z65v4 node/ip-10-0-131-63.us-east-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-05-04T09:16:06.800Z [INFO] CoreDNS-1.3.1\n2020-05-04T09:16:06.800Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-05-04T09:16:06.800Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0504 09:28:48.414079       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 17739 (31378)\nE0504 09:29:31.083795       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=31378&timeoutSeconds=454&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0504 09:29:31.083886       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: watch of *v1.Endpoints ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Unexpected watch close - watch lasted less than a second and no items received\n[INFO] SIGTERM: Shutting down servers then terminating\n
May 04 09:34:19.487 E ns/openshift-dns pod/dns-default-z65v4 node/ip-10-0-131-63.us-east-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (144) - No such process\n
May 04 09:34:19.877 E ns/openshift-sdn pod/ovs-g8r28 node/ip-10-0-131-63.us-east-2.compute.internal container=openvswitch container exited with code 255 (Error): 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:52.453Z|00188|bridge|INFO|bridge br0: deleted interface vethc329895a on port 12\n2020-05-04T09:31:52.829Z|00189|connmgr|INFO|br0<->unix#442: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:31:52.874Z|00190|connmgr|INFO|br0<->unix#445: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:52.917Z|00191|bridge|INFO|bridge br0: deleted interface veth778daff1 on port 22\n2020-05-04T09:31:53.075Z|00192|connmgr|INFO|br0<->unix#448: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:53.139Z|00193|bridge|INFO|bridge br0: deleted interface veth615c8762 on port 13\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-05-04T09:31:53.129Z|00028|jsonrpc|WARN|unix#317: send error: Broken pipe\n2020-05-04T09:31:53.130Z|00029|reconnect|WARN|unix#317: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-05-04T09:31:53.340Z|00194|connmgr|INFO|br0<->unix#451: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:53.375Z|00195|bridge|INFO|bridge br0: deleted interface vethd7042ba2 on port 5\n2020-05-04T09:31:53.431Z|00196|connmgr|INFO|br0<->unix#454: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:31:53.471Z|00197|connmgr|INFO|br0<->unix#457: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:31:53.496Z|00198|bridge|INFO|bridge br0: deleted interface veth2ae51435 on port 30\n2020-05-04T09:32:20.335Z|00199|connmgr|INFO|br0<->unix#463: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:32:20.374Z|00200|connmgr|INFO|br0<->unix#466: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:32:20.400Z|00201|bridge|INFO|bridge br0: deleted interface veth4cf0711f on port 29\n2020-05-04T09:32:20.887Z|00202|connmgr|INFO|br0<->unix#469: 2 flow_mods in the last 0 s (2 deletes)\n2020-05-04T09:32:20.919Z|00203|connmgr|INFO|br0<->unix#472: 4 flow_mods in the last 0 s (4 deletes)\n2020-05-04T09:32:20.946Z|00204|bridge|INFO|bridge br0: deleted interface veth121ad54e on port 28\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
May 04 09:34:20.275 E ns/openshift-sdn pod/sdn-controller-hgx7x node/ip-10-0-131-63.us-east-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0504 09:16:57.607398       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
May 04 09:34:29.077 E ns/openshift-multus pod/multus-k7kqx node/ip-10-0-131-63.us-east-2.compute.internal container=kube-multus container exited with code 255 (Error): 
May 04 09:34:29.678 E ns/openshift-machine-config-operator pod/machine-config-server-nx2wk node/ip-10-0-131-63.us-east-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
May 04 09:34:45.077 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=scheduler container exited with code 255 (Error):   1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0504 09:05:52.855369       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1588581992" (2020-05-04 08:46:46 +0000 UTC to 2022-05-04 08:46:47 +0000 UTC (now=2020-05-04 09:05:52.855342064 +0000 UTC))\nI0504 09:05:52.855409       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1588581992" [] issuer="<self>" (2020-05-04 08:46:31 +0000 UTC to 2021-05-04 08:46:32 +0000 UTC (now=2020-05-04 09:05:52.855393462 +0000 UTC))\nI0504 09:05:52.855434       1 secure_serving.go:136] Serving securely on [::]:10259\nI0504 09:05:52.855512       1 serving.go:77] Starting DynamicLoader\nI0504 09:05:53.757473       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0504 09:05:53.857710       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0504 09:05:53.857751       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0504 09:27:03.852915       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0504 09:28:48.581941       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 16950 (31392)\nW0504 09:31:45.683090       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 16943 (33950)\nW0504 09:31:45.672691       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 16943 (33950)\nE0504 09:32:27.884402       1 server.go:259] lost master\n
May 04 09:34:45.478 E ns/openshift-etcd pod/etcd-member-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-05-04 09:31:54.136501 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-05-04 09:31:54.137488 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-05-04 09:31:54.138109 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/05/04 09:31:54 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.131.63:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-cvzhtcyw-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-05-04 09:31:55.154045 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
May 04 09:34:45.478 E ns/openshift-etcd pod/etcd-member-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=etcd-member container exited with code 255 (Error): 833c64a13478c60f (writer)\n2020-05-04 09:32:28.334553 I | rafthttp: stopped HTTP pipelining with peer 833c64a13478c60f\n2020-05-04 09:32:28.334637 W | rafthttp: lost the TCP streaming connection with peer 833c64a13478c60f (stream MsgApp v2 reader)\n2020-05-04 09:32:28.334652 E | rafthttp: failed to read 833c64a13478c60f on stream MsgApp v2 (context canceled)\n2020-05-04 09:32:28.334661 I | rafthttp: peer 833c64a13478c60f became inactive (message send to peer failed)\n2020-05-04 09:32:28.334672 I | rafthttp: stopped streaming with peer 833c64a13478c60f (stream MsgApp v2 reader)\n2020-05-04 09:32:28.334773 W | rafthttp: lost the TCP streaming connection with peer 833c64a13478c60f (stream Message reader)\n2020-05-04 09:32:28.334788 I | rafthttp: stopped streaming with peer 833c64a13478c60f (stream Message reader)\n2020-05-04 09:32:28.334799 I | rafthttp: stopped peer 833c64a13478c60f\n2020-05-04 09:32:28.334809 I | rafthttp: stopping peer a77455c5089fbd14...\n2020-05-04 09:32:28.335069 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 writer)\n2020-05-04 09:32:28.335081 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:32:28.335511 I | rafthttp: closed the TCP streaming connection with peer a77455c5089fbd14 (stream Message writer)\n2020-05-04 09:32:28.335525 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (writer)\n2020-05-04 09:32:28.335781 I | rafthttp: stopped HTTP pipelining with peer a77455c5089fbd14\n2020-05-04 09:32:28.335881 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:32:28.335901 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream MsgApp v2 reader)\n2020-05-04 09:32:28.336003 W | rafthttp: lost the TCP streaming connection with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:32:28.336032 I | rafthttp: stopped streaming with peer a77455c5089fbd14 (stream Message reader)\n2020-05-04 09:32:28.336043 I | rafthttp: stopped peer a77455c5089fbd14\n
May 04 09:34:45.877 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0504 09:03:02.434863       1 observer_polling.go:106] Starting file observer\nI0504 09:03:02.435132       1 certsync_controller.go:269] Starting CertSyncer\nW0504 09:12:59.069024       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21711 (23985)\nW0504 09:21:53.075992       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24132 (27382)\nW0504 09:27:07.086795       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27510 (29818)\n
May 04 09:34:45.877 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): StreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.026751       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.026848       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.026917       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.027250       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.027707       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.027890       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.028096       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3753, ErrCode=NO_ERROR, debug=""\nI0504 09:32:28.159623       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0504 09:32:28.202523       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0504 09:32:28.202850       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0504 09:32:28.203047       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nI0504 09:32:28.203271       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\n
May 04 09:34:46.278 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0504 09:03:12.526438       1 certsync_controller.go:269] Starting CertSyncer\nI0504 09:03:12.526784       1 observer_polling.go:106] Starting file observer\nW0504 09:09:44.566206       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21699 (23078)\nW0504 09:15:07.571713       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23229 (24563)\nW0504 09:21:10.580948       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24693 (27172)\nW0504 09:26:16.591248       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27320 (28689)\n
May 04 09:34:46.278 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-63.us-east-2.compute.internal node/ip-10-0-131-63.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): icate: "kubelet-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2020-05-05 08:31:09 +0000 UTC (now=2020-05-04 09:03:12.974683472 +0000 UTC))\nI0504 09:03:12.974788       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2021-05-04 08:31:09 +0000 UTC (now=2020-05-04 09:03:12.974771215 +0000 UTC))\nI0504 09:03:12.974854       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-05-04 08:31:09 +0000 UTC to 2021-05-04 08:31:09 +0000 UTC (now=2020-05-04 09:03:12.974838999 +0000 UTC))\nI0504 09:03:12.981798       1 controllermanager.go:169] Version: v1.13.4+6458880\nI0504 09:03:12.983716       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1588581992" (2020-05-04 08:46:46 +0000 UTC to 2022-05-04 08:46:47 +0000 UTC (now=2020-05-04 09:03:12.983688297 +0000 UTC))\nI0504 09:03:12.983814       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1588581992" [] issuer="<self>" (2020-05-04 08:46:31 +0000 UTC to 2021-05-04 08:46:32 +0000 UTC (now=2020-05-04 09:03:12.983793989 +0000 UTC))\nI0504 09:03:12.983887       1 secure_serving.go:136] Serving securely on [::]:10257\nI0504 09:03:12.984093       1 serving.go:77] Starting DynamicLoader\nI0504 09:03:12.984456       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0504 09:32:27.860877       1 controllermanager.go:282] leaderelection lost\n
May 04 09:35:12.878 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-bcdbbd9c7-cmklt node/ip-10-0-155-226.us-east-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated