Result: FAILURE
Tests: 3 failed / 19 succeeded
Started: 2020-04-03 13:49
Elapsed: 1h46m
Work namespace: ci-op-xsts04cq
Refs: release-4.1:514189df 812:8d0c3f82
pod: ef65cab2-75b1-11ea-bcfa-0a58ac10463b
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (43m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
242 error level events were detected during this test run:

Apr 03 14:50:12.470 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-f64cf8785-pkwff node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): or.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13618 (14367)\nW0403 14:40:03.193303       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 12123 (13639)\nW0403 14:40:03.193443       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 12760 (14154)\nW0403 14:40:03.197112       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13178 (13790)\nW0403 14:40:03.197176       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 9840 (13631)\nW0403 14:40:03.197274       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 13018 (14166)\nW0403 14:40:03.197324       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 12067 (13639)\nW0403 14:45:11.212607       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14506 (15795)\nW0403 14:48:07.207081       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14506 (17092)\nW0403 14:49:33.199164       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14506 (17562)\nW0403 14:49:47.171121       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13890 (13992)\nI0403 14:50:11.804285       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 14:50:11.804350       1 leaderelection.go:65] leaderelection lost\n
Apr 03 14:50:24.503 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-78fff88985-r9jc6 node/ip-10-0-132-247.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): usteroperator/kube-scheduler changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 5"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 5" to "Available: 3 nodes are active; 3 nodes are at revision 5"\nI0403 14:39:08.463959       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"2e05c7b2-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-5 -n openshift-kube-scheduler: cause by changes in data.status\nI0403 14:39:14.270181       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"2e05c7b2-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-132-247.us-west-1.compute.internal -n openshift-kube-scheduler because it was missing\nW0403 14:43:50.405838       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13210 (13790)\nW0403 14:44:53.663307       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13632 (15694)\nW0403 14:45:01.268766       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13618 (15737)\nW0403 14:49:11.560920       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13987 (13988)\nI0403 14:50:23.614418       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 14:50:23.614483       1 leaderelection.go:65] leaderelection lost\nF0403 14:50:23.622927       1 builder.go:217] server exited\n
Apr 03 14:51:55.567 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7c68d95554-4vpml node/ip-10-0-132-247.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): 3.144625       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 9608 (14501)\nW0403 14:40:03.175179       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 9612 (13591)\nW0403 14:40:03.221860       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 12030 (13698)\nW0403 14:40:03.251680       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13226 (13790)\nW0403 14:40:03.273970       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 11884 (12373)\nW0403 14:40:03.276096       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13606 (14367)\nW0403 14:45:33.984742       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14503 (15890)\nW0403 14:48:26.080560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14506 (17174)\nW0403 14:48:40.067087       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13890 (13986)\nW0403 14:49:45.520560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14517 (17632)\nW0403 14:51:30.991714       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16022 (18259)\nI0403 14:51:54.897704       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 14:51:54.897878       1 leaderelection.go:65] leaderelection lost\n
Apr 03 14:52:07.614 E ns/openshift-machine-api pod/machine-api-operator-5c74464fbf-t4lnq node/ip-10-0-132-247.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 14:54:19.257 E ns/openshift-machine-api pod/machine-api-controllers-84d466ffd7-bvtnp node/ip-10-0-146-58.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 14:54:19.257 E ns/openshift-machine-api pod/machine-api-controllers-84d466ffd7-bvtnp node/ip-10-0-146-58.us-west-1.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 14:54:49.277 E ns/openshift-cluster-machine-approver pod/machine-approver-57dcb57969-bvnz6 node/ip-10-0-132-247.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): r sent GOAWAY and closed the connection; LastStreamID=33, ErrCode=NO_ERROR, debug=""\nE0403 14:38:14.282095       1 reflector.go:322] github.com/openshift/cluster-machine-approver/main.go:185: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=7042&timeoutSeconds=431&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 14:38:15.282850       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 14:51:49.355401       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""\nE0403 14:51:49.356027       1 reflector.go:322] github.com/openshift/cluster-machine-approver/main.go:185: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=13634&timeoutSeconds=319&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 14:51:50.356828       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 14:51:55.806266       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\n
Apr 03 14:55:23.961 E ns/openshift-monitoring pod/node-exporter-hjsvt node/ip-10-0-153-45.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 14:55:33.495 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-45.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 14:55:33.495 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-45.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 14:55:33.495 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-45.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 14:55:35.679 E ns/openshift-authentication-operator pod/authentication-operator-6fd4c785df-l87hf node/ip-10-0-128-97.us-west-1.compute.internal container=operator container exited with code 255 (Error): rce version: 13592 (19457)\nW0403 14:55:30.421056       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19724 (20317)\nW0403 14:55:30.421269       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 12373 (18532)\nW0403 14:55:30.443702       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 16254 (18529)\nW0403 14:55:30.518084       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 13591 (19228)\nW0403 14:55:30.518266       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 13590 (19454)\nW0403 14:55:30.518341       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 16660 (18529)\nW0403 14:55:30.545754       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 18789 (20137)\nW0403 14:55:30.645714       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 16133 (21287)\nW0403 14:55:30.678882       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 13592 (21288)\nW0403 14:55:30.681672       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 13591 (21290)\nI0403 14:55:34.588277       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 14:55:34.588338       1 leaderelection.go:65] leaderelection lost\n
Apr 03 14:55:38.656 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-695b959f7c-mv6fv node/ip-10-0-128-97.us-west-1.compute.internal container=operator container exited with code 2 (Error): client-go/informers/factory.go:132\nI0403 14:55:31.444847       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0403 14:55:31.444854       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 14:55:31.448259       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 14:55:31.528916       1 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132\nI0403 14:55:31.540613       1 request.go:530] Throttling request took 95.35538ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0\nI0403 14:55:31.740600       1 request.go:530] Throttling request took 292.206545ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver/configmaps?limit=500&resourceVersion=0\nI0403 14:55:31.940627       1 request.go:530] Throttling request took 480.812903ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 14:55:32.140645       1 request.go:530] Throttling request took 611.467522ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\nI0403 14:55:32.340610       1 request.go:530] Throttling request took 360.690762ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 14:55:32.350713       1 reflector.go:169] Listing and watching *v1.ServiceCatalogAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0403 14:55:32.540604       1 request.go:530] Throttling request took 181.309386ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 14:55:36.419713       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 14:55:38.661 E ns/openshift-monitoring pod/kube-state-metrics-9b8794db7-tl8qx node/ip-10-0-143-181.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 03 14:55:43.054 E ns/openshift-console-operator pod/console-operator-5cb446dc9d-dgg9q node/ip-10-0-128-97.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): onsole status"\ntime="2020-04-03T14:55:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T14:55:32Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T14:55:32Z" level=info msg="finished syncing operator \"cluster\" (212.554µs) \n\n"\ntime="2020-04-03T14:55:32Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 14:55:32.179150076 +0000 UTC m=+1279.932184928)"\ntime="2020-04-03T14:55:32Z" level=info msg="console is in a managed state."\ntime="2020-04-03T14:55:32Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T14:55:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T14:55:32Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T14:55:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T14:55:32Z" level=info msg=-----------------------\ntime="2020-04-03T14:55:32Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T14:55:32Z" level=info msg=-----------------------\ntime="2020-04-03T14:55:32Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T14:55:32Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T14:55:32Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T14:55:32Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T14:55:32Z" level=info msg="finished syncing operator \"cluster\" (43.846µs) \n\n"\nI0403 14:55:39.711668       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 14:55:39.711810       1 leaderelection.go:65] leaderelection lost\n
Apr 03 14:55:44.852 E ns/openshift-console pod/downloads-59d79f9796-9wnz9 node/ip-10-0-132-247.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:45.528 E ns/openshift-monitoring pod/telemeter-client-75bbf59d85-hdb9x node/ip-10-0-153-45.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 14:55:45.528 E ns/openshift-monitoring pod/telemeter-client-75bbf59d85-hdb9x node/ip-10-0-153-45.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.461 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-181.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:46.570 E ns/openshift-image-registry pod/node-ca-nb5wq node/ip-10-0-146-58.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:51.655 E ns/openshift-marketplace pod/certified-operators-7d448995d8-97qwm node/ip-10-0-143-181.us-west-1.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:51.724 E ns/openshift-monitoring pod/prometheus-adapter-594fc6b445-qf8xt node/ip-10-0-153-45.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 14:55:51.860 E ns/openshift-marketplace pod/community-operators-5765fb794-7cgrv node/ip-10-0-143-181.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:55:52.508 E ns/openshift-monitoring pod/node-exporter-ff2jf node/ip-10-0-132-247.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 14:56:02.824 E ns/openshift-monitoring pod/node-exporter-xwsr8 node/ip-10-0-143-181.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 14:56:05.488 E ns/openshift-cluster-node-tuning-operator pod/tuned-76cph node/ip-10-0-137-222.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ft-tuned.go:326] Getting recommended profile...\nI0403 14:47:07.398268    3085 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:55:00.282655    3085 openshift-tuned.go:435] Pod (openshift-console/downloads-5df57b9b8c-n29fb) labels changed node wide: true\nI0403 14:55:02.285083    3085 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:02.286747    3085 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:02.396481    3085 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:55:19.331596    3085 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-748d7d84fb-777k8) labels changed node wide: true\nI0403 14:55:22.285072    3085 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:22.286493    3085 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:22.395456    3085 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:55:23.669661    3085 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-7865cdd66-zrjft) labels changed node wide: true\nI0403 14:55:27.285032    3085 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:27.286368    3085 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:27.395855    3085 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0403 14:55:30.057384    3085 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""\nE0403 14:55:30.059001    3085 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 14:55:30.059021    3085 openshift-tuned.go:722] Increasing resyncPeriod to 108\n
Apr 03 14:56:08.329 E ns/openshift-operator-lifecycle-manager pod/packageserver-cbfb54d8f-fbxdq node/ip-10-0-132-247.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:56:08.671 E ns/openshift-ingress pod/router-default-6b6cdd758b-2lgw4 node/ip-10-0-153-45.us-west-1.compute.internal container=router container exited with code 2 (Error): 3 14:55:27.369658       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0403 14:55:30.058879       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=195, ErrCode=NO_ERROR, debug=""\nE0403 14:55:30.058983       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=195, ErrCode=NO_ERROR, debug=""\nE0403 14:55:30.059294       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=195, ErrCode=NO_ERROR, debug=""\nW0403 14:55:30.116195       1 reflector.go:341] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: too old resource version: 16657 (19483)\nI0403 14:55:32.595260       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:55:37.576734       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:55:42.574016       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:55:47.571942       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:55:52.573528       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:55:58.842941       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 14:56:03.816237       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 14:56:14.810 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-45.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 14:56:17.819 E ns/openshift-service-ca pod/apiservice-cabundle-injector-b5d5f6d7b-9sztl node/ip-10-0-128-97.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 14:56:17.941 E ns/openshift-service-ca pod/service-serving-cert-signer-7d4fb798f8-szms7 node/ip-10-0-146-58.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 03 14:56:18.334 E ns/openshift-service-ca pod/configmap-cabundle-injector-7c6bc968d8-2fxl8 node/ip-10-0-146-58.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 14:56:18.734 E ns/openshift-monitoring pod/node-exporter-fmjqg node/ip-10-0-146-58.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 14:56:27.751 E ns/openshift-marketplace pod/redhat-operators-99f485f94-t8hf8 node/ip-10-0-153-45.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 14:56:30.278 E ns/openshift-monitoring pod/node-exporter-c4lq9 node/ip-10-0-137-222.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 14:56:31.794 E ns/openshift-cluster-node-tuning-operator pod/tuned-xwsgm node/ip-10-0-132-247.us-west-1.compute.internal container=tuned container exited with code 143 (Error): tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 14:54:50.256489   19065 openshift-tuned.go:435] Pod (openshift-cluster-machine-approver/machine-approver-57dcb57969-bvnz6) labels changed node wide: true\nI0403 14:54:51.308738   19065 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:54:51.310334   19065 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:54:51.435333   19065 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 14:55:00.248007   19065 openshift-tuned.go:435] Pod (openshift-machine-api/cluster-autoscaler-operator-864d74689-86sz9) labels changed node wide: true\nI0403 14:55:01.308781   19065 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:01.310594   19065 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:01.426749   19065 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 14:55:21.568468   19065 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-64f5bbc949-5zkcs) labels changed node wide: true\nI0403 14:55:26.308766   19065 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:26.310794   19065 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:26.456743   19065 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0403 14:55:30.054722   19065 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=33, ErrCode=NO_ERROR, debug=""\nE0403 14:55:30.078833   19065 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 14:55:30.079054   19065 openshift-tuned.go:722] Increasing resyncPeriod to 112\n
Apr 03 14:56:39.784 E ns/openshift-marketplace pod/certified-operators-ff46f65f7-tfhl9 node/ip-10-0-153-45.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 14:56:45.131 E ns/openshift-cluster-node-tuning-operator pod/tuned-8n5fd node/ip-10-0-143-181.us-west-1.compute.internal container=tuned container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:56:47.928 E ns/openshift-controller-manager pod/controller-manager-rh644 node/ip-10-0-128-97.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 14:56:48.649 E ns/openshift-marketplace pod/community-operators-667586cfc4-ppjfr node/ip-10-0-137-222.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 14:56:49.669 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-222.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 14:56:52.815 E ns/openshift-cluster-node-tuning-operator pod/tuned-5ncmc node/ip-10-0-153-45.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  openshift-tuned.go:326] Getting recommended profile...\nI0403 14:46:41.415924    2715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:46:43.190758    2715 openshift-tuned.go:435] Pod (e2e-tests-service-upgrade-hbt2s/service-test-c24fv) labels changed node wide: true\nI0403 14:46:46.276244    2715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:46:46.277759    2715 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:46:46.398979    2715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:55:17.579806    2715 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-56b5bcc7f4-b4q68) labels changed node wide: true\nI0403 14:55:21.276221    2715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:21.277564    2715 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:21.399637    2715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 14:55:24.570248    2715 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-hjsvt) labels changed node wide: true\nI0403 14:55:26.276219    2715 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 14:55:26.277693    2715 openshift-tuned.go:326] Getting recommended profile...\nI0403 14:55:26.397761    2715 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0403 14:55:30.066563    2715 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""\nE0403 14:55:30.072422    2715 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 14:55:30.072446    2715 openshift-tuned.go:722] Increasing resyncPeriod to 102\n
Apr 03 14:57:13.960 E ns/openshift-authentication pod/oauth-openshift-5994b57d8-7jbt2 node/ip-10-0-132-247.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:57:42.032 E ns/openshift-controller-manager pod/controller-manager-4f42k node/ip-10-0-132-247.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 14:57:53.064 E ns/openshift-console pod/console-c844bc495-n86d5 node/ip-10-0-132-247.us-west-1.compute.internal container=console container exited with code 2 (Error): nd\n2020/04/3 14:36:31 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 14:36:41 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 14:36:51 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 14:37:01 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/04/3 14:37:11 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/04/3 14:37:21 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/04/3 14:37:31 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/04/3 14:37:41 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 14:37:41 cmd/main: using TLS\n
Apr 03 14:58:29.213 E ns/openshift-controller-manager pod/controller-manager-82f54 node/ip-10-0-146-58.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 14:59:13.439 E ns/openshift-dns pod/dns-default-8mnqz node/ip-10-0-132-247.us-west-1.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:59:13.439 E ns/openshift-dns pod/dns-default-8mnqz node/ip-10-0-132-247.us-west-1.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 14:59:56.418 E ns/openshift-multus pod/multus-sl2gv node/ip-10-0-143-181.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 15:00:04.577 E ns/openshift-sdn pod/sdn-controller-pwslx node/ip-10-0-132-247.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error):       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 9840 (12372)\nW0403 14:38:04.173564       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 9833 (13529)\nW0403 14:38:04.174575       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 6991 (13529)\nI0403 14:46:39.555624       1 vnids.go:115] Allocated netid 4129744 for namespace "e2e-tests-sig-apps-replicaset-upgrade-69lzn"\nI0403 14:46:39.563395       1 vnids.go:115] Allocated netid 1862386 for namespace "e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-9nrv4"\nI0403 14:46:39.569124       1 vnids.go:115] Allocated netid 1954848 for namespace "e2e-tests-sig-apps-daemonset-upgrade-9m87z"\nI0403 14:46:39.579099       1 vnids.go:115] Allocated netid 3705282 for namespace "e2e-tests-sig-apps-job-upgrade-v29lk"\nI0403 14:46:39.588955       1 vnids.go:115] Allocated netid 5192778 for namespace "e2e-tests-service-upgrade-hbt2s"\nI0403 14:46:39.628781       1 vnids.go:115] Allocated netid 6896358 for namespace "e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-jvkrb"\nI0403 14:46:39.640533       1 vnids.go:115] Allocated netid 7707540 for namespace "e2e-tests-sig-apps-deployment-upgrade-pdbls"\nW0403 14:55:26.746319       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 16378 (19198)\nW0403 14:55:26.748786       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 16610 (18529)\nW0403 14:55:26.748948       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 13529 (19198)\n
Apr 03 15:00:06.491 E ns/openshift-sdn pod/ovs-k5x6l node/ip-10-0-146-58.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): T14:59:51.350Z|00371|connmgr|INFO|br0<->unix#939: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.374Z|00372|connmgr|INFO|br0<->unix#942: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.408Z|00373|connmgr|INFO|br0<->unix#945: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.438Z|00374|connmgr|INFO|br0<->unix#948: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.471Z|00375|connmgr|INFO|br0<->unix#951: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.499Z|00376|connmgr|INFO|br0<->unix#954: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T14:59:51.579Z|00377|connmgr|INFO|br0<->unix#957: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T14:59:51.619Z|00378|connmgr|INFO|br0<->unix#960: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T14:59:51.647Z|00379|connmgr|INFO|br0<->unix#963: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T14:59:51.676Z|00380|connmgr|INFO|br0<->unix#966: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T14:59:51.712Z|00381|connmgr|INFO|br0<->unix#969: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T14:59:51.740Z|00382|connmgr|INFO|br0<->unix#972: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T14:59:51.763Z|00383|connmgr|INFO|br0<->unix#975: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T14:59:51.799Z|00384|connmgr|INFO|br0<->unix#978: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T14:59:51.829Z|00385|connmgr|INFO|br0<->unix#981: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T14:59:51.856Z|00386|connmgr|INFO|br0<->unix#984: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T14:59:52.969Z|00387|connmgr|INFO|br0<->unix#987: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T14:59:52.996Z|00388|bridge|INFO|bridge br0: deleted interface veth7c3ad885 on port 3\n2020-04-03T15:00:03.172Z|00389|bridge|INFO|bridge br0: added interface vethe602f554 on port 60\n2020-04-03T15:00:03.204Z|00390|connmgr|INFO|br0<->unix#990: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T15:00:03.247Z|00391|connmgr|INFO|br0<->unix#993: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 15:00:09.505 E ns/openshift-sdn pod/sdn-pjlls node/ip-10-0-146-58.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.453459   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.553462   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.653468   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.753457   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.853430   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:07.953481   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:08.053729   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:08.153531   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:08.253481   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:08.354308   77380 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:08.458446   77380 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 15:00:08.458521   77380 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 15:00:37.601 E ns/openshift-sdn pod/sdn-controller-z24kq node/ip-10-0-128-97.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 14:27:33.179727       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 15:00:39.244 E ns/openshift-sdn pod/ovs-76trf node/ip-10-0-153-45.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 8Z|00142|connmgr|INFO|br0<->unix#389: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:00:11.858Z|00143|bridge|INFO|bridge br0: deleted interface veth58f11466 on port 3\n2020-04-03T15:00:11.861Z|00144|bridge|WARN|could not open network device veth58f11466 (No such device)\n2020-04-03T15:00:12.254Z|00145|connmgr|INFO|br0<->unix#421: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T15:00:12.676Z|00146|connmgr|INFO|br0<->unix#427: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:12.708Z|00147|connmgr|INFO|br0<->unix#430: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:12.733Z|00148|connmgr|INFO|br0<->unix#433: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:12.756Z|00149|connmgr|INFO|br0<->unix#436: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:12.784Z|00150|connmgr|INFO|br0<->unix#439: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:12.807Z|00151|connmgr|INFO|br0<->unix#442: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:12.834Z|00152|connmgr|INFO|br0<->unix#445: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:12.864Z|00153|connmgr|INFO|br0<->unix#448: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:12.887Z|00154|connmgr|INFO|br0<->unix#451: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:12.917Z|00155|connmgr|INFO|br0<->unix#454: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:24.340Z|00156|bridge|WARN|could not open network device veth58f11466 (No such device)\n2020-04-03T15:00:24.362Z|00157|bridge|WARN|could not open network device veth58f11466 (No such device)\n2020-04-03T15:00:24.375Z|00158|bridge|INFO|bridge br0: added interface veth62f032f3 on port 24\n2020-04-03T15:00:24.378Z|00159|bridge|WARN|could not open network device veth58f11466 (No such device)\n2020-04-03T15:00:24.385Z|00160|bridge|WARN|could not open network device veth58f11466 (No such device)\n2020-04-03T15:00:24.407Z|00161|connmgr|INFO|br0<->unix#457: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T15:00:24.444Z|00162|connmgr|INFO|br0<->unix#460: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 15:00:41.255 E ns/openshift-sdn pod/sdn-xvm9g node/ip-10-0-153-45.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.107340   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.207346   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.307451   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.407347   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.507333   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.607397   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.707430   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.807418   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:40.907406   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:41.007476   60952 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:00:41.112094   60952 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 15:00:41.112168   60952 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 15:00:50.276 E ns/openshift-multus pod/multus-qkpng node/ip-10-0-153-45.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 15:01:10.668 E ns/openshift-sdn pod/sdn-controller-kgszt node/ip-10-0-146-58.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 14:28:01.255624       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 15:01:11.751 E ns/openshift-sdn pod/ovs-b7k7f node/ip-10-0-128-97.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): -03T14:59:29.869Z|00360|bridge|INFO|bridge br0: deleted interface vethfcdd24c5 on port 3\n2020-04-03T14:59:46.360Z|00361|bridge|INFO|bridge br0: added interface vethe412da6e on port 60\n2020-04-03T14:59:46.393Z|00362|connmgr|INFO|br0<->unix#901: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T14:59:46.436Z|00363|connmgr|INFO|br0<->unix#904: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:00:47.818Z|00364|connmgr|INFO|br0<->unix#916: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T15:00:47.934Z|00365|connmgr|INFO|br0<->unix#922: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:47.964Z|00366|connmgr|INFO|br0<->unix#925: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:47.997Z|00367|connmgr|INFO|br0<->unix#928: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:48.026Z|00368|connmgr|INFO|br0<->unix#931: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:48.056Z|00369|connmgr|INFO|br0<->unix#934: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:48.082Z|00370|connmgr|INFO|br0<->unix#937: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:48.351Z|00371|connmgr|INFO|br0<->unix#940: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:48.383Z|00372|connmgr|INFO|br0<->unix#943: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:48.415Z|00373|connmgr|INFO|br0<->unix#946: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:48.447Z|00374|connmgr|INFO|br0<->unix#949: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:48.474Z|00375|connmgr|INFO|br0<->unix#952: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:48.501Z|00376|connmgr|INFO|br0<->unix#955: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:48.529Z|00377|connmgr|INFO|br0<->unix#958: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:48.555Z|00378|connmgr|INFO|br0<->unix#961: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:48.583Z|00379|connmgr|INFO|br0<->unix#964: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:48.610Z|00380|connmgr|INFO|br0<->unix#967: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 15:01:22.743 E ns/openshift-sdn pod/sdn-55kv2 node/ip-10-0-128-97.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:20.805116   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:20.905124   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.005109   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.105167   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.205107   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.306223   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.405196   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.505088   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.605121   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.705239   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:21.705336   68348 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0403 15:01:21.705349   68348 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 03 15:01:28.557 E ns/openshift-operator-lifecycle-manager pod/packageserver-cbfb54d8f-7l9fg node/ip-10-0-132-247.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:01:29.694 E ns/openshift-operator-lifecycle-manager pod/packageserver-ccc458ccf-smr2q node/ip-10-0-132-247.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:01:38.748 E ns/openshift-multus pod/multus-hs9tq node/ip-10-0-146-58.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 15:01:54.070 E ns/openshift-sdn pod/ovs-rs2b6 node/ip-10-0-132-247.us-west-1.compute.internal container=openvswitch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:02:00.930 E ns/openshift-sdn pod/sdn-k8h7j node/ip-10-0-132-247.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.385975   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.486074   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.586013   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.686053   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.785994   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.886078   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:01:59.986017   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:00.086011   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:00.186049   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:00.286029   75444 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:00.391024   75444 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 15:02:00.391088   75444 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 15:02:20.976 E ns/openshift-operator-lifecycle-manager pod/packageserver-79987454bd-9298w node/ip-10-0-128-97.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): dshake error from 10.128.0.1:57444: remote error: tls: bad certificate\nI0403 15:01:44.339380       1 log.go:172] http: TLS handshake error from 10.128.0.1:57446: remote error: tls: bad certificate\nI0403 15:01:44.739645       1 log.go:172] http: TLS handshake error from 10.128.0.1:57450: remote error: tls: bad certificate\nI0403 15:01:45.941008       1 log.go:172] http: TLS handshake error from 10.128.0.1:57462: remote error: tls: bad certificate\nI0403 15:01:46.030280       1 log.go:172] http: TLS handshake error from 10.128.0.1:57464: remote error: tls: bad certificate\nI0403 15:01:46.340098       1 log.go:172] http: TLS handshake error from 10.128.0.1:57466: remote error: tls: bad certificate\nI0403 15:01:46.523481       1 wrap.go:47] GET /healthz: (102.335µs) 200 [kube-probe/1.13+ 10.129.0.1:49586]\nI0403 15:01:46.568328       1 log.go:172] http: TLS handshake error from 10.128.0.1:57468: remote error: tls: bad certificate\nI0403 15:01:46.610763       1 wrap.go:47] GET /healthz: (1.602308ms) 200 [kube-probe/1.13+ 10.129.0.1:49588]\nI0403 15:01:47.141189       1 log.go:172] http: TLS handshake error from 10.128.0.1:57474: remote error: tls: bad certificate\nI0403 15:01:47.389780       1 wrap.go:47] GET /: (333.64µs) 200 [Go-http-client/2.0 10.129.0.1:44278]\nI0403 15:01:48.339698       1 log.go:172] http: TLS handshake error from 10.128.0.1:57496: remote error: tls: bad certificate\nI0403 15:01:48.739381       1 log.go:172] http: TLS handshake error from 10.128.0.1:57498: remote error: tls: bad certificate\nI0403 15:01:49.539851       1 log.go:172] http: TLS handshake error from 10.128.0.1:57506: remote error: tls: bad certificate\nI0403 15:01:49.760713       1 wrap.go:47] GET /: (2.129441ms) 200 [Go-http-client/2.0 10.128.0.1:51528]\nI0403 15:01:49.764718       1 wrap.go:47] GET /: (8.525891ms) 200 [Go-http-client/2.0 10.128.0.1:51528]\nI0403 15:01:49.765092       1 wrap.go:47] GET /: (6.966659ms) 200 [Go-http-client/2.0 10.130.0.1:38482]\nI0403 15:01:49.826115       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 03 15:02:31.288 E ns/openshift-sdn pod/ovs-sz22p node/ip-10-0-137-222.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 68: receive error: Connection reset by peer\n2020-04-03T15:00:32.550Z|00039|reconnect|WARN|unix#268: connection dropped (Connection reset by peer)\n2020-04-03T15:00:32.555Z|00040|jsonrpc|WARN|unix#269: receive error: Connection reset by peer\n2020-04-03T15:00:32.555Z|00041|reconnect|WARN|unix#269: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:00:40.460Z|00143|bridge|INFO|bridge br0: added interface vethc763d83c on port 22\n2020-04-03T15:00:40.488Z|00144|connmgr|INFO|br0<->unix#424: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T15:00:40.524Z|00145|connmgr|INFO|br0<->unix#427: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:01:21.712Z|00146|connmgr|INFO|br0<->unix#439: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T15:01:21.802Z|00147|connmgr|INFO|br0<->unix#445: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:01:21.825Z|00148|connmgr|INFO|br0<->unix#448: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:01:21.858Z|00149|connmgr|INFO|br0<->unix#451: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:01:22.146Z|00150|connmgr|INFO|br0<->unix#454: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:01:22.174Z|00151|connmgr|INFO|br0<->unix#457: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:01:22.210Z|00152|connmgr|INFO|br0<->unix#460: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:01:22.243Z|00153|connmgr|INFO|br0<->unix#463: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:01:22.272Z|00154|connmgr|INFO|br0<->unix#466: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:01:22.295Z|00155|connmgr|INFO|br0<->unix#469: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:01:22.326Z|00156|connmgr|INFO|br0<->unix#472: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:01:22.355Z|00157|connmgr|INFO|br0<->unix#475: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:01:22.384Z|00158|connmgr|INFO|br0<->unix#478: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:01:22.407Z|00159|connmgr|INFO|br0<->unix#481: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 15:02:33.366 E ns/openshift-sdn pod/sdn-lp7nk node/ip-10-0-137-222.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.151706   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.251674   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.351668   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.451674   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.551672   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.652839   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.751704   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.851640   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:32.951664   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:33.051676   50953 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:02:33.164897   50953 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 15:02:33.164989   50953 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 15:02:36.050 E ns/openshift-service-ca pod/service-serving-cert-signer-7b9469d8f9-tcwxx node/ip-10-0-132-247.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 03 15:03:03.849 E ns/openshift-sdn pod/ovs-tbtjc node/ip-10-0-143-181.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 4.492Z|00142|connmgr|INFO|br0<->unix#398: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T14:57:14.528Z|00143|connmgr|INFO|br0<->unix#401: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:00:26.220Z|00144|connmgr|INFO|br0<->unix#430: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T15:00:26.310Z|00145|connmgr|INFO|br0<->unix#436: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:26.337Z|00146|connmgr|INFO|br0<->unix#439: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T15:00:26.642Z|00147|connmgr|INFO|br0<->unix#442: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:26.667Z|00148|connmgr|INFO|br0<->unix#445: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:26.687Z|00149|connmgr|INFO|br0<->unix#448: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:26.710Z|00150|connmgr|INFO|br0<->unix#451: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:26.738Z|00151|connmgr|INFO|br0<->unix#454: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:26.761Z|00152|connmgr|INFO|br0<->unix#457: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:26.790Z|00153|connmgr|INFO|br0<->unix#460: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:26.816Z|00154|connmgr|INFO|br0<->unix#463: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:26.842Z|00155|connmgr|INFO|br0<->unix#466: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T15:00:26.865Z|00156|connmgr|INFO|br0<->unix#469: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T15:00:48.403Z|00157|connmgr|INFO|br0<->unix#472: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:00:48.426Z|00158|bridge|INFO|bridge br0: deleted interface veth12c62404 on port 3\n2020-04-03T15:00:56.993Z|00002|ofproto_dpif_upcall(handler1)|INFO|received packet on unassociated datapath port 4\n2020-04-03T15:00:56.993Z|00159|bridge|INFO|bridge br0: added interface veth02733922 on port 23\n2020-04-03T15:00:57.023Z|00160|connmgr|INFO|br0<->unix#478: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T15:00:57.060Z|00161|connmgr|INFO|br0<->unix#481: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 15:03:08.871 E ns/openshift-sdn pod/sdn-85xr6 node/ip-10-0-143-181.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:07.720478   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:07.820450   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:07.920452   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.020436   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.120454   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.220472   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.320497   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.420446   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.521045   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.620444   45369 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 15:03:08.726296   45369 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 15:03:08.726380   45369 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 15:03:11.503 E ns/openshift-multus pod/multus-772q2 node/ip-10-0-137-222.us-west-1.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:04:00.249 E ns/openshift-multus pod/multus-sv8nt node/ip-10-0-128-97.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 15:04:36.420 E ns/openshift-machine-config-operator pod/machine-config-operator-7d88998d64-5wlgm node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 15:08:40.078 E ns/openshift-machine-config-operator pod/machine-config-controller-58df86c98d-f687b node/ip-10-0-128-97.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 15:10:36.174 E ns/openshift-machine-config-operator pod/machine-config-server-44kjp node/ip-10-0-146-58.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 15:10:43.447 E ns/openshift-machine-config-operator pod/machine-config-server-pggqn node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:10:54.573 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-cbfbhqxx4 node/ip-10-0-146-58.us-west-1.compute.internal container=operator container exited with code 2 (Error): 0 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:07:34.898318       1 wrap.go:47] GET /metrics: (5.805035ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:08:04.900496       1 wrap.go:47] GET /metrics: (9.145299ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:08:04.900497       1 wrap.go:47] GET /metrics: (8.05049ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:08:25.957484       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0403 15:08:25.959875       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26413 (28008)\nI0403 15:08:26.960117       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 15:08:34.898550       1 wrap.go:47] GET /metrics: (7.333723ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:08:34.900379       1 wrap.go:47] GET /metrics: (7.978679ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:09:04.900191       1 wrap.go:47] GET /metrics: (8.994944ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:09:04.901752       1 wrap.go:47] GET /metrics: (9.151338ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:09:34.898411       1 wrap.go:47] GET /metrics: (5.824011ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:09:34.898810       1 wrap.go:47] GET /metrics: (7.586274ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:10:04.898356       1 wrap.go:47] GET /metrics: (7.089196ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:10:04.898818       1 wrap.go:47] GET /metrics: (6.309765ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:10:34.898531       1 wrap.go:47] GET /metrics: (7.337298ms) 200 [Prometheus/2.7.2 10.128.2.19:38094]\nI0403 15:10:34.898863       1 wrap.go:47] GET /metrics: (6.413579ms) 200 [Prometheus/2.7.2 10.129.2.17:47676]\nI0403 15:10:34.954713       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\n
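The pattern `watch of *v1.ConfigMap ended with: too old resource version ...` followed by `Listing and watching *v1.ConfigMap ...` is a client-go reflector recovering from an expired watch: when the apiserver reports that the cached resourceVersion has been compacted away, the reflector falls back to a full list and resumes watching from the fresh version. A simplified model of that loop, using hypothetical list/watch stubs rather than the real client-go API:

package main

import (
	"errors"
	"fmt"
)

// errTooOld models the apiserver response "too old resource version"
// (HTTP 410 Gone) that ends a watch whose start version has been compacted.
var errTooOld = errors.New("too old resource version")

// listConfigMaps and watchConfigMaps are hypothetical stubs standing in for
// the LIST and WATCH calls a reflector issues.
func listConfigMaps() (resourceVersion int) { return 28008 }

func watchConfigMaps(fromVersion int) error {
	if fromVersion < 28008 {
		return fmt.Errorf("watch ended: %w: %d (28008)", errTooOld, fromVersion)
	}
	return nil // in reality this blocks, streaming events
}

func main() {
	rv := 26413 // stale version carried over from an earlier list, as in the log
	for i := 0; i < 3; i++ {
		if err := watchConfigMaps(rv); err != nil {
			fmt.Println("W:", err)
			if errors.Is(err, errTooOld) {
				rv = listConfigMaps() // relist to obtain a fresh resourceVersion
				fmt.Println("I: Listing and watching ConfigMaps again from", rv)
				continue
			}
			return
		}
		fmt.Println("I: watch established at resourceVersion", rv)
		return
	}
}

The relist is the recovery path, which is why these lines appear as warnings rather than fatal errors.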
Apr 03 15:10:55.569 E ns/openshift-console pod/console-5f45485685-5c2x4 node/ip-10-0-146-58.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 14:57:48 cmd/main: cookies are secure!\n2020/04/3 14:57:48 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 14:57:48 cmd/main: using TLS\n
Apr 03 15:10:56.408 E ns/openshift-ingress pod/router-default-7669bb5dff-jwqhm node/ip-10-0-137-222.us-west-1.compute.internal container=router container exited with code 2 (Error): aded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:01:49.795511       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:01:54.788950       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:02:00.949740       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:02:07.655987       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:02:12.653103       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:02:35.203378       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:02:40.525827       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:03:08.884902       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:03:15.315949       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0403 15:05:24.927265       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0403 15:10:45.279542       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:10:50.285018       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 15:10:56.837 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-222.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 15:10:56.837 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-222.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 15:10:56.837 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-222.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 15:10:58.617 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7bc58756bc-rd8jz node/ip-10-0-146-58.us-west-1.compute.internal container=operator container exited with code 2 (Error): ewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:09:50.331984       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nI0403 15:09:52.325698       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Secret total 0 items received\nI0403 15:09:53.285052       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:03.296890       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:06.333667       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 12 items received\nI0403 15:10:13.308549       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:23.320038       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:33.331866       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:43.343499       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:10:46.335603       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0403 15:10:46.340372       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26388 (28710)\nI0403 15:10:47.343591       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\n
Apr 03 15:10:59.692 E ns/openshift-monitoring pod/telemeter-client-7865cdd66-zrjft node/ip-10-0-137-222.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 15:10:59.692 E ns/openshift-monitoring pod/telemeter-client-7865cdd66-zrjft node/ip-10-0-137-222.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 15:11:02.169 E ns/openshift-authentication-operator pod/authentication-operator-6b96ccccc-tb7qz node/ip-10-0-146-58.us-west-1.compute.internal container=operator container exited with code 255 (Error): ssing"},{"lastTransitionTime":"2020-04-03T14:45:59Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T14:33:45Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 14:57:30.904646       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"355b39ad-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nW0403 15:01:49.464043       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23285 (25554)\nW0403 15:03:15.540686       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 15:03:52.462508       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23285 (26608)\nW0403 15:04:43.455863       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23285 (26865)\nW0403 15:04:43.462312       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23285 (26865)\nW0403 15:08:45.471339       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25847 (28113)\nW0403 15:10:46.248509       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 19530 (28992)\nI0403 15:10:50.481515       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:10:50.481593       1 leaderelection.go:65] leaderelection lost\n
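Several of the operator exits in this run end with `Received SIGTERM or SIGINT signal, shutting down controller.` followed by `F ... leaderelection lost`: once the process stops renewing its leader lease (here because it was told to shut down), the election loop reports lost leadership and the operator exits non-zero so a replacement replica can take over cleanly. A stripped-down model of that behaviour, with a hypothetical renewLease standing in for the client-go lease renewal:

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// renewLease is a hypothetical stand-in for renewing the operator's leader
// lease; it starts failing once the surrounding context is cancelled.
func renewLease(ctx context.Context) error { return ctx.Err() }

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	// Translate SIGTERM/SIGINT into context cancellation, as the operators do.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		<-sigs
		log.Println("Received SIGTERM or SIGINT signal, shutting down controller.")
		cancel()
	}()

	for {
		if err := renewLease(ctx); err != nil {
			// Exiting non-zero is deliberate: the pod is restarted (or another
			// replica elected) rather than running on without leadership.
			log.Fatal("leaderelection lost")
		}
		time.Sleep(2 * time.Second) // renew interval, illustrative only
	}
}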
Apr 03 15:11:05.931 E ns/openshift-machine-config-operator pod/machine-config-server-nlknx node/ip-10-0-128-97.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 15:11:14.220 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 15:11:23.779 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 15:12:44.982 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): >" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:09.481132376 +0000 UTC))\nI0403 14:53:09.481167       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:09.481156381 +0000 UTC))\nI0403 14:53:09.485839       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 14:53:09.487015       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:38 +0000 UTC to 2022-04-03 14:28:39 +0000 UTC (now=2020-04-03 14:53:09.486998726 +0000 UTC))\nI0403 14:53:09.487052       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:53:09.487036148 +0000 UTC))\nI0403 14:53:09.487084       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 14:53:09.487319       1 serving.go:77] Starting DynamicLoader\nI0403 14:53:09.487612       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 14:55:31.484420       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 15:11:05.128957       1 controllermanager.go:282] leaderelection lost\nI0403 15:11:05.128998       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:12:44.982 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): used\nE0403 14:55:31.083518       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:32.083462       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:32.084305       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.084684       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.085807       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:38.805807       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 15:04:56.988063       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21605 (26952)\nW0403 15:10:31.993498       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27104 (28643)\n
Apr 03 15:12:52.378 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): .go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.113016       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114742       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114825       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114897       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nW0403 15:10:46.434416       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 21495 (29000)\nW0403 15:10:46.434545       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (28999)\nW0403 15:10:46.434600       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21481 (28999)\nW0403 15:10:46.434638       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21496 (29000)\nE0403 15:11:05.237323       1 server.go:259] lost master\nI0403 15:11:05.238423       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:12:56.190 E ns/openshift-apiserver pod/apiserver-4vtv7 node/ip-10-0-146-58.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error):  [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.637220       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.654445       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.668737       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 15:10:53.668887       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.668921       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.668921       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:10:53.668947       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.668965       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:10:53.685846       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:11:05.155745       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 15:11:05.156720       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 15:11:05.156757       1 serving.go:88] Shutting down DynamicLoader\nI0403 15:11:05.156769       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 15:11:05.157041       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 15:11:05.159079       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:11:05.159274       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 15:12:56.578 E ns/openshift-image-registry pod/node-ca-ctbmm node/ip-10-0-146-58.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:12:58.980 E ns/openshift-multus pod/multus-p58cs node/ip-10-0-146-58.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:13:00.578 E ns/openshift-dns pod/dns-default-nkhbx node/ip-10-0-146-58.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /etc/hosts.tmp /etc/hosts differ: char 159, line 3\n/etc/hosts.tmp /etc/hosts differ: char 159, line 3\n/bin/bash: line 1: kill: (99) - No such process\n
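The dns-node-resolver output (`/etc/hosts.tmp /etc/hosts differ: char 159, line 3`, then a failed kill) comes from a shell loop that regenerates a candidate hosts file, compares it with cmp, and swaps it in when cluster service entries change. A rough Go equivalent of just the compare-and-replace step, with an illustrative path and entry rather than the real ones:

package main

import (
	"bytes"
	"log"
	"os"
)

// syncHostsEntry writes newContent to path only when it differs from what is
// already there, mirroring the cmp-then-copy loop in dns-node-resolver.
func syncHostsEntry(path string, newContent []byte) error {
	current, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	if bytes.Equal(current, newContent) {
		return nil // nothing to do; the files do not differ
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, newContent, 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic replace on the same filesystem
}

func main() {
	// Hypothetical entry; the real resolver maintains cluster service
	// hostnames in /etc/hosts on every node.
	entry := []byte("198.51.100.10 example.svc.cluster.local\n")
	if err := syncHostsEntry("/tmp/hosts.example", entry); err != nil {
		log.Fatal(err)
	}
}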
Apr 03 15:13:00.578 E ns/openshift-dns pod/dns-default-nkhbx node/ip-10-0-146-58.us-west-1.compute.internal container=dns container exited with code 255 (Error): E0403 15:00:10.831430       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host\nE0403 15:00:10.831804       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://172.30.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host\nE0403 15:00:10.831952       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host\n.:5353\n2020-04-03T15:00:11.934Z [INFO] CoreDNS-1.3.1\n2020-04-03T15:00:11.934Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T15:00:11.934Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 15:10:46.453024       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 25691 (28999)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:13:06.177 E ns/openshift-monitoring pod/node-exporter-496dd node/ip-10-0-146-58.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:13:06.177 E ns/openshift-monitoring pod/node-exporter-496dd node/ip-10-0-146-58.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:13:08.177 E ns/openshift-controller-manager pod/controller-manager-dlcx9 node/ip-10-0-146-58.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 15:13:16.344 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7bc58756bc-5fwzb node/ip-10-0-128-97.us-west-1.compute.internal container=operator container exited with code 2 (Error): paces?limit=500&resourceVersion=0\nI0403 15:12:04.715649       1 request.go:530] Throttling request took 597.573614ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0\nI0403 15:12:04.915612       1 request.go:530] Throttling request took 797.545671ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config/secrets?limit=500&resourceVersion=0\nI0403 15:12:05.016801       1 shared_informer.go:123] caches populated\nI0403 15:12:05.115613       1 request.go:530] Throttling request took 991.138837ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 15:12:05.315621       1 request.go:530] Throttling request took 1.0977059s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 15:12:05.515608       1 request.go:530] Throttling request took 392.834842ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 15:12:14.118921       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:12:24.126310       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:12:34.132662       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:12:44.139854       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:12:54.147876       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 15:13:04.154630       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\n
Apr 03 15:13:18.337 E ns/openshift-console-operator pod/console-operator-7b49458dd-zb8ch node/ip-10-0-128-97.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): e-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T15:13:10Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T15:13:10Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T15:13:10Z" level=info msg=-----------------------\ntime="2020-04-03T15:13:10Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T15:13:10Z" level=info msg=-----------------------\ntime="2020-04-03T15:13:10Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T15:13:10Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T15:13:10Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T15:13:10Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T15:13:10Z" level=info msg="finished syncing operator \"cluster\" (131.437µs) \n\n"\ntime="2020-04-03T15:13:10Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 15:13:10.618964095 +0000 UTC m=+1051.804323179)"\ntime="2020-04-03T15:13:10Z" level=info msg="console is in a managed state."\ntime="2020-04-03T15:13:10Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T15:13:10Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T15:13:10Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T15:13:10Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\nI0403 15:13:10.766140       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:13:10.766249       1 leaderelection.go:65] leaderelection lost\n
Apr 03 15:13:18.936 E ns/openshift-console pod/console-5f45485685-wdc6m node/ip-10-0-128-97.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 14:56:57 cmd/main: cookies are secure!\n2020/04/3 14:57:02 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/04/3 14:57:17 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/04/3 14:57:27 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 14:57:27 cmd/main: using TLS\n
Apr 03 15:13:21.342 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-d649c7f5b-qj5n6 node/ip-10-0-128-97.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ::1]:6443: connect: connection refused\\nE0403 14:55:33.114742       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\\nE0403 14:55:33.114825       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\\nE0403 14:55:33.114897       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\\nW0403 15:10:46.434416       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 21495 (29000)\\nW0403 15:10:46.434545       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (28999)\\nW0403 15:10:46.434600       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21481 (28999)\\nW0403 15:10:46.434638       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21496 (29000)\\nE0403 15:11:05.237323       1 server.go:259] lost master\\nI0403 15:11:05.238423       1 serving.go:88] Shutting down DynamicLoader\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-146-58.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-146-58.us-west-1.compute.internal container=\"scheduler\" is not ready"\nI0403 15:13:13.279494       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:13:13.279880       1 leaderelection.go:65] leaderelection lost\n
Apr 03 15:13:25.532 E ns/openshift-service-ca-operator pod/service-ca-operator-556b5446bc-gkdw7 node/ip-10-0-128-97.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:13:27.739 E ns/openshift-service-ca pod/configmap-cabundle-injector-8d9749997-hmfsx node/ip-10-0-128-97.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 15:13:28.464 E ns/openshift-cluster-node-tuning-operator pod/tuned-mtd99 node/ip-10-0-137-222.us-west-1.compute.internal container=tuned container exited with code 255 (Error): s changed node wide: true\nI0403 15:07:52.430087   42356 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:07:52.431678   42356 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:07:52.543126   42356 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:10:55.024373   42356 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0403 15:10:57.430093   42356 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:10:57.432400   42356 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:10:57.542027   42356 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:10:58.211903   42356 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 15:11:02.430092   42356 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:11:02.431897   42356 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:11:02.542427   42356 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:11:37.854801   42356 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-v29lk/foo-2hmhc) labels changed node wide: true\nI0403 15:11:42.430094   42356 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:11:42.431571   42356 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:11:42.542664   42356 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:11:47.853489   42356 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-748d7d84fb-777k8) labels changed node wide: true\nI0403 15:11:48.073658   42356 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 15:13:28.483 E ns/openshift-monitoring pod/node-exporter-qlppm node/ip-10-0-137-222.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:13:28.483 E ns/openshift-monitoring pod/node-exporter-qlppm node/ip-10-0-137-222.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:13:28.678 E ns/openshift-image-registry pod/node-ca-kvssn node/ip-10-0-137-222.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:13:32.559 E ns/openshift-dns pod/dns-default-rthrg node/ip-10-0-137-222.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 15:13:32.559 E ns/openshift-dns pod/dns-default-rthrg node/ip-10-0-137-222.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T15:00:47.316Z [INFO] CoreDNS-1.3.1\n2020-04-03T15:00:47.317Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T15:00:47.317Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:13:33.950 E ns/openshift-sdn pod/sdn-lp7nk node/ip-10-0-137-222.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 44.253985   52777 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/node-exporter:https to [10.0.128.97:9100 10.0.132.247:9100 10.0.137.222:9100 10.0.143.181:9100 10.0.153.45:9100]\nI0403 15:11:44.254022   52777 roundrobin.go:240] Delete endpoint 10.0.146.58:9100 for service "openshift-monitoring/node-exporter:https"\nI0403 15:11:44.308823   52777 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.128.97:9101 10.0.132.247:9101 10.0.137.222:9101 10.0.143.181:9101 10.0.153.45:9101]\nI0403 15:11:44.308860   52777 roundrobin.go:240] Delete endpoint 10.0.146.58:9101 for service "openshift-sdn/sdn:metrics"\nI0403 15:11:44.415945   52777 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:11:44.415973   52777 proxier.go:346] userspace syncProxyRules took 53.197831ms\nI0403 15:11:44.584420   52777 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:11:44.584445   52777 proxier.go:346] userspace syncProxyRules took 53.969992ms\ninterrupt: Gracefully shutting down ...\nE0403 15:11:48.087431   52777 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 15:11:48.087521   52777 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:11:48.188748   52777 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:11:48.287821   52777 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:11:48.400792   52777 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:11:48.488798   52777 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 15:13:34.342 E ns/openshift-sdn pod/ovs-snfgl node/ip-10-0-137-222.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): eleted interface veth2f1e4fd8 on port 5\n2020-04-03T15:10:55.618Z|00121|connmgr|INFO|br0<->unix#154: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:10:55.661Z|00122|bridge|INFO|bridge br0: deleted interface vethfe378e7b on port 7\n2020-04-03T15:10:55.723Z|00123|connmgr|INFO|br0<->unix#157: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:10:55.773Z|00124|bridge|INFO|bridge br0: deleted interface vethfec00523 on port 11\n2020-04-03T15:10:55.830Z|00125|connmgr|INFO|br0<->unix#160: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:10:55.874Z|00126|bridge|INFO|bridge br0: deleted interface vethbc9b2e30 on port 3\n2020-04-03T15:10:55.928Z|00127|connmgr|INFO|br0<->unix#163: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:10:55.973Z|00128|bridge|INFO|bridge br0: deleted interface vethcac52ed8 on port 6\n2020-04-03T15:11:25.245Z|00129|connmgr|INFO|br0<->unix#169: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:11:25.265Z|00130|bridge|INFO|bridge br0: deleted interface veth06aeecd8 on port 12\n2020-04-03T15:11:25.432Z|00131|connmgr|INFO|br0<->unix#172: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:11:25.459Z|00132|bridge|INFO|bridge br0: deleted interface veth71292485 on port 4\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:11:25.451Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 532 seconds (most recently, 532 seconds ago) due to excessive rate\n2020-04-03T15:11:25.451Z|00018|jsonrpc|WARN|unix#138: receive error: Connection reset by peer\n2020-04-03T15:11:25.451Z|00019|reconnect|WARN|unix#138: connection dropped (Connection reset by peer)\n2020-04-03T15:11:34.564Z|00020|jsonrpc|WARN|unix#141: receive error: Connection reset by peer\n2020-04-03T15:11:34.564Z|00021|reconnect|WARN|unix#141: connection dropped (Connection reset by peer)\nTerminated\n2020-04-03T15:11:48Z|00001|unixctl|WARN|failed to connect to /var/run/openvswitch/ovs-vswitchd.52725.ctl\novs-appctl: cannot connect to "/var/run/openvswitch/ovs-vswitchd.52725.ctl" (No such file or directory)\novsdb-server is not running.\n
Apr 03 15:13:34.742 E ns/openshift-multus pod/multus-95j78 node/ip-10-0-137-222.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:13:35.140 E ns/openshift-machine-config-operator pod/machine-config-daemon-xsxsm node/ip-10-0-137-222.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 15:13:37.826 E ns/openshift-marketplace pod/redhat-operators-5fdb7b5ff7-7d2tv node/ip-10-0-153-45.us-west-1.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:13:44.716 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-sdn/sdn: could not update object (apps/v1, Kind=DaemonSet) openshift-sdn/sdn: Operation cannot be fulfilled on daemonsets.apps "sdn": the object has been modified; please apply your changes to the latest version and try again
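The network clusteroperator goes Degraded on an optimistic-concurrency conflict: the DaemonSet's resourceVersion changed between the operator's read and its write, so the apiserver rejects the update with `the object has been modified; please apply your changes to the latest version and try again`. The usual remedy is to re-read the object and reapply the change; client-go packages this as retry.RetryOnConflict, and the sketch below models the same GET, mutate, UPDATE, retry loop with hypothetical stubs:

package main

import (
	"errors"
	"fmt"
)

// errConflict models the 409 Conflict returned when the stored
// resourceVersion no longer matches the one on the submitted object.
var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

type daemonSet struct {
	ResourceVersion int
	LogLevel        int
}

// getDaemonSet and updateDaemonSet are hypothetical stand-ins for the
// apps/v1 DaemonSet GET and UPDATE calls.
var stored = daemonSet{ResourceVersion: 3}

func getDaemonSet() daemonSet { return stored }

func updateDaemonSet(ds daemonSet) error {
	if ds.ResourceVersion != stored.ResourceVersion {
		return errConflict
	}
	ds.ResourceVersion++
	stored = ds
	return nil
}

func main() {
	stale := daemonSet{ResourceVersion: 2} // someone else wrote since we last read it
	err := updateDaemonSet(stale)
	for attempt := 0; errors.Is(err, errConflict) && attempt < 5; attempt++ {
		fmt.Println("conflict, refetching latest object:", err)
		fresh := getDaemonSet() // re-read to pick up the current resourceVersion
		fresh.LogLevel = 2      // re-apply the desired change on top of it
		err = updateDaemonSet(fresh)
	}
	fmt.Println("final result:", err)
}

Conflicts like this are transient; the event only matters if the operator never converges on a successful update afterwards.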
Apr 03 15:13:50.780 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:11:00.246672 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:11:00.247774 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:11:00.248572 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:11:00 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.58:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:11:01.262156 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:13:50.780 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): stream MsgApp v2 reader)\n2020-04-03 15:11:05.243599 E | rafthttp: failed to read 8fd39e48966bb359 on stream MsgApp v2 (context canceled)\n2020-04-03 15:11:05.243607 I | rafthttp: peer 8fd39e48966bb359 became inactive (message send to peer failed)\n2020-04-03 15:11:05.243616 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:11:05.243707 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:11:05.243728 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:11:05.243740 I | rafthttp: stopped peer 8fd39e48966bb359\n2020-04-03 15:11:05.243747 I | rafthttp: stopping peer dcbfce377863944a...\n2020-04-03 15:11:05.244351 I | rafthttp: closed the TCP streaming connection with peer dcbfce377863944a (stream MsgApp v2 writer)\n2020-04-03 15:11:05.244371 I | rafthttp: stopped streaming with peer dcbfce377863944a (writer)\n2020-04-03 15:11:05.244789 I | rafthttp: closed the TCP streaming connection with peer dcbfce377863944a (stream Message writer)\n2020-04-03 15:11:05.244804 I | rafthttp: stopped streaming with peer dcbfce377863944a (writer)\n2020-04-03 15:11:05.244889 I | rafthttp: stopped HTTP pipelining with peer dcbfce377863944a\n2020-04-03 15:11:05.245023 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:11:05.245147 E | rafthttp: failed to read dcbfce377863944a on stream MsgApp v2 (context canceled)\n2020-04-03 15:11:05.245174 I | rafthttp: peer dcbfce377863944a became inactive (message send to peer failed)\n2020-04-03 15:11:05.245184 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:11:05.245277 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:11:05.245296 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:11:05.245305 I | rafthttp: stopped peer dcbfce377863944a\n
Apr 03 15:13:51.180 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): >" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:09.481132376 +0000 UTC))\nI0403 14:53:09.481167       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:09.481156381 +0000 UTC))\nI0403 14:53:09.485839       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 14:53:09.487015       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:38 +0000 UTC to 2022-04-03 14:28:39 +0000 UTC (now=2020-04-03 14:53:09.486998726 +0000 UTC))\nI0403 14:53:09.487052       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:53:09.487036148 +0000 UTC))\nI0403 14:53:09.487084       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 14:53:09.487319       1 serving.go:77] Starting DynamicLoader\nI0403 14:53:09.487612       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 14:55:31.484420       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 15:11:05.128957       1 controllermanager.go:282] leaderelection lost\nI0403 15:11:05.128998       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:13:51.180 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): used\nE0403 14:55:31.083518       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:32.083462       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:32.084305       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.084684       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.085807       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:38.805807       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 15:04:56.988063       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21605 (26952)\nW0403 15:10:31.993498       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27104 (28643)\n
Apr 03 15:13:51.533 E ns/openshift-operator-lifecycle-manager pod/packageserver-c796b5d87-qsrf9 node/ip-10-0-128-97.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): w grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:43Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:43Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:44Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:44Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:44Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:13:44Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 15:13:51.579 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0403 14:55:34.134699       1 certsync_controller.go:269] Starting CertSyncer\nI0403 14:55:34.135015       1 observer_polling.go:106] Starting file observer\nW0403 15:04:53.068826       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22342 (26936)\nW0403 15:10:26.074671       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27087 (28617)\n
Apr 03 15:13:51.579 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=kube-apiserver-8 container exited with code 255 (Error): ng installplans.operators.coreos.com count at <storage-prefix>//operators.coreos.com/installplans\nW0403 15:10:57.636713       1 clientconn.go:953] Failed to dial etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:2379: grpc: the connection is closing; please retry.\nW0403 15:10:57.636741       1 clientconn.go:953] Failed to dial etcd-1.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:2379: context canceled; please retry.\nI0403 15:10:58.335237       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0403 15:11:00.078955       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0403 15:11:01.712878       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0403 15:11:04.340796       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0403 15:11:04.982894       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0403 15:11:05.160452       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=713, ErrCode=NO_ERROR, debug=""\nI0403 15:11:05.160484       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=713, ErrCode=NO_ERROR, debug=""\nI0403 15:11:05.160704       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=713, ErrCode=NO_ERROR, debug=""\nI0403 15:11:05.160723       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=713, ErrCode=NO_ERROR, debug=""\nI0403 15:11:05.171612       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 15:11:05.307450       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.128.97 10.0.132.247]\n
Apr 03 15:13:52.333 E ns/openshift-console pod/downloads-5df57b9b8c-nd8cz node/ip-10-0-128-97.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 15:13:52.779 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-58.us-west-1.compute.internal node/ip-10-0-146-58.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): .go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.113016       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114742       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114825       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:55:33.114897       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nW0403 15:10:46.434416       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 21495 (29000)\nW0403 15:10:46.434545       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (28999)\nW0403 15:10:46.434600       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21481 (28999)\nW0403 15:10:46.434638       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21496 (29000)\nE0403 15:11:05.237323       1 server.go:259] lost master\nI0403 15:11:05.238423       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:13:57.014 E ns/openshift-ingress pod/router-default-7669bb5dff-r6c6m node/ip-10-0-153-45.us-west-1.compute.internal container=router container exited with code 2 (Error): p://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:19.048145       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:24.042283       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:29.045153       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:34.044904       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:39.043186       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:44.044858       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:13:49.051549       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0403 15:13:53.360690       1 reflector.go:322] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nW0403 15:13:53.503545       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Endpoints ended with: very short watch: github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Unexpected watch close - watch lasted less than a second and no items received\nW0403 15:13:53.503651       1 reflector.go:341] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: very short watch: github.com/openshift/router/pkg/router/template/service_lookup.go:32: Unexpected watch close - watch lasted less than a second and no items received\n
Apr 03 15:13:58.214 E ns/openshift-monitoring pod/grafana-6cd95fb897-pv6f9 node/ip-10-0-153-45.us-west-1.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:13:58.214 E ns/openshift-monitoring pod/grafana-6cd95fb897-pv6f9 node/ip-10-0-153-45.us-west-1.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:13:59.013 E ns/openshift-marketplace pod/community-operators-5557bc9649-swv7h node/ip-10-0-153-45.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:14:00.213 E ns/openshift-monitoring pod/kube-state-metrics-85c8c8895d-hzh7p node/ip-10-0-153-45.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 03 15:14:25.180 E ns/openshift-console pod/downloads-5df57b9b8c-rwtf9 node/ip-10-0-153-45.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:14:44.220 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 15:15:36.044 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Apr 03 15:15:36.234 E ns/openshift-cluster-node-tuning-operator pod/tuned-cvtch node/ip-10-0-143-181.us-west-1.compute.internal container=tuned container exited with code 143 (Error): hift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 15:11:01.395678   37651 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:11:01.397425   37651 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:11:01.522860   37651 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:11:05.357334   37651 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 15:11:05.359253   37651 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 15:11:05.359271   37651 openshift-tuned.go:722] Increasing resyncPeriod to 102\nI0403 15:12:47.359503   37651 openshift-tuned.go:187] Extracting tuned profiles\nI0403 15:12:47.361485   37651 openshift-tuned.go:623] Resync period to pull node/pod labels: 102 [s]\nI0403 15:12:47.376990   37651 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-daemonset-upgrade-9m87z/ds1-zxbc2) labels changed node wide: true\nI0403 15:12:52.373911   37651 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:12:52.375440   37651 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 15:12:52.376497   37651 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:12:52.502533   37651 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:13:10.612394   37651 openshift-tuned.go:435] Pod (openshift-console/downloads-5df57b9b8c-79cqc) labels changed node wide: true\nI0403 15:13:12.373921   37651 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:12.375416   37651 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:12.502887   37651 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:14:29.365389   37651 openshift-tuned.go:691] Lowering resyncPeriod to 51\n
Apr 03 15:15:36.366 E ns/openshift-cluster-node-tuning-operator pod/tuned-mtd99 node/ip-10-0-137-222.us-west-1.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0403 15:13:35.474324    2526 openshift-tuned.go:187] Extracting tuned profiles\nI0403 15:13:35.480181    2526 openshift-tuned.go:623] Resync period to pull node/pod labels: 64 [s]\nE0403 15:13:40.117115    2526 openshift-tuned.go:720] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-137-222.us-west-1.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0403 15:13:40.117156    2526 openshift-tuned.go:722] Increasing resyncPeriod to 128\n
Apr 03 15:15:36.442 E ns/openshift-cluster-node-tuning-operator pod/tuned-qsnh2 node/ip-10-0-132-247.us-west-1.compute.internal container=tuned container exited with code 143 (Error): wide: true\nI0403 15:12:33.367551   65501 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:12:33.369298   65501 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:12:33.489473   65501 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:10.304201   65501 openshift-tuned.go:435] Pod (openshift-cluster-version/cluster-version-operator-7d7bf5686f-qqs2d) labels changed node wide: true\nI0403 15:13:13.368049   65501 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:13.372862   65501 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:13.561926   65501 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:15.664456   65501 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-operator-565df5fb9f-f4src) labels changed node wide: true\nI0403 15:13:18.367531   65501 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:18.369406   65501 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:18.489655   65501 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:53.499419   65501 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 15:13:53.520921   65501 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 15:13:53.521073   65501 openshift-tuned.go:722] Increasing resyncPeriod to 102\nI0403 15:15:35.521362   65501 openshift-tuned.go:187] Extracting tuned profiles\nI0403 15:15:35.523302   65501 openshift-tuned.go:623] Resync period to pull node/pod labels: 102 [s]\nI0403 15:15:35.538959   65501 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-4-ip-10-0-132-247.us-west-1.compute.internal) labels changed node wide: true\n
Apr 03 15:15:36.620 E ns/openshift-cluster-node-tuning-operator pod/tuned-xvkn7 node/ip-10-0-146-58.us-west-1.compute.internal container=tuned container exited with code 143 (Error): tting recommended profile...\nI0403 15:15:05.449145    4791 openshift-tuned.go:520] Active profile () != recommended profile (openshift-control-plane)\nI0403 15:15:05.449323    4791 openshift-tuned.go:226] Reloading tuned...\n2020-04-03 15:15:05,605 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-03 15:15:05,626 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-03 15:15:05,626 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-03 15:15:05,628 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-03 15:15:05,629 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-03 15:15:05,690 WARNING  tuned.daemon.application: Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config\n2020-04-03 15:15:05,690 INFO     tuned.daemon.controller: starting controller\n2020-04-03 15:15:05,690 INFO     tuned.daemon.daemon: starting tuning\n2020-04-03 15:15:05,695 INFO     tuned.daemon.controller: terminating controller\n2020-04-03 15:15:05,699 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-03 15:15:05,701 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-03 15:15:05,703 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-03 15:15:05,706 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-03 15:15:05,709 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-04-03 15:15:05,711 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-03 15:15:05,863 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 15:15:05,886 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n2020-04-03 15:15:05,895 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\n
Apr 03 15:15:40.634 E ns/openshift-operator-lifecycle-manager pod/packageserver-55d9d944ff-cfv2g node/ip-10-0-146-58.us-west-1.compute.internal container=packageserver container exited with code 137 (Error):   1 wrap.go:47] GET /healthz: (133.983µs) 200 [kube-probe/1.13+ 10.130.0.1:48406]\nI0403 15:15:07.805871       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (413.224µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.128.0.1:52664]\nI0403 15:15:08.604570       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (371.916µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.128.0.1:52664]\nI0403 15:15:08.908584       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (2.480129ms) 200 [openshift-controller-manager/v1.13.4+3040211 (linux/amd64) kubernetes/3040211 10.128.0.1:52664]\nI0403 15:15:09.194123       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (2.899142ms) 200 [openshift-controller-manager/v1.13.4+3040211 (linux/amd64) kubernetes/3040211/system:serviceaccount:openshift-infra:resourcequota-controller 10.128.0.1:52664]\nI0403 15:15:09.404709       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (385.536µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.128.0.1:52664]\nI0403 15:15:09.581351       1 wrap.go:47] GET /: (6.745372ms) 200 [Go-http-client/2.0 10.130.0.1:47510]\nI0403 15:15:09.581392       1 wrap.go:47] GET /: (3.963433ms) 200 [Go-http-client/2.0 10.130.0.1:47510]\nI0403 15:15:09.627833       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T15:15:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:15:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:15:17Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:15:17Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 03 15:15:42.250 E ns/openshift-marketplace pod/redhat-operators-6d6b5688c4-rg766 node/ip-10-0-143-181.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 15:16:03.902 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0403 14:53:46.848348       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope\nE0403 14:53:46.852846       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0403 14:53:46.852933       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0403 14:53:48.050542       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:53:48.986395       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:53:50.017061       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:54:04.513394       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nW0403 15:13:10.022493       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19483 (30637)\nW0403 15:13:10.088971       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19493 (30640)\nE0403 15:13:53.372742       1 server.go:259] lost master\n
Apr 03 15:16:05.443 E ns/openshift-marketplace pod/community-operators-5bbc77cfbf-ghhq2 node/ip-10-0-137-222.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 15:16:13.236 E ns/openshift-apiserver pod/apiserver-pfzrh node/ip-10-0-128-97.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): r_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:13:42.258149       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 15:13:42.258326       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:13:42.258326       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:13:42.258394       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:13:42.275385       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 15:13:43.635282       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 15:13:53.283465       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 15:13:53.283659       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 15:13:53.283685       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 15:13:53.283702       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 15:13:53.283713       1 serving.go:88] Shutting down DynamicLoader\nI0403 15:13:53.284838       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:13:53.284972       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:13:53.285013       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:13:53.285074       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:13:53.285731       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 15:16:14.031 E ns/openshift-monitoring pod/node-exporter-wxrf8 node/ip-10-0-128-97.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:16:14.031 E ns/openshift-monitoring pod/node-exporter-wxrf8 node/ip-10-0-128-97.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:16:14.428 E ns/openshift-sdn pod/sdn-controller-wmxzk node/ip-10-0-128-97.us-west-1.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 15:00:39.661275       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 15:16:14.832 E ns/openshift-machine-config-operator pod/machine-config-server-kjs9p node/ip-10-0-128-97.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 15:16:15.830 E ns/openshift-cluster-node-tuning-operator pod/tuned-bwpq6 node/ip-10-0-128-97.us-west-1.compute.internal container=tuned container exited with code 255 (Error): 19.673692   59162 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:19.676482   59162 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:19.793644   59162 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:20.716038   59162 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-949f97979-wnhn7) labels changed node wide: true\nI0403 15:13:24.673646   59162 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:24.674992   59162 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:24.805211   59162 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:25.125789   59162 openshift-tuned.go:435] Pod (openshift-marketplace/marketplace-operator-9466d5696-kg69s) labels changed node wide: true\nI0403 15:13:29.673654   59162 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:29.674991   59162 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:29.794715   59162 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:42.107440   59162 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5695bf7947-72zbt) labels changed node wide: true\nI0403 15:13:44.673676   59162 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:44.675156   59162 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:44.812059   59162 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:13:51.717659   59162 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-c796b5d87-qsrf9) labels changed node wide: true\n
Apr 03 15:16:18.230 E ns/openshift-dns pod/dns-default-rdf9c node/ip-10-0-128-97.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T14:59:51.862Z [INFO] CoreDNS-1.3.1\n2020-04-03T14:59:51.863Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T14:59:51.863Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 15:13:10.392125       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 25691 (30668)\nW0403 15:13:10.442658       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 18529 (30672)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:16:18.230 E ns/openshift-dns pod/dns-default-rdf9c node/ip-10-0-128-97.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (128) - No such process\n
Apr 03 15:16:18.630 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:13:22.339129 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:13:22.340367 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:13:22.341223 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:13:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.128.97:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:13:23.358687 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:16:18.630 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): 8fd39e48966bb359 (writer)\n2020-04-03 15:13:53.842419 I | rafthttp: stopped HTTP pipelining with peer 8fd39e48966bb359\n2020-04-03 15:13:53.842493 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842509 E | rafthttp: failed to read 8fd39e48966bb359 on stream MsgApp v2 (context canceled)\n2020-04-03 15:13:53.842516 I | rafthttp: peer 8fd39e48966bb359 became inactive (message send to peer failed)\n2020-04-03 15:13:53.842526 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842577 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842611 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842623 I | rafthttp: stopped peer 8fd39e48966bb359\n2020-04-03 15:13:53.842630 I | rafthttp: stopping peer 9f9988b7069b4cc7...\n2020-04-03 15:13:53.842864 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 writer)\n2020-04-03 15:13:53.842878 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843077 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message writer)\n2020-04-03 15:13:53.843088 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843188 I | rafthttp: stopped HTTP pipelining with peer 9f9988b7069b4cc7\n2020-04-03 15:13:53.843256 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843275 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843319 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843338 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843345 I | rafthttp: stopped peer 9f9988b7069b4cc7\n
Apr 03 15:16:19.027 E ns/openshift-controller-manager pod/controller-manager-qzzt2 node/ip-10-0-128-97.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 15:16:25.830 E ns/openshift-sdn pod/sdn-55kv2 node/ip-10-0-128-97.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ft-kube-controller-manager/kube-controller-manager:https"\nI0403 15:13:51.346682   69446 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:13:51.346704   69446 proxier.go:346] userspace syncProxyRules took 61.573283ms\nI0403 15:13:51.549041   69446 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.128.97:6443 10.0.132.247:6443]\nI0403 15:13:51.549079   69446 roundrobin.go:240] Delete endpoint 10.0.146.58:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0403 15:13:51.740541   69446 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:13:51.740570   69446 proxier.go:346] userspace syncProxyRules took 65.35308ms\nI0403 15:13:52.750942   69446 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.128.97:10259 10.0.132.247:10259]\nI0403 15:13:52.750979   69446 roundrobin.go:240] Delete endpoint 10.0.146.58:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0403 15:13:52.923956   69446 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:13:52.923983   69446 proxier.go:346] userspace syncProxyRules took 60.824336ms\ninterrupt: Gracefully shutting down ...\nE0403 15:13:53.429076   69446 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 15:13:53.429188   69446 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:13:53.544879   69446 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:13:53.722491   69446 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:13:53.729452   69446 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 15:16:26.229 E ns/openshift-sdn pod/ovs-64d8x node/ip-10-0-128-97.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): 8768 (No such device)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:13:35.790Z|00028|jsonrpc|WARN|unix#294: send error: Broken pipe\n2020-04-03T15:13:35.791Z|00029|reconnect|WARN|unix#294: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:13:36.110Z|00237|connmgr|INFO|br0<->unix#375: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:13:36.152Z|00238|connmgr|INFO|br0<->unix#378: 4 flow_mods in the last 0 s (4 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:13:36.168Z|00030|jsonrpc|WARN|unix#300: receive error: Connection reset by peer\n2020-04-03T15:13:36.168Z|00031|reconnect|WARN|unix#300: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:13:40.942Z|00239|connmgr|INFO|br0<->unix#381: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:40.964Z|00240|bridge|INFO|bridge br0: deleted interface veth8cabe8e8 on port 9\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:13:40.953Z|00032|jsonrpc|WARN|unix#305: receive error: Connection reset by peer\n2020-04-03T15:13:40.953Z|00033|reconnect|WARN|unix#305: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:13:46.991Z|00241|connmgr|INFO|br0<->unix#384: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:13:47.039Z|00242|connmgr|INFO|br0<->unix#387: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:47.092Z|00243|bridge|INFO|bridge br0: deleted interface veth3d817784 on port 20\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:13:47.082Z|00034|jsonrpc|WARN|unix#311: receive error: Connection reset by peer\n2020-04-03T15:13:47.082Z|00035|reconnect|WARN|unix#311: connection dropped (Connection reset by peer)\nTerminated\n2020-04-03T15:13:53Z|00001|unixctl|WARN|failed to connect to /var/run/openvswitch/ovs-vswitchd.69409.ctl\novs-appctl: cannot connect to "/var/run/openvswitch/ovs-vswitchd.69409.ctl" (No such file or directory)\novsdb-server is not running.\n
Apr 03 15:16:26.629 E ns/openshift-image-registry pod/node-ca-fcb4l node/ip-10-0-128-97.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:16:27.029 E ns/openshift-multus pod/multus-n5smv node/ip-10-0-128-97.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 03 15:16:27.029 E ns/openshift-multus pod/multus-n5smv node/ip-10-0-128-97.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:16:27.648 E ns/openshift-monitoring pod/node-exporter-clbnf node/ip-10-0-153-45.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:16:27.648 E ns/openshift-monitoring pod/node-exporter-clbnf node/ip-10-0-153-45.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:16:27.663 E ns/openshift-image-registry pod/node-ca-fl5g6 node/ip-10-0-153-45.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:16:27.891 E ns/openshift-cluster-node-tuning-operator pod/tuned-66kln node/ip-10-0-153-45.us-west-1.compute.internal container=tuned container exited with code 255 (Error): 26] Getting recommended profile...\nI0403 15:10:56.737392   55701 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:13:35.623285   55701 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-5fdb7b5ff7-7d2tv) labels changed node wide: true\nI0403 15:13:36.616337   55701 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:36.617640   55701 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:36.749813   55701 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:13:37.840052   55701 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-685469cb59-f6jw8) labels changed node wide: true\nI0403 15:13:41.616386   55701 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:41.617821   55701 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:41.759442   55701 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:13:48.007414   55701 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-5fdb7b5ff7-7d2tv) labels changed node wide: true\nI0403 15:13:51.616343   55701 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:13:51.618561   55701 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:13:51.743271   55701 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:13:53.500055   55701 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 15:13:53.502667   55701 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 15:13:53.502690   55701 openshift-tuned.go:722] Increasing resyncPeriod to 136\nI0403 15:14:48.860643   55701 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 15:16:31.686 E ns/openshift-sdn pod/sdn-xvm9g node/ip-10-0-153-45.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ice "openshift-kube-scheduler/scheduler:https"\nI0403 15:13:52.922024   62411 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:13:52.922060   62411 proxier.go:346] userspace syncProxyRules took 60.051559ms\nI0403 15:13:57.354852   62411 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.132.247:6443 10.0.146.58:6443]\nI0403 15:13:57.354889   62411 roundrobin.go:240] Delete endpoint 10.0.128.97:6443 for service "default/kubernetes:https"\nI0403 15:13:57.519166   62411 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:13:57.519187   62411 proxier.go:346] userspace syncProxyRules took 55.288442ms\nI0403 15:14:27.678186   62411 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:14:27.678213   62411 proxier.go:346] userspace syncProxyRules took 53.331144ms\nE0403 15:14:48.845895   62411 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 15:14:48.846005   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 15:14:48.946411   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:14:49.046308   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:14:49.151836   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:14:49.246520   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:14:49.346492   62411 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 15:16:32.221 E kube-apiserver Kube API started failing: Get https://api.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 03 15:16:36.639 E ns/openshift-dns pod/dns-default-n5glz node/ip-10-0-153-45.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T15:00:28.711Z [INFO] CoreDNS-1.3.1\n2020-04-03T15:00:28.711Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T15:00:28.711Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 15:13:10.027992       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 19483 (30637)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:16:36.639 E ns/openshift-dns pod/dns-default-n5glz node/ip-10-0-153-45.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (127) - No such process\n
Apr 03 15:16:36.708 E ns/openshift-sdn pod/ovs-rhk2p node/ip-10-0-153-45.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): b9b49e on port 11\n2020-04-03T15:13:55.090Z|00156|connmgr|INFO|br0<->unix#240: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:13:55.133Z|00157|connmgr|INFO|br0<->unix#243: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.171Z|00158|bridge|INFO|bridge br0: deleted interface vethf4f8f7fe on port 18\n2020-04-03T15:13:55.228Z|00159|connmgr|INFO|br0<->unix#246: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.302Z|00160|bridge|INFO|bridge br0: deleted interface vethdddec862 on port 7\n2020-04-03T15:13:55.359Z|00161|connmgr|INFO|br0<->unix#249: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.397Z|00162|bridge|INFO|bridge br0: deleted interface veth948ef5d0 on port 8\n2020-04-03T15:13:55.456Z|00163|connmgr|INFO|br0<->unix#252: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.488Z|00164|bridge|INFO|bridge br0: deleted interface veth986b1ca3 on port 13\n2020-04-03T15:13:55.606Z|00165|connmgr|INFO|br0<->unix#255: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.671Z|00166|bridge|INFO|bridge br0: deleted interface veth3384ae5c on port 6\n2020-04-03T15:13:55.738Z|00167|connmgr|INFO|br0<->unix#258: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:13:55.767Z|00168|bridge|INFO|bridge br0: deleted interface veth6f2bf679 on port 12\n2020-04-03T15:14:24.165Z|00169|connmgr|INFO|br0<->unix#264: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:14:24.192Z|00170|connmgr|INFO|br0<->unix#267: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:14:24.212Z|00171|bridge|INFO|bridge br0: deleted interface vethec98d32e on port 15\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:14:42.739Z|00023|jsonrpc|WARN|Dropped 8 log messages in last 841 seconds (most recently, 840 seconds ago) due to excessive rate\n2020-04-03T15:14:42.739Z|00024|jsonrpc|WARN|unix#223: receive error: Connection reset by peer\n2020-04-03T15:14:42.739Z|00025|reconnect|WARN|unix#223: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 15:16:36.712 E ns/openshift-multus pod/multus-rfd2t node/ip-10-0-153-45.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:16:36.787 E ns/openshift-machine-config-operator pod/machine-config-daemon-6bn2b node/ip-10-0-153-45.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 15:16:46.362 E ns/openshift-authentication-operator pod/authentication-operator-6b96ccccc-lznzn node/ip-10-0-132-247.us-west-1.compute.internal container=operator container exited with code 255 (Error): hentication-operator", UID:"355b39ad-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)" to "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)"\nI0403 15:15:36.890210       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T14:37:12Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T15:13:46Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T14:45:59Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T14:33:45Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 15:15:36.901449       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"355b39ad-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)" to ""\nI0403 15:16:37.570981       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:16:37.571135       1 leaderelection.go:65] leaderelection lost\n
Apr 03 15:16:49.360 E ns/openshift-machine-api pod/machine-api-operator-7b669fd4cd-v9m99 node/ip-10-0-132-247.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 15:16:50.565 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-86dfd6df78-kkw9q node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): \"kube-controller-manager-5\" is not ready" to ""\nI0403 15:16:24.627112       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"2dff0488-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-128-97.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-128-97.us-west-1.compute.internal container=\"kube-controller-manager-5\" is not ready"\nW0403 15:16:36.435435       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 19492 (33771)\nW0403 15:16:36.507142       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 19457 (33771)\nW0403 15:16:36.507269       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 19454 (33771)\nI0403 15:16:36.948159       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"2dff0488-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-128-97.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-128-97.us-west-1.compute.internal container=\"kube-controller-manager-5\" is not ready" to ""\nI0403 15:16:42.488623       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:16:42.488691       1 leaderelection.go:65] leaderelection lost\n
Apr 03 15:16:51.960 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-cbfbxv8c6 node/ip-10-0-132-247.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:16:53.560 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7b8fcc5c76-kdrrh node/ip-10-0-132-247.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:16:54.160 E ns/openshift-machine-config-operator pod/machine-config-operator-565df5fb9f-f4src node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:16:54.759 E ns/openshift-cluster-machine-approver pod/machine-approver-88646bfd-x7frp node/ip-10-0-132-247.us-west-1.compute.internal container=machine-approver-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:16:56.562 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5c896f6557-w8tkj node/ip-10-0-132-247.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): : 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-128-97.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-128-97.us-west-1.compute.internal container=\"kube-apiserver-8\" is not ready"\nW0403 15:16:36.435259       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 18533 (33771)\nW0403 15:16:36.457330       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 19457 (33771)\nW0403 15:16:36.492400       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 19457 (33771)\nW0403 15:16:36.492664       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 19454 (33771)\nW0403 15:16:36.502125       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 19457 (33771)\nI0403 15:16:37.290707       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"2de8824a-75b7-11ea-91c6-060888dd8c91", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-128-97.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-128-97.us-west-1.compute.internal container=\"kube-apiserver-8\" is not ready" to ""\nI0403 15:16:39.494291       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 15:16:39.494369       1 leaderelection.go:65] leaderelection lost\n
Apr 03 15:16:58.359 E ns/openshift-machine-config-operator pod/machine-config-controller-654b8c598f-j9nt8 node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 15:17:03.760 E ns/openshift-marketplace pod/marketplace-operator-9466d5696-qf44q node/ip-10-0-132-247.us-west-1.compute.internal container=marketplace-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:17:09.553 E ns/openshift-marketplace pod/certified-operators-5c664dcb56-ckrgz node/ip-10-0-153-45.us-west-1.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:17:09.769 E ns/openshift-marketplace pod/community-operators-6895c4d774-bhdr5 node/ip-10-0-153-45.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:17:21.960 E ns/openshift-operator-lifecycle-manager pod/packageserver-7f58cc55d4-trr6p node/ip-10-0-132-247.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): c catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:11Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:11Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:11Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:11Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:12Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T15:17:12Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T15:17:12Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:12Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T15:17:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 03 15:17:41.668 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): 5557bc9649-swv7h\nE0403 15:13:40.676565       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nW0403 15:13:46.217080       1 garbagecollector.go:648] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]\nI0403 15:13:47.169834       1 deployment_controller.go:484] Error syncing deployment openshift-machine-api/machine-api-operator: Operation cannot be fulfilled on deployments.apps "machine-api-operator": the object has been modified; please apply your changes to the latest version and try again\nI0403 15:13:51.212253       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/cluster-monitoring-operator: Operation cannot be fulfilled on deployments.apps "cluster-monitoring-operator": the object has been modified; please apply your changes to the latest version and try again\nI0403 15:13:52.318310       1 node_lifecycle_controller.go:775] Node ip-10-0-137-222.us-west-1.compute.internal is healthy again, removing all taints\nE0403 15:13:53.301767       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nE0403 15:13:53.303323       1 reflector.go:237] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0403 15:13:53.303417       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0403 15:13:53.350360       1 controllermanager.go:282] leaderelection lost\nI0403 15:13:53.350402       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:17:41.668 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): ers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18684&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 14:53:40.460214       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=18903&timeout=6m57s&timeoutSeconds=417&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 14:53:41.460901       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:53:41.465113       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 14:53:46.852410       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 14:53:46.853210       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 14:59:37.883939       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19683 (24319)\nW0403 15:08:55.902741       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24569 (28192)\n
Apr 03 15:17:42.070 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0403 14:53:42.526727       1 certsync_controller.go:269] Starting CertSyncer\nI0403 14:53:42.526907       1 observer_polling.go:106] Starting file observer\nW0403 14:59:25.066889       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22342 (24215)\nW0403 15:08:51.072305       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24464 (28149)\n
Apr 03 15:17:42.070 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=kube-apiserver-8 container exited with code 255 (Error): OF\nI0403 15:13:53.288068       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288175       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288188       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288293       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288307       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288409       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288424       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288527       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288539       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288672       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288687       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288792       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288804       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.288944       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.288958       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.289094       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 15:13:53.289108       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 15:13:53.400587       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 03 15:17:42.698 E clusteroperator/monitoring changed Degraded to True: UpdatingnodeExporterFailed: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter ClusterRoleBinding failed: updating ClusterRoleBinding object failed: Put https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/node-exporter: unexpected EOF
Apr 03 15:17:44.867 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0403 14:53:46.848348       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope\nE0403 14:53:46.852846       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0403 14:53:46.852933       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0403 14:53:48.050542       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:53:48.986395       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:53:50.017061       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nE0403 14:54:04.513394       1 factory.go:832] scheduler cache UpdatePod failed: pod e9c93da0-75ba-11ea-b403-0203ba0fdf9f is not added to scheduler cache, so cannot be updated\nW0403 15:13:10.022493       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19483 (30637)\nW0403 15:13:10.088971       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19493 (30640)\nE0403 15:13:53.372742       1 server.go:259] lost master\n
Apr 03 15:17:46.068 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:13:22.339129 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:13:22.340367 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:13:22.341223 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:13:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.128.97:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:13:23.358687 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:17:46.068 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): 8fd39e48966bb359 (writer)\n2020-04-03 15:13:53.842419 I | rafthttp: stopped HTTP pipelining with peer 8fd39e48966bb359\n2020-04-03 15:13:53.842493 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842509 E | rafthttp: failed to read 8fd39e48966bb359 on stream MsgApp v2 (context canceled)\n2020-04-03 15:13:53.842516 I | rafthttp: peer 8fd39e48966bb359 became inactive (message send to peer failed)\n2020-04-03 15:13:53.842526 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842577 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842611 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842623 I | rafthttp: stopped peer 8fd39e48966bb359\n2020-04-03 15:13:53.842630 I | rafthttp: stopping peer 9f9988b7069b4cc7...\n2020-04-03 15:13:53.842864 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 writer)\n2020-04-03 15:13:53.842878 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843077 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message writer)\n2020-04-03 15:13:53.843088 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843188 I | rafthttp: stopped HTTP pipelining with peer 9f9988b7069b4cc7\n2020-04-03 15:13:53.843256 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843275 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843319 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843338 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843345 I | rafthttp: stopped peer 9f9988b7069b4cc7\n
Apr 03 15:17:48.470 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:13:22.339129 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:13:22.340367 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:13:22.341223 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:13:22 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.128.97:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:13:23.358687 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:17:48.470 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-97.us-west-1.compute.internal node/ip-10-0-128-97.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): 8fd39e48966bb359 (writer)\n2020-04-03 15:13:53.842419 I | rafthttp: stopped HTTP pipelining with peer 8fd39e48966bb359\n2020-04-03 15:13:53.842493 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842509 E | rafthttp: failed to read 8fd39e48966bb359 on stream MsgApp v2 (context canceled)\n2020-04-03 15:13:53.842516 I | rafthttp: peer 8fd39e48966bb359 became inactive (message send to peer failed)\n2020-04-03 15:13:53.842526 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.842577 W | rafthttp: lost the TCP streaming connection with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842611 I | rafthttp: stopped streaming with peer 8fd39e48966bb359 (stream Message reader)\n2020-04-03 15:13:53.842623 I | rafthttp: stopped peer 8fd39e48966bb359\n2020-04-03 15:13:53.842630 I | rafthttp: stopping peer 9f9988b7069b4cc7...\n2020-04-03 15:13:53.842864 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 writer)\n2020-04-03 15:13:53.842878 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843077 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message writer)\n2020-04-03 15:13:53.843088 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:13:53.843188 I | rafthttp: stopped HTTP pipelining with peer 9f9988b7069b4cc7\n2020-04-03 15:13:53.843256 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843275 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:13:53.843319 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843338 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:13:53.843345 I | rafthttp: stopped peer 9f9988b7069b4cc7\n
Apr 03 15:17:54.672 E ns/openshift-marketplace pod/community-operators-5557bc9649-mq84b node/ip-10-0-137-222.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 15:17:56.678 E ns/openshift-marketplace pod/certified-operators-7c59c588bc-jr4bb node/ip-10-0-137-222.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 15:18:19.863 E ns/openshift-authentication pod/oauth-openshift-65ff57dd9b-54ddf node/ip-10-0-128-97.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 15:19:04.261 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-apiserver-8 container exited with code 255 (Error): 1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237471       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237539       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237694       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237749       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237899       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237966       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nE0403 15:17:23.240140       1 reflector.go:237] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\nI0403 15:17:23.262089       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 15:17:23.279121       1 reflector.go:256] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: The resourceVersion for the provided watch is too old.\nW0403 15:17:23.282040       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.128.97 10.0.146.58]\n
Apr 03 15:19:04.261 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0403 14:51:51.095142       1 observer_polling.go:106] Starting file observer\nI0403 14:51:51.095557       1 certsync_controller.go:269] Starting CertSyncer\nW0403 14:58:20.738909       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22342 (23886)\nW0403 15:08:04.745058       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24046 (27859)\nW0403 15:13:15.751322       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28050 (30297)\n
Apr 03 15:19:04.685 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 14:53:49.516346       1 certsync_controller.go:269] Starting CertSyncer\nI0403 14:53:49.516492       1 observer_polling.go:106] Starting file observer\nW0403 15:00:57.545079       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19683 (25024)\nW0403 15:07:10.549626       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25345 (27595)\nW0403 15:12:55.554451       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27726 (30081)\n
Apr 03 15:19:04.685 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): TC to 2020-04-04 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985649128 +0000 UTC))\nI0403 14:53:49.985695       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985674849 +0000 UTC))\nI0403 14:53:49.985718       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985705044 +0000 UTC))\nI0403 14:53:49.993315       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 14:53:49.994413       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:38 +0000 UTC to 2022-04-03 14:28:39 +0000 UTC (now=2020-04-03 14:53:49.99439511 +0000 UTC))\nI0403 14:53:49.994441       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:53:49.99443151 +0000 UTC))\nI0403 14:53:49.994461       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 14:53:49.994508       1 serving.go:77] Starting DynamicLoader\nI0403 14:53:49.994997       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 15:17:23.301304       1 controllermanager.go:282] leaderelection lost\nI0403 15:17:23.301330       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:19:14.005 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:39 +0000 UTC to 2022-04-03 14:28:40 +0000 UTC (now=2020-04-03 14:54:48.353078648 +0000 UTC))\nI0403 14:54:48.353133       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:54:48.353117302 +0000 UTC))\nI0403 14:54:48.353158       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 14:54:48.353216       1 serving.go:77] Starting DynamicLoader\nI0403 14:54:49.254720       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 14:54:49.354841       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 14:54:49.354889       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 15:11:20.571369       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 15:13:10.328219       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18533 (30664)\nW0403 15:13:10.328328       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18529 (30664)\nW0403 15:13:10.411218       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (30668)\nW0403 15:16:36.466013       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 18529 (33771)\nE0403 15:17:23.286255       1 server.go:259] lost master\n
Apr 03 15:19:15.604 E ns/openshift-image-registry pod/node-ca-cwghl node/ip-10-0-132-247.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:19:15.944 E ns/openshift-ingress pod/router-default-7669bb5dff-dq259 node/ip-10-0-143-181.us-west-1.compute.internal container=router container exited with code 2 (Error): 15:17:28.631715       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:33.632632       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:38.661031       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:43.634412       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:48.636186       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:53.633308       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:17:58.634426       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:18:03.633036       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:18:18.338664       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:18:23.330297       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:18:33.731706       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:18:38.725708       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 15:19:12.402457       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 15:19:16.007 E ns/openshift-dns pod/dns-default-jsgdh node/ip-10-0-132-247.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 15:19:16.007 E ns/openshift-dns pod/dns-default-jsgdh node/ip-10-0-132-247.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T14:59:26.005Z [INFO] CoreDNS-1.3.1\n2020-04-03T14:59:26.005Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T14:59:26.005Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 15:10:46.414638       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 25691 (28999)\nW0403 15:13:10.037055       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21484 (30637)\nE0403 15:13:53.502102       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=30637&timeoutSeconds=384&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:19:16.405 E ns/openshift-sdn pod/sdn-controller-rv7hw node/ip-10-0-132-247.us-west-1.compute.internal container=sdn-controller container exited with code 255 (Error): coreos.com/v1: the server is currently unable to handle the request\nW0403 15:13:53.642924       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 28999 (31523)\nW0403 15:13:53.679286       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 28973 (31529)\nE0403 15:14:40.770762       1 memcache.go:141] couldn't get resource list for oauth.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:14:42.962301       1 memcache.go:141] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:14:46.034332       1 memcache.go:141] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:14:49.107758       1 memcache.go:141] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:14:49.144105       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 15:15:19.825971       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:15:22.898698       1 memcache.go:141] couldn't get resource list for quota.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:15:25.970665       1 memcache.go:141] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:15:29.042762       1 memcache.go:141] couldn't get resource list for user.openshift.io/v1: the server is currently unable to handle the request\nE0403 15:16:59.694345       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
Apr 03 15:19:18.006 E ns/openshift-apiserver pod/apiserver-b45r9 node/ip-10-0-132-247.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): 3 15:17:05.879078       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:17:05.889192       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 15:17:05.889372       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:17:05.889407       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:17:05.889374       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:17:05.895112       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 15:17:05.906948       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 15:17:11.146257       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 15:17:21.214139       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 15:17:23.233981       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 15:17:23.234445       1 serving.go:88] Shutting down DynamicLoader\nI0403 15:17:23.234462       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 15:17:23.234562       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 15:17:23.234677       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 15:17:23.235915       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 15:17:23.236013       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 15:19:18.405 E ns/openshift-sdn pod/ovs-d4f5r node/ip-10-0-132-247.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): in the last 0 s (4 deletes)\n2020-04-03T15:16:45.325Z|00285|bridge|INFO|bridge br0: deleted interface vethc433f536 on port 35\n2020-04-03T15:16:45.485Z|00286|connmgr|INFO|br0<->unix#481: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:16:45.523Z|00287|bridge|INFO|bridge br0: deleted interface veth2a6edf33 on port 10\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:16:45.306Z|00027|jsonrpc|WARN|unix#377: send error: Broken pipe\n2020-04-03T15:16:45.307Z|00028|reconnect|WARN|unix#377: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:16:45.788Z|00288|connmgr|INFO|br0<->unix#484: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:16:45.833Z|00289|connmgr|INFO|br0<->unix#487: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:16:45.875Z|00290|bridge|INFO|bridge br0: deleted interface veth885d21e7 on port 36\n2020-04-03T15:16:46.176Z|00291|connmgr|INFO|br0<->unix#490: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:16:46.208Z|00292|bridge|INFO|bridge br0: deleted interface veth9a525026 on port 9\n2020-04-03T15:16:46.266Z|00293|connmgr|INFO|br0<->unix#493: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:16:46.303Z|00294|bridge|INFO|bridge br0: deleted interface veth464fedd8 on port 7\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:16:46.275Z|00029|jsonrpc|WARN|unix#395: send error: Broken pipe\n2020-04-03T15:16:46.275Z|00030|reconnect|WARN|unix#395: connection dropped (Broken pipe)\n2020-04-03T15:17:02.804Z|00031|jsonrpc|WARN|unix#400: receive error: Connection reset by peer\n2020-04-03T15:17:02.804Z|00032|reconnect|WARN|unix#400: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:17:14.801Z|00295|connmgr|INFO|br0<->unix#499: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:17:14.831Z|00296|connmgr|INFO|br0<->unix#502: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:17:14.855Z|00297|bridge|INFO|bridge br0: deleted interface veth210d905b on port 37\nTerminated\nTerminated\n
Apr 03 15:19:18.806 E ns/openshift-multus pod/multus-45qwc node/ip-10-0-132-247.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:19:19.204 E ns/openshift-machine-config-operator pod/machine-config-server-b24ct node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 15:19:24.804 E ns/openshift-controller-manager pod/controller-manager-9krl4 node/ip-10-0-132-247.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 15:19:25.206 E ns/openshift-sdn pod/sdn-k8h7j node/ip-10-0-132-247.us-west-1.compute.internal container=sdn container exited with code 255 (Error): :17:17.045924   77900 proxier.go:346] userspace syncProxyRules took 67.499953ms\nI0403 15:17:22.528418   77900 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd-metrics to [10.0.128.97:9979 10.0.132.247:9979 10.0.146.58:9979]\nI0403 15:17:22.528519   77900 roundrobin.go:240] Delete endpoint 10.0.132.247:9979 for service "openshift-etcd/etcd:etcd-metrics"\nI0403 15:17:22.528545   77900 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd to [10.0.128.97:2379 10.0.132.247:2379 10.0.146.58:2379]\nI0403 15:17:22.528579   77900 roundrobin.go:240] Delete endpoint 10.0.132.247:2379 for service "openshift-etcd/etcd:etcd"\nI0403 15:17:22.717680   77900 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:17:22.717706   77900 proxier.go:346] userspace syncProxyRules took 59.147007ms\nE0403 15:17:23.276918   77900 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 15:17:23.277042   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 15:17:23.377968   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:17:23.477383   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:17:23.577398   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:17:23.677752   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:17:23.784034   77900 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 15:19:26.804 E ns/openshift-monitoring pod/node-exporter-tpxfh node/ip-10-0-132-247.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:19:26.804 E ns/openshift-monitoring pod/node-exporter-tpxfh node/ip-10-0-132-247.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:19:27.204 E ns/openshift-machine-config-operator pod/machine-config-daemon-7dg67 node/ip-10-0-132-247.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 15:19:27.610 E ns/openshift-cluster-node-tuning-operator pod/tuned-mvhrd node/ip-10-0-132-247.us-west-1.compute.internal container=tuned container exited with code 255 (Error): g-controller-manager-operator/openshift-service-catalog-controller-manager-operator-cbfbxv8c6) labels changed node wide: true\nI0403 15:16:57.267112  107578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:16:57.268578  107578 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:16:57.391067  107578 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:16:57.391577  107578 openshift-tuned.go:435] Pod (openshift-apiserver-operator/openshift-apiserver-operator-85777c599d-h4qgc) labels changed node wide: true\nI0403 15:17:02.266984  107578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:17:02.268456  107578 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:17:02.392005  107578 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:17:02.730564  107578 openshift-tuned.go:435] Pod (openshift-cluster-storage-operator/cluster-storage-operator-77874b8bc7-m8tl6) labels changed node wide: true\nI0403 15:17:07.267003  107578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:17:07.268382  107578 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:17:07.383089  107578 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 15:17:21.531698  107578 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5695bf7947-67hhf) labels changed node wide: true\nI0403 15:17:22.267087  107578 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:17:22.268400  107578 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:17:22.387089  107578 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Apr 03 15:19:42.806 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0403 14:53:49.516346       1 certsync_controller.go:269] Starting CertSyncer\nI0403 14:53:49.516492       1 observer_polling.go:106] Starting file observer\nW0403 15:00:57.545079       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19683 (25024)\nW0403 15:07:10.549626       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25345 (27595)\nW0403 15:12:55.554451       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27726 (30081)\n
Apr 03 15:19:42.806 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): TC to 2020-04-04 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985649128 +0000 UTC))\nI0403 14:53:49.985695       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985674849 +0000 UTC))\nI0403 14:53:49.985718       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 14:12:53 +0000 UTC to 2021-04-03 14:12:53 +0000 UTC (now=2020-04-03 14:53:49.985705044 +0000 UTC))\nI0403 14:53:49.993315       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 14:53:49.994413       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:38 +0000 UTC to 2022-04-03 14:28:39 +0000 UTC (now=2020-04-03 14:53:49.99439511 +0000 UTC))\nI0403 14:53:49.994441       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:53:49.99443151 +0000 UTC))\nI0403 14:53:49.994461       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 14:53:49.994508       1 serving.go:77] Starting DynamicLoader\nI0403 14:53:49.994997       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 15:17:23.301304       1 controllermanager.go:282] leaderelection lost\nI0403 15:17:23.301330       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 15:19:43.205 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:39 +0000 UTC to 2022-04-03 14:28:40 +0000 UTC (now=2020-04-03 14:54:48.353078648 +0000 UTC))\nI0403 14:54:48.353133       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:54:48.353117302 +0000 UTC))\nI0403 14:54:48.353158       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 14:54:48.353216       1 serving.go:77] Starting DynamicLoader\nI0403 14:54:49.254720       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 14:54:49.354841       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 14:54:49.354889       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 15:11:20.571369       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 15:13:10.328219       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18533 (30664)\nW0403 15:13:10.328328       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18529 (30664)\nW0403 15:13:10.411218       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (30668)\nW0403 15:16:36.466013       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 18529 (33771)\nE0403 15:17:23.286255       1 server.go:259] lost master\n
Apr 03 15:19:43.606 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:16:50.421485 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:16:50.422508 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:16:50.423226 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:16:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.132.247:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:16:51.436728 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:19:43.606 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): dcbfce377863944a (writer)\n2020-04-03 15:17:23.678092 I | rafthttp: stopped HTTP pipelining with peer dcbfce377863944a\n2020-04-03 15:17:23.678214 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:17:23.678275 E | rafthttp: failed to read dcbfce377863944a on stream MsgApp v2 (context canceled)\n2020-04-03 15:17:23.678316 I | rafthttp: peer dcbfce377863944a became inactive (message send to peer failed)\n2020-04-03 15:17:23.678355 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:17:23.678465 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:17:23.678524 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:17:23.678578 I | rafthttp: stopped peer dcbfce377863944a\n2020-04-03 15:17:23.678620 I | rafthttp: stopping peer 9f9988b7069b4cc7...\n2020-04-03 15:17:23.679091 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 writer)\n2020-04-03 15:17:23.679158 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:17:23.679594 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message writer)\n2020-04-03 15:17:23.679766 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:17:23.679828 I | rafthttp: stopped HTTP pipelining with peer 9f9988b7069b4cc7\n2020-04-03 15:17:23.679958 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:17:23.680018 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:17:23.680101 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:17:23.680152 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:17:23.680187 I | rafthttp: stopped peer 9f9988b7069b4cc7\n
Apr 03 15:19:44.005 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-apiserver-8 container exited with code 255 (Error): 1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237471       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237539       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237694       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237749       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237899       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nI0403 15:17:23.237966       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5317, ErrCode=NO_ERROR, debug=""\nE0403 15:17:23.240140       1 reflector.go:237] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\nI0403 15:17:23.262089       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 15:17:23.279121       1 reflector.go:256] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: The resourceVersion for the provided watch is too old.\nW0403 15:17:23.282040       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.128.97 10.0.146.58]\n
Apr 03 15:19:44.005 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=kube-apiserver-cert-syncer-8 container exited with code 255 (Error): I0403 14:51:51.095142       1 observer_polling.go:106] Starting file observer\nI0403 14:51:51.095557       1 certsync_controller.go:269] Starting CertSyncer\nW0403 14:58:20.738909       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22342 (23886)\nW0403 15:08:04.745058       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24046 (27859)\nW0403 15:13:15.751322       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28050 (30297)\n
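Note on the repeated "too old resource version" warnings (here and in several other logs in this run): this is the normal client-go reflector behaviour when a watch outlives the history window the apiserver keeps. The reflector re-lists and restarts the watch from the fresh resourceVersion. A stripped-down sketch of that recovery loop follows; it assumes client-go v0.18+ (where List/Watch take a context), the namespace is only an example, and the real components use shared informers rather than a hand-rolled loop like this.

package main

import (
    "context"
    "log"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// watchConfigMaps keeps a ConfigMap watch alive, re-listing whenever the
// stored resourceVersion has been compacted away ("too old resource version").
func watchConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
    for {
        // List first to get a resourceVersion that is currently valid.
        list, err := cs.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }

        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
            ResourceVersion: list.ResourceVersion,
        })
        if err != nil {
            if apierrors.IsResourceExpired(err) || apierrors.IsGone(err) {
                continue // version already compacted; re-list and try again
            }
            return err
        }

        for ev := range w.ResultChan() {
            if ev.Type == watch.Error {
                break // e.g. "too old resource version"; drop out and re-list
            }
            log.Printf("configmap event: %s", ev.Type)
        }
        w.Stop()

        if ctx.Err() != nil {
            return ctx.Err()
        }
    }
}

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    log.Fatal(watchConfigMaps(context.Background(), cs, "openshift-kube-apiserver"))
}

Because the informer machinery already does this re-list internally, the warnings above are noisy but harmless on their own; they only matter here as a sign of how often the watch connections were being cut during the rollout.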
Apr 03 15:19:47.205 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585924100" (2020-04-03 14:28:39 +0000 UTC to 2022-04-03 14:28:40 +0000 UTC (now=2020-04-03 14:54:48.353078648 +0000 UTC))\nI0403 14:54:48.353133       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585924100" [] issuer="<self>" (2020-04-03 14:28:20 +0000 UTC to 2021-04-03 14:28:21 +0000 UTC (now=2020-04-03 14:54:48.353117302 +0000 UTC))\nI0403 14:54:48.353158       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 14:54:48.353216       1 serving.go:77] Starting DynamicLoader\nI0403 14:54:49.254720       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 14:54:49.354841       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 14:54:49.354889       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 15:11:20.571369       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 15:13:10.328219       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18533 (30664)\nW0403 15:13:10.328328       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18529 (30664)\nW0403 15:13:10.411218       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25691 (30668)\nW0403 15:16:36.466013       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 18529 (33771)\nE0403 15:17:23.286255       1 server.go:259] lost master\n
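Note on the scheduler exit above: the log shows the full leader-election lifecycle: the process acquires the openshift-kube-scheduler/kube-scheduler lease, then deliberately exits with a non-zero code once it loses contact with the master, so the static pod restarts and rejoins the election cleanly. A hedged sketch of that pattern using client-go's leaderelection package follows; the lease namespace/name mirror the log, but the durations, lock type, and Fatal-on-loss choice are illustrative assumptions (and it presumes a recent client-go), not the component's actual configuration.

package main

import (
    "context"
    "os"
    "time"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
    "k8s.io/klog/v2"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        klog.Fatal(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    id, _ := os.Hostname()
    lock, err := resourcelock.New(
        resourcelock.LeasesResourceLock,
        "openshift-kube-scheduler", "kube-scheduler", // lease namespace/name, as in the log
        cs.CoreV1(), cs.CoordinationV1(),
        resourcelock.ResourceLockConfig{Identity: id},
    )
    if err != nil {
        klog.Fatal(err)
    }

    leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                // The component's real work runs here while the lease is held.
                <-ctx.Done()
            },
            OnStoppedLeading: func() {
                // Exiting non-zero on lost leadership is intentional: the kubelet
                // restarts the container, which rejoins the election from scratch.
                klog.Fatal("leader election lost")
            },
        },
    })
}

Crash-and-restart on lost leadership is the safe design choice here: it guarantees a process that can no longer renew its lease also stops acting, at the cost of the exit-code-255 events the monitor records during every control-plane roll.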
Apr 03 15:19:47.610 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 15:16:50.421485 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 15:16:50.422508 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 15:16:50.423226 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 15:16:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.132.247:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-xsts04cq-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 15:16:51.436728 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 15:19:47.610 E ns/openshift-etcd pod/etcd-member-ip-10-0-132-247.us-west-1.compute.internal node/ip-10-0-132-247.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): dcbfce377863944a (writer)\n2020-04-03 15:17:23.678092 I | rafthttp: stopped HTTP pipelining with peer dcbfce377863944a\n2020-04-03 15:17:23.678214 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:17:23.678275 E | rafthttp: failed to read dcbfce377863944a on stream MsgApp v2 (context canceled)\n2020-04-03 15:17:23.678316 I | rafthttp: peer dcbfce377863944a became inactive (message send to peer failed)\n2020-04-03 15:17:23.678355 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream MsgApp v2 reader)\n2020-04-03 15:17:23.678465 W | rafthttp: lost the TCP streaming connection with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:17:23.678524 I | rafthttp: stopped streaming with peer dcbfce377863944a (stream Message reader)\n2020-04-03 15:17:23.678578 I | rafthttp: stopped peer dcbfce377863944a\n2020-04-03 15:17:23.678620 I | rafthttp: stopping peer 9f9988b7069b4cc7...\n2020-04-03 15:17:23.679091 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 writer)\n2020-04-03 15:17:23.679158 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:17:23.679594 I | rafthttp: closed the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message writer)\n2020-04-03 15:17:23.679766 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (writer)\n2020-04-03 15:17:23.679828 I | rafthttp: stopped HTTP pipelining with peer 9f9988b7069b4cc7\n2020-04-03 15:17:23.679958 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:17:23.680018 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream MsgApp v2 reader)\n2020-04-03 15:17:23.680101 W | rafthttp: lost the TCP streaming connection with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:17:23.680152 I | rafthttp: stopped streaming with peer 9f9988b7069b4cc7 (stream Message reader)\n2020-04-03 15:17:23.680187 I | rafthttp: stopped peer 9f9988b7069b4cc7\n
Apr 03 15:21:27.436 E ns/openshift-monitoring pod/node-exporter-cmwqg node/ip-10-0-143-181.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 15:21:27.436 E ns/openshift-monitoring pod/node-exporter-cmwqg node/ip-10-0-143-181.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 15:21:27.448 E ns/openshift-image-registry pod/node-ca-n9ms5 node/ip-10-0-143-181.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 15:21:27.654 E ns/openshift-multus pod/multus-dkxtr node/ip-10-0-143-181.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 15:21:31.488 E ns/openshift-sdn pod/sdn-85xr6 node/ip-10-0-143-181.us-west-1.compute.internal container=sdn container exited with code 255 (Error):   51393 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd to [10.0.128.97:2379 10.0.146.58:2379]\nI0403 15:19:47.577022   51393 roundrobin.go:240] Delete endpoint 10.0.132.247:2379 for service "openshift-etcd/etcd:etcd"\nI0403 15:19:47.746825   51393 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:19:47.746848   51393 proxier.go:346] userspace syncProxyRules took 53.365154ms\nI0403 15:19:48.375306   51393 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd-metrics to [10.0.128.97:9979 10.0.132.247:9979 10.0.146.58:9979]\nI0403 15:19:48.375343   51393 roundrobin.go:240] Delete endpoint 10.0.132.247:9979 for service "openshift-etcd/etcd:etcd-metrics"\nI0403 15:19:48.375358   51393 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd to [10.0.128.97:2379 10.0.132.247:2379 10.0.146.58:2379]\nI0403 15:19:48.375365   51393 roundrobin.go:240] Delete endpoint 10.0.132.247:2379 for service "openshift-etcd/etcd:etcd"\nI0403 15:19:48.534663   51393 proxier.go:367] userspace proxy: processing 0 service events\nI0403 15:19:48.534686   51393 proxier.go:346] userspace syncProxyRules took 55.226636ms\ninterrupt: Gracefully shutting down ...\nE0403 15:19:48.859060   51393 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 15:19:48.859181   51393 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:19:48.964281   51393 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:19:49.059502   51393 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 15:19:49.159968   51393 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
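Note on the SDN healthcheck errors above: they are failed dials to the OVS database socket while openvswitch is restarted as part of the node upgrade. A minimal probe for the same condition is sketched below; the socket path is copied from the log, while the timeout is an arbitrary choice for the sketch.

package main

import (
    "fmt"
    "net"
    "time"
)

// ovsAlive dials the ovsdb-server unix socket and reports whether it answers.
func ovsAlive(sock string) error {
    conn, err := net.DialTimeout("unix", sock, 500*time.Millisecond)
    if err != nil {
        return fmt.Errorf("SDN healthcheck unable to reach OVS: %w", err)
    }
    return conn.Close()
}

func main() {
    if err := ovsAlive("/var/run/openvswitch/db.sock"); err != nil {
        // While OVS is down this surfaces the same underlying error as the
        // log above: "connect: no such file or directory".
        fmt.Println(err)
    }
}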
Apr 03 15:21:32.042 E ns/openshift-dns pod/dns-default-hgcjx node/ip-10-0-143-181.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T15:01:01.785Z [INFO] CoreDNS-1.3.1\n2020-04-03T15:01:01.785Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T15:01:01.785Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 15:13:10.412706       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 25691 (30668)\nW0403 15:13:10.435449       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 18529 (30672)\nW0403 15:17:23.896358       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 30672 (33463)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 15:21:32.042 E ns/openshift-dns pod/dns-default-hgcjx node/ip-10-0-143-181.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (159) - No such process\n
Apr 03 15:21:32.409 E ns/openshift-sdn pod/ovs-q5s64 node/ip-10-0-143-181.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): -04-03T15:19:13.976Z|00170|bridge|INFO|bridge br0: deleted interface vethc232a7a1 on port 3\n2020-04-03T15:19:14.021Z|00171|connmgr|INFO|br0<->unix#299: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:19:14.062Z|00172|connmgr|INFO|br0<->unix#302: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:19:14.092Z|00173|bridge|INFO|bridge br0: deleted interface vetha3c5852e on port 16\n2020-04-03T15:19:14.168Z|00174|connmgr|INFO|br0<->unix#305: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:19:14.223Z|00175|connmgr|INFO|br0<->unix#308: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:19:14.267Z|00176|bridge|INFO|bridge br0: deleted interface veth9f40cd5f on port 19\n2020-04-03T15:19:42.579Z|00177|connmgr|INFO|br0<->unix#314: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:19:42.600Z|00178|bridge|INFO|bridge br0: deleted interface veth33fa80e9 on port 12\n2020-04-03T15:19:42.630Z|00179|connmgr|INFO|br0<->unix#317: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:19:42.669Z|00180|connmgr|INFO|br0<->unix#320: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:19:42.690Z|00181|bridge|INFO|bridge br0: deleted interface vethf6f1f845 on port 14\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T15:19:42.684Z|00022|jsonrpc|WARN|Dropped 7 log messages in last 993 seconds (most recently, 993 seconds ago) due to excessive rate\n2020-04-03T15:19:42.684Z|00023|jsonrpc|WARN|unix#245: receive error: Connection reset by peer\n2020-04-03T15:19:42.684Z|00024|reconnect|WARN|unix#245: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T15:19:43.233Z|00182|connmgr|INFO|br0<->unix#323: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T15:19:43.265Z|00183|connmgr|INFO|br0<->unix#326: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T15:19:43.286Z|00184|bridge|INFO|bridge br0: deleted interface veth84a1f3c6 on port 20\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\ncat: /var/run/openvswitch/ovs-vswitchd.pid: No such file or directory\n
Apr 03 15:21:32.778 E ns/openshift-machine-config-operator pod/machine-config-daemon-rf569 node/ip-10-0-143-181.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 15:21:33.148 E ns/openshift-cluster-node-tuning-operator pod/tuned-t2sk9 node/ip-10-0-143-181.us-west-1.compute.internal container=tuned container exited with code 255 (Error): g labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:16:32.137687   78717 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:16:32.249270   78717 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:19:12.347685   78717 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-deployment-upgrade-pdbls/dp-57cc5d77b4-jsvpv) labels changed node wide: true\nI0403 15:19:17.136151   78717 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:19:17.138353   78717 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:19:17.247084   78717 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:19:18.317337   78717 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-7c86dc4fbd-ngbkx) labels changed node wide: true\nI0403 15:19:22.136133   78717 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:19:22.137761   78717 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:19:22.247180   78717 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:19:44.172063   78717 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-v29lk/foo-scwjg) labels changed node wide: false\nI0403 15:19:44.193175   78717 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-v29lk/foo-w66g6) labels changed node wide: true\nI0403 15:19:47.136166   78717 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 15:19:47.138261   78717 openshift-tuned.go:326] Getting recommended profile...\nI0403 15:19:47.255149   78717 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 15:19:47.822925   78717 openshift-tuned.go:435] Pod (openshift-console/downloads-5df57b9b8c-79cqc) labels changed node wide: true\n
Apr 03 15:21:40.741 E ns/openshift-multus pod/multus-dkxtr node/ip-10-0-143-181.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
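The final entry is not a container crash but a test-suite invariant: the monitor observed the multus pod reported as Running and later as Pending, an ordering the pod lifecycle forbids. The sketch below shows the kind of bookkeeping that produces such a message; it is not the actual openshift-tests code, and the tracker type and key format are our own.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// phaseTracker remembers the last phase observed for each pod and flags the
// one regression the monitor complains about: Running -> Pending.
type phaseTracker struct {
    last map[string]corev1.PodPhase // key: "namespace/name"
}

func (t *phaseTracker) observe(ns, name string, phase corev1.PodPhase) error {
    key := ns + "/" + name
    prev, seen := t.last[key]
    t.last[key] = phase
    if seen && prev == corev1.PodRunning && phase == corev1.PodPending {
        return fmt.Errorf("invariant violation: pod %s may not transition Running->Pending", key)
    }
    return nil
}

func main() {
    t := &phaseTracker{last: map[string]corev1.PodPhase{}}
    _ = t.observe("openshift-multus", "multus-dkxtr", corev1.PodRunning)
    if err := t.observe("openshift-multus", "multus-dkxtr", corev1.PodPending); err != nil {
        fmt.Println(err) // the same class of failure recorded above
    }
}

This sketch only flags the specific Running->Pending regression named in the message; a fuller checker would also reject any transition out of the terminal Succeeded and Failed phases.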