Result: FAILURE
Tests: 3 failed / 19 succeeded
Started: 2020-04-03 18:36
Elapsed: 1h44m
Work namespace: ci-op-ci7zfy1r
Refs: release-4.1:514189df
812:8d0c3f82
pod: f2cfbb58-75d9-11ea-ae76-0a58ac104072
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (50m23s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
219 error level events were detected during this test run:

Apr 03 19:26:26.819 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7cf6fd78c6-7bk7h node/ip-10-0-135-157.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0403 19:20:33.659443       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"4f7cec09-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 3; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0403 19:20:33.696672       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"4f7cec09-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-135-157.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-135-157.us-west-2.compute.internal container=\"kube-apiserver-6\" is not ready" to ""\nI0403 19:20:34.813671       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"4f7cec09-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-135-157.us-west-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0403 19:25:39.389560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14583 (16525)\nI0403 19:26:25.639974       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:26:25.640038       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:28:17.988 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7d574b8f5f-xr9bj node/ip-10-0-135-157.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 10241 (13999)\nW0403 19:20:15.571394       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ClusterRoleBinding ended with: too old resource version: 11020 (14003)\nW0403 19:20:15.571444       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14006 (14446)\nW0403 19:20:15.571473       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13149 (13998)\nW0403 19:20:15.571508       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13988 (13998)\nW0403 19:20:15.573247       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 13981 (13999)\nW0403 19:20:15.732386       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 14302 (14570)\nW0403 19:20:15.734392       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 8892 (14570)\nW0403 19:20:15.810130       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 8906 (14570)\nW0403 19:20:15.822473       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeScheduler ended with: too old resource version: 13709 (14570)\nI0403 19:28:16.945975       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:28:16.946040       1 leaderelection.go:65] leaderelection lost\nI0403 19:28:16.946189       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\n
Apr 03 19:29:45.290 E ns/openshift-machine-api pod/machine-api-operator-6c7b4f4dc9-hbl8v node/ip-10-0-135-157.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 19:32:03.017 E ns/openshift-machine-api pod/machine-api-controllers-7d49d765b4-62xvz node/ip-10-0-128-177.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 19:32:03.017 E ns/openshift-machine-api pod/machine-api-controllers-7d49d765b4-62xvz node/ip-10-0-128-177.us-west-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 19:32:21.948 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator monitoring is still updating\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (162 of 350)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (199 of 350)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (217 of 350)
Apr 03 19:32:26.212 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-54cb547889-94hrw node/ip-10-0-145-43.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): 5] syncClusterRole()\nI0403 19:20:16.615876       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 19:20:16.676760       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:20:16.680401       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:20:16.684292       1 tuned_controller.go:315] syncDaemonSet()\nI0403 19:20:16.692729       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 19:20:16.692743       1 status.go:26] syncOperatorStatus()\nI0403 19:20:16.699635       1 tuned_controller.go:187] syncServiceAccount()\nI0403 19:20:16.699739       1 tuned_controller.go:215] syncClusterRole()\nI0403 19:20:16.722135       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 19:20:16.749847       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:20:16.752702       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:20:16.755364       1 tuned_controller.go:315] syncDaemonSet()\nI0403 19:22:51.417589       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 19:22:51.417632       1 status.go:26] syncOperatorStatus()\nI0403 19:22:51.423983       1 tuned_controller.go:187] syncServiceAccount()\nI0403 19:22:51.424109       1 tuned_controller.go:215] syncClusterRole()\nI0403 19:22:51.444598       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 19:22:51.463695       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:22:51.466756       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 19:22:51.469460       1 tuned_controller.go:315] syncDaemonSet()\nW0403 19:26:16.479034       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 14571 (16691)\nW0403 19:32:18.484657       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 16867 (19172)\nF0403 19:32:21.366905       1 main.go:85] <nil>\n
Apr 03 19:32:27.611 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-5ffc78b26 node/ip-10-0-145-43.us-west-2.compute.internal container=operator container exited with code 2 (Error): )\nI0403 19:29:05.698239       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 19:29:17.145288       1 wrap.go:47] GET /metrics: (5.308237ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:29:17.145311       1 wrap.go:47] GET /metrics: (5.617689ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:29:47.143348       1 wrap.go:47] GET /metrics: (5.503754ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:29:47.143411       1 wrap.go:47] GET /metrics: (3.34184ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:30:17.145097       1 wrap.go:47] GET /metrics: (7.236305ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:30:17.146473       1 wrap.go:47] GET /metrics: (6.415818ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:30:47.143366       1 wrap.go:47] GET /metrics: (5.579515ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:30:47.143912       1 wrap.go:47] GET /metrics: (3.864058ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:31:01.937081       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 46 items received\nI0403 19:31:17.144407       1 wrap.go:47] GET /metrics: (4.350174ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:31:17.144750       1 wrap.go:47] GET /metrics: (6.761522ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:31:47.143524       1 wrap.go:47] GET /metrics: (5.738188ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:31:47.143846       1 wrap.go:47] GET /metrics: (3.926432ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\nI0403 19:32:09.121015       1 reflector.go:357] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nI0403 19:32:17.144189       1 wrap.go:47] GET /metrics: (6.289481ms) 200 [Prometheus/2.7.2 10.128.2.7:33822]\nI0403 19:32:17.144883       1 wrap.go:47] GET /metrics: (4.829632ms) 200 [Prometheus/2.7.2 10.129.2.9:41006]\n
Apr 03 19:32:29.011 E ns/openshift-monitoring pod/node-exporter-j4x6j node/ip-10-0-145-43.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 19:32:30.580 E ns/openshift-cluster-node-tuning-operator pod/tuned-srkbq node/ip-10-0-142-248.us-west-2.compute.internal container=tuned container exited with code 143 (Error): :24:42.792754    2810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:24:42.792819    2810 openshift-tuned.go:435] Pod (e2e-tests-service-upgrade-hphcs/service-test-gk56w) labels changed node wide: true\nI0403 19:24:47.668113    2810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:24:47.669702    2810 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:24:47.782083    2810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:24:54.935399    2810 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-deployment-upgrade-6fkpl/dp-57cc5d77b4-vcwlv) labels changed node wide: true\nI0403 19:24:57.668146    2810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:24:57.670041    2810 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:24:57.782014    2810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:25:06.580079    2810 openshift-tuned.go:435] Pod (e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-55lfn/pod-configmap-c3c30052-75e0-11ea-9b9c-0a58ac10f35b) labels changed node wide: false\nI0403 19:25:09.299737    2810 openshift-tuned.go:435] Pod (e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-5d9cp/pod-secrets-c4251143-75e0-11ea-9b9c-0a58ac10f35b) labels changed node wide: false\nI0403 19:32:22.676215    2810 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-7f444bdb65-f95qg) labels changed node wide: true\nI0403 19:32:27.668083    2810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:27.675019    2810 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:27.825999    2810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 19:32:51.506 E ns/openshift-cluster-node-tuning-operator pod/tuned-6vg9x node/ip-10-0-145-43.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ler-6-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: false\nI0403 19:31:03.777683   13581 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-5-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: false\nI0403 19:31:07.273909   13581 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: true\nI0403 19:31:09.675506   13581 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:31:09.677201   13581 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:31:09.777013   13581 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:31:12.374734   13581 openshift-tuned.go:435] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: true\nI0403 19:31:14.675495   13581 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:31:14.676884   13581 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:31:14.772879   13581 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:31:33.478966   13581 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: false\nI0403 19:31:39.816515   13581 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-57847dc59-cj9dd) labels changed node wide: true\nE0403 19:31:43.036164   13581 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""\nE0403 19:31:43.049000   13581 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:31:43.049022   13581 openshift-tuned.go:722] Increasing resyncPeriod to 116\n
Apr 03 19:32:58.568 E ns/openshift-cluster-node-tuning-operator pod/tuned-2jhll node/ip-10-0-157-88.us-west-2.compute.internal container=tuned container exited with code 143 (Error):  Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:32:21.493369    2744 openshift-tuned.go:435] Pod (openshift-monitoring/kube-state-metrics-66649f5bcf-gxnsq) labels changed node wide: true\nI0403 19:32:26.405714    2744 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:26.407457    2744 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:26.522110    2744 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:32:27.694049    2744 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-6d4bdb9cc9-fg4l9) labels changed node wide: true\nI0403 19:32:31.405708    2744 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:31.407556    2744 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:31.522419    2744 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:32:36.484687    2744 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 19:32:41.405795    2744 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:41.407611    2744 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:41.532120    2744 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:32:43.917190    2744 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 19:32:46.405684    2744 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:46.407174    2744 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:46.523192    2744 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 19:33:00.601 E ns/openshift-monitoring pod/prometheus-adapter-66595999b7-b82xr node/ip-10-0-157-88.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 19:33:03.877 E ns/openshift-service-ca-operator pod/service-ca-operator-697cdcbd64-8lvs9 node/ip-10-0-135-157.us-west-2.compute.internal container=operator container exited with code 2 (Error): 
Apr 03 19:33:08.279 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6749f6c949-ll72s node/ip-10-0-135-157.us-west-2.compute.internal container=operator container exited with code 2 (Error):  wrap.go:47] GET /metrics: (4.25265ms) 200 [Prometheus/2.7.2 10.129.2.9:52734]\nI0403 19:32:27.881853       1 request.go:530] Throttling request took 165.823684ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 19:32:28.081829       1 request.go:530] Throttling request took 197.368666ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 19:32:28.125803       1 status_controller.go:160] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T19:09:00Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T19:32:28Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T19:10:00Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T19:09:00Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0403 19:32:28.132371       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"4fd209e3-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from True to False ("")\nI0403 19:32:39.629998       1 wrap.go:47] GET /metrics: (4.032334ms) 200 [Prometheus/2.7.2 10.128.2.7:59086]\nI0403 19:32:47.881698       1 request.go:530] Throttling request took 149.006011ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 19:32:48.081697       1 request.go:530] Throttling request took 197.927465ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\n
Apr 03 19:33:08.600 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-88.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 19:33:14.693 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-248.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 19:33:14.693 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-248.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 19:33:14.693 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-248.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 19:33:19.276 E ns/openshift-operator-lifecycle-manager pod/packageserver-67bb88657d-8kvqh node/ip-10-0-128-177.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:33:20.035 E ns/openshift-monitoring pod/prometheus-adapter-66595999b7-nxw52 node/ip-10-0-137-192.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 19:33:20.357 E ns/openshift-operator-lifecycle-manager pod/packageserver-868bd4c9d8-bhrp5 node/ip-10-0-128-177.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:33:22.551 E ns/openshift-cluster-node-tuning-operator pod/tuned-6llc9 node/ip-10-0-135-157.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 32:46.764406   16693 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:02.248975   16693 openshift-tuned.go:435] Pod (openshift-machine-api/cluster-autoscaler-operator-f79f4998d-p5hbp) labels changed node wide: true\nI0403 19:33:06.666641   16693 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:06.668271   16693 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:06.778419   16693 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:07.847562   16693 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operator-6fd6b6596d-9qscq) labels changed node wide: true\nI0403 19:33:11.666670   16693 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:11.668397   16693 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:11.789877   16693 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:12.788169   16693 openshift-tuned.go:435] Pod (openshift-console-operator/console-operator-6fb8577f98-xd2bf) labels changed node wide: true\nI0403 19:33:16.666597   16693 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:16.667937   16693 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:16.778878   16693 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:19.515750   16693 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-57d486c79b-w49fv) labels changed node wide: true\nI0403 19:33:21.666617   16693 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:21.668566   16693 openshift-tuned.go:326] Getting recommended profile...\n
Apr 03 19:33:27.289 E ns/openshift-authentication-operator pod/authentication-operator-797884664c-ctffv node/ip-10-0-128-177.us-west-2.compute.internal container=operator container exited with code 255 (Error): ication-operator", Name:"authentication-operator", UID:"56d9d3de-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0403 19:24:31.902429       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13976 (15693)\nW0403 19:25:19.976277       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 19:26:00.573566       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13998 (16619)\nW0403 19:27:35.885734       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13920 (17130)\nW0403 19:29:41.906201       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15842 (17891)\nW0403 19:30:44.902880       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15698 (18423)\nW0403 19:31:55.148436       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 13829 (15648)\nW0403 19:32:20.917453       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13366 (14205)\nW0403 19:32:43.202104       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13366 (15534)\nI0403 19:33:04.624416       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:33:04.624514       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:33:29.535 E ns/openshift-controller-manager pod/controller-manager-flqb2 node/ip-10-0-128-177.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 19:33:29.637 E ns/openshift-console pod/downloads-b9cb6cb95-2xvkg node/ip-10-0-145-43.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 19:33:32.935 E ns/openshift-monitoring pod/node-exporter-pzj7n node/ip-10-0-128-177.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 19:33:33.474 E ns/openshift-cluster-node-tuning-operator pod/tuned-mvmzp node/ip-10-0-137-192.us-west-2.compute.internal container=tuned container exited with code 143 (Error): rofile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:32:58.354747    2667 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-qvvbl) labels changed node wide: true\nI0403 19:32:58.404136    2667 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:58.410655    2667 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:58.570941    2667 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:33:15.935719    2667 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 19:33:18.404308    2667 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:18.405720    2667 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:18.523566    2667 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:33:21.416102    2667 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-66595999b7-nxw52) labels changed node wide: true\nI0403 19:33:23.404114    2667 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:23.405490    2667 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:23.522188    2667 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:33:24.417183    2667 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-55564d6858-5h26d) labels changed node wide: true\nE0403 19:33:26.131384    2667 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.135057    2667 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:33:26.135079    2667 openshift-tuned.go:722] Increasing resyncPeriod to 216\n
Apr 03 19:33:45.508 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-192.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 19:33:52.939 E ns/openshift-cluster-node-tuning-operator pod/tuned-94pzh node/ip-10-0-128-177.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 326] Getting recommended profile...\nI0403 19:32:23.538305   15161 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:32:44.743708   15161 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-6848fb57c4-jczhx) labels changed node wide: true\nI0403 19:32:48.407377   15161 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:32:48.410142   15161 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:32:48.505772   15161 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:17.656468   15161 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-67bb88657d-8kvqh) labels changed node wide: true\nI0403 19:33:18.407349   15161 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:18.409061   15161 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:18.513689   15161 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:19.190331   15161 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-868bd4c9d8-bhrp5) labels changed node wide: true\nI0403 19:33:23.407367   15161 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:33:23.414279   15161 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:33:23.518261   15161 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:33:26.096398   15161 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 19:33:26.100281   15161 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:33:26.100356   15161 openshift-tuned.go:722] Increasing resyncPeriod to 236\n
Apr 03 19:33:55.777 E ns/openshift-monitoring pod/node-exporter-2l8vl node/ip-10-0-142-248.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 19:34:01.790 E ns/openshift-ingress pod/router-default-c4dbd9dcb-r59ff node/ip-10-0-142-248.us-west-2.compute.internal container=router container exited with code 2 (Error): eck ok : 0 retry attempt(s).\nI0403 19:33:18.524271       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:23.530652       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0403 19:33:26.107503       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=101, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.107517       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=101, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.107556       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=101, ErrCode=NO_ERROR, debug=""\nI0403 19:33:28.522975       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:33.530369       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:38.522884       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:43.530956       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:48.528759       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:53.543087       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:33:58.527794       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 19:34:10.562 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6d5895f8f9-mq8wp node/ip-10-0-128-177.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 19:34:10.739 E ns/openshift-service-ca pod/configmap-cabundle-injector-6547fcc76f-mxc5q node/ip-10-0-145-43.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 19:34:10.748 E ns/openshift-monitoring pod/node-exporter-jxmlh node/ip-10-0-135-157.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 19:34:11.781 E ns/openshift-marketplace pod/certified-operators-865c97b6fb-jmlkq node/ip-10-0-142-248.us-west-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:34:11.986 E ns/openshift-marketplace pod/redhat-operators-65d9f896bd-kzr7m node/ip-10-0-142-248.us-west-2.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:34:12.753 E ns/openshift-operator-lifecycle-manager pod/packageserver-5768b6f88f-r6jlz node/ip-10-0-145-43.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): 
Apr 03 19:34:28.908 E ns/openshift-monitoring pod/node-exporter-rpkph node/ip-10-0-157-88.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 19:35:02.981 E ns/openshift-marketplace pod/community-operators-74f86dcdb7-mncpg node/ip-10-0-157-88.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 19:35:06.804 E ns/openshift-controller-manager pod/controller-manager-qfbb7 node/ip-10-0-128-177.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 19:35:49.982 E ns/openshift-controller-manager pod/controller-manager-5lj99 node/ip-10-0-135-157.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 19:43:49.114 E ns/openshift-multus pod/multus-7x2gh node/ip-10-0-135-157.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 19:43:59.103 E ns/openshift-sdn pod/ovs-9ksfv node/ip-10-0-145-43.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): 9Z|00339|connmgr|INFO|br0<->unix#893: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.790Z|00340|connmgr|INFO|br0<->unix#896: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.812Z|00341|connmgr|INFO|br0<->unix#899: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.836Z|00342|connmgr|INFO|br0<->unix#902: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.857Z|00343|connmgr|INFO|br0<->unix#905: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.879Z|00344|connmgr|INFO|br0<->unix#908: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.903Z|00345|connmgr|INFO|br0<->unix#911: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.927Z|00346|connmgr|INFO|br0<->unix#914: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.951Z|00347|connmgr|INFO|br0<->unix#917: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.973Z|00348|connmgr|INFO|br0<->unix#920: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:55.997Z|00349|connmgr|INFO|br0<->unix#923: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:56.103Z|00350|connmgr|INFO|br0<->unix#926: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:56.130Z|00351|connmgr|INFO|br0<->unix#929: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:56.150Z|00352|connmgr|INFO|br0<->unix#932: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:56.171Z|00353|connmgr|INFO|br0<->unix#935: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:56.195Z|00354|connmgr|INFO|br0<->unix#938: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:56.216Z|00355|connmgr|INFO|br0<->unix#941: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:56.237Z|00356|connmgr|INFO|br0<->unix#944: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:56.257Z|00357|connmgr|INFO|br0<->unix#947: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:56.282Z|00358|connmgr|INFO|br0<->unix#950: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:56.303Z|00359|connmgr|INFO|br0<->unix#953: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 19:44:05.066 E ns/openshift-sdn pod/sdn-jz64v node/ip-10-0-145-43.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:03.920123   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.020127   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.120077   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.220071   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.320077   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.420101   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.520059   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.620086   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.720083   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.820113   70046 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:04.924386   70046 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:44:04.924457   70046 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:44:36.094 E ns/openshift-sdn pod/ovs-8dnx6 node/ip-10-0-128-177.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): 3:38.276Z|00370|bridge|INFO|bridge br0: added interface vethd6af3c07 on port 60\n2020-04-03T19:43:38.302Z|00371|connmgr|INFO|br0<->unix#941: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T19:43:38.336Z|00372|connmgr|INFO|br0<->unix#944: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T19:44:27.173Z|00373|connmgr|INFO|br0<->unix#956: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T19:44:27.249Z|00374|connmgr|INFO|br0<->unix#962: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.269Z|00375|connmgr|INFO|br0<->unix#965: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.292Z|00376|connmgr|INFO|br0<->unix#968: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.313Z|00377|connmgr|INFO|br0<->unix#971: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.340Z|00378|connmgr|INFO|br0<->unix#974: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.361Z|00379|connmgr|INFO|br0<->unix#977: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.384Z|00380|connmgr|INFO|br0<->unix#980: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:27.527Z|00381|connmgr|INFO|br0<->unix#983: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:27.553Z|00382|connmgr|INFO|br0<->unix#986: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:27.573Z|00383|connmgr|INFO|br0<->unix#989: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:27.593Z|00384|connmgr|INFO|br0<->unix#992: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:27.615Z|00385|connmgr|INFO|br0<->unix#995: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:27.635Z|00386|connmgr|INFO|br0<->unix#998: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:27.656Z|00387|connmgr|INFO|br0<->unix#1001: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:27.679Z|00388|connmgr|INFO|br0<->unix#1004: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:27.700Z|00389|connmgr|INFO|br0<->unix#1007: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:27.726Z|00390|connmgr|INFO|br0<->unix#1010: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 19:44:36.957 E ns/openshift-multus pod/multus-kfg57 node/ip-10-0-142-248.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 19:44:39.112 E ns/openshift-sdn pod/sdn-6fvbd node/ip-10-0-128-177.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:36.957511   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.057477   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.157539   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.257511   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.357523   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.457520   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.557519   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.657533   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.757509   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.857495   82871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:44:37.962604   82871 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:44:37.962732   82871 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:45:04.195 E ns/openshift-sdn pod/sdn-controller-fqsbt node/ip-10-0-145-43.us-west-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 19:08:32.081127       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 19:45:09.046 E ns/openshift-sdn pod/ovs-qvfvt node/ip-10-0-142-248.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): 0-04-03T19:34:59.836Z|00166|connmgr|INFO|br0<->unix#428: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T19:34:59.872Z|00167|connmgr|INFO|br0<->unix#431: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:34:59.899Z|00168|bridge|INFO|bridge br0: deleted interface veth3820e6d6 on port 6\n2020-04-03T19:44:38.217Z|00169|connmgr|INFO|br0<->unix#501: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T19:44:38.295Z|00170|connmgr|INFO|br0<->unix#507: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:38.318Z|00171|connmgr|INFO|br0<->unix#510: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:44:38.610Z|00172|connmgr|INFO|br0<->unix#513: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:38.634Z|00173|connmgr|INFO|br0<->unix#516: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:38.657Z|00174|connmgr|INFO|br0<->unix#519: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:38.680Z|00175|connmgr|INFO|br0<->unix#522: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:38.702Z|00176|connmgr|INFO|br0<->unix#525: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:38.730Z|00177|connmgr|INFO|br0<->unix#528: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:38.762Z|00178|connmgr|INFO|br0<->unix#531: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:38.791Z|00179|connmgr|INFO|br0<->unix#534: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:38.823Z|00180|connmgr|INFO|br0<->unix#537: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:44:38.856Z|00181|connmgr|INFO|br0<->unix#540: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:44:45.432Z|00182|connmgr|INFO|br0<->unix#543: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:44:45.458Z|00183|bridge|INFO|bridge br0: deleted interface vethac946f4c on port 3\n2020-04-03T19:44:55.557Z|00184|bridge|INFO|bridge br0: added interface vethf8b0efd4 on port 27\n2020-04-03T19:44:55.587Z|00185|connmgr|INFO|br0<->unix#546: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T19:44:55.625Z|00186|connmgr|INFO|br0<->unix#549: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 19:45:11.133 E ns/openshift-sdn pod/sdn-7kr8l node/ip-10-0-142-248.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:09.911760   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.011755   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.111791   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.211752   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.311758   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.411860   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.511796   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.611777   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.711875   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.811816   69496 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:10.925256   69496 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:45:10.925329   69496 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:45:51.494 E ns/openshift-sdn pod/sdn-bttzz node/ip-10-0-157-88.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:49.868205   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:49.968194   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.068279   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.168272   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.268239   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.368254   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.468220   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.568158   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.668223   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.768217   68503 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:45:50.872708   68503 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:45:50.872781   68503 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:46:08.451 E ns/openshift-multus pod/multus-mjdpl node/ip-10-0-157-88.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 19:46:22.024 E ns/openshift-sdn pod/ovs-w6d5l node/ip-10-0-137-192.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): 04-03T19:34:07.465Z|00130|connmgr|INFO|br0<->unix#358: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-03T19:43:43.679Z|00131|connmgr|INFO|br0<->unix#429: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T19:43:43.755Z|00132|connmgr|INFO|br0<->unix#435: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:43:44.091Z|00133|connmgr|INFO|br0<->unix#438: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:44.116Z|00134|connmgr|INFO|br0<->unix#441: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:44.138Z|00135|connmgr|INFO|br0<->unix#444: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:44.161Z|00136|connmgr|INFO|br0<->unix#447: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:44.188Z|00137|connmgr|INFO|br0<->unix#450: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:44.215Z|00138|connmgr|INFO|br0<->unix#453: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:44.242Z|00139|connmgr|INFO|br0<->unix#456: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:44.265Z|00140|connmgr|INFO|br0<->unix#459: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:43:44.287Z|00141|connmgr|INFO|br0<->unix#462: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:43:44.312Z|00142|connmgr|INFO|br0<->unix#465: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:23.686Z|00143|connmgr|INFO|br0<->unix#477: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:45:23.709Z|00144|bridge|INFO|bridge br0: deleted interface vethb62d0b64 on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:45:23.702Z|00024|jsonrpc|WARN|unix#282: receive error: Connection reset by peer\n2020-04-03T19:45:23.702Z|00025|reconnect|WARN|unix#282: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T19:45:34.757Z|00145|bridge|INFO|bridge br0: added interface veth8aeb3514 on port 21\n2020-04-03T19:45:34.786Z|00146|connmgr|INFO|br0<->unix#480: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T19:45:34.829Z|00147|connmgr|INFO|br0<->unix#483: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 19:46:24.031 E ns/openshift-sdn pod/sdn-xrmng node/ip-10-0-137-192.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:22.890415   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:22.990464   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.090362   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.190362   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.290351   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.390359   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.490347   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.590320   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.691114   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.790379   43871 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:23.904383   43871 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:46:23.904454   43871 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:46:54.660 E ns/openshift-sdn pod/ovs-zts4x node/ip-10-0-135-157.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): |connmgr|INFO|br0<->unix#1035: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.108Z|00407|connmgr|INFO|br0<->unix#1038: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.132Z|00408|connmgr|INFO|br0<->unix#1041: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.152Z|00409|connmgr|INFO|br0<->unix#1044: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.177Z|00410|connmgr|INFO|br0<->unix#1047: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.202Z|00411|connmgr|INFO|br0<->unix#1050: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T19:45:06.309Z|00412|connmgr|INFO|br0<->unix#1053: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:45:06.336Z|00413|connmgr|INFO|br0<->unix#1056: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:06.357Z|00414|connmgr|INFO|br0<->unix#1059: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:45:06.384Z|00415|connmgr|INFO|br0<->unix#1062: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:06.410Z|00416|connmgr|INFO|br0<->unix#1065: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:45:06.439Z|00417|connmgr|INFO|br0<->unix#1068: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:06.466Z|00418|connmgr|INFO|br0<->unix#1071: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:45:06.489Z|00419|connmgr|INFO|br0<->unix#1074: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:06.514Z|00420|connmgr|INFO|br0<->unix#1077: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T19:45:06.543Z|00421|connmgr|INFO|br0<->unix#1080: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T19:45:42.254Z|00422|connmgr|INFO|br0<->unix#1086: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:45:42.283Z|00423|bridge|INFO|bridge br0: deleted interface vethaf9182bc on port 14\n2020-04-03T19:45:55.865Z|00424|bridge|INFO|bridge br0: added interface veth8f0aced3 on port 67\n2020-04-03T19:45:55.895Z|00425|connmgr|INFO|br0<->unix#1089: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T19:45:55.927Z|00426|connmgr|INFO|br0<->unix#1092: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 19:46:57.671 E ns/openshift-sdn pod/sdn-ltvf8 node/ip-10-0-135-157.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:55.523216   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:55.623229   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:55.723215   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:55.823225   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:55.923209   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.023230   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.123215   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.223223   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.323195   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.423212   77717 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 19:46:56.601057   77717 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 19:46:56.601120   77717 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 19:47:03.106 E ns/openshift-multus pod/multus-6pkfz node/ip-10-0-137-192.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 19:47:08.707 E ns/openshift-machine-api pod/cluster-autoscaler-operator-76896fbc99-ph2g7 node/ip-10-0-135-157.us-west-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 03 19:47:35.765 E ns/openshift-service-ca pod/apiservice-cabundle-injector-58cf88bc68-bwfrv node/ip-10-0-135-157.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 03 19:50:36.233 E ns/openshift-machine-config-operator pod/machine-config-daemon-qnmcx node/ip-10-0-135-157.us-west-2.compute.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:51:37.203 E ns/openshift-machine-config-operator pod/machine-config-controller-867d7b6b49-9blxc node/ip-10-0-145-43.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 19:53:33.436 E ns/openshift-machine-config-operator pod/machine-config-server-jr88b node/ip-10-0-145-43.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 19:53:40.356 E ns/openshift-machine-config-operator pod/machine-config-server-tmxqf node/ip-10-0-128-177.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 19:53:44.095 E ns/openshift-ingress pod/router-default-7964677cfc-kg75t node/ip-10-0-137-192.us-west-2.compute.internal container=router container exited with code 2 (Error): 19:45:10.039452       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:15.036377       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:20.038986       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:25.038697       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:41.948195       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:46.943390       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:45:51.947538       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:46:00.835014       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:46:05.837486       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:46:25.944051       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:46:33.994540       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:46:57.675960       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 19:47:05.916233       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 19:53:48.781 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-949f88bbf-7nfxb node/ip-10-0-145-43.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error):  version: 17805 (18218)\nW0403 19:33:26.405630       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17909 (20252)\nW0403 19:33:26.443721       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 17805 (18218)\nI0403 19:33:27.202129       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"4f814d2f-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-5 -n openshift-kube-scheduler: cause by changes in data.status\nI0403 19:33:35.804347       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"4f814d2f-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-135-157.us-west-2.compute.internal -n openshift-kube-scheduler because it was missing\nW0403 19:42:43.397927       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22228 (25963)\nW0403 19:42:45.800765       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22248 (25967)\nW0403 19:51:51.997723       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26110 (29521)\nW0403 19:52:04.397555       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26120 (29601)\nI0403 19:53:43.607142       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:53:43.607265       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:53:49.629 E ns/openshift-operator-lifecycle-manager pod/packageserver-74b5d98f6d-wfdpd node/ip-10-0-135-157.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:53:50.383 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-5c87f6579b-6n8gq node/ip-10-0-145-43.us-west-2.compute.internal container=operator container exited with code 2 (Error): metheus-k8s\nI0403 19:52:44.963985       1 wrap.go:47] GET /metrics: (4.671269ms) 200 [Prometheus/2.7.2 10.131.0.16:37578]\nI0403 19:52:44.964509       1 wrap.go:47] GET /metrics: (4.070645ms) 200 [Prometheus/2.7.2 10.129.2.17:45656]\nI0403 19:52:49.323529       1 request.go:530] Throttling request took 171.360304ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 19:52:49.523491       1 request.go:530] Throttling request took 196.567402ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 19:52:56.132321       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 37 items received\nI0403 19:53:09.323814       1 request.go:530] Throttling request took 170.371791ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 19:53:09.523790       1 request.go:530] Throttling request took 196.306789ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 19:53:14.964404       1 wrap.go:47] GET /metrics: (5.06562ms) 200 [Prometheus/2.7.2 10.131.0.16:37578]\nI0403 19:53:14.969688       1 wrap.go:47] GET /metrics: (9.232744ms) 200 [Prometheus/2.7.2 10.129.2.17:45656]\nI0403 19:53:29.323952       1 request.go:530] Throttling request took 168.687289ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 19:53:29.523946       1 request.go:530] Throttling request took 196.509034ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 19:53:32.132766       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 32 items received\n
Apr 03 19:53:51.381 E ns/openshift-authentication-operator pod/authentication-operator-5d9b9f5577-9vp46 node/ip-10-0-145-43.us-west-2.compute.internal container=operator container exited with code 255 (Error): 8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22937 (25512)\nW0403 19:42:04.092324       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22937 (25783)\nW0403 19:43:00.137139       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 19:43:08.101698       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22937 (26081)\nW0403 19:47:22.094715       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 23676 (23894)\nW0403 19:47:29.098027       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25918 (28060)\nW0403 19:48:36.096100       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25643 (28504)\nW0403 19:48:41.112168       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25189 (28523)\nW0403 19:50:29.176445       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 19:50:31.107709       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26235 (29046)\nW0403 19:52:34.103739       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28287 (29755)\nI0403 19:53:43.591070       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:53:43.591208       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:53:53.781 E ns/openshift-console pod/console-6798668846-bkxf9 node/ip-10-0-145-43.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 19:35:05 cmd/main: cookies are secure!\n2020/04/3 19:35:05 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 19:35:05 cmd/main: using TLS\n
Apr 03 19:54:14.717 E ns/openshift-operator-lifecycle-manager pod/packageserver-74b5d98f6d-xgzqr node/ip-10-0-145-43.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): t-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:57Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T19:53:57Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T19:53:57Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:57Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:58Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:58Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:58Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:58Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:59Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T19:53:59Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T19:53:59Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T19:53:59Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 19:54:16.718 E ns/openshift-console pod/downloads-5c89d4df64-5jwdt node/ip-10-0-145-43.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 19:54:21.113 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-248.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 19:54:25.655 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 19:54:36.520 E ns/openshift-operator-lifecycle-manager pod/packageserver-74b5d98f6d-nzth7 node/ip-10-0-128-177.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): :37270]\nI0403 19:54:04.521069       1 wrap.go:47] GET /: (2.616726ms) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:04.521676       1 wrap.go:47] GET /: (2.85397ms) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:04.522709       1 wrap.go:47] GET /: (2.694016ms) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:04.524112       1 wrap.go:47] GET /: (3.57764ms) 200 [Go-http-client/2.0 10.130.0.1:48862]\nI0403 19:54:04.524177       1 wrap.go:47] GET /: (3.507903ms) 200 [Go-http-client/2.0 10.129.0.1:54680]\nI0403 19:54:04.524329       1 wrap.go:47] GET /: (3.534121ms) 200 [Go-http-client/2.0 10.129.0.1:54680]\nI0403 19:54:04.524380       1 wrap.go:47] GET /: (3.677859ms) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:04.524187       1 wrap.go:47] GET /: (3.564868ms) 200 [Go-http-client/2.0 10.130.0.1:48862]\nI0403 19:54:04.524584       1 wrap.go:47] GET /: (3.813699ms) 200 [Go-http-client/2.0 10.129.0.1:54680]\nI0403 19:54:04.525402       1 wrap.go:47] GET /: (4.946917ms) 200 [Go-http-client/2.0 10.130.0.1:48862]\nI0403 19:54:04.536712       1 log.go:172] http: TLS handshake error from 10.129.0.1:37276: remote error: tls: bad certificate\nI0403 19:54:04.586921       1 log.go:172] http: TLS handshake error from 10.129.0.1:37278: remote error: tls: bad certificate\nI0403 19:54:04.985703       1 log.go:172] http: TLS handshake error from 10.129.0.1:37280: remote error: tls: bad certificate\nI0403 19:54:05.385839       1 log.go:172] http: TLS handshake error from 10.129.0.1:37286: remote error: tls: bad certificate\nI0403 19:54:05.522821       1 wrap.go:47] GET /: (111.414µs) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:05.523026       1 wrap.go:47] GET /: (89.315µs) 200 [Go-http-client/2.0 10.128.0.1:41152]\nI0403 19:54:05.523273       1 wrap.go:47] GET /: (76.791µs) 200 [Go-http-client/2.0 10.129.0.1:54680]\nI0403 19:54:05.523553       1 wrap.go:47] GET /: (1.026684ms) 200 [Go-http-client/2.0 10.129.0.1:54680]\nI0403 19:54:05.565877       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 03 19:54:46.549 E ns/openshift-operator-lifecycle-manager pod/packageserver-599cdf8c78-jx2k8 node/ip-10-0-128-177.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): 403 19:54:13.434380       1 wrap.go:47] GET /: (1.811866ms) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:13.434430       1 wrap.go:47] GET /: (2.544448ms) 200 [Go-http-client/2.0 10.129.0.1:42114]\nI0403 19:54:13.434521       1 wrap.go:47] GET /: (1.791047ms) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:13.434649       1 wrap.go:47] GET /: (2.297472ms) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:13.985488       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (310.363µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.129.0.1:42122]\nI0403 19:54:14.385023       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (273.937µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.129.0.1:42122]\nI0403 19:54:14.385378       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (1.382817ms) 200 [hyperkube/v1.13.4+3040211 (linux/amd64) kubernetes/3040211/controller-discovery 10.129.0.1:42122]\nI0403 19:54:14.784947       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (308.713µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.129.0.1:42122]\nI0403 19:54:14.979072       1 wrap.go:47] GET /: (152.305µs) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:14.979072       1 wrap.go:47] GET /: (170.767µs) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:15.185299       1 wrap.go:47] GET /apis/packages.operators.coreos.com/v1?timeout=32s: (359.167µs) 200 [olm/v0.0.0 (linux/amd64) kubernetes/$Format 10.129.0.1:42122]\nI0403 19:54:15.628879       1 wrap.go:47] GET /: (138.994µs) 200 [Go-http-client/2.0 10.128.0.1:48032]\nI0403 19:54:15.629200       1 wrap.go:47] GET /: (116.958µs) 200 [Go-http-client/2.0 10.129.0.1:42114]\nI0403 19:54:15.629339       1 wrap.go:47] GET /: (795.961µs) 200 [Go-http-client/2.0 10.128.0.1:48032]\nI0403 19:54:15.629835       1 wrap.go:47] GET /: (276.253µs) 200 [Go-http-client/2.0 10.130.0.1:43462]\nI0403 19:54:15.712264       1 secure_serving.go:156] Stopped listening on [::]:5443\n
Apr 03 19:54:55.655 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 19:55:57.861 E ns/openshift-monitoring pod/node-exporter-s5v2c node/ip-10-0-137-192.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 19:55:57.861 E ns/openshift-monitoring pod/node-exporter-s5v2c node/ip-10-0-137-192.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 19:55:57.876 E ns/openshift-cluster-node-tuning-operator pod/tuned-2mcp2 node/ip-10-0-137-192.us-west-2.compute.internal container=tuned container exited with code 255 (Error): -rnlqg) labels changed node wide: true\nI0403 19:50:57.444012   31831 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:50:57.445376   31831 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:50:57.555006   31831 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:53:42.853188   31831 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0403 19:53:47.444073   31831 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:47.445595   31831 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:47.556594   31831 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:53:47.557348   31831 openshift-tuned.go:435] Pod (openshift-monitoring/grafana-657d59447-plr2n) labels changed node wide: true\nI0403 19:53:52.444046   31831 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:52.445505   31831 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:52.567312   31831 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:53:58.360903   31831 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 19:54:02.444030   31831 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:54:02.445521   31831 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:54:02.556510   31831 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:54:15.047341   31831 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-j7sv5/foo-6drls) labels changed node wide: true\nI0403 19:54:15.925200   31831 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 19:55:58.099 E ns/openshift-image-registry pod/node-ca-84z7b node/ip-10-0-137-192.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 19:56:02.236 E ns/openshift-sdn pod/sdn-xrmng node/ip-10-0-137-192.us-west-2.compute.internal container=sdn container exited with code 255 (Error): rators-coreos-com:"\nI0403 19:54:15.665595   47956 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.128.0.72:5443 10.129.0.67:5443]\nI0403 19:54:15.665629   47956 roundrobin.go:240] Delete endpoint 10.129.0.63:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0403 19:54:15.792771   47956 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:54:15.792795   47956 proxier.go:346] userspace syncProxyRules took 54.067496ms\nE0403 19:54:15.947540   47956 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 19:54:15.947637   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nE0403 19:54:15.981623   47956 proxier.go:356] Failed to ensure iptables: error checking rule: signal: terminated: \nI0403 19:54:15.981647   47956 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:54:15.981662   47956 proxier.go:346] userspace syncProxyRules took 77.506356ms\nI0403 19:54:16.049212   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:54:16.150217   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:54:16.251128   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:54:16.355116   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:54:16.448020   47956 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 19:56:03.160 E ns/openshift-dns pod/dns-default-d5789 node/ip-10-0-137-192.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:45:40.940Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:45:40.940Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:45:40.940Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 19:56:03.160 E ns/openshift-dns pod/dns-default-d5789 node/ip-10-0-137-192.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (79) - No such process\n
Apr 03 19:56:03.532 E ns/openshift-sdn pod/ovs-nbhwd node/ip-10-0-137-192.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): ds in the last 0 s (4 deletes)\n2020-04-03T19:53:43.502Z|00128|bridge|INFO|bridge br0: deleted interface veth9f7eb29b on port 7\n2020-04-03T19:53:43.574Z|00129|connmgr|INFO|br0<->unix#159: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:53:43.632Z|00130|bridge|INFO|bridge br0: deleted interface veth850aa192 on port 9\n2020-04-03T19:53:43.698Z|00131|connmgr|INFO|br0<->unix#162: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:53:43.759Z|00132|bridge|INFO|bridge br0: deleted interface vetha6c4dad1 on port 8\n2020-04-03T19:53:43.822Z|00133|connmgr|INFO|br0<->unix#165: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:53:43.879Z|00134|bridge|INFO|bridge br0: deleted interface veth8c7005b7 on port 4\n2020-04-03T19:53:43.935Z|00135|connmgr|INFO|br0<->unix#168: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:53:43.990Z|00136|bridge|INFO|bridge br0: deleted interface vetha8c6a798 on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:53:43.968Z|00023|jsonrpc|WARN|Dropped 8 log messages in last 439 seconds (most recently, 439 seconds ago) due to excessive rate\n2020-04-03T19:53:43.968Z|00024|jsonrpc|WARN|unix#131: receive error: Connection reset by peer\n2020-04-03T19:53:43.968Z|00025|reconnect|WARN|unix#131: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T19:54:13.120Z|00137|connmgr|INFO|br0<->unix#174: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:54:13.140Z|00138|bridge|INFO|bridge br0: deleted interface veth8257d393 on port 6\n2020-04-03T19:54:13.191Z|00139|connmgr|INFO|br0<->unix#177: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:54:13.216Z|00140|bridge|INFO|bridge br0: deleted interface veth783c65ef on port 12\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:54:13.153Z|00026|jsonrpc|WARN|unix#141: receive error: Connection reset by peer\n2020-04-03T19:54:13.153Z|00027|reconnect|WARN|unix#141: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 19:56:03.897 E ns/openshift-multus pod/multus-n5qmt node/ip-10-0-137-192.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 19:56:04.267 E ns/openshift-machine-config-operator pod/machine-config-daemon-jwcfj node/ip-10-0-137-192.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 19:56:31.354 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-88.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 19:56:31.354 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-88.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 19:56:31.354 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-88.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 19:56:34.553 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 19:31:12.851046       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:31:12.851375       1 observer_polling.go:106] Starting file observer\nW0403 19:37:52.878022       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19018 (24632)\nW0403 19:43:49.882231       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24783 (26356)\nW0403 19:52:33.886577       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26615 (29755)\n
Apr 03 19:56:34.553 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ficate: "kubelet-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2020-04-04 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862289345 +0000 UTC))\nI0403 19:31:12.862312       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2021-04-03 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862303553 +0000 UTC))\nI0403 19:31:12.862326       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2021-04-03 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862317733 +0000 UTC))\nI0403 19:31:12.881090       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 19:31:12.885773       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585940933" (2020-04-03 19:09:09 +0000 UTC to 2022-04-03 19:09:10 +0000 UTC (now=2020-04-03 19:31:12.88574824 +0000 UTC))\nI0403 19:31:12.885875       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585940933" [] issuer="<self>" (2020-04-03 19:08:52 +0000 UTC to 2021-04-03 19:08:53 +0000 UTC (now=2020-04-03 19:31:12.885861477 +0000 UTC))\nI0403 19:31:12.885921       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 19:31:12.886529       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0403 19:31:12.887401       1 serving.go:77] Starting DynamicLoader\nE0403 19:54:22.609713       1 controllermanager.go:282] leaderelection lost\n
Apr 03 19:56:43.026 E ns/openshift-apiserver pod/apiserver-6rsxc node/ip-10-0-145-43.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): 4:02.504252       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 19:54:02.504282       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 19:54:02.517577       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 19:54:08.222765       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0403 19:54:10.600618       1 controller.go:608] quota admission added evaluator for: routes.route.openshift.io\nI0403 19:54:10.600654       1 controller.go:608] quota admission added evaluator for: routes.route.openshift.io\nI0403 19:54:22.218678       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 19:54:22.218791       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 19:54:22.218821       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 19:54:22.219116       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 19:54:22.590429       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 19:54:22.595926       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nE0403 19:54:22.596119       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 19:54:22.596788       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 19:54:22.596553       1 serving.go:88] Shutting down DynamicLoader\nI0403 19:54:22.596845       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 19:54:22.598697       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 19:56:43.581 E ns/openshift-monitoring pod/node-exporter-lgh7p node/ip-10-0-145-43.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 19:56:43.581 E ns/openshift-monitoring pod/node-exporter-lgh7p node/ip-10-0-145-43.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 19:56:43.948 E ns/openshift-cluster-node-tuning-operator pod/tuned-shwl6 node/ip-10-0-145-43.us-west-2.compute.internal container=tuned container exited with code 255 (Error): ged node wide: true\nI0403 19:53:47.210439   51755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:47.211893   51755 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:47.316795   51755 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:53:47.502927   51755 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-5-ip-10-0-145-43.us-west-2.compute.internal) labels changed node wide: true\nI0403 19:53:52.210471   51755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:52.212098   51755 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:52.311620   51755 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:53:54.031848   51755 openshift-tuned.go:435] Pod (openshift-console/console-6798668846-bkxf9) labels changed node wide: true\nI0403 19:53:57.210545   51755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:57.212380   51755 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:57.322566   51755 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:53:58.152696   51755 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-6f65b9864b-jxrwf) labels changed node wide: true\nI0403 19:54:02.210446   51755 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:54:02.211814   51755 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:54:02.311884   51755 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:54:22.360347   51755 openshift-tuned.go:435] Pod (openshift-console/downloads-5c89d4df64-5jwdt) labels changed node wide: true\n
Apr 03 19:56:44.317 E ns/openshift-dns pod/dns-default-ffn77 node/ip-10-0-145-43.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:45:23.154Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:45:23.154Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:45:23.154Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 19:53:44.037183       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23232 (30479)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 19:56:44.317 E ns/openshift-dns pod/dns-default-ffn77 node/ip-10-0-145-43.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /etc/hosts.tmp /etc/hosts differ: char 159, line 3\n
Apr 03 19:56:44.685 E ns/openshift-machine-config-operator pod/machine-config-daemon-qxkvg node/ip-10-0-145-43.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 19:56:47.250 E ns/openshift-machine-config-operator pod/machine-config-server-cp76k node/ip-10-0-145-43.us-west-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 19:56:47.652 E ns/openshift-sdn pod/sdn-controller-kncb7 node/ip-10-0-145-43.us-west-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 19:45:06.518579       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 19:56:49.251 E ns/openshift-image-registry pod/node-ca-jwwxr node/ip-10-0-145-43.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 19:56:53.102 E ns/openshift-console pod/downloads-5c89d4df64-jkb5t node/ip-10-0-157-88.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 19:56:53.451 E ns/openshift-multus pod/multus-prnns node/ip-10-0-145-43.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 19:56:54.651 E ns/openshift-controller-manager pod/controller-manager-7hw6q node/ip-10-0-145-43.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 19:57:05.010 E ns/openshift-cluster-machine-approver pod/machine-approver-5459787665-gqzrn node/ip-10-0-128-177.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): main.go:110] CSR csr-krdlm is already approved\nI0403 19:32:25.745647       1 main.go:107] CSR csr-l9p54 added\nI0403 19:32:25.745651       1 main.go:110] CSR csr-l9p54 is already approved\nI0403 19:32:25.745655       1 main.go:107] CSR csr-5h5jx added\nI0403 19:32:25.745659       1 main.go:110] CSR csr-5h5jx is already approved\nI0403 19:32:25.745663       1 main.go:107] CSR csr-6f9g5 added\nI0403 19:32:25.745667       1 main.go:110] CSR csr-6f9g5 is already approved\nI0403 19:32:25.745675       1 main.go:107] CSR csr-vxsbj added\nI0403 19:32:25.745678       1 main.go:110] CSR csr-vxsbj is already approved\nI0403 19:32:25.745683       1 main.go:107] CSR csr-sz7zd added\nI0403 19:32:25.745686       1 main.go:110] CSR csr-sz7zd is already approved\nI0403 19:32:25.745690       1 main.go:107] CSR csr-txtc6 added\nI0403 19:32:25.745694       1 main.go:110] CSR csr-txtc6 is already approved\nE0403 19:33:26.125414       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.125839       1 reflector.go:322] github.com/openshift/cluster-machine-approver/main.go:185: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=13367&timeoutSeconds=582&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0403 19:33:30.831785       1 reflector.go:205] github.com/openshift/cluster-machine-approver/main.go:185: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\nW0403 19:57:00.722599       1 reflector.go:341] github.com/openshift/cluster-machine-approver/main.go:185: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 22206 (33031)\n
Apr 03 19:57:06.211 E ns/openshift-service-ca pod/service-serving-cert-signer-ff6dd5f78-qmxbx node/ip-10-0-128-177.us-west-2.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 03 19:57:07.011 E ns/openshift-machine-api pod/machine-api-operator-c45bcdd89-b7qz2 node/ip-10-0-128-177.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 19:57:11.812 E ns/openshift-machine-config-operator pod/machine-config-operator-7d5c4569d8-8cfxl node/ip-10-0-128-177.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 19:57:14.011 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6cfc79b54f-5ltwg node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): re leader lease  kube-system/kube-controller-manager...\\nI0403 19:31:12.887401       1 serving.go:77] Starting DynamicLoader\\nE0403 19:54:22.609713       1 controllermanager.go:282] leaderelection lost\\n\"\nStaticPodsDegraded: nodes/ip-10-0-145-43.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal container=\"kube-controller-manager-cert-syncer-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-145-43.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal container=\"kube-controller-manager-cert-syncer-6\" is terminated: \"Error\" - \"I0403 19:31:12.851046       1 certsync_controller.go:269] Starting CertSyncer\\nI0403 19:31:12.851375       1 observer_polling.go:106] Starting file observer\\nW0403 19:37:52.878022       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19018 (24632)\\nW0403 19:43:49.882231       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24783 (26356)\\nW0403 19:52:33.886577       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26615 (29755)\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-145-43.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready"\nW0403 19:57:00.818443       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 22318 (33039)\nW0403 19:57:00.838024       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 30486 (33039)\nI0403 19:57:06.570146       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:57:06.570291       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:57:14.613 E ns/openshift-console pod/console-6798668846-2m6hz node/ip-10-0-128-177.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 19:54:00 cmd/main: cookies are secure!\n2020/04/3 19:54:05 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/04/3 19:54:15 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 19:54:15 cmd/main: using TLS\n
Apr 03 19:57:16.813 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-949f88bbf-wrtkh node/ip-10-0-128-177.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): e.internal -n openshift-kube-scheduler because it was missing\nI0403 19:56:55.982826       1 status_controller.go:164] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T19:15:06Z","message":"StaticPodsDegraded: nodes/ip-10-0-145-43.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-145-43.us-west-2.compute.internal container=\"scheduler\" is not ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T19:33:26Z","message":"Progressing: 3 nodes are at revision 5","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T19:11:34Z","message":"Available: 3 nodes are active; 3 nodes are at revision 5","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T19:09:00Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 19:56:55.990631       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"4f814d2f-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-145-43.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-145-43.us-west-2.compute.internal container=\"scheduler\" is not ready"\nW0403 19:57:00.818502       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 22318 (33039)\nW0403 19:57:00.838588       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 22216 (33039)\nI0403 19:57:02.199610       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 19:57:02.199651       1 leaderelection.go:65] leaderelection lost\n
Apr 03 19:57:33.472 E ns/openshift-console pod/downloads-5c89d4df64-tdl5f node/ip-10-0-128-177.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 19:57:38.853 E ns/openshift-etcd pod/etcd-member-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 19:53:59.308626 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 19:53:59.309443 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 19:53:59.309930 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 19:53:59 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.145.43:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 19:54:00.321033 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 19:57:38.853 E ns/openshift-etcd pod/etcd-member-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): 5123ed9b80db0b6f (writer)\n2020-04-03 19:54:23.104323 I | rafthttp: stopped HTTP pipelining with peer 5123ed9b80db0b6f\n2020-04-03 19:54:23.104419 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:54:23.104475 E | rafthttp: failed to read 5123ed9b80db0b6f on stream MsgApp v2 (context canceled)\n2020-04-03 19:54:23.104505 I | rafthttp: peer 5123ed9b80db0b6f became inactive (message send to peer failed)\n2020-04-03 19:54:23.104519 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:54:23.104608 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:54:23.104618 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:54:23.104624 I | rafthttp: stopped peer 5123ed9b80db0b6f\n2020-04-03 19:54:23.104631 I | rafthttp: stopping peer beda7db85913a529...\n2020-04-03 19:54:23.105108 I | rafthttp: closed the TCP streaming connection with peer beda7db85913a529 (stream MsgApp v2 writer)\n2020-04-03 19:54:23.105119 I | rafthttp: stopped streaming with peer beda7db85913a529 (writer)\n2020-04-03 19:54:23.105454 I | rafthttp: closed the TCP streaming connection with peer beda7db85913a529 (stream Message writer)\n2020-04-03 19:54:23.105464 I | rafthttp: stopped streaming with peer beda7db85913a529 (writer)\n2020-04-03 19:54:23.105532 I | rafthttp: stopped HTTP pipelining with peer beda7db85913a529\n2020-04-03 19:54:23.105587 W | rafthttp: lost the TCP streaming connection with peer beda7db85913a529 (stream MsgApp v2 reader)\n2020-04-03 19:54:23.105600 I | rafthttp: stopped streaming with peer beda7db85913a529 (stream MsgApp v2 reader)\n2020-04-03 19:54:23.105643 W | rafthttp: lost the TCP streaming connection with peer beda7db85913a529 (stream Message reader)\n2020-04-03 19:54:23.105652 I | rafthttp: stopped streaming with peer beda7db85913a529 (stream Message reader)\n2020-04-03 19:54:23.105658 I | rafthttp: stopped peer beda7db85913a529\n
Apr 03 19:57:39.252 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:29:52.419104       1 observer_polling.go:106] Starting file observer\nI0403 19:29:52.420779       1 certsync_controller.go:269] Starting CertSyncer\nW0403 19:34:57.685253       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (23462)\nW0403 19:41:29.690976       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23846 (25630)\nW0403 19:51:01.696339       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25761 (29202)\n
Apr 03 19:57:39.252 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): ocessing item v1.oauth.openshift.io\nI0403 19:54:11.815147       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0403 19:54:13.111252       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0403 19:54:15.168328       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0403 19:54:16.518316       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0403 19:54:17.852947       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0403 19:54:22.598013       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nI0403 19:54:22.598033       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nI0403 19:54:22.598165       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nI0403 19:54:22.598261       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nI0403 19:54:22.598327       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nI0403 19:54:22.598346       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nI0403 19:54:22.598389       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=3549, ErrCode=NO_ERROR, debug=""\nW0403 19:54:22.655626       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.128.177 10.0.135.157]\n
Apr 03 19:57:39.651 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 19:31:12.851046       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:31:12.851375       1 observer_polling.go:106] Starting file observer\nW0403 19:37:52.878022       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19018 (24632)\nW0403 19:43:49.882231       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24783 (26356)\nW0403 19:52:33.886577       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26615 (29755)\n
Apr 03 19:57:39.651 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ficate: "kubelet-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2020-04-04 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862289345 +0000 UTC))\nI0403 19:31:12.862312       1 clientca.go:92] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2021-04-03 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862303553 +0000 UTC))\nI0403 19:31:12.862326       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 18:48:58 +0000 UTC to 2021-04-03 18:48:58 +0000 UTC (now=2020-04-03 19:31:12.862317733 +0000 UTC))\nI0403 19:31:12.881090       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 19:31:12.885773       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585940933" (2020-04-03 19:09:09 +0000 UTC to 2022-04-03 19:09:10 +0000 UTC (now=2020-04-03 19:31:12.88574824 +0000 UTC))\nI0403 19:31:12.885875       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585940933" [] issuer="<self>" (2020-04-03 19:08:52 +0000 UTC to 2021-04-03 19:08:53 +0000 UTC (now=2020-04-03 19:31:12.885861477 +0000 UTC))\nI0403 19:31:12.885921       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 19:31:12.886529       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0403 19:31:12.887401       1 serving.go:77] Starting DynamicLoader\nE0403 19:54:22.609713       1 controllermanager.go:282] leaderelection lost\n
Apr 03 19:57:40.052 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-43.us-west-2.compute.internal node/ip-10-0-145-43.us-west-2.compute.internal container=scheduler container exited with code 255 (Error):   1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585940933" [] issuer="<self>" (2020-04-03 19:08:52 +0000 UTC to 2021-04-03 19:08:53 +0000 UTC (now=2020-04-03 19:31:15.868108118 +0000 UTC))\nI0403 19:31:15.868146       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 19:31:15.868219       1 serving.go:77] Starting DynamicLoader\nI0403 19:31:16.769238       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 19:31:16.869380       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 19:31:16.869411       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 19:53:43.970323       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 18219 (30466)\nW0403 19:53:43.970323       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22675 (30466)\nW0403 19:53:44.042272       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23232 (30479)\nW0403 19:53:44.107165       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.ReplicationController ended with: too old resource version: 18221 (30486)\nW0403 19:53:44.144758       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 18223 (30486)\nW0403 19:53:44.151799       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18225 (30486)\nW0403 19:53:44.152339       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18220 (30486)\nE0403 19:54:22.587903       1 server.go:259] lost master\n
Apr 03 19:57:55.655 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 19:58:05.005 E ns/openshift-operator-lifecycle-manager pod/packageserver-cdf78dc69-lxdhc node/ip-10-0-145-43.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:39Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:39Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:39Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:39Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:40Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:40Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:41Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:41Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:41Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T19:57:41Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\nI0403 19:57:41.550208       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 19:57:41.550614       1 reflector.go:337] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: Watch close - *v1alpha1.CatalogSource total 19 items received\n
Apr 03 19:58:39.154 E ns/openshift-cluster-node-tuning-operator pod/tuned-rhbh9 node/ip-10-0-157-88.us-west-2.compute.internal container=tuned container exited with code 255 (Error): e wide: true\nI0403 19:53:45.713798   49169 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:45.718795   49169 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:45.863507   49169 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:53:45.863608   49169 openshift-tuned.go:435] Pod (openshift-console/downloads-5c89d4df64-jkb5t) labels changed node wide: true\nI0403 19:53:50.713814   49169 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:50.715780   49169 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:50.849164   49169 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:54:22.814474   49169 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 19:54:22.816653   49169 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:54:22.816670   49169 openshift-tuned.go:722] Increasing resyncPeriod to 138\nI0403 19:56:40.816893   49169 openshift-tuned.go:187] Extracting tuned profiles\nI0403 19:56:40.818970   49169 openshift-tuned.go:623] Resync period to pull node/pod labels: 138 [s]\nI0403 19:56:40.831412   49169 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/tuned-rhbh9) labels changed node wide: true\nI0403 19:56:45.829099   49169 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:56:45.830477   49169 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 19:56:45.831569   49169 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:56:45.946266   49169 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:56:54.073208   49169 openshift-tuned.go:435] Pod (openshift-console/downloads-5c89d4df64-jkb5t) labels changed node wide: true\n
Apr 03 19:58:39.165 E ns/openshift-image-registry pod/node-ca-vkc8r node/ip-10-0-157-88.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 19:58:39.244 E ns/openshift-monitoring pod/node-exporter-957lc node/ip-10-0-157-88.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 19:58:39.244 E ns/openshift-monitoring pod/node-exporter-957lc node/ip-10-0-157-88.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 19:58:39.352 E ns/openshift-marketplace pod/certified-operators-575c7bc5d9-hl574 node/ip-10-0-137-192.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 19:58:40.655 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 19:58:42.559 E ns/openshift-dns pod/dns-default-kmfc8 node/ip-10-0-157-88.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:44:40.087Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:44:40.087Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:44:40.087Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 19:58:42.559 E ns/openshift-dns pod/dns-default-kmfc8 node/ip-10-0-157-88.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]\n
Apr 03 19:58:44.088 E ns/openshift-sdn pod/ovs-k5gvp node/ip-10-0-157-88.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): |00136|bridge|INFO|bridge br0: deleted interface veth59151a28 on port 6\n2020-04-03T19:56:23.004Z|00137|connmgr|INFO|br0<->unix#203: 4 flow_mods in the last 0 s (4 deletes)\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:56:22.715Z|00020|jsonrpc|WARN|Dropped 5 log messages in last 631 seconds (most recently, 631 seconds ago) due to excessive rate\n2020-04-03T19:56:22.715Z|00021|jsonrpc|WARN|unix#132: receive error: Connection reset by peer\n2020-04-03T19:56:22.715Z|00022|reconnect|WARN|unix#132: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T19:56:23.054Z|00138|bridge|INFO|bridge br0: deleted interface vethf6ba06be on port 3\n2020-04-03T19:56:23.126Z|00139|connmgr|INFO|br0<->unix#206: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:23.161Z|00140|bridge|INFO|bridge br0: deleted interface vethcc0c672e on port 5\n2020-04-03T19:56:23.242Z|00141|connmgr|INFO|br0<->unix#209: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:23.287Z|00142|bridge|INFO|bridge br0: deleted interface veth35413d8d on port 11\n2020-04-03T19:56:23.365Z|00143|connmgr|INFO|br0<->unix#212: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:23.411Z|00144|bridge|INFO|bridge br0: deleted interface veth97a7a82d on port 4\n2020-04-03T19:56:23.470Z|00145|connmgr|INFO|br0<->unix#215: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:23.519Z|00146|bridge|INFO|bridge br0: deleted interface veth3e03e83f on port 8\n2020-04-03T19:56:23.676Z|00147|connmgr|INFO|br0<->unix#218: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:23.707Z|00148|bridge|INFO|bridge br0: deleted interface veth37a5caa7 on port 12\n2020-04-03T19:56:52.397Z|00149|connmgr|INFO|br0<->unix#221: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T19:56:52.425Z|00150|connmgr|INFO|br0<->unix#224: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:56:52.445Z|00151|bridge|INFO|bridge br0: deleted interface vethd018c353 on port 15\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 19:58:44.521 E ns/openshift-multus pod/multus-ldgtn node/ip-10-0-157-88.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 19:58:44.915 E ns/openshift-machine-config-operator pod/machine-config-daemon-pk4kl node/ip-10-0-157-88.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 19:58:45.317 E ns/openshift-sdn pod/sdn-bttzz node/ip-10-0-157-88.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ingress/router-internal-default:http"\nI0403 19:56:54.265569   71606 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:metrics to [10.128.2.24:1936 10.131.0.27:1936]\nI0403 19:56:54.265604   71606 roundrobin.go:240] Delete endpoint 10.131.0.27:1936 for service "openshift-ingress/router-internal-default:metrics"\nI0403 19:56:54.265634   71606 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.128.2.24:443 10.131.0.27:443]\nI0403 19:56:54.265649   71606 roundrobin.go:240] Delete endpoint 10.131.0.27:443 for service "openshift-ingress/router-internal-default:https"\nI0403 19:56:54.422622   71606 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:56:54.422644   71606 proxier.go:346] userspace syncProxyRules took 51.562325ms\nI0403 19:56:54.583366   71606 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:56:54.583390   71606 proxier.go:346] userspace syncProxyRules took 53.95356ms\ninterrupt: Gracefully shutting down ...\nE0403 19:56:55.326349   71606 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 19:56:55.326489   71606 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:56:55.426742   71606 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:56:55.528836   71606 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:56:55.626778   71606 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:56:55.726730   71606 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 19:58:55.066 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 03 19:59:20.585 E ns/openshift-monitoring pod/grafana-657d59447-7drzl node/ip-10-0-142-248.us-west-2.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:59:20.585 E ns/openshift-monitoring pod/grafana-657d59447-7drzl node/ip-10-0-142-248.us-west-2.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:59:21.785 E ns/openshift-monitoring pod/prometheus-adapter-79cc975d-m5stm node/ip-10-0-142-248.us-west-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:59:22.383 E ns/openshift-operator-lifecycle-manager pod/olm-operators-tplks node/ip-10-0-142-248.us-west-2.compute.internal container=configmap-registry-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 19:59:22.984 E ns/openshift-ingress pod/router-default-7964677cfc-lw2gb node/ip-10-0-142-248.us-west-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:00:12.718 E ns/openshift-monitoring pod/node-exporter-wx6vx node/ip-10-0-128-177.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 20:00:12.718 E ns/openshift-monitoring pod/node-exporter-wx6vx node/ip-10-0-128-177.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 20:00:12.740 E ns/openshift-apiserver pod/apiserver-prgsj node/ip-10-0-128-177.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): allenging-client" (started: 2020-04-03 19:57:01.71727996 +0000 UTC m=+1392.844785591) (total time: 12.64571466s):\nTrace[1417143828]: [12.645669187s] [12.64566637s] About to write a response\nI0403 19:57:14.363304       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 19:57:14.440200       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0403 19:57:14.440286       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0403 19:57:14.440626       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0403 19:57:14.440720       1 reflector.go:256] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: The resourceVersion for the provided watch is too old.\nE0403 19:57:29.383530       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 19:57:41.415019       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 19:57:41.415213       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 19:57:41.415271       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 19:57:41.415291       1 serving.go:88] Shutting down DynamicLoader\nI0403 19:57:41.415301       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 19:57:41.416518       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0403 19:57:41.416969       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 19:57:41.417079       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 19:57:41.417130       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 20:00:12.947 E ns/openshift-cluster-node-tuning-operator pod/tuned-h7dbj node/ip-10-0-128-177.us-west-2.compute.internal container=tuned container exited with code 255 (Error): ide: true\nI0403 19:57:09.840730   67003 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:57:09.843651   67003 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:57:09.940511   67003 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:57:11.376392   67003 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-5dd78d54d7-6lqfv) labels changed node wide: true\nI0403 19:57:14.840760   67003 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:57:14.842343   67003 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:57:14.941665   67003 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:57:15.378692   67003 openshift-tuned.go:435] Pod (openshift-ingress-operator/ingress-operator-7dc695fbbc-d4845) labels changed node wide: true\nI0403 19:57:19.840776   67003 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:57:19.842375   67003 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:57:19.945021   67003 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:57:21.778313   67003 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-6f65b9864b-nbwvr) labels changed node wide: true\nI0403 19:57:24.840752   67003 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:57:24.842310   67003 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:57:24.938332   67003 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 19:57:40.707619   67003 openshift-tuned.go:435] Pod (openshift-console/downloads-5c89d4df64-tdl5f) labels changed node wide: true\n
Apr 03 20:00:21.674 E ns/openshift-image-registry pod/node-ca-mxb6x node/ip-10-0-128-177.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 20:00:22.474 E ns/openshift-controller-manager pod/controller-manager-4v584 node/ip-10-0-128-177.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 20:00:22.878 E ns/openshift-dns pod/dns-default-dh7rg node/ip-10-0-128-177.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 20:00:22.878 E ns/openshift-dns pod/dns-default-dh7rg node/ip-10-0-128-177.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:43:44.524Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:43:44.524Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:43:44.524Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 19:53:43.794807       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23232 (30444)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 20:00:26.275 E ns/openshift-multus pod/multus-m9sj6 node/ip-10-0-128-177.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 20:00:28.278 E ns/openshift-machine-config-operator pod/machine-config-server-czlfz node/ip-10-0-128-177.us-west-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 20:00:38.653 E kube-apiserver Kube API started failing: Get https://api.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 03 20:00:40.655 E kube-apiserver Kube API is not responding to GET requests
Apr 03 20:00:40.655 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 20:00:49.120 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-79c5qm5gp node/ip-10-0-135-157.us-west-2.compute.internal container=operator container exited with code 2 (Error): ing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0403 19:57:42.637765       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0403 19:57:42.682724       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0403 19:57:42.683780       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\nI0403 19:57:42.691143       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0403 19:57:42.730405       1 reflector.go:169] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0403 19:58:06.518252       1 wrap.go:47] GET /metrics: (5.69899ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\nI0403 19:58:06.518252       1 wrap.go:47] GET /metrics: (4.791744ms) 200 [Prometheus/2.7.2 10.128.2.31:54178]\nI0403 19:58:36.526818       1 wrap.go:47] GET /metrics: (14.061804ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\nI0403 19:58:36.527478       1 wrap.go:47] GET /metrics: (13.952523ms) 200 [Prometheus/2.7.2 10.128.2.31:54178]\nI0403 19:59:06.517810       1 wrap.go:47] GET /metrics: (5.288234ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\nI0403 19:59:06.518417       1 wrap.go:47] GET /metrics: (4.9821ms) 200 [Prometheus/2.7.2 10.128.2.31:54178]\nI0403 19:59:36.519165       1 wrap.go:47] GET /metrics: (5.226055ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\nI0403 20:00:06.519066       1 wrap.go:47] GET /metrics: (5.261574ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\nI0403 20:00:06.523547       1 wrap.go:47] GET /metrics: (1.539022ms) 200 [Prometheus/2.7.2 10.129.2.41:33232]\nI0403 20:00:36.523806       1 wrap.go:47] GET /metrics: (9.003663ms) 200 [Prometheus/2.7.2 10.129.2.41:33232]\nI0403 20:00:36.527022       1 wrap.go:47] GET /metrics: (13.266697ms) 200 [Prometheus/2.7.2 10.131.0.33:44704]\n
Apr 03 20:00:50.915 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6cfc79b54f-dqg5m node/ip-10-0-135-157.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:00:51.485 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 19:57:11.917545 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 19:57:11.922187 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 19:57:11.924149 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 19:57:11 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.128.177:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 19:57:12.936063 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 20:00:51.485 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): ac4cbff1ebca434d (writer)\n2020-04-03 19:57:41.988867 I | rafthttp: stopped HTTP pipelining with peer ac4cbff1ebca434d\n2020-04-03 19:57:41.988956 W | rafthttp: lost the TCP streaming connection with peer ac4cbff1ebca434d (stream MsgApp v2 reader)\n2020-04-03 19:57:41.988993 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream MsgApp v2 reader)\n2020-04-03 19:57:41.989054 W | rafthttp: lost the TCP streaming connection with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 19:57:41.989087 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 19:57:41.989115 I | rafthttp: stopped peer ac4cbff1ebca434d\n2020-04-03 19:57:41.989141 I | rafthttp: stopping peer 5123ed9b80db0b6f...\n2020-04-03 19:57:41.989461 I | rafthttp: closed the TCP streaming connection with peer 5123ed9b80db0b6f (stream MsgApp v2 writer)\n2020-04-03 19:57:41.989522 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (writer)\n2020-04-03 19:57:41.989845 I | rafthttp: closed the TCP streaming connection with peer 5123ed9b80db0b6f (stream Message writer)\n2020-04-03 19:57:41.989903 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (writer)\n2020-04-03 19:57:41.989950 I | rafthttp: stopped HTTP pipelining with peer 5123ed9b80db0b6f\n2020-04-03 19:57:41.990037 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:57:41.990070 E | rafthttp: failed to read 5123ed9b80db0b6f on stream MsgApp v2 (context canceled)\n2020-04-03 19:57:41.990103 I | rafthttp: peer 5123ed9b80db0b6f became inactive (message send to peer failed)\n2020-04-03 19:57:41.990130 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:57:41.990204 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:57:41.990260 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:57:41.990289 I | rafthttp: stopped peer 5123ed9b80db0b6f\n
Apr 03 20:00:52.310 E ns/openshift-authentication-operator pod/authentication-operator-5d9b9f5577-nl864 node/ip-10-0-135-157.us-west-2.compute.internal container=operator container exited with code 255 (Error): Path:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "WellKnownEndpointDegraded: failed to GET well-known https://10.0.135.157:6443/.well-known/oauth-authorization-server: dial tcp 10.0.135.157:6443: i/o timeout"\nI0403 20:00:29.096296       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T19:15:25Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T19:57:36Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T19:24:05Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T19:12:47Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 20:00:29.107094       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"56d9d3de-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownEndpointDegraded: failed to GET well-known https://10.0.135.157:6443/.well-known/oauth-authorization-server: dial tcp 10.0.135.157:6443: i/o timeout" to ""\nW0403 20:00:42.057537       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 33038 (36051)\nW0403 20:00:42.057710       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 22655 (36051)\nI0403 20:00:42.536909       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 20:00:42.536967       1 leaderelection.go:65] leaderelection lost\nF0403 20:00:42.568234       1 builder.go:217] server exited\n
Apr 03 20:00:53.709 E ns/openshift-dns-operator pod/dns-operator-b98fd9cf6-m4r22 node/ip-10-0-135-157.us-west-2.compute.internal container=dns-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:00:56.709 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7bffbbd878-ndlpg node/ip-10-0-135-157.us-west-2.compute.internal container=cluster-storage-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:00:58.310 E ns/openshift-console pod/console-6798668846-ssdpq node/ip-10-0-135-157.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 19:35:25 cmd/main: cookies are secure!\n2020/04/3 19:35:25 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 19:35:25 cmd/main: using TLS\n2020/04/3 20:00:42 http: TLS handshake error from 10.129.2.31:54084: EOF\n2020/04/3 20:00:42 http: TLS handshake error from 10.131.0.27:34576: EOF\n
Apr 03 20:01:01.711 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-67ddd599f8-6xjp5 node/ip-10-0-135-157.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): erver-operator", Name:"openshift-apiserver-operator", UID:"4f9c0b01-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.template.openshift.io is not ready: 503\nAvailable: v1.user.openshift.io is not ready: 503" to "Available: v1.route.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503"\nI0403 19:59:20.951318       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"4f9c0b01-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.route.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503" to "Available: v1.authorization.openshift.io is not ready: 503\nAvailable: v1.image.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.template.openshift.io is not ready: 503"\nI0403 19:59:21.260580       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"4f9c0b01-75de-11ea-ab12-028f5e43b996", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nW0403 20:00:42.063262       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 19120 (36051)\nI0403 20:00:42.313851       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 20:00:42.313979       1 leaderelection.go:65] leaderelection lost\n
Apr 03 20:01:02.311 E ns/openshift-console-operator pod/console-operator-6fb8577f98-xd2bf node/ip-10-0-135-157.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): -------\ntime="2020-04-03T19:59:58Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T19:59:58Z" level=info msg=-----------------------\ntime="2020-04-03T19:59:58Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T19:59:58Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T19:59:58Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T19:59:58Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T19:59:58Z" level=info msg="finished syncing operator \"cluster\" (37.283µs) \n\n"\nW0403 20:00:35.658183       1 reflector.go:270] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: The resourceVersion for the provided watch is too old.\nW0403 20:00:42.058519       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 22655 (36051)\ntime="2020-04-03T20:00:42Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 20:00:42.079807559 +0000 UTC m=+1634.069184771)"\ntime="2020-04-03T20:00:42Z" level=info msg="console is in a managed state."\ntime="2020-04-03T20:00:42Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T20:00:42Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T20:00:42Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T20:00:42Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\nI0403 20:00:42.295589       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 20:00:42.295711       1 leaderelection.go:65] leaderelection lost\n
Apr 03 20:01:02.482 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 3 19:57:41.438445       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438483       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438620       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438656       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438808       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438846       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438957       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438993       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439102       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439136       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439272       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439316       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439440       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439479       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439593       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439631       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439756       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439792       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\n
Apr 03 20:01:02.482 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:33:27.050249       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:33:27.050532       1 observer_polling.go:106] Starting file observer\nW0403 19:40:17.870026       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (25316)\nW0403 19:45:33.874643       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25446 (27304)\nW0403 19:52:51.878956       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27539 (29826)\n
Apr 03 20:01:03.285 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ents.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.609912       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.615180       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-646cd88f57", UID:"5cf72be1-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-646cd88f57-v6r8f\nI0403 19:57:35.715063       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-operators", UID:"36bb9217-75df-11ea-b63c-0653dc03765e", APIVersion:"apps/v1", ResourceVersion:"33951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-5cbb56dcb5 to 1\nI0403 19:57:35.715360       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-5cbb56dcb5, need 1, creating 1\nI0403 19:57:35.731758       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.733655       1 service_controller.go:734] Service has been deleted openshift-marketplace/redhat-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.753317       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-5cbb56dcb5", UID:"5d097787-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-5cbb56dcb5-7l2kv\nE0403 19:57:41.413584       1 controllermanager.go:282] leaderelection lost\n
Apr 03 20:01:03.285 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140148       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140794       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=19018&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:26.140844       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18080&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:30.906443       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 19:33:30.906537       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 19:42:17.932901       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22268 (25840)\nW0403 19:47:23.937645       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25976 (28047)\nW0403 19:55:00.941930       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28261 (31401)\n
Apr 03 20:01:03.713 E ns/openshift-service-ca pod/apiservice-cabundle-injector-58cf88bc68-bwfrv node/ip-10-0-135-157.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 20:01:05.911 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7695c5c6bf-dfxcp node/ip-10-0-135-157.us-west-2.compute.internal container=operator container exited with code 2 (Error):      1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 20:00:11.906771       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 20:00:18.036774       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 101 items received\nI0403 20:00:21.916029       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 20:00:23.037523       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nI0403 20:00:24.038650       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ConfigMap total 0 items received\nW0403 20:00:24.039824       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31385 (35580)\nI0403 20:00:25.040056       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 20:00:31.922514       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 20:00:35.856591       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nW0403 20:00:42.068502       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 19120 (36051)\nI0403 20:00:42.092558       1 leaderelection.go:245] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0403 20:00:43.069091       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\n
Apr 03 20:01:13.109 E ns/openshift-machine-api pod/machine-api-controllers-57847dc59-z65cn node/ip-10-0-135-157.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 20:01:20.909 E ns/openshift-operator-lifecycle-manager pod/packageserver-7f76487cdb-hpsxj node/ip-10-0-135-157.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): hift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T20:01:15Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:01:15Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:01:16Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:01:16Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:01:19Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:01:19Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Apr 03 20:01:28.881 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): t resource "replicasets" in API group "apps" at the cluster scope\nE0403 19:33:30.931692       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope\nE0403 19:33:30.933158       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0403 19:33:30.955453       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0403 19:33:30.974440       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0403 19:33:30.986864       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0403 19:33:31.051899       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nW0403 19:53:43.796170       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23232 (30444)\nW0403 19:57:00.465630       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 22221 (32984)\nE0403 19:57:41.491729       1 server.go:259] lost master\n
Apr 03 20:01:29.685 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ents.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.609912       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.615180       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-646cd88f57", UID:"5cf72be1-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-646cd88f57-v6r8f\nI0403 19:57:35.715063       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-operators", UID:"36bb9217-75df-11ea-b63c-0653dc03765e", APIVersion:"apps/v1", ResourceVersion:"33951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-5cbb56dcb5 to 1\nI0403 19:57:35.715360       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-5cbb56dcb5, need 1, creating 1\nI0403 19:57:35.731758       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.733655       1 service_controller.go:734] Service has been deleted openshift-marketplace/redhat-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.753317       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-5cbb56dcb5", UID:"5d097787-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-5cbb56dcb5-7l2kv\nE0403 19:57:41.413584       1 controllermanager.go:282] leaderelection lost\n
Apr 03 20:01:29.685 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140148       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140794       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=19018&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:26.140844       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18080&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:30.906443       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 19:33:30.906537       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 19:42:17.932901       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22268 (25840)\nW0403 19:47:23.937645       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25976 (28047)\nW0403 19:55:00.941930       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28261 (31401)\n
Apr 03 20:01:30.084 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 3 19:57:41.438445       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438483       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438620       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438656       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438808       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438846       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438957       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438993       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439102       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439136       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439272       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439316       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439440       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439479       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439593       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439631       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439756       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439792       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\n
Apr 03 20:01:30.084 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:33:27.050249       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:33:27.050532       1 observer_polling.go:106] Starting file observer\nW0403 19:40:17.870026       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (25316)\nW0403 19:45:33.874643       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25446 (27304)\nW0403 19:52:51.878956       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27539 (29826)\n
Apr 03 20:01:30.564 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 19:57:11.917545 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 19:57:11.922187 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 19:57:11.924149 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 19:57:11 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.128.177:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 19:57:12.936063 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 20:01:30.564 E ns/openshift-etcd pod/etcd-member-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): ac4cbff1ebca434d (writer)\n2020-04-03 19:57:41.988867 I | rafthttp: stopped HTTP pipelining with peer ac4cbff1ebca434d\n2020-04-03 19:57:41.988956 W | rafthttp: lost the TCP streaming connection with peer ac4cbff1ebca434d (stream MsgApp v2 reader)\n2020-04-03 19:57:41.988993 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream MsgApp v2 reader)\n2020-04-03 19:57:41.989054 W | rafthttp: lost the TCP streaming connection with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 19:57:41.989087 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 19:57:41.989115 I | rafthttp: stopped peer ac4cbff1ebca434d\n2020-04-03 19:57:41.989141 I | rafthttp: stopping peer 5123ed9b80db0b6f...\n2020-04-03 19:57:41.989461 I | rafthttp: closed the TCP streaming connection with peer 5123ed9b80db0b6f (stream MsgApp v2 writer)\n2020-04-03 19:57:41.989522 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (writer)\n2020-04-03 19:57:41.989845 I | rafthttp: closed the TCP streaming connection with peer 5123ed9b80db0b6f (stream Message writer)\n2020-04-03 19:57:41.989903 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (writer)\n2020-04-03 19:57:41.989950 I | rafthttp: stopped HTTP pipelining with peer 5123ed9b80db0b6f\n2020-04-03 19:57:41.990037 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:57:41.990070 E | rafthttp: failed to read 5123ed9b80db0b6f on stream MsgApp v2 (context canceled)\n2020-04-03 19:57:41.990103 I | rafthttp: peer 5123ed9b80db0b6f became inactive (message send to peer failed)\n2020-04-03 19:57:41.990130 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream MsgApp v2 reader)\n2020-04-03 19:57:41.990204 W | rafthttp: lost the TCP streaming connection with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:57:41.990260 I | rafthttp: stopped streaming with peer 5123ed9b80db0b6f (stream Message reader)\n2020-04-03 19:57:41.990289 I | rafthttp: stopped peer 5123ed9b80db0b6f\n
Apr 03 20:01:31.824 E kube-apiserver Kube API started failing: Get https://api.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: dial tcp 52.24.143.83:6443: connect: connection refused
Apr 03 20:01:36.415 E ns/openshift-cluster-node-tuning-operator pod/tuned-2dn27 node/ip-10-0-142-248.us-west-2.compute.internal container=tuned container exited with code 255 (Error): ..\nI0403 19:53:44.813809   38700 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:53:58.430365   38700 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 19:53:59.682328   38700 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:53:59.683823   38700 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:53:59.810667   38700 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:54:22.816778   38700 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 19:54:22.821040   38700 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:54:22.821059   38700 openshift-tuned.go:722] Increasing resyncPeriod to 136\nI0403 19:56:38.821277   38700 openshift-tuned.go:187] Extracting tuned profiles\nI0403 19:56:38.823335   38700 openshift-tuned.go:623] Resync period to pull node/pod labels: 136 [s]\nI0403 19:56:38.832942   38700 openshift-tuned.go:435] Pod (openshift-sdn/ovs-gdn4t) labels changed node wide: true\nI0403 19:56:43.831051   38700 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 19:56:43.832597   38700 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 19:56:43.833752   38700 openshift-tuned.go:326] Getting recommended profile...\nI0403 19:56:43.958200   38700 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 19:57:41.549204   38700 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 19:57:41.550107   38700 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 19:57:41.550129   38700 openshift-tuned.go:722] Increasing resyncPeriod to 272\nI0403 19:59:49.326768   38700 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 20:01:36.425 E ns/openshift-monitoring pod/node-exporter-wcqkg node/ip-10-0-142-248.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 20:01:36.425 E ns/openshift-monitoring pod/node-exporter-wcqkg node/ip-10-0-142-248.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 20:01:36.641 E ns/openshift-image-registry pod/node-ca-dgjxl node/ip-10-0-142-248.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 20:01:41.413 E ns/openshift-sdn pod/sdn-7kr8l node/ip-10-0-142-248.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 91 for service "openshift-monitoring/prometheus-operated:web"\nI0403 19:59:47.994740   71742 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-k8s:web to [10.129.2.41:9091 10.131.0.33:9091]\nI0403 19:59:47.994776   71742 roundrobin.go:240] Delete endpoint 10.129.2.41:9091 for service "openshift-monitoring/prometheus-k8s:web"\nI0403 19:59:47.994796   71742 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-k8s:tenancy to [10.129.2.41:9092 10.131.0.33:9092]\nI0403 19:59:47.994809   71742 roundrobin.go:240] Delete endpoint 10.129.2.41:9092 for service "openshift-monitoring/prometheus-k8s:tenancy"\nI0403 19:59:48.157692   71742 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:59:48.157715   71742 proxier.go:346] userspace syncProxyRules took 52.297368ms\nI0403 19:59:48.320266   71742 proxier.go:367] userspace proxy: processing 0 service events\nI0403 19:59:48.320292   71742 proxier.go:346] userspace syncProxyRules took 52.853311ms\ninterrupt: Gracefully shutting down ...\nE0403 19:59:49.357694   71742 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 19:59:49.357803   71742 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:59:49.458143   71742 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:59:49.560213   71742 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:59:49.660157   71742 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 19:59:49.758119   71742 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 20:01:41.806 E ns/openshift-multus pod/multus-xd64z node/ip-10-0-142-248.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 20:01:42.206 E ns/openshift-dns pod/dns-default-z6pqd node/ip-10-0-142-248.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:45:00.731Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:45:00.732Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:45:00.732Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 19:53:43.795849       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23232 (30444)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 20:01:42.206 E ns/openshift-dns pod/dns-default-z6pqd node/ip-10-0-142-248.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (127) - No such process\n
Apr 03 20:01:42.669 E ns/openshift-sdn pod/ovs-gdn4t node/ip-10-0-142-248.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): s in the last 0 s (4 deletes)\n2020-04-03T19:59:16.183Z|00179|bridge|INFO|bridge br0: deleted interface veth0d1dc7db on port 20\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:59:16.170Z|00020|jsonrpc|WARN|Dropped 5 log messages in last 844 seconds (most recently, 844 seconds ago) due to excessive rate\n2020-04-03T19:59:16.170Z|00021|jsonrpc|WARN|unix#229: receive error: Connection reset by peer\n2020-04-03T19:59:16.170Z|00022|reconnect|WARN|unix#229: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T19:59:29.377Z|00180|bridge|INFO|bridge br0: added interface vethc3ffd9f1 on port 21\n2020-04-03T19:59:29.407Z|00181|connmgr|INFO|br0<->unix#292: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T19:59:29.443Z|00182|connmgr|INFO|br0<->unix#295: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T19:59:44.536Z|00183|connmgr|INFO|br0<->unix#301: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:59:44.561Z|00184|bridge|INFO|bridge br0: deleted interface veth649dd3f5 on port 7\n2020-04-03T19:59:44.663Z|00185|connmgr|INFO|br0<->unix#304: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T19:59:44.692Z|00186|connmgr|INFO|br0<->unix#307: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T19:59:44.715Z|00187|bridge|INFO|bridge br0: deleted interface veth3a380b5c on port 16\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T19:59:44.550Z|00023|jsonrpc|WARN|unix#241: receive error: Connection reset by peer\n2020-04-03T19:59:44.550Z|00024|reconnect|WARN|unix#241: connection dropped (Connection reset by peer)\n2020-04-03T19:59:44.555Z|00025|jsonrpc|WARN|unix#242: receive error: Connection reset by peer\n2020-04-03T19:59:44.555Z|00026|reconnect|WARN|unix#242: connection dropped (Connection reset by peer)\n2020-04-03T19:59:44.708Z|00027|jsonrpc|WARN|unix#247: receive error: Connection reset by peer\n2020-04-03T19:59:44.708Z|00028|reconnect|WARN|unix#247: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 20:01:43.005 E ns/openshift-machine-config-operator pod/machine-config-daemon-xd2vh node/ip-10-0-142-248.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 20:01:43.083 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 3 19:57:41.438445       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438483       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438620       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438656       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438808       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438846       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.438957       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.438993       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439102       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439136       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439272       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439316       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439440       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439479       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439593       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439631       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 19:57:41.439756       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 19:57:41.439792       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\n
Apr 03 20:01:43.083 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:33:27.050249       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:33:27.050532       1 observer_polling.go:106] Starting file observer\nW0403 19:40:17.870026       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (25316)\nW0403 19:45:33.874643       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25446 (27304)\nW0403 19:52:51.878956       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27539 (29826)\n
Apr 03 20:01:43.405 E ns/openshift-operator-lifecycle-manager pod/olm-operators-cvkwg node/ip-10-0-142-248.us-west-2.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 03 20:01:43.483 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ents.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.609912       1 service_controller.go:734] Service has been deleted openshift-marketplace/community-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.615180       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-646cd88f57", UID:"5cf72be1-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-646cd88f57-v6r8f\nI0403 19:57:35.715063       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-operators", UID:"36bb9217-75df-11ea-b63c-0653dc03765e", APIVersion:"apps/v1", ResourceVersion:"33951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-5cbb56dcb5 to 1\nI0403 19:57:35.715360       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-5cbb56dcb5, need 1, creating 1\nI0403 19:57:35.731758       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0403 19:57:35.733655       1 service_controller.go:734] Service has been deleted openshift-marketplace/redhat-operators. Attempting to cleanup load balancer resources\nI0403 19:57:35.753317       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-5cbb56dcb5", UID:"5d097787-75e5-11ea-8472-021742bf320e", APIVersion:"apps/v1", ResourceVersion:"33953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-5cbb56dcb5-7l2kv\nE0403 19:57:41.413584       1 controllermanager.go:282] leaderelection lost\n
Apr 03 20:01:43.483 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140148       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0403 19:33:26.140794       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=19018&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:26.140844       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?resourceVersion=18080&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:33:30.906443       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 19:33:30.906537       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 19:42:17.932901       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22268 (25840)\nW0403 19:47:23.937645       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25976 (28047)\nW0403 19:55:00.941930       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28261 (31401)\n
Apr 03 20:01:44.283 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-177.us-west-2.compute.internal node/ip-10-0-128-177.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): t resource "replicasets" in API group "apps" at the cluster scope\nE0403 19:33:30.931692       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:245: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope\nE0403 19:33:30.933158       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0403 19:33:30.955453       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope\nE0403 19:33:30.974440       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0403 19:33:30.986864       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0403 19:33:31.051899       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nW0403 19:53:43.796170       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23232 (30444)\nW0403 19:57:00.465630       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 22221 (32984)\nE0403 19:57:41.491729       1 server.go:259] lost master\n
Apr 03 20:02:10.655 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 20:02:36.257 E ns/openshift-authentication pod/oauth-openshift-59cdd8956d-47fmt node/ip-10-0-128-177.us-west-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:03:05.921 E ns/openshift-operator-lifecycle-manager pod/packageserver-7f76487cdb-2fg88 node/ip-10-0-145-43.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): penshift-operator-lifecycle-manager\ntime="2020-04-03T20:02:35Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\nI0403 20:02:44.324737       1 reflector.go:202] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: forcing resync\ntime="2020-04-03T20:02:45Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T20:02:45Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:02:45Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:03:00Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T20:03:00Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Apr 03 20:03:18.669 E ns/openshift-operator-lifecycle-manager pod/packageserver-6549fbdb9f-v4mf2 node/ip-10-0-128-177.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 20:04:17.822 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 72] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.301019       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304113       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304187       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304341       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304402       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304564       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304616       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304737       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304779       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304934       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304992       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.305151       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.305199       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nE0403 20:01:31.312517       1 reflector.go:237] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nE0403 20:01:31.314732       1 reflector.go:237] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\n
Apr 03 20:04:17.822 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:31:44.577725       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:31:44.578563       1 observer_polling.go:106] Starting file observer\nW0403 19:39:37.841416       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (25132)\nW0403 19:46:16.845946       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25272 (27629)\nW0403 19:55:29.850324       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27816 (31619)\n
Apr 03 20:04:25.983 E ns/openshift-apiserver pod/apiserver-6xzvs node/ip-10-0-135-157.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error):  1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 20:01:20.207054       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 20:01:20.219309       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 20:01:28.736657       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 20:01:31.296488       1 serving.go:88] Shutting down DynamicLoader\nI0403 20:01:31.296660       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 20:01:31.296753       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 20:01:31.296782       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 20:01:31.296793       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nE0403 20:01:31.297435       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 20:01:31.297669       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 20:01:31.297972       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 20:01:31.298080       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 20:01:31.298500       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0403 20:01:31.298626       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 20:01:31.298690       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 20:01:31.298799       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 20:04:26.983 E ns/openshift-cluster-node-tuning-operator pod/tuned-sdj26 node/ip-10-0-135-157.us-west-2.compute.internal container=tuned container exited with code 255 (Error): t-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 20:01:03.075094   58654 openshift-tuned.go:326] Getting recommended profile...\nI0403 20:01:03.183662   58654 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 20:01:03.183725   58654 openshift-tuned.go:435] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-5c87f6579b-ll69l) labels changed node wide: true\nI0403 20:01:08.073776   58654 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 20:01:08.075152   58654 openshift-tuned.go:326] Getting recommended profile...\nI0403 20:01:08.177489   58654 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 20:01:08.479753   58654 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/catalog-operator-5dd78d54d7-6fjqv) labels changed node wide: true\nI0403 20:01:13.073748   58654 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 20:01:13.075003   58654 openshift-tuned.go:326] Getting recommended profile...\nI0403 20:01:13.177926   58654 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 20:01:13.281030   58654 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-57847dc59-z65cn) labels changed node wide: true\nI0403 20:01:18.073768   58654 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 20:01:18.075249   58654 openshift-tuned.go:326] Getting recommended profile...\nI0403 20:01:18.190676   58654 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 20:01:30.482916   58654 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-7f76487cdb-hpsxj) labels changed node wide: true\n
Apr 03 20:04:27.584 E ns/openshift-image-registry pod/node-ca-rbvpf node/ip-10-0-135-157.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 20:04:29.782 E ns/openshift-controller-manager pod/controller-manager-mjddv node/ip-10-0-135-157.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 20:04:30.982 E ns/openshift-multus pod/multus-cmq2z node/ip-10-0-135-157.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 20:04:32.982 E ns/openshift-machine-config-operator pod/machine-config-daemon-lx54j node/ip-10-0-135-157.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 20:04:39.582 E ns/openshift-monitoring pod/node-exporter-cmrbd node/ip-10-0-135-157.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 20:04:39.582 E ns/openshift-monitoring pod/node-exporter-cmrbd node/ip-10-0-135-157.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 20:04:39.982 E ns/openshift-dns pod/dns-default-49sg5 node/ip-10-0-135-157.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 20:04:39.982 E ns/openshift-dns pod/dns-default-49sg5 node/ip-10-0-135-157.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T19:46:01.027Z [INFO] CoreDNS-1.3.1\n2020-04-03T19:46:01.027Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T19:46:01.027Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 20:00:42.065001       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 19120 (36051)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 20:04:53.982 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): (message send to peer failed)\n2020-04-03 20:01:31.749746 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream MsgApp v2 reader)\n2020-04-03 20:01:31.749780 W | rafthttp: lost the TCP streaming connection with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 20:01:31.749786 I | rafthttp: stopped streaming with peer ac4cbff1ebca434d (stream Message reader)\n2020-04-03 20:01:31.749793 I | rafthttp: stopped peer ac4cbff1ebca434d\n2020-04-03 20:01:31.749798 I | rafthttp: stopping peer beda7db85913a529...\n2020-04-03 20:01:31.750078 I | rafthttp: closed the TCP streaming connection with peer beda7db85913a529 (stream MsgApp v2 writer)\n2020-04-03 20:01:31.750086 I | rafthttp: stopped streaming with peer beda7db85913a529 (writer)\n2020-04-03 20:01:31.750386 I | rafthttp: closed the TCP streaming connection with peer beda7db85913a529 (stream Message writer)\n2020-04-03 20:01:31.750411 I | rafthttp: stopped streaming with peer beda7db85913a529 (writer)\n2020-04-03 20:01:31.750467 I | rafthttp: stopped HTTP pipelining with peer beda7db85913a529\n2020-04-03 20:01:31.750549 W | rafthttp: lost the TCP streaming connection with peer beda7db85913a529 (stream MsgApp v2 reader)\n2020-04-03 20:01:31.750564 E | rafthttp: failed to read beda7db85913a529 on stream MsgApp v2 (context canceled)\n2020-04-03 20:01:31.750568 I | rafthttp: peer beda7db85913a529 became inactive (message send to peer failed)\n2020-04-03 20:01:31.750607 I | rafthttp: stopped streaming with peer beda7db85913a529 (stream MsgApp v2 reader)\n2020-04-03 20:01:31.750658 W | rafthttp: lost the TCP streaming connection with peer beda7db85913a529 (stream Message reader)\n2020-04-03 20:01:31.750676 I | rafthttp: stopped streaming with peer beda7db85913a529 (stream Message reader)\n2020-04-03 20:01:31.750683 I | rafthttp: stopped peer beda7db85913a529\n2020-04-03 20:01:31.773344 E | rafthttp: failed to find member beda7db85913a529 in cluster 23c8f408a2e8bb91\n2020-04-03 20:01:31.778968 E | rafthttp: failed to find member beda7db85913a529 in cluster 23c8f408a2e8bb91\n
Apr 03 20:04:53.982 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 20:00:50.726988 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 20:00:50.729654 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 20:00:50.731266 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 20:00:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.157:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-ci7zfy1r-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 20:00:51.744333 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 20:04:54.382 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=scheduler container exited with code 255 (Error):   1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 19:32:31.064166       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585940933" (2020-04-03 19:09:09 +0000 UTC to 2022-04-03 19:09:10 +0000 UTC (now=2020-04-03 19:32:31.064060355 +0000 UTC))\nI0403 19:32:31.064203       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585940933" [] issuer="<self>" (2020-04-03 19:08:52 +0000 UTC to 2021-04-03 19:08:53 +0000 UTC (now=2020-04-03 19:32:31.064191829 +0000 UTC))\nI0403 19:32:31.064294       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 19:32:31.064864       1 serving.go:77] Starting DynamicLoader\nI0403 19:32:31.968676       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 19:32:32.068829       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 19:32:32.068887       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 19:57:00.832336       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19122 (33041)\nI0403 19:57:56.341259       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 20:00:42.045480       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19120 (36051)\nW0403 20:00:42.045772       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19120 (36051)\nE0403 20:01:31.236962       1 server.go:259] lost master\n
Apr 03 20:04:54.781 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): t-network-operator, name: cluster-network-operator, uid: 5d69c53f-75e3-11ea-bcf5-0653dc03765e] with propagation policy Background\nE0403 20:01:18.177828       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 20:01:19.077901       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-operator: Operation cannot be fulfilled on deployments.apps "prometheus-operator": the object has been modified; please apply your changes to the latest version and try again\nI0403 20:01:19.477759       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/kube-state-metrics: Operation cannot be fulfilled on deployments.apps "kube-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nW0403 20:01:21.634171       1 garbagecollector.go:648] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]\nI0403 20:01:24.672488       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0403 20:01:28.870387       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0403 20:01:30.274980       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\nE0403 20:01:31.315095       1 controllermanager.go:282] leaderelection lost\nI0403 20:01:31.315345       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 20:04:54.781 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): y.go:132: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?resourceVersion=19018&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0403 19:31:44.034934       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 19:31:44.035914       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 19:31:47.705110       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 19:31:47.705211       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 19:36:58.727896       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19152 (24384)\nW0403 19:44:35.732070       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24515 (26795)\nW0403 19:52:10.736767       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27023 (29633)\nW0403 19:59:55.741198       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29776 (35315)\n
Apr 03 20:04:55.181 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 72] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.301019       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304113       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304187       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304341       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304402       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304564       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304616       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304737       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304779       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.304934       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.304992       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 20:01:31.305151       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 20:01:31.305199       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nE0403 20:01:31.312517       1 reflector.go:237] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nE0403 20:01:31.314732       1 reflector.go:237] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\n
Apr 03 20:04:55.181 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-157.us-west-2.compute.internal node/ip-10-0-135-157.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 19:31:44.577725       1 certsync_controller.go:269] Starting CertSyncer\nI0403 19:31:44.578563       1 observer_polling.go:106] Starting file observer\nW0403 19:39:37.841416       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22761 (25132)\nW0403 19:46:16.845946       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25272 (27629)\nW0403 19:55:29.850324       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27816 (31619)\n
Apr 03 20:06:01.040 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-6f65b9864b-jqcbr node/ip-10-0-135-157.us-west-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated