Result: FAILURE
Tests: 3 failed / 19 succeeded
Started: 2020-04-03 15:47
Elapsed: 1h32m
Work namespace: ci-op-1v3y6khs
Refs: release-4.1:514189df, 812:8d0c3f82
pod: 68925ca3-75c2-11ea-bcfa-0a58ac10463b
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (43m47s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
228 error-level events were detected during this test run:

Apr 03 16:32:07.373 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-977896bdd-8ptj7 node/ip-10-0-146-226.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): ", Name:"kube-apiserver-operator", UID:"51dbc6e0-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-142-162.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-142-162.us-west-2.compute.internal container=\"kube-apiserver-6\" is not ready" to ""\nI0403 16:21:04.567384       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"51dbc6e0-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-142-162.us-west-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0403 16:25:40.325962       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14269 (16070)\nW0403 16:25:58.325946       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14281 (16148)\nW0403 16:26:38.125823       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14281 (16318)\nW0403 16:27:55.126878       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14921 (16661)\nW0403 16:31:04.331000       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16195 (18065)\nW0403 16:31:25.330569       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16279 (18163)\nI0403 16:32:06.380215       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:32:06.380288       1 leaderelection.go:65] leaderelection lost\n
Apr 03 16:33:30.546 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5968f699d5-k92zf node/ip-10-0-146-226.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): o:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 5307 (13650)\nW0403 16:19:03.645301       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 6465 (13609)\nW0403 16:19:03.646817       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 10269 (13601)\nW0403 16:19:03.647682       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 4387 (13601)\nW0403 16:19:03.648013       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13263 (13601)\nW0403 16:19:03.648774       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 4368 (13609)\nW0403 16:24:07.620596       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (15640)\nW0403 16:27:38.622502       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (16593)\nW0403 16:28:41.631373       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (16866)\nW0403 16:30:30.625497       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15796 (17741)\nI0403 16:33:29.480811       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:33:29.480868       1 leaderelection.go:65] leaderelection lost\nI0403 16:33:29.502795       1 resourcesync_controller.go:229] Shutting down ResourceSyncController\nI0403 16:33:29.502813       1 unsupportedconfigoverrides_controller.go:161] Shutting down UnsupportedConfigOverridesController\n
Apr 03 16:33:42.573 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-79d9b6fd4c-75mdn node/ip-10-0-146-226.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"51c4d62a-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0403 16:19:10.777208       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"51c4d62a-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-scheduler: cause by changes in data.status\nI0403 16:19:15.581252       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"51c4d62a-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-142-162.us-west-2.compute.internal -n openshift-kube-scheduler because it was missing\nW0403 16:27:20.377311       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14285 (16504)\nW0403 16:28:15.710545       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14179 (16748)\nI0403 16:33:41.911412       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:33:41.911532       1 leaderelection.go:65] leaderelection lost\nI0403 16:33:41.916212       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\n
Apr 03 16:35:13.668 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-c47f8479b-gpfb5 node/ip-10-0-146-226.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): iceAccount ended with: too old resource version: 10226 (14276)\nW0403 16:20:46.261327       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (14678)\nW0403 16:20:46.261375       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (14677)\nW0403 16:20:46.362595       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 9544 (14804)\nW0403 16:20:46.375113       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 8953 (14804)\nW0403 16:20:46.388975       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 5564 (14804)\nW0403 16:20:46.407340       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 5564 (14804)\nW0403 16:27:40.568161       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14818 (16601)\nW0403 16:28:45.368959       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14834 (16879)\nW0403 16:30:23.168196       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14831 (17361)\nW0403 16:34:11.573143       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16727 (19081)\nI0403 16:34:57.527800       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:34:57.527975       1 leaderelection.go:65] leaderelection lost\n
Apr 03 16:35:14.674 E ns/openshift-machine-api pod/machine-api-operator-554d8467f5-6htkd node/ip-10-0-146-226.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 16:37:26.438 E ns/openshift-machine-api pod/machine-api-controllers-864657cd58-7rsg2 node/ip-10-0-137-56.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 16:37:26.438 E ns/openshift-machine-api pod/machine-api-controllers-864657cd58-7rsg2 node/ip-10-0-137-56.us-west-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 16:37:45.799 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6d98d59b47-szggc node/ip-10-0-142-162.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): oller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ClusterRole ended with: too old resource version: 14836 (19711)\nW0403 16:36:58.467584       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ServiceAccount ended with: too old resource version: 17722 (19707)\nW0403 16:36:58.467688       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 19691 (20301)\nW0403 16:36:58.467866       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Tuned ended with: too old resource version: 19679 (20238)\nI0403 16:36:59.469456       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 16:36:59.469477       1 status.go:26] syncOperatorStatus()\nI0403 16:36:59.476290       1 tuned_controller.go:187] syncServiceAccount()\nI0403 16:36:59.476620       1 tuned_controller.go:215] syncClusterRole()\nI0403 16:36:59.507309       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 16:36:59.536595       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 16:36:59.540143       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 16:36:59.543411       1 tuned_controller.go:315] syncDaemonSet()\nI0403 16:36:59.552799       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 16:36:59.552812       1 status.go:26] syncOperatorStatus()\nI0403 16:36:59.557916       1 tuned_controller.go:187] syncServiceAccount()\nI0403 16:36:59.558040       1 tuned_controller.go:215] syncClusterRole()\nI0403 16:36:59.597343       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 16:36:59.630777       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 16:36:59.634789       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 16:36:59.638179       1 tuned_controller.go:315] syncDaemonSet()\nF0403 16:37:44.908893       1 main.go:85] <nil>\n
Apr 03 16:37:47.370 E ns/openshift-monitoring pod/node-exporter-s8tlc node/ip-10-0-130-176.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 16:37:47.688 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator monitoring is still updating\n* Cluster operator node-tuning is still updating\n* Cluster operator service-catalog-controller-manager is still updating\n* Cluster operator storage is still updating\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (107 of 350)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (94 of 350)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (185 of 350)\n* Could not update deployment "openshift-console/downloads" (237 of 350)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (173 of 350)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (133 of 350)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (122 of 350)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (282 of 350)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (253 of 350)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (290 of 350)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (209 of 350)
Apr 03 16:37:49.816 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6d54p77tg node/ip-10-0-142-162.us-west-2.compute.internal container=operator container exited with code 2 (Error):  LastStreamID=135, ErrCode=NO_ERROR, debug=""\nI0403 16:36:58.439516       1 reflector.go:357] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 21 items received\nE0403 16:36:58.440122       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=135, ErrCode=NO_ERROR, debug=""\nI0403 16:36:58.440243       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nE0403 16:36:58.440461       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=135, ErrCode=NO_ERROR, debug=""\nI0403 16:36:58.440566       1 reflector.go:357] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nE0403 16:36:58.440850       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=135, ErrCode=NO_ERROR, debug=""\nI0403 16:36:58.440965       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nW0403 16:36:58.462439       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19511 (20301)\nI0403 16:36:59.462682       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 16:37:06.423168       1 wrap.go:47] GET /metrics: (2.634829ms) 200 [Prometheus/2.7.2 10.129.2.5:34262]\nI0403 16:37:06.424256       1 wrap.go:47] GET /metrics: (5.405574ms) 200 [Prometheus/2.7.2 10.128.2.7:51908]\nI0403 16:37:36.423505       1 wrap.go:47] GET /metrics: (4.553583ms) 200 [Prometheus/2.7.2 10.128.2.7:51908]\nI0403 16:37:36.424100       1 wrap.go:47] GET /metrics: (3.740906ms) 200 [Prometheus/2.7.2 10.129.2.5:34262]\n
Apr 03 16:37:56.864 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 16:37:56.864 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 16:37:56.864 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 16:38:02.871 E ns/openshift-cluster-node-tuning-operator pod/tuned-h9lqr node/ip-10-0-142-162.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 5:27.156130   16601 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:35:35.987741   16601 openshift-tuned.go:435] Pod (openshift-kube-apiserver/installer-7-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: false\nI0403 16:35:47.736306   16601 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: false\nI0403 16:35:48.463382   16601 openshift-tuned.go:435] Pod (openshift-kube-apiserver/kube-apiserver-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: true\nI0403 16:35:52.006892   16601 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:35:52.008239   16601 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:35:52.101021   16601 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:36:33.008656   16601 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-7-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: false\nI0403 16:36:44.843826   16601 openshift-tuned.go:435] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: true\nI0403 16:36:47.006866   16601 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:36:47.008344   16601 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:36:47.099508   16601 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:36:58.437641   16601 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 16:36:58.439652   16601 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 16:36:58.439740   16601 openshift-tuned.go:722] Increasing resyncPeriod to 116\n
Apr 03 16:38:05.420 E ns/openshift-ingress pod/router-default-55978c8b56-4x92q node/ip-10-0-130-176.us-west-2.compute.internal container=router container exited with code 2 (Error): :482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:36:23.389589       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:36:43.633941       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:01.966339       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:06.955003       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:11.957694       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0403 16:37:28.162789       1 limiter.go:137] error reloading router: wait: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:33.151441       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:38.151429       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:45.909704       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:50.903241       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:37:55.910485       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:38:00.909845       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 16:38:06.260 E ns/openshift-monitoring pod/node-exporter-jpn7m node/ip-10-0-157-20.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 16:38:07.551 E ns/openshift-monitoring pod/kube-state-metrics-5c7848c856-klml6 node/ip-10-0-135-189.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:10.142 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:38:13.561 E ns/openshift-monitoring pod/telemeter-client-d47b8d5cd-phrkp node/ip-10-0-135-189.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 16:38:13.561 E ns/openshift-monitoring pod/telemeter-client-d47b8d5cd-phrkp node/ip-10-0-135-189.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 16:38:16.902 E ns/openshift-monitoring pod/node-exporter-k88kk node/ip-10-0-142-162.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 16:38:18.076 E ns/openshift-console pod/downloads-5d6cfb5557-4wltx node/ip-10-0-146-226.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 16:38:19.080 E ns/openshift-controller-manager pod/controller-manager-bz7tw node/ip-10-0-146-226.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 16:38:30.942 E ns/openshift-console pod/downloads-5d6cfb5557-97trt node/ip-10-0-142-162.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 16:38:34.494 E ns/openshift-cluster-node-tuning-operator pod/tuned-t6xhl node/ip-10-0-130-176.us-west-2.compute.internal container=tuned container exited with code 143 (Error): bsf/pod-configmap-5b649686-75c8-11ea-a467-0a58ac10489f) labels changed node wide: false\nI0403 16:37:44.221572    3386 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-74d7f45b8c-n7zwz) labels changed node wide: true\nI0403 16:37:47.993710    3386 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:37:47.995488    3386 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:37:48.105641    3386 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:37:51.419661    3386 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-s8tlc) labels changed node wide: true\nI0403 16:37:52.993694    3386 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:37:52.995360    3386 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:37:53.105772    3386 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:37:53.790444    3386 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-cfd599b4-29qkh) labels changed node wide: true\nI0403 16:37:57.993728    3386 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:37:57.995293    3386 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:37:58.105220    3386 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:38:11.467791    3386 openshift-tuned.go:435] Pod (openshift-ingress/router-default-55978c8b56-4x92q) labels changed node wide: true\nI0403 16:38:12.993734    3386 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:13.003304    3386 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:13.176339    3386 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 16:38:41.196 E ns/openshift-cluster-node-tuning-operator pod/tuned-vgxkh node/ip-10-0-157-20.us-west-2.compute.internal container=tuned container exited with code 143 (Error): .go:435] Pod (openshift-monitoring/prometheus-adapter-cfd599b4-dg4vr) labels changed node wide: true\nI0403 16:38:12.529763    3459 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:12.531293    3459 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:12.640786    3459 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:38:14.021455    3459 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-5859d57c6f-8v86x) labels changed node wide: true\nI0403 16:38:17.529743    3459 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:17.532152    3459 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:17.661422    3459 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:38:27.365862    3459 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 16:38:27.529772    3459 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:27.531367    3459 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:27.641303    3459 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:38:30.170783    3459 openshift-tuned.go:435] Pod (openshift-monitoring/grafana-5d8846bc6f-jkp58) labels changed node wide: true\nI0403 16:38:32.534214    3459 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:32.539379    3459 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:32.670031    3459 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:38:40.170474    3459 openshift-tuned.go:435] Pod (openshift-ingress/router-default-55978c8b56-b7bmr) labels changed node wide: true\n
Apr 03 16:38:42.977 E ns/openshift-console-operator pod/console-operator-69b8d7bb4c-6qlxk node/ip-10-0-142-162.us-west-2.compute.internal container=console-operator container exited with code 255 (Error):  msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T16:36:59Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T16:36:59Z" level=info msg="finished syncing operator \"cluster\" (26.655µs) \n\n"\ntime="2020-04-03T16:36:59Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 16:36:59.509077163 +0000 UTC m=+1403.271441525)"\ntime="2020-04-03T16:36:59Z" level=info msg="console is in a managed state."\ntime="2020-04-03T16:36:59Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T16:36:59Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T16:36:59Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T16:37:00Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T16:37:00Z" level=info msg=-----------------------\ntime="2020-04-03T16:37:00Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T16:37:00Z" level=info msg=-----------------------\ntime="2020-04-03T16:37:00Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T16:37:00Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T16:37:00Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T16:37:00Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T16:37:00Z" level=info msg="finished syncing operator \"cluster\" (20.644µs) \n\n"\nI0403 16:38:42.155942       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:38:42.155986       1 leaderelection.go:65] leaderelection lost\nF0403 16:38:42.160808       1 builder.go:248] stopped\n
Apr 03 16:38:53.249 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-20.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 16:38:55.203 E ns/openshift-authentication-operator pod/authentication-operator-56698f9f59-ptbbg node/ip-10-0-137-56.us-west-2.compute.internal container=operator container exited with code 255 (Error): d","status":"True","type":"Upgradeable"}]}}\nI0403 16:27:06.697054       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"593ba09d-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0403 16:28:37.660805       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (16843)\nW0403 16:28:42.545181       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14262 (16871)\nW0403 16:30:47.589459       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16161 (17948)\nW0403 16:34:16.490547       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 14252 (16931)\nW0403 16:34:16.548457       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17015 (19129)\nW0403 16:34:20.738840       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 16:34:47.484658       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16017 (19317)\nW0403 16:37:06.592904       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 18124 (20354)\nI0403 16:37:57.030475       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:37:57.030521       1 leaderelection.go:65] leaderelection lost\n
Apr 03 16:38:56.603 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-176.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 16:38:56.603 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-176.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 16:38:56.603 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-176.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 16:39:04.697 E ns/openshift-monitoring pod/node-exporter-sttfl node/ip-10-0-135-189.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 16:39:15.306 E ns/openshift-cluster-node-tuning-operator pod/tuned-lnlv9 node/ip-10-0-137-56.us-west-2.compute.internal container=tuned container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:39:21.790 E ns/openshift-marketplace pod/redhat-operators-cdf88dd4f-7c986 node/ip-10-0-135-189.us-west-2.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:39:21.993 E ns/openshift-marketplace pod/community-operators-f946c44bb-bzchp node/ip-10-0-135-189.us-west-2.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:39:24.788 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-176.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 16:39:30.768 E ns/openshift-cluster-node-tuning-operator pod/tuned-fbzht node/ip-10-0-135-189.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 403 16:38:48.535342    3942 openshift-tuned.go:691] Lowering resyncPeriod to 54\nI0403 16:38:50.529432    3942 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:38:50.531068    3942 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:38:50.644299    3942 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:39:05.665503    3942 openshift-tuned.go:435] Pod (openshift-monitoring/node-exporter-sttfl) labels changed node wide: true\nI0403 16:39:10.529391    3942 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:39:10.530836    3942 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:39:10.640197    3942 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:39:18.526840    3942 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-f946c44bb-bzchp) labels changed node wide: true\nI0403 16:39:20.529414    3942 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:39:20.531001    3942 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:39:20.669094    3942 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:39:20.669181    3942 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-cdf88dd4f-7c986) labels changed node wide: true\nI0403 16:39:25.529472    3942 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:39:25.531119    3942 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:39:25.672265    3942 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:39:25.860727    3942 openshift-tuned.go:435] Pod (openshift-marketplace/community-operators-f946c44bb-bzchp) labels changed node wide: true\n
Apr 03 16:39:54.813 E ns/openshift-marketplace pod/redhat-operators-5fd54fff6c-dz49b node/ip-10-0-135-189.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 16:40:04.355 E ns/openshift-controller-manager pod/controller-manager-s8g7b node/ip-10-0-142-162.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 16:40:07.774 E ns/openshift-console pod/console-786fdd64b7-88hsj node/ip-10-0-146-226.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 16:17:54 cmd/main: cookies are secure!\n2020/04/3 16:17:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/3 16:18:04 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 16:18:04 cmd/main: using TLS\n
Apr 03 16:40:32.913 E ns/openshift-marketplace pod/community-operators-799bb4488f-9kvb5 node/ip-10-0-135-189.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 16:41:34.593 E ns/openshift-controller-manager pod/controller-manager-h44m7 node/ip-10-0-137-56.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 16:43:29.855 E ns/openshift-sdn pod/ovs-868fz node/ip-10-0-137-56.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): :37.040Z|00336|connmgr|INFO|br0<->unix#846: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:39:37.073Z|00337|connmgr|INFO|br0<->unix#849: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:39:37.098Z|00338|bridge|INFO|bridge br0: deleted interface veth0677069e on port 30\n2020-04-03T16:39:52.164Z|00339|bridge|INFO|bridge br0: added interface veth6b124878 on port 55\n2020-04-03T16:39:52.191Z|00340|connmgr|INFO|br0<->unix#855: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:39:52.227Z|00341|connmgr|INFO|br0<->unix#858: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:39:54.446Z|00342|connmgr|INFO|br0<->unix#861: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:39:54.479Z|00343|connmgr|INFO|br0<->unix#864: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:39:54.504Z|00344|bridge|INFO|bridge br0: deleted interface veth6b124878 on port 55\n2020-04-03T16:39:56.881Z|00345|bridge|INFO|bridge br0: added interface vethc19c1664 on port 56\n2020-04-03T16:39:56.909Z|00346|connmgr|INFO|br0<->unix#867: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:39:56.940Z|00347|connmgr|INFO|br0<->unix#870: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:41:34.140Z|00348|connmgr|INFO|br0<->unix#883: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:41:34.170Z|00349|connmgr|INFO|br0<->unix#886: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:41:34.190Z|00350|bridge|INFO|bridge br0: deleted interface veth11cf9093 on port 54\n2020-04-03T16:41:49.726Z|00351|bridge|INFO|bridge br0: added interface veth7fa73174 on port 57\n2020-04-03T16:41:49.758Z|00352|connmgr|INFO|br0<->unix#889: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:41:49.801Z|00353|connmgr|INFO|br0<->unix#892: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:42:49.944Z|00354|connmgr|INFO|br0<->unix#901: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:42:49.971Z|00355|connmgr|INFO|br0<->unix#904: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:42:49.996Z|00356|bridge|INFO|bridge br0: deleted interface veth023f0e97 on port 10\n
Apr 03 16:43:38.880 E ns/openshift-sdn pod/sdn-bnf9t node/ip-10-0-137-56.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:36.922708    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.022732    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.122702    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.222715    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.322716    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.422724    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.522730    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.622725    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.722713    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.823804    2718 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:43:37.927659    2718 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 16:43:37.927711    2718 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 16:44:04.364 E ns/openshift-sdn pod/sdn-controller-g22sk node/ip-10-0-146-226.us-west-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 16:08:44.849207       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 16:44:12.036 E ns/openshift-sdn pod/sdn-tfdhz node/ip-10-0-157-20.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:10.912770   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.012744   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.112732   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.212697   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.312738   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.412696   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.512733   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.612742   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.712698   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.812682   49910 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:11.921186   49910 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 16:44:11.921267   49910 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 16:44:17.053 E ns/openshift-multus pod/multus-8msrp node/ip-10-0-157-20.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 16:44:42.492 E ns/openshift-sdn pod/ovs-sdf2m node/ip-10-0-130-176.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): \n2020-04-03T16:39:28.075Z|00140|bridge|INFO|bridge br0: added interface veth49778ef7 on port 23\n2020-04-03T16:39:28.104Z|00141|connmgr|INFO|br0<->unix#412: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:39:28.141Z|00142|connmgr|INFO|br0<->unix#415: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:42:50.358Z|00143|connmgr|INFO|br0<->unix#437: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:42:50.387Z|00144|connmgr|INFO|br0<->unix#440: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:42:50.408Z|00145|bridge|INFO|bridge br0: deleted interface vethb983b9cb on port 3\n2020-04-03T16:42:58.592Z|00146|bridge|INFO|bridge br0: added interface veth312343d2 on port 24\n2020-04-03T16:42:58.624Z|00147|connmgr|INFO|br0<->unix#446: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:42:58.660Z|00148|connmgr|INFO|br0<->unix#449: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:43:45.761Z|00149|connmgr|INFO|br0<->unix#462: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T16:43:45.839Z|00150|connmgr|INFO|br0<->unix#468: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:43:46.165Z|00151|connmgr|INFO|br0<->unix#471: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:46.192Z|00152|connmgr|INFO|br0<->unix#474: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:46.214Z|00153|connmgr|INFO|br0<->unix#477: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:46.241Z|00154|connmgr|INFO|br0<->unix#480: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:46.270Z|00155|connmgr|INFO|br0<->unix#483: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:46.297Z|00156|connmgr|INFO|br0<->unix#486: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:46.321Z|00157|connmgr|INFO|br0<->unix#489: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:46.348Z|00158|connmgr|INFO|br0<->unix#492: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:46.379Z|00159|connmgr|INFO|br0<->unix#495: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:46.409Z|00160|connmgr|INFO|br0<->unix#498: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 16:44:44.505 E ns/openshift-sdn pod/sdn-9fw6k node/ip-10-0-130-176.us-west-2.compute.internal container=sdn container exited with code 255 (Error): /openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.395622   46157 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:44:43.395643   46157 proxier.go:346] userspace syncProxyRules took 58.724329ms\nI0403 16:44:43.453925   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.554045   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.653959   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.753967   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.853996   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:43.953971   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:44.053917   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:44.154033   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:44.254013   46157 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:44:44.358297   46157 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 16:44:44.358386   46157 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 16:44:48.040 E ns/openshift-sdn pod/sdn-controller-fmgvz node/ip-10-0-137-56.us-west-2.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 16:08:44.519846       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 16:45:05.547 E ns/openshift-multus pod/multus-qvhn8 node/ip-10-0-130-176.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 16:45:15.446 E ns/openshift-sdn pod/ovs-k4bg4 node/ip-10-0-135-189.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): -03T16:40:08.617Z|00175|connmgr|INFO|br0<->unix#484: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:40:08.646Z|00176|connmgr|INFO|br0<->unix#487: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:40:08.672Z|00177|bridge|INFO|bridge br0: deleted interface veth1939c544 on port 4\n2020-04-03T16:40:32.230Z|00178|connmgr|INFO|br0<->unix#493: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:40:32.270Z|00179|connmgr|INFO|br0<->unix#496: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:40:32.295Z|00180|bridge|INFO|bridge br0: deleted interface veth62f36b27 on port 5\n2020-04-03T16:43:06.926Z|00181|connmgr|INFO|br0<->unix#521: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T16:43:06.999Z|00182|connmgr|INFO|br0<->unix#527: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:43:07.021Z|00183|connmgr|INFO|br0<->unix#530: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:43:07.298Z|00184|connmgr|INFO|br0<->unix#533: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:07.321Z|00185|connmgr|INFO|br0<->unix#536: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:07.342Z|00186|connmgr|INFO|br0<->unix#539: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:07.364Z|00187|connmgr|INFO|br0<->unix#542: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:07.390Z|00188|connmgr|INFO|br0<->unix#545: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:07.421Z|00189|connmgr|INFO|br0<->unix#548: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:07.448Z|00190|connmgr|INFO|br0<->unix#551: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:07.471Z|00191|connmgr|INFO|br0<->unix#554: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:43:07.494Z|00192|connmgr|INFO|br0<->unix#557: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:43:07.519Z|00193|connmgr|INFO|br0<->unix#560: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:45:03.195Z|00194|connmgr|INFO|br0<->unix#572: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:45:03.224Z|00195|bridge|INFO|bridge br0: deleted interface veth21be66c3 on port 3\n
Apr 03 16:45:21.460 E ns/openshift-sdn pod/sdn-5db2n node/ip-10-0-135-189.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.410113   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.510084   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.610150   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.710117   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.810131   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:19.910161   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:20.010158   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:20.110151   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:20.210202   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:20.310160   80525 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:45:20.416482   80525 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 16:45:20.416531   80525 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 16:45:51.054 E ns/openshift-sdn pod/ovs-xgr82 node/ip-10-0-142-162.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): 5Z|00347|connmgr|INFO|br0<->unix#908: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.584Z|00348|connmgr|INFO|br0<->unix#911: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.605Z|00349|connmgr|INFO|br0<->unix#914: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.628Z|00350|connmgr|INFO|br0<->unix#917: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.648Z|00351|connmgr|INFO|br0<->unix#920: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.674Z|00352|connmgr|INFO|br0<->unix#923: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.696Z|00353|connmgr|INFO|br0<->unix#926: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:37.783Z|00354|connmgr|INFO|br0<->unix#929: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:37.805Z|00355|connmgr|INFO|br0<->unix#932: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:37.824Z|00356|connmgr|INFO|br0<->unix#935: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:37.844Z|00357|connmgr|INFO|br0<->unix#938: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:37.872Z|00358|connmgr|INFO|br0<->unix#941: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:37.892Z|00359|connmgr|INFO|br0<->unix#944: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:37.913Z|00360|connmgr|INFO|br0<->unix#947: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:37.938Z|00361|connmgr|INFO|br0<->unix#950: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:37.962Z|00362|connmgr|INFO|br0<->unix#953: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:37.984Z|00363|connmgr|INFO|br0<->unix#956: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:53.644Z|00364|bridge|INFO|bridge br0: added interface veth160284cd on port 55\n2020-04-03T16:44:53.671Z|00365|connmgr|INFO|br0<->unix#959: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:44:53.711Z|00366|connmgr|INFO|br0<->unix#963: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:44:53.714Z|00367|connmgr|INFO|br0<->unix#965: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n
Apr 03 16:45:54.517 E ns/openshift-multus pod/multus-trwvd node/ip-10-0-135-189.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 16:46:02.076 E ns/openshift-sdn pod/sdn-lhqvh node/ip-10-0-142-162.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.117498   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.217460   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.317504   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.417474   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.517499   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.617475   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.717500   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.817477   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:00.917429   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:01.017535   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:01.017606   72154 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0403 16:46:01.017615   72154 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 03 16:46:34.711 E ns/openshift-sdn pod/ovs-r4fbd node/ip-10-0-146-226.us-west-2.compute.internal container=openvswitch container exited with code 137 (Error): |connmgr|INFO|br0<->unix#1100: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.072Z|00442|connmgr|INFO|br0<->unix#1103: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.095Z|00443|connmgr|INFO|br0<->unix#1106: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.116Z|00444|connmgr|INFO|br0<->unix#1109: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.138Z|00445|connmgr|INFO|br0<->unix#1112: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.167Z|00446|connmgr|INFO|br0<->unix#1115: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T16:44:13.289Z|00447|connmgr|INFO|br0<->unix#1118: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:13.313Z|00448|connmgr|INFO|br0<->unix#1121: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:13.335Z|00449|connmgr|INFO|br0<->unix#1124: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:13.363Z|00450|connmgr|INFO|br0<->unix#1127: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:13.389Z|00451|connmgr|INFO|br0<->unix#1130: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:13.416Z|00452|connmgr|INFO|br0<->unix#1133: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:13.441Z|00453|connmgr|INFO|br0<->unix#1136: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:13.466Z|00454|connmgr|INFO|br0<->unix#1139: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:44:13.493Z|00455|connmgr|INFO|br0<->unix#1142: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T16:44:13.518Z|00456|connmgr|INFO|br0<->unix#1145: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T16:45:27.759Z|00457|connmgr|INFO|br0<->unix#1154: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:45:27.781Z|00458|bridge|INFO|bridge br0: deleted interface veth12b6bd50 on port 14\n2020-04-03T16:45:42.246Z|00459|bridge|INFO|bridge br0: added interface veth01f17abe on port 74\n2020-04-03T16:45:42.273Z|00460|connmgr|INFO|br0<->unix#1157: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T16:45:42.306Z|00461|connmgr|INFO|br0<->unix#1160: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 16:46:45.739 E ns/openshift-sdn pod/sdn-h48j9 node/ip-10-0-146-226.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:43.776491   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:43.876514   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:43.976509   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.076495   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.176505   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.276498   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.376500   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.476494   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.576477   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.676505   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 16:46:44.676553   78066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0403 16:46:44.676562   78066 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 03 16:46:47.171 E ns/openshift-multus pod/multus-czrrf node/ip-10-0-142-162.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 16:46:58.775 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7f4798bf6d-9kx6s node/ip-10-0-146-226.us-west-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 03 16:47:24.839 E ns/openshift-service-ca pod/apiservice-cabundle-injector-588c977d66-v7v6b node/ip-10-0-146-226.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 03 16:47:35.864 E ns/openshift-multus pod/multus-vb9jv node/ip-10-0-146-226.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 16:48:03.936 E ns/openshift-machine-config-operator pod/machine-config-operator-86cd89b458-xmlql node/ip-10-0-146-226.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 16:54:14.225 E ns/openshift-machine-config-operator pod/machine-config-server-7ksf5 node/ip-10-0-137-56.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 16:54:26.703 E kube-apiserver Kube API started failing: Get https://api.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 03 16:54:28.703 E kube-apiserver Kube API is not responding to GET requests
Apr 03 16:54:28.703 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 16:54:31.314 E ns/openshift-monitoring pod/kube-state-metrics-7999c949c7-wxktr node/ip-10-0-157-20.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 03 16:54:31.332 E ns/openshift-marketplace pod/community-operators-64dd79d8bc-qvjxj node/ip-10-0-157-20.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 16:54:31.349 E ns/openshift-marketplace pod/redhat-operators-7b9896d7bb-q5wqk node/ip-10-0-157-20.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 16:54:35.973 E ns/openshift-operator-lifecycle-manager pod/packageserver-6f7d6f75cd-xjvwh node/ip-10-0-142-162.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:54:36.345 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7679f4f9b6-f2mlw node/ip-10-0-137-56.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ctory.go:132: watch of *v1.ClusterRoleBinding ended with: too old resource version: 14281 (20548)\nW0403 16:38:45.562722       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 19637 (20544)\nW0403 16:38:45.562780       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 17722 (20545)\nW0403 16:45:14.512559       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22887 (26536)\nW0403 16:45:26.466950       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22887 (26628)\nW0403 16:47:42.453212       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 20704 (21279)\nW0403 16:48:07.660439       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22887 (27634)\nW0403 16:52:04.471189       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26860 (28903)\nW0403 16:54:08.515941       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26763 (29492)\nW0403 16:54:30.662246       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 20545 (29836)\nW0403 16:54:30.674024       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 22277 (29836)\nI0403 16:54:32.091108       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:54:32.091230       1 builder.go:217] server exited\nI0403 16:54:32.111702       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\n
Apr 03 16:54:42.144 E ns/openshift-machine-api pod/machine-api-controllers-5f685d9b45-pmscb node/ip-10-0-137-56.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 16:54:42.144 E ns/openshift-machine-api pod/machine-api-controllers-5f685d9b45-pmscb node/ip-10-0-137-56.us-west-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 16:54:43.346 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-ccf89ff64-x74gx node/ip-10-0-137-56.us-west-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:54:44.953 E ns/openshift-monitoring pod/cluster-monitoring-operator-656d4bd457-mhg9p node/ip-10-0-137-56.us-west-2.compute.internal container=cluster-monitoring-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:54:49.934 E ns/openshift-machine-config-operator pod/machine-config-server-sr466 node/ip-10-0-142-162.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 16:54:54.288 E ns/openshift-console pod/downloads-66d8c75c4b-5p7mh node/ip-10-0-157-20.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 16:54:59.997 E ns/openshift-machine-config-operator pod/machine-config-server-b9ndx node/ip-10-0-146-226.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 16:55:02.156 E ns/openshift-operator-lifecycle-manager pod/packageserver-6f7d6f75cd-q2dmw node/ip-10-0-137-56.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): rpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:54:58Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:54:58Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:00Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:01Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:01Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:01Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:01Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\n
Apr 03 16:55:09.622 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-135-189.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 16:55:34.095 E ns/openshift-operator-lifecycle-manager pod/packageserver-67c46fb658-8bcdm node/ip-10-0-146-226.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): sg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:15Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:15Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:21Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:21Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:22Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:22Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:22Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:22Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:24Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:24Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:32Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:55:32Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 16:56:14.099 E ns/openshift-cluster-node-tuning-operator pod/tuned-t4z5t node/ip-10-0-130-176.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ft-node) match.  Label changes will not trigger profile reload.\nI0403 16:50:21.416861   36017 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-xq96l) labels changed node wide: true\nI0403 16:50:25.099579   36017 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:50:25.101232   36017 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:50:25.226148   36017 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:54.170090   36017 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-dc94696cd-s2xdt) labels changed node wide: true\nI0403 16:54:55.099590   36017 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:55.101167   36017 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:55.210864   36017 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:56.358733   36017 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-8659c4c67d-6897w) labels changed node wide: true\nI0403 16:55:00.099578   36017 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:55:00.101309   36017 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:55:00.215651   36017 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:56:01.425719   36017 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-dc94696cd-s2xdt) labels changed node wide: true\nI0403 16:56:05.099548   36017 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:56:05.101312   36017 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:56:05.227320   36017 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 16:56:14.241 E ns/openshift-cluster-node-tuning-operator pod/tuned-jk465 node/ip-10-0-146-226.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 3 16:55:05.944243   68591 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-server-b9ndx) labels changed node wide: true\nI0403 16:55:09.479465   68591 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:55:09.481010   68591 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:55:09.587281   68591 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:55:25.931137   68591 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-6f7d6f75cd-fhcxv) labels changed node wide: true\nI0403 16:55:29.479498   68591 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:55:29.481280   68591 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:55:29.580181   68591 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:55:45.932378   68591 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-67c46fb658-8bcdm) labels changed node wide: true\nI0403 16:55:49.479496   68591 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:55:49.481013   68591 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:55:49.578516   68591 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:55:54.722226   68591 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-server-xlw8r) labels changed node wide: true\nI0403 16:55:59.479460   68591 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:55:59.480813   68591 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:55:59.580458   68591 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Apr 03 16:56:14.343 E ns/openshift-cluster-node-tuning-operator pod/tuned-tflfg node/ip-10-0-142-162.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 403 16:54:31.120890   59875 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-6f7d6f75cd-xjvwh) labels changed node wide: true\nI0403 16:54:33.014875   59875 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:33.016136   59875 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:33.117701   59875 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:35.306114   59875 openshift-tuned.go:435] Pod (openshift-marketplace/marketplace-operator-5746fb96dc-xvccq) labels changed node wide: true\nI0403 16:54:38.014910   59875 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:38.016501   59875 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:38.169934   59875 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:38.170904   59875 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-6f7d6f75cd-xjvwh) labels changed node wide: true\nI0403 16:54:43.014882   59875 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:43.016567   59875 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:43.200581   59875 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:50.107626   59875 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-server-sr466) labels changed node wide: true\nI0403 16:54:53.014863   59875 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:53.016169   59875 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:53.126958   59875 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Apr 03 16:56:14.666 E ns/openshift-cluster-node-tuning-operator pod/tuned-dq2hb node/ip-10-0-135-189.us-west-2.compute.internal container=tuned container exited with code 143 (Error):  Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:46:00.772755   72552 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:46:00.893424   72552 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:50:09.530434   72552 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-gnczz) labels changed node wide: true\nI0403 16:50:10.770876   72552 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:50:10.772453   72552 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:50:10.899787   72552 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:23.563059   72552 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-gg47g/foo-zvrcp) labels changed node wide: true\nI0403 16:54:25.770838   72552 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:25.772389   72552 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:25.881411   72552 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:39.037391   72552 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 16:54:40.770848   72552 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:40.772472   72552 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:40.894420   72552 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:55:03.297644   72552 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 16:55:03.301122   72552 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 16:55:03.301146   72552 openshift-tuned.go:722] Increasing resyncPeriod to 120\n
Apr 03 16:56:28.703 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 16:56:43.007 E ns/openshift-monitoring pod/node-exporter-vl5qb node/ip-10-0-157-20.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 16:56:43.007 E ns/openshift-monitoring pod/node-exporter-vl5qb node/ip-10-0-157-20.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 16:56:43.018 E ns/openshift-image-registry pod/node-ca-8255w node/ip-10-0-157-20.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 16:56:43.228 E ns/openshift-cluster-node-tuning-operator pod/tuned-pnkc6 node/ip-10-0-157-20.us-west-2.compute.internal container=tuned container exited with code 255 (Error): rue\nI0403 16:44:24.660892   39456 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:44:24.662484   39456 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:44:24.808766   39456 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:50:50.173956   39456 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-daemon-8fk98) labels changed node wide: true\nI0403 16:50:54.660854   39456 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:50:54.662413   39456 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:50:54.774655   39456 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:23.448426   39456 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0403 16:54:24.660885   39456 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:24.662278   39456 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:24.782407   39456 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:54:35.267487   39456 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-7b9896d7bb-q5wqk) labels changed node wide: true\nI0403 16:54:39.660909   39456 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:39.662348   39456 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:39.770285   39456 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:55:00.187073   39456 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-gg47g/foo-44qf9) labels changed node wide: true\nI0403 16:55:00.509132   39456 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 16:56:46.007 E ns/openshift-dns pod/dns-default-7vdrp node/ip-10-0-157-20.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:43:48.939Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:43:48.940Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:43:48.940Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 16:54:30.674471       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23902 (29837)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 16:56:46.007 E ns/openshift-dns pod/dns-default-7vdrp node/ip-10-0-157-20.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (104) - No such process\n
Apr 03 16:56:49.134 E ns/openshift-sdn pod/sdn-tfdhz node/ip-10-0-157-20.us-west-2.compute.internal container=sdn container exited with code 255 (Error): gSourceConfig: community-operators,},ClusterIP:172.30.113.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[],},},}\nI0403 16:54:56.603553   51995 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:54:56.603578   51995 proxier.go:346] userspace syncProxyRules took 53.051127ms\nI0403 16:54:56.603605   51995 service.go:321] Updating existing service port "openshift-marketplace/community-operators:grpc" at 172.30.113.120:50051/TCP\nI0403 16:54:56.764087   51995 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:54:56.764112   51995 proxier.go:346] userspace syncProxyRules took 52.582233ms\nE0403 16:55:00.480754   51995 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 16:55:00.480872   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 16:55:00.582664   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:00.716380   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:00.781747   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:00.882367   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:00.981166   51995 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 16:56:49.534 E ns/openshift-sdn pod/ovs-5w7kk node/ip-10-0-157-20.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): itchd.log <==\n2020-04-03T16:54:30.715Z|00131|connmgr|INFO|br0<->unix#173: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:30.740Z|00132|bridge|INFO|bridge br0: deleted interface vetheae80d94 on port 8\n2020-04-03T16:54:30.785Z|00133|connmgr|INFO|br0<->unix#176: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:30.820Z|00134|bridge|INFO|bridge br0: deleted interface vethc2e6a9c5 on port 15\n2020-04-03T16:54:30.863Z|00135|connmgr|INFO|br0<->unix#179: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:30.903Z|00136|bridge|INFO|bridge br0: deleted interface veth966ab30d on port 10\n2020-04-03T16:54:30.952Z|00137|connmgr|INFO|br0<->unix#182: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:30.993Z|00138|bridge|INFO|bridge br0: deleted interface veth8a2d959b on port 3\n2020-04-03T16:54:31.044Z|00139|connmgr|INFO|br0<->unix#185: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:31.073Z|00140|bridge|INFO|bridge br0: deleted interface vethc69ec635 on port 12\n2020-04-03T16:54:31.125Z|00141|connmgr|INFO|br0<->unix#188: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:31.160Z|00142|bridge|INFO|bridge br0: deleted interface veth27670f6d on port 7\n2020-04-03T16:54:31.226Z|00143|connmgr|INFO|br0<->unix#191: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:31.263Z|00144|bridge|INFO|bridge br0: deleted interface veth33ac8011 on port 9\n2020-04-03T16:54:31.374Z|00145|connmgr|INFO|br0<->unix#194: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:31.413Z|00146|bridge|INFO|bridge br0: deleted interface veth9ee9d361 on port 5\n2020-04-03T16:54:53.668Z|00147|connmgr|INFO|br0<->unix#200: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:53.691Z|00148|bridge|INFO|bridge br0: deleted interface veth63001926 on port 14\n2020-04-03T16:54:53.743Z|00149|connmgr|INFO|br0<->unix#203: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:54:53.760Z|00150|bridge|INFO|bridge br0: deleted interface veth8b2bbe28 on port 11\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 16:56:49.934 E ns/openshift-multus pod/multus-wtglv node/ip-10-0-157-20.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 16:56:50.333 E ns/openshift-machine-config-operator pod/machine-config-daemon-lvs8g node/ip-10-0-157-20.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 16:57:08.259 E ns/openshift-monitoring pod/prometheus-operator-74d7f45b8c-n7zwz node/ip-10-0-130-176.us-west-2.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:57:09.060 E ns/openshift-monitoring pod/prometheus-adapter-cfd599b4-29qkh node/ip-10-0-130-176.us-west-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:57:09.873 E ns/openshift-monitoring pod/grafana-64fd4c44cf-7jw27 node/ip-10-0-130-176.us-west-2.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:57:09.873 E ns/openshift-monitoring pod/grafana-64fd4c44cf-7jw27 node/ip-10-0-130-176.us-west-2.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:57:12.260 E ns/openshift-marketplace pod/community-operators-dbbc44d59-lbrkc node/ip-10-0-130-176.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 16:57:14.661 E ns/openshift-ingress pod/router-default-7cc798ffcc-mdxj9 node/ip-10-0-130-176.us-west-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:57:27.546 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): efused\nE0403 16:38:46.335126       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:46.340457       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:47.359818       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:47.360152       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:51.863310       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 16:38:51.866684       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 16:45:38.868773       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23048 (26717)\nW0403 16:54:15.873092       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26943 (29527)\n
Apr 03 16:57:27.546 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ver-b9ndx\nI0403 16:55:01.516674       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0403 16:55:02.916893       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\nI0403 16:55:02.945330       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"a47b9825-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"30418", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-67c46fb658 to 0\nI0403 16:55:02.945552       1 replica_set.go:525] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-67c46fb658, need 0, deleting 1\nI0403 16:55:02.945583       1 controller_utils.go:598] Controller packageserver-67c46fb658 deleting pod openshift-operator-lifecycle-manager/packageserver-67c46fb658-8bcdm\nI0403 16:55:02.958932       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-67c46fb658", UID:"c9f6c9e1-75cb-11ea-aa14-0259e372416c", APIVersion:"apps/v1", ResourceVersion:"30861", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-67c46fb658-8bcdm\nE0403 16:55:03.235211       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nI0403 16:55:03.288587       1 serving.go:88] Shutting down DynamicLoader\nE0403 16:55:03.288686       1 controllermanager.go:282] leaderelection lost\n
Apr 03 16:57:28.395 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): ityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{}]'\nW0403 16:38:52.298494       1 authorization.go:47] Authorization is disabled\nW0403 16:38:52.298958       1 authentication.go:55] Authentication is disabled\nI0403 16:38:52.299026       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 16:38:52.304243       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585930170" (2020-04-03 16:09:47 +0000 UTC to 2022-04-03 16:09:48 +0000 UTC (now=2020-04-03 16:38:52.304222132 +0000 UTC))\nI0403 16:38:52.304315       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585930170" [] issuer="<self>" (2020-04-03 16:09:29 +0000 UTC to 2021-04-03 16:09:30 +0000 UTC (now=2020-04-03 16:38:52.304302509 +0000 UTC))\nI0403 16:38:52.304367       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 16:38:52.304512       1 serving.go:77] Starting DynamicLoader\nI0403 16:38:53.345571       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 16:38:53.466942       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 16:38:53.467042       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 16:54:30.638915       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23902 (29836)\nW0403 16:54:30.665537       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 24349 (29837)\nE0403 16:55:03.228972       1 server.go:259] lost master\n
Apr 03 16:57:37.399 E ns/openshift-apiserver pod/apiserver-b5n4m node/ip-10-0-137-56.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): \nI0403 16:54:49.171238       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:54:49.317278       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 16:54:49.317336       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:54:49.317461       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:54:49.317772       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:54:49.330402       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:55:03.228657       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 16:55:03.228827       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 16:55:03.228851       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 16:55:03.228859       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 16:55:03.230519       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:55:03.230949       1 serving.go:88] Shutting down DynamicLoader\nI0403 16:55:03.231025       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:55:03.231439       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:55:03.231567       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:55:03.231603       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:55:03.231651       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 16:57:38.137 E ns/openshift-controller-manager pod/controller-manager-ctkwh node/ip-10-0-137-56.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 16:57:39.734 E ns/openshift-sdn pod/sdn-controller-tsnwz node/ip-10-0-137-56.us-west-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 16:44:49.974261       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 16:57:40.932 E ns/openshift-image-registry pod/node-ca-td8x7 node/ip-10-0-137-56.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 16:57:41.334 E ns/openshift-monitoring pod/node-exporter-fqlhn node/ip-10-0-137-56.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 16:57:41.334 E ns/openshift-monitoring pod/node-exporter-fqlhn node/ip-10-0-137-56.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 16:57:41.733 E ns/openshift-multus pod/multus-7mvzh node/ip-10-0-137-56.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 16:57:42.134 E ns/openshift-sdn pod/sdn-9mts2 node/ip-10-0-137-56.us-west-2.compute.internal container=sdn container exited with code 255 (Error): int 10.128.0.81:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0403 16:55:02.965234   83665 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.128.0.81:5443 10.129.0.59:5443]\nI0403 16:55:02.965345   83665 roundrobin.go:240] Delete endpoint 10.128.0.77:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0403 16:55:03.106531   83665 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:55:03.106568   83665 proxier.go:346] userspace syncProxyRules took 51.089169ms\nE0403 16:55:03.188195   83665 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 16:55:03.188358   83665 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nE0403 16:55:03.240000   83665 proxier.go:1350] Failed to execute iptables-restore: signal: terminated ()\nI0403 16:55:03.240041   83665 proxier.go:1352] Closing local ports after iptables-restore failure\nI0403 16:55:03.307342   83665 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:03.490129   83665 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:03.584845   83665 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:55:03.584907   83665 proxier.go:346] userspace syncProxyRules took 344.814645ms\nI0403 16:55:03.588611   83665 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:55:03.688569   83665 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 16:57:42.534 E ns/openshift-machine-config-operator pod/machine-config-daemon-mznt6 node/ip-10-0-137-56.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 16:57:49.533 E ns/openshift-cluster-node-tuning-operator pod/tuned-mgqw5 node/ip-10-0-137-56.us-west-2.compute.internal container=tuned container exited with code 255 (Error): cfg\nI0403 16:54:34.305362   75313 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:34.432301   75313 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:34.474222   75313 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-7-ip-10-0-137-56.us-west-2.compute.internal) labels changed node wide: true\nI0403 16:54:39.301336   75313 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:39.302650   75313 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:39.397182   75313 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:42.316074   75313 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-5f685d9b45-pmscb) labels changed node wide: true\nI0403 16:54:44.301345   75313 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:44.308296   75313 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:44.403231   75313 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:54:45.120018   75313 openshift-tuned.go:435] Pod (openshift-monitoring/cluster-monitoring-operator-656d4bd457-mhg9p) labels changed node wide: true\nI0403 16:54:49.301339   75313 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:54:49.302649   75313 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:54:49.402999   75313 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:55:03.136197   75313 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-6f7d6f75cd-q2dmw) labels changed node wide: true\nI0403 16:55:03.290532   75313 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 16:57:50.134 E ns/openshift-dns pod/dns-default-qd9kq node/ip-10-0-137-56.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:44:14.691Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:44:14.691Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:44:14.691Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 16:57:50.134 E ns/openshift-dns pod/dns-default-qd9kq node/ip-10-0-137-56.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (96) - No such process\n
Apr 03 16:57:54.703 E kube-apiserver Kube API started failing: Get https://api.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 03 16:57:58.254 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 03 16:57:58.703 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 16:58:01.806 E ns/openshift-console pod/console-5fd96d999c-v59ft node/ip-10-0-142-162.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/04/3 16:54:45 cmd/main: cookies are secure!\n2020/04/3 16:54:46 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 16:54:46 cmd/main: using TLS\n
Apr 03 16:58:05.980 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6d66f68459-5bjrk node/ip-10-0-142-162.us-west-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 16:58:06.581 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7679f4f9b6-pfzbk node/ip-10-0-142-162.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): :132: watch of *v1.ConfigMap ended with: too old resource version: 26943 (29527)\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-137-56.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready"\nI0403 16:57:50.865196       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"51dc60b1-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-137-56.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready" to ""\nW0403 16:57:51.908160       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 22277 (32776)\nW0403 16:57:57.929026       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 20544 (32796)\nW0403 16:57:57.929112       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 20548 (32796)\nW0403 16:57:57.930258       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 21577 (32796)\nW0403 16:57:57.941772       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 20548 (32796)\nI0403 16:58:00.495997       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 16:58:00.496048       1 leaderelection.go:65] leaderelection lost\nF0403 16:58:00.507883       1 builder.go:217] server exited\n
Apr 03 16:58:31.582 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): F\nI0403 16:55:03.230639       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:55:03.230649       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:55:03.230779       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:55:03.230788       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:55:03.231320       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:55:03.231335       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:55:03.238738       1 healthz.go:184] [-]terminating failed: reason withheld\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/kube-apiserver-requestheader-reload ok\n[+]poststarthook/kube-apiserver-clientCA-reload ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]crd-informer-synced ok\n[+]crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n
Apr 03 16:58:31.582 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 16:38:48.238719       1 observer_polling.go:106] Starting file observer\nI0403 16:38:48.238918       1 certsync_controller.go:269] Starting CertSyncer\nW0403 16:45:39.106327       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23425 (26719)\n
Apr 03 16:58:31.981 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): ityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{}]'\nW0403 16:38:52.298494       1 authorization.go:47] Authorization is disabled\nW0403 16:38:52.298958       1 authentication.go:55] Authentication is disabled\nI0403 16:38:52.299026       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 16:38:52.304243       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585930170" (2020-04-03 16:09:47 +0000 UTC to 2022-04-03 16:09:48 +0000 UTC (now=2020-04-03 16:38:52.304222132 +0000 UTC))\nI0403 16:38:52.304315       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585930170" [] issuer="<self>" (2020-04-03 16:09:29 +0000 UTC to 2021-04-03 16:09:30 +0000 UTC (now=2020-04-03 16:38:52.304302509 +0000 UTC))\nI0403 16:38:52.304367       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 16:38:52.304512       1 serving.go:77] Starting DynamicLoader\nI0403 16:38:53.345571       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 16:38:53.466942       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 16:38:53.467042       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 16:54:30.638915       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23902 (29836)\nW0403 16:54:30.665537       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 24349 (29837)\nE0403 16:55:03.228972       1 server.go:259] lost master\n
Apr 03 16:58:32.382 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): efused\nE0403 16:38:46.335126       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:46.340457       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:47.359818       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:47.360152       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 16:38:51.863310       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 16:38:51.866684       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 16:45:38.868773       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23048 (26717)\nW0403 16:54:15.873092       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 26943 (29527)\n
Apr 03 16:58:32.382 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ver-b9ndx\nI0403 16:55:01.516674       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0403 16:55:02.916893       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\nI0403 16:55:02.945330       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"a47b9825-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"30418", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-67c46fb658 to 0\nI0403 16:55:02.945552       1 replica_set.go:525] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-67c46fb658, need 0, deleting 1\nI0403 16:55:02.945583       1 controller_utils.go:598] Controller packageserver-67c46fb658 deleting pod openshift-operator-lifecycle-manager/packageserver-67c46fb658-8bcdm\nI0403 16:55:02.958932       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-67c46fb658", UID:"c9f6c9e1-75cb-11ea-aa14-0259e372416c", APIVersion:"apps/v1", ResourceVersion:"30861", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-67c46fb658-8bcdm\nE0403 16:55:03.235211       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nI0403 16:55:03.288587       1 serving.go:88] Shutting down DynamicLoader\nE0403 16:55:03.288686       1 controllermanager.go:282] leaderelection lost\n
Apr 03 16:58:32.784 E ns/openshift-etcd pod/etcd-member-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 16:54:45.522240 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 16:54:45.522833 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 16:54:45.523251 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 16:54:45 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.137.56:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 16:54:46.535946 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 16:58:32.784 E ns/openshift-etcd pod/etcd-member-ip-10-0-137-56.us-west-2.compute.internal node/ip-10-0-137-56.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): c7a7a17da4d42f (writer)\n2020-04-03 16:55:03.706617 I | rafthttp: closed the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message writer)\n2020-04-03 16:55:03.706632 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (writer)\n2020-04-03 16:55:03.708700 I | rafthttp: stopped HTTP pipelining with peer 37c7a7a17da4d42f\n2020-04-03 16:55:03.708845 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 16:55:03.708862 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 16:55:03.708963 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 16:55:03.708974 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 16:55:03.708980 I | rafthttp: stopped peer 37c7a7a17da4d42f\n2020-04-03 16:55:03.708985 I | rafthttp: stopping peer ca74ec7acc77a42e...\n2020-04-03 16:55:03.709274 I | rafthttp: closed the TCP streaming connection with peer ca74ec7acc77a42e (stream MsgApp v2 writer)\n2020-04-03 16:55:03.709283 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (writer)\n2020-04-03 16:55:03.712387 I | rafthttp: closed the TCP streaming connection with peer ca74ec7acc77a42e (stream Message writer)\n2020-04-03 16:55:03.712403 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (writer)\n2020-04-03 16:55:03.712475 I | rafthttp: stopped HTTP pipelining with peer ca74ec7acc77a42e\n2020-04-03 16:55:03.712523 W | rafthttp: lost the TCP streaming connection with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:55:03.712534 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:55:03.712569 W | rafthttp: lost the TCP streaming connection with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:55:03.712579 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:55:03.712585 I | rafthttp: stopped peer ca74ec7acc77a42e\n
Apr 03 16:58:55.379 E ns/openshift-operator-lifecycle-manager pod/packageserver-78d5fcbd5f-mwxjt node/ip-10-0-137-56.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): tempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:30Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:30Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:31Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:31Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:31Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:31Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:32Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:32Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:32Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:32Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 16:59:12.735 E ns/openshift-operator-lifecycle-manager pod/packageserver-78d5fcbd5f-6dddp node/ip-10-0-146-226.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): ] http: TLS handshake error from 10.129.0.1:49774: remote error: tls: bad certificate\nI0403 16:58:41.442504       1 wrap.go:47] GET /: (156.595µs) 200 [Go-http-client/2.0 10.129.0.1:38134]\nI0403 16:58:41.442619       1 wrap.go:47] GET /: (97.866µs) 200 [Go-http-client/2.0 10.129.0.1:38134]\nI0403 16:58:41.442507       1 wrap.go:47] GET /: (156.29µs) 200 [Go-http-client/2.0 10.130.0.1:53886]\nI0403 16:58:41.442588       1 wrap.go:47] GET /: (174.86µs) 200 [Go-http-client/2.0 10.128.0.1:35030]\nI0403 16:58:41.442906       1 wrap.go:47] GET /: (101.632µs) 200 [Go-http-client/2.0 10.130.0.1:53886]\nI0403 16:58:41.443126       1 wrap.go:47] GET /: (175.522µs) 200 [Go-http-client/2.0 10.128.0.1:35030]\nI0403 16:58:41.550562       1 wrap.go:47] GET /: (2.209891ms) 200 [Go-http-client/2.0 10.129.0.1:38134]\nI0403 16:58:41.550795       1 wrap.go:47] GET /: (87.027µs) 200 [Go-http-client/2.0 10.129.0.1:38134]\nI0403 16:58:41.550977       1 wrap.go:47] GET /: (69.895µs) 200 [Go-http-client/2.0 10.129.0.1:38134]\nI0403 16:58:41.551286       1 wrap.go:47] GET /: (63µs) 200 [Go-http-client/2.0 10.128.0.1:35030]\nI0403 16:58:41.555983       1 wrap.go:47] GET /: (148.123µs) 200 [Go-http-client/2.0 10.130.0.1:53886]\nI0403 16:58:41.556531       1 wrap.go:47] GET /: (767.898µs) 200 [Go-http-client/2.0 10.130.0.1:53886]\nI0403 16:58:41.608593       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T16:58:53Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:53Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 16:59:14.604 E ns/openshift-marketplace pod/community-operators-dbbc44d59-vx89n node/ip-10-0-157-20.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 16:59:23.442 E ns/openshift-operator-lifecycle-manager pod/packageserver-548c8694db-6zwv6 node/ip-10-0-137-56.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): nnection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:58:54Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:14Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:15Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T16:59:15Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 03 16:59:24.727 E ns/openshift-monitoring pod/node-exporter-kkprn node/ip-10-0-130-176.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 16:59:24.727 E ns/openshift-monitoring pod/node-exporter-kkprn node/ip-10-0-130-176.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 16:59:24.737 E ns/openshift-image-registry pod/node-ca-6rqbt node/ip-10-0-130-176.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 16:59:24.955 E ns/openshift-dns pod/dns-default-242mw node/ip-10-0-130-176.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:43:03.963Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:43:03.963Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:43:03.963Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 16:54:30.637540       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23902 (29836)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 16:59:24.955 E ns/openshift-dns pod/dns-default-242mw node/ip-10-0-130-176.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (125) - No such process\n
Apr 03 16:59:28.638 E ns/openshift-sdn pod/sdn-9fw6k node/ip-10-0-130-176.us-west-2.compute.internal container=sdn container exited with code 255 (Error): -internal-default:https to [10.128.2.38:443 10.131.0.22:443]\nI0403 16:57:39.375163   47703 roundrobin.go:240] Delete endpoint 10.128.2.38:443 for service "openshift-ingress/router-internal-default:https"\nI0403 16:57:39.530917   47703 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:57:39.530943   47703 proxier.go:346] userspace syncProxyRules took 50.012504ms\nI0403 16:57:39.687171   47703 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:57:39.687197   47703 proxier.go:346] userspace syncProxyRules took 51.224996ms\nI0403 16:57:40.683542   47703 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.137.56:6443 10.0.142.162:6443 10.0.146.226:6443]\nI0403 16:57:40.683582   47703 roundrobin.go:240] Delete endpoint 10.0.137.56:6443 for service "default/kubernetes:https"\nI0403 16:57:40.838190   47703 proxier.go:367] userspace proxy: processing 0 service events\nI0403 16:57:40.838213   47703 proxier.go:346] userspace syncProxyRules took 51.244352ms\ninterrupt: Gracefully shutting down ...\nE0403 16:57:41.536438   47703 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 16:57:41.536536   47703 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:57:41.645366   47703 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:57:41.736856   47703 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:57:41.837409   47703 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 16:57:41.937332   47703 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 16:59:30.640 E ns/openshift-sdn pod/ovs-jvwwj node/ip-10-0-130-176.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): 0|connmgr|INFO|br0<->unix#261: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:57:06.031Z|00171|connmgr|INFO|br0<->unix#264: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:06.066Z|00172|bridge|INFO|bridge br0: deleted interface veth01bc5506 on port 20\n2020-04-03T16:57:06.124Z|00173|connmgr|INFO|br0<->unix#267: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:06.172Z|00174|bridge|INFO|bridge br0: deleted interface veth3282ed82 on port 8\n2020-04-03T16:57:06.220Z|00175|connmgr|INFO|br0<->unix#270: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:57:06.277Z|00176|connmgr|INFO|br0<->unix#273: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:06.312Z|00177|bridge|INFO|bridge br0: deleted interface veth8f454b0c on port 19\n2020-04-03T16:57:06.368Z|00178|connmgr|INFO|br0<->unix#276: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:06.396Z|00179|bridge|INFO|bridge br0: deleted interface veth5ba558e2 on port 5\n2020-04-03T16:57:06.454Z|00180|connmgr|INFO|br0<->unix#279: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:06.499Z|00181|bridge|INFO|bridge br0: deleted interface veth75a266a7 on port 15\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T16:57:06.058Z|00019|jsonrpc|WARN|Dropped 4 log messages in last 740 seconds (most recently, 740 seconds ago) due to excessive rate\n2020-04-03T16:57:06.058Z|00020|jsonrpc|WARN|unix#207: receive error: Connection reset by peer\n2020-04-03T16:57:06.058Z|00021|reconnect|WARN|unix#207: connection dropped (Connection reset by peer)\n2020-04-03T16:57:06.383Z|00022|jsonrpc|WARN|unix#221: receive error: Connection reset by peer\n2020-04-03T16:57:06.383Z|00023|reconnect|WARN|unix#221: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T16:57:34.984Z|00182|connmgr|INFO|br0<->unix#285: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:57:35.004Z|00183|bridge|INFO|bridge br0: deleted interface veth40d1cc48 on port 11\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 16:59:31.038 E ns/openshift-multus pod/multus-79gjz node/ip-10-0-130-176.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 16:59:31.438 E ns/openshift-machine-config-operator pod/machine-config-daemon-9r7zp node/ip-10-0-130-176.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 16:59:31.839 E ns/openshift-cluster-node-tuning-operator pod/tuned-nc55p node/ip-10-0-130-176.us-west-2.compute.internal container=tuned container exited with code 255 (Error): a\n2020-04-03 16:56:24,877 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-03 16:56:25,011 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 16:56:25,013 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-03 16:56:25,021 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0403 16:57:04.457825   64391 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0403 16:57:04.593351   64391 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:57:04.610545   64391 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:57:04.852377   64391 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:57:10.028144   64391 openshift-tuned.go:435] Pod (openshift-monitoring/grafana-64fd4c44cf-7jw27) labels changed node wide: true\nI0403 16:57:14.593049   64391 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:57:14.594656   64391 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:57:14.705530   64391 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:57:14.825495   64391 openshift-tuned.go:435] Pod (openshift-ingress/router-default-7cc798ffcc-mdxj9) labels changed node wide: true\nI0403 16:57:19.592914   64391 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:57:19.595052   64391 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:57:19.710554   64391 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:57:41.422424   64391 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-gg47g/foo-zzvnq) labels changed node wide: true\nI0403 16:57:41.585759   64391 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 16:59:55.158 E ns/openshift-ingress pod/router-default-7cc798ffcc-wlqhd node/ip-10-0-135-189.us-west-2.compute.internal container=router container exited with code 2 (Error): 16:58:19.608520       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:24.606494       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:29.628088       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:34.612529       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:39.609568       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:44.610812       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:52.715766       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:58:57.714243       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:59:12.980097       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:59:17.965481       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:59:34.044827       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:59:39.045228       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 16:59:45.764377       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 16:59:55.964 E ns/openshift-marketplace pod/redhat-operators-577cb8b8b9-kwntj node/ip-10-0-135-189.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 16:59:56.557 E ns/openshift-monitoring pod/prometheus-adapter-cfd599b4-nrw7t node/ip-10-0-135-189.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 17:00:06.853 E ns/openshift-operator-lifecycle-manager pod/olm-operators-6grmb node/ip-10-0-135-189.us-west-2.compute.internal container=configmap-registry-server container exited with code 2 (Error): 
Apr 03 17:00:21.402 E ns/openshift-console pod/downloads-66d8c75c4b-2974n node/ip-10-0-135-189.us-west-2.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 17:00:34.515 E ns/openshift-image-registry pod/node-ca-x622l node/ip-10-0-142-162.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 17:00:37.295 E ns/openshift-apiserver pod/apiserver-z8rjc node/ip-10-0-142-162.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): rom Notify: []\nI0403 16:58:24.370065       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:58:24.370103       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:58:24.370158       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 16:58:24.381053       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 16:58:27.082968       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 16:58:37.127912       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 16:58:43.554905       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 16:58:43.555172       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 16:58:43.555314       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 16:58:43.555344       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 16:58:43.555383       1 serving.go:88] Shutting down DynamicLoader\nI0403 16:58:43.555607       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0403 16:58:43.555734       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 16:58:43.556237       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:58:43.558131       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 16:58:43.558316       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 17:00:37.654 E ns/openshift-sdn pod/sdn-controller-gzmtm node/ip-10-0-142-162.us-west-2.compute.internal container=sdn-controller container exited with code 255 (Error): t-sdn", Name:"openshift-network-controller", UID:"64721853-75c5-11ea-ae28-02f230eac062", APIVersion:"v1", ResourceVersion:"25893", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-142-162 became leader\nI0403 16:43:33.728963       1 master.go:57] Initializing SDN master of type "redhat/openshift-ovs-networkpolicy"\nI0403 16:43:33.733027       1 network_controller.go:49] Started OpenShift Network Controller\nW0403 16:54:30.674028       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 22880 (29837)\nE0403 16:54:53.848964       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 16:55:41.852493       1 memcache.go:141] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:55:44.924514       1 memcache.go:141] couldn't get resource list for oauth.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:55:47.998245       1 memcache.go:141] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:55:51.069083       1 memcache.go:141] couldn't get resource list for template.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:56:21.788721       1 memcache.go:141] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:56:24.860865       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:56:27.932655       1 memcache.go:141] couldn't get resource list for user.openshift.io/v1: the server is currently unable to handle the request\nE0403 16:58:28.496462       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
Apr 03 17:00:39.350 E ns/openshift-controller-manager pod/controller-manager-rqhhj node/ip-10-0-142-162.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 17:00:39.750 E ns/openshift-multus pod/multus-q8xb2 node/ip-10-0-142-162.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 17:00:40.549 E ns/openshift-machine-config-operator pod/machine-config-daemon-rjpr6 node/ip-10-0-142-162.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 17:00:41.751 E ns/openshift-cluster-node-tuning-operator pod/tuned-fpdck node/ip-10-0-142-162.us-west-2.compute.internal container=tuned container exited with code 255 (Error): rue\nI0403 16:58:11.767695   90274 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:58:11.768827   90274 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:58:11.862638   90274 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:58:12.949124   90274 openshift-tuned.go:435] Pod (openshift-dns-operator/dns-operator-67f6cc4585-zf2qj) labels changed node wide: true\nI0403 16:58:16.767674   90274 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:58:16.768910   90274 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:58:16.878558   90274 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:58:21.349059   90274 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-ccf89ff64-8h24j) labels changed node wide: true\nI0403 16:58:21.767725   90274 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:58:21.769278   90274 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:58:21.877084   90274 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:58:34.368237   90274 openshift-tuned.go:435] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-142-162.us-west-2.compute.internal) labels changed node wide: true\nI0403 16:58:36.767701   90274 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:58:36.768814   90274 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:58:36.861632   90274 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 16:58:42.684551   90274 openshift-tuned.go:435] Pod (openshift-console/downloads-66d8c75c4b-s978h) labels changed node wide: true\n
Apr 03 17:00:48.351 E ns/openshift-dns pod/dns-default-ct2kc node/ip-10-0-142-162.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:44:59.353Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:44:59.353Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:44:59.353Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 16:54:30.639115       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23902 (29836)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 17:00:48.351 E ns/openshift-dns pod/dns-default-ct2kc node/ip-10-0-142-162.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (120) - No such process\n
Apr 03 17:00:48.952 E ns/openshift-machine-config-operator pod/machine-config-server-lsq8l node/ip-10-0-142-162.us-west-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 17:00:55.151 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): errole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found]\nE0403 16:37:03.902032       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager": RBAC: [clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]\nW0403 16:42:58.925973       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20609 (25326)\nW0403 16:52:48.929911       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25589 (29098)\n
Apr 03 17:00:55.151 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): f>" (2020-04-03 15:51:07 +0000 UTC to 2021-04-03 15:51:07 +0000 UTC (now=2020-04-03 16:35:32.610124632 +0000 UTC))\nI0403 16:35:32.610138       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 15:51:07 +0000 UTC to 2021-04-03 15:51:07 +0000 UTC (now=2020-04-03 16:35:32.610133664 +0000 UTC))\nI0403 16:35:32.613472       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 16:35:32.614384       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585930170" (2020-04-03 16:09:47 +0000 UTC to 2022-04-03 16:09:48 +0000 UTC (now=2020-04-03 16:35:32.614372721 +0000 UTC))\nI0403 16:35:32.614403       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585930170" [] issuer="<self>" (2020-04-03 16:09:29 +0000 UTC to 2021-04-03 16:09:30 +0000 UTC (now=2020-04-03 16:35:32.61439675 +0000 UTC))\nI0403 16:35:32.614419       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 16:35:32.614510       1 serving.go:77] Starting DynamicLoader\nI0403 16:35:32.614860       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 16:36:59.353087       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 16:58:43.548109       1 controllermanager.go:282] leaderelection lost\nI0403 16:58:43.548143       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 17:00:55.552 E ns/openshift-etcd pod/etcd-member-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 16:58:04.020025 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 16:58:04.021123 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 16:58:04.022192 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 16:58:04 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.142.162:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 16:58:05.042709 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 17:00:55.552 E ns/openshift-etcd pod/etcd-member-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:58:44.206110 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:58:44.207198 W | rafthttp: lost the TCP streaming connection with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:58:44.207213 E | rafthttp: failed to read ca74ec7acc77a42e on stream Message (context canceled)\n2020-04-03 16:58:44.207218 I | rafthttp: peer ca74ec7acc77a42e became inactive (message send to peer failed)\n2020-04-03 16:58:44.207224 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:58:44.207230 I | rafthttp: stopped peer ca74ec7acc77a42e\n2020-04-03 16:58:44.208413 I | rafthttp: stopping peer c00ffbf0b05a7c...\n2020-04-03 16:58:44.208734 I | rafthttp: closed the TCP streaming connection with peer c00ffbf0b05a7c (stream MsgApp v2 writer)\n2020-04-03 16:58:44.208751 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (writer)\n2020-04-03 16:58:44.208938 I | rafthttp: closed the TCP streaming connection with peer c00ffbf0b05a7c (stream Message writer)\n2020-04-03 16:58:44.208950 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (writer)\n2020-04-03 16:58:44.209020 I | rafthttp: stopped HTTP pipelining with peer c00ffbf0b05a7c\n2020-04-03 16:58:44.209094 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 16:58:44.209107 E | rafthttp: failed to read c00ffbf0b05a7c on stream MsgApp v2 (context canceled)\n2020-04-03 16:58:44.209111 I | rafthttp: peer c00ffbf0b05a7c became inactive (message send to peer failed)\n2020-04-03 16:58:44.209117 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 16:58:44.209165 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 16:58:44.209179 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 16:58:44.209188 I | rafthttp: stopped peer c00ffbf0b05a7c\n
Apr 03 17:00:55.959 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): F\nI0403 16:58:43.560497       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:58:43.560612       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:58:43.560776       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:58:43.562870       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:58:43.563071       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 16:58:43.563117       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nI0403 16:58:43.618801       1 healthz.go:184] [-]terminating failed: reason withheld\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/kube-apiserver-clientCA-reload ok\n[+]poststarthook/kube-apiserver-requestheader-reload ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]crd-informer-synced ok\n[+]crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n
Apr 03 17:00:55.959 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 16:37:00.430446       1 observer_polling.go:106] Starting file observer\nI0403 16:37:00.431129       1 certsync_controller.go:269] Starting CertSyncer\nW0403 16:46:14.902144       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23425 (26976)\nW0403 16:54:37.906130       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27175 (29588)\n
Apr 03 17:00:56.351 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): eaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 16:38:27.257748       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nE0403 16:54:30.619275       1 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"foo-zvrcp.16025d896b3baaa7", GenerateName:"", Namespace:"e2e-tests-sig-apps-job-upgrade-gg47g", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-sig-apps-job-upgrade-gg47g", Name:"foo-zvrcp", UID:"c523af8d-75cb-11ea-aa14-0259e372416c", APIVersion:"v1", ResourceVersion:"29754", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned e2e-tests-sig-apps-job-upgrade-gg47g/foo-zvrcp to ip-10-0-135-189.us-west-2.compute.internal", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf99f86fe27ff4a7, ext:1043734666533, loc:(*time.Location)(0xb76f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf99f86fe27ff4a7, ext:1043734666533, loc:(*time.Location)(0xb76f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)\nW0403 16:57:51.804740       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 20548 (32776)\nE0403 16:58:43.549501       1 server.go:259] lost master\n
Apr 03 17:01:10.485 E ns/openshift-machine-config-operator pod/machine-config-controller-7c5b9d9766-5gr8r node/ip-10-0-146-226.us-west-2.compute.internal container=machine-config-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 17:01:11.085 E ns/openshift-machine-api pod/machine-api-controllers-5f685d9b45-llllj node/ip-10-0-146-226.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 17:01:11.085 E ns/openshift-machine-api pod/machine-api-controllers-5f685d9b45-llllj node/ip-10-0-146-226.us-west-2.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 17:01:14.486 E ns/openshift-service-ca pod/configmap-cabundle-injector-7b55575446-l5ksc node/ip-10-0-146-226.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 17:01:17.086 E ns/openshift-authentication-operator pod/authentication-operator-6c8c9dd4d7-9ftjp node/ip-10-0-146-226.us-west-2.compute.internal container=operator container exited with code 255 (Error): 16:58:31Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T16:27:06Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T16:13:27Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 16:59:14.827281       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"593ba09d-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)"\nI0403 16:59:21.244516       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T16:16:05Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T16:58:31Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T16:27:06Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T16:13:27Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 16:59:21.253189       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"593ba09d-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" to ""\nI0403 17:00:58.928175       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 17:00:58.928229       1 leaderelection.go:65] leaderelection lost\n
Apr 03 17:01:21.487 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-869c4cf764-r5fkf node/ip-10-0-146-226.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): "51cc81a3-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.authorization.openshift.io is not ready: 503\nAvailable: v1.build.openshift.io is not ready: 503\nAvailable: v1.image.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.route.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503" to "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503"\nI0403 16:59:17.916533       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"51cc81a3-75c5-11ea-ae28-02f230eac062", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nW0403 16:59:42.242839       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28454 (34147)\nW0403 16:59:56.250960       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28966 (34361)\nW0403 17:00:52.033117       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 19696 (35540)\nW0403 17:00:58.209588       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 20239 (35597)\nI0403 17:00:58.827555       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 17:00:58.827688       1 builder.go:217] server exited\n
Apr 03 17:01:23.286 E ns/openshift-machine-config-operator pod/machine-config-operator-7b56cfff97-fwwvn node/ip-10-0-146-226.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 17:01:25.889 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-55958hltt node/ip-10-0-146-226.us-west-2.compute.internal container=operator container exited with code 2 (Error): ServiceCatalogControllerManager ended with: too old resource version: 32796 (33943)\nW0403 16:58:44.052873       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 29836 (32490)\nW0403 16:58:44.067736       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 22944 (32490)\nI0403 16:58:44.950044       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0403 16:58:44.950225       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 16:58:44.996403       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0403 16:58:45.053004       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0403 16:58:45.067846       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0403 16:58:53.714831       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 16:59:00.919336       1 wrap.go:47] GET /metrics: (6.421759ms) 200 [Prometheus/2.7.2 10.131.0.37:41064]\nI0403 16:59:00.919745       1 wrap.go:47] GET /metrics: (6.774327ms) 200 [Prometheus/2.7.2 10.128.2.41:51408]\nI0403 16:59:30.919045       1 wrap.go:47] GET /metrics: (5.386777ms) 200 [Prometheus/2.7.2 10.131.0.37:41064]\nI0403 16:59:30.919097       1 wrap.go:47] GET /metrics: (5.377581ms) 200 [Prometheus/2.7.2 10.128.2.41:51408]\nI0403 17:00:00.917996       1 wrap.go:47] GET /metrics: (5.06696ms) 200 [Prometheus/2.7.2 10.128.2.41:51408]\nI0403 17:00:30.919847       1 wrap.go:47] GET /metrics: (6.977946ms) 200 [Prometheus/2.7.2 10.128.2.41:51408]\nI0403 17:00:30.922032       1 wrap.go:47] GET /metrics: (1.454154ms) 200 [Prometheus/2.7.2 10.129.2.43:35520]\n
Apr 03 17:01:26.886 E ns/openshift-cluster-machine-approver pod/machine-approver-596c7945c5-9ml8l node/ip-10-0-146-226.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 6:37:46.544099       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0403 16:37:46.544165       1 main.go:183] Starting Machine Approver\nI0403 16:37:46.644386       1 main.go:107] CSR csr-xwvms added\nI0403 16:37:46.644413       1 main.go:110] CSR csr-xwvms is already approved\nI0403 16:37:46.644451       1 main.go:107] CSR csr-g58l8 added\nI0403 16:37:46.644458       1 main.go:110] CSR csr-g58l8 is already approved\nI0403 16:37:46.644468       1 main.go:107] CSR csr-mpx9c added\nI0403 16:37:46.644473       1 main.go:110] CSR csr-mpx9c is already approved\nI0403 16:37:46.644480       1 main.go:107] CSR csr-prt4x added\nI0403 16:37:46.644486       1 main.go:110] CSR csr-prt4x is already approved\nI0403 16:37:46.644493       1 main.go:107] CSR csr-wvd27 added\nI0403 16:37:46.644499       1 main.go:110] CSR csr-wvd27 is already approved\nI0403 16:37:46.644507       1 main.go:107] CSR csr-vb4wh added\nI0403 16:37:46.644513       1 main.go:110] CSR csr-vb4wh is already approved\nI0403 16:37:46.644523       1 main.go:107] CSR csr-w7g25 added\nI0403 16:37:46.644530       1 main.go:110] CSR csr-w7g25 is already approved\nI0403 16:37:46.644542       1 main.go:107] CSR csr-xpkql added\nI0403 16:37:46.644549       1 main.go:110] CSR csr-xpkql is already approved\nI0403 16:37:46.644557       1 main.go:107] CSR csr-52kc7 added\nI0403 16:37:46.644563       1 main.go:110] CSR csr-52kc7 is already approved\nI0403 16:37:46.644571       1 main.go:107] CSR csr-gkp56 added\nI0403 16:37:46.644577       1 main.go:110] CSR csr-gkp56 is already approved\nI0403 16:37:46.644586       1 main.go:107] CSR csr-l64dg added\nI0403 16:37:46.644592       1 main.go:110] CSR csr-l64dg is already approved\nI0403 16:37:46.644601       1 main.go:107] CSR csr-slz8f added\nI0403 16:37:46.644607       1 main.go:110] CSR csr-slz8f is already approved\nW0403 17:00:58.163768       1 reflector.go:341] github.com/openshift/cluster-machine-approver/main.go:185: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 19708 (35597)\n
Apr 03 17:01:27.490 E ns/openshift-machine-api pod/machine-api-operator-6fc56594f6-4btkd node/ip-10-0-146-226.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 17:01:32.306 E ns/openshift-marketplace pod/certified-operators-596f984747-9s826 node/ip-10-0-130-176.us-west-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 17:01:47.087 E ns/openshift-operator-lifecycle-manager pod/packageserver-96fc784f6-5gw8d node/ip-10-0-146-226.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): tempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:33Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:33Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:33Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:33Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:34Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:34Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T17:01:36Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 17:01:58.244 E ns/openshift-etcd pod/etcd-member-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 16:58:04.020025 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 16:58:04.021123 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 16:58:04.022192 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 16:58:04 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.142.162:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 16:58:05.042709 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 17:01:58.244 E ns/openshift-etcd pod/etcd-member-ip-10-0-142-162.us-west-2.compute.internal node/ip-10-0-142-162.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:58:44.206110 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream MsgApp v2 reader)\n2020-04-03 16:58:44.207198 W | rafthttp: lost the TCP streaming connection with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:58:44.207213 E | rafthttp: failed to read ca74ec7acc77a42e on stream Message (context canceled)\n2020-04-03 16:58:44.207218 I | rafthttp: peer ca74ec7acc77a42e became inactive (message send to peer failed)\n2020-04-03 16:58:44.207224 I | rafthttp: stopped streaming with peer ca74ec7acc77a42e (stream Message reader)\n2020-04-03 16:58:44.207230 I | rafthttp: stopped peer ca74ec7acc77a42e\n2020-04-03 16:58:44.208413 I | rafthttp: stopping peer c00ffbf0b05a7c...\n2020-04-03 16:58:44.208734 I | rafthttp: closed the TCP streaming connection with peer c00ffbf0b05a7c (stream MsgApp v2 writer)\n2020-04-03 16:58:44.208751 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (writer)\n2020-04-03 16:58:44.208938 I | rafthttp: closed the TCP streaming connection with peer c00ffbf0b05a7c (stream Message writer)\n2020-04-03 16:58:44.208950 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (writer)\n2020-04-03 16:58:44.209020 I | rafthttp: stopped HTTP pipelining with peer c00ffbf0b05a7c\n2020-04-03 16:58:44.209094 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 16:58:44.209107 E | rafthttp: failed to read c00ffbf0b05a7c on stream MsgApp v2 (context canceled)\n2020-04-03 16:58:44.209111 I | rafthttp: peer c00ffbf0b05a7c became inactive (message send to peer failed)\n2020-04-03 16:58:44.209117 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 16:58:44.209165 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 16:58:44.209179 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 16:58:44.209188 I | rafthttp: stopped peer c00ffbf0b05a7c\n
Apr 03 17:02:08.267 E ns/openshift-image-registry pod/node-ca-cbnk9 node/ip-10-0-135-189.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 17:02:08.289 E ns/openshift-monitoring pod/node-exporter-8qzjd node/ip-10-0-135-189.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 17:02:08.289 E ns/openshift-monitoring pod/node-exporter-8qzjd node/ip-10-0-135-189.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 17:02:08.481 E ns/openshift-sdn pod/sdn-5db2n node/ip-10-0-135-189.us-west-2.compute.internal container=sdn container exited with code 255 (Error): nshift-ingress/router-default:http to [10.128.2.38:80 10.129.2.37:80]\nI0403 17:00:22.117395   85824 roundrobin.go:240] Delete endpoint 10.129.2.37:80 for service "openshift-ingress/router-default:http"\nI0403 17:00:22.117404   85824 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:https to [10.128.2.38:443 10.129.2.37:443]\nI0403 17:00:22.117416   85824 roundrobin.go:240] Delete endpoint 10.129.2.37:443 for service "openshift-ingress/router-default:https"\nI0403 17:00:22.271041   85824 proxier.go:367] userspace proxy: processing 0 service events\nI0403 17:00:22.271067   85824 proxier.go:346] userspace syncProxyRules took 51.406578ms\nI0403 17:00:22.425389   85824 proxier.go:367] userspace proxy: processing 0 service events\nI0403 17:00:22.425411   85824 proxier.go:346] userspace syncProxyRules took 53.843109ms\ninterrupt: Gracefully shutting down ...\nE0403 17:00:22.664634   85824 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 17:00:22.664745   85824 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 17:00:22.769110   85824 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 17:00:22.878814   85824 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 17:00:22.944133   85824 healthcheck.go:87] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 17:00:22.967178   85824 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 17:00:23.066175   85824 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 17:02:13.087 E ns/openshift-dns pod/dns-default-hs7mm node/ip-10-0-135-189.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:45:27.223Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:45:27.223Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:45:27.223Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 16:57:57.925502       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 20544 (32796)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 17:02:13.087 E ns/openshift-dns pod/dns-default-hs7mm node/ip-10-0-135-189.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (126) - No such process\n
Apr 03 17:02:13.887 E ns/openshift-sdn pod/ovs-cjmk8 node/ip-10-0-135-189.us-west-2.compute.internal container=openvswitch container exited with code 255 (Error): 00176|bridge|INFO|bridge br0: deleted interface vethc3f2ec22 on port 20\n2020-04-03T16:59:52.451Z|00177|connmgr|INFO|br0<->unix#316: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T16:59:52.503Z|00178|connmgr|INFO|br0<->unix#319: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T16:59:52.540Z|00179|bridge|INFO|bridge br0: deleted interface veth8b7cc22b on port 19\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T16:59:52.389Z|00026|jsonrpc|WARN|Dropped 11 log messages in last 870 seconds (most recently, 870 seconds ago) due to excessive rate\n2020-04-03T16:59:52.389Z|00027|jsonrpc|WARN|unix#225: receive error: Connection reset by peer\n2020-04-03T16:59:52.389Z|00028|reconnect|WARN|unix#225: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T17:00:06.192Z|00180|connmgr|INFO|br0<->unix#325: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T17:00:06.215Z|00181|bridge|INFO|bridge br0: deleted interface veth1381b783 on port 5\n2020-04-03T17:00:06.533Z|00182|connmgr|INFO|br0<->unix#328: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T17:00:06.555Z|00183|bridge|INFO|bridge br0: deleted interface veth746da691 on port 7\n2020-04-03T17:00:17.499Z|00184|bridge|INFO|bridge br0: added interface veth225a6ffa on port 22\n2020-04-03T17:00:17.531Z|00185|connmgr|INFO|br0<->unix#331: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T17:00:17.569Z|00186|connmgr|INFO|br0<->unix#334: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T17:00:20.860Z|00187|connmgr|INFO|br0<->unix#337: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T17:00:20.880Z|00188|bridge|INFO|bridge br0: deleted interface veth29cba6bd on port 3\n2020-04-03T17:00:20.915Z|00189|connmgr|INFO|br0<->unix#340: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T17:00:20.953Z|00190|connmgr|INFO|br0<->unix#343: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T17:00:20.974Z|00191|bridge|INFO|bridge br0: deleted interface vethecae50b6 on port 13\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 17:02:14.686 E ns/openshift-operator-lifecycle-manager pod/olm-operators-gmc4z node/ip-10-0-135-189.us-west-2.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 03 17:02:15.087 E ns/openshift-cluster-node-tuning-operator pod/tuned-h8jn4 node/ip-10-0-135-189.us-west-2.compute.internal container=tuned container exited with code 255 (Error): 16:59:52.784228  110810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:59:54.723268  110810 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-58447b6557-shx58) labels changed node wide: true\nI0403 16:59:57.603137  110810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 16:59:57.605460  110810 openshift-tuned.go:326] Getting recommended profile...\nI0403 16:59:57.715267  110810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 16:59:58.332934  110810 openshift-tuned.go:435] Pod (openshift-monitoring/telemeter-client-86f48d6765-df8xr) labels changed node wide: true\nI0403 17:00:02.603133  110810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:00:02.604783  110810 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:00:02.717769  110810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 17:00:11.330432  110810 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/olm-operators-6grmb) labels changed node wide: true\nI0403 17:00:12.603158  110810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:00:12.604943  110810 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:00:12.713148  110810 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 17:00:22.367159  110810 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-gg47g/foo-zvrcp) labels changed node wide: true\nI0403 17:00:22.603129  110810 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:00:22.606037  110810 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:00:22.633566  110810 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 17:02:15.687 E ns/openshift-multus pod/multus-hqvk7 node/ip-10-0-135-189.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 17:02:16.486 E ns/openshift-machine-config-operator pod/machine-config-daemon-2cb97 node/ip-10-0-135-189.us-west-2.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 17:02:20.891 E ns/openshift-operator-lifecycle-manager pod/packageserver-96fc784f6-lkgqn node/ip-10-0-137-56.us-west-2.compute.internal container=packageserver container exited with code 137 (Error): 891ms) 200 [Go-http-client/2.0 10.128.0.1:35858]\nI0403 17:01:49.925219       1 wrap.go:47] GET /: (2.185792ms) 200 [Go-http-client/2.0 10.128.0.1:35858]\nI0403 17:01:49.925920       1 wrap.go:47] GET /: (5.68546ms) 200 [Go-http-client/2.0 10.130.0.1:46950]\nI0403 17:01:49.926132       1 wrap.go:47] GET /: (5.310173ms) 200 [Go-http-client/2.0 10.129.0.1:46460]\nI0403 17:01:49.926222       1 wrap.go:47] GET /: (5.78832ms) 200 [Go-http-client/2.0 10.129.0.1:46460]\nI0403 17:01:49.926265       1 wrap.go:47] GET /: (3.713151ms) 200 [Go-http-client/2.0 10.129.0.1:46460]\nI0403 17:01:49.956549       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T17:01:56Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T17:01:56Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\nI0403 17:01:57.113326       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0403 17:01:57.151108       1 reflector.go:337] github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:130: Watch close - *v1alpha1.CatalogSource total 62 items received\ntime="2020-04-03T17:01:58Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T17:01:58Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T17:02:14Z" level=info msg="update detected, attempting to reset grpc connection" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\ntime="2020-04-03T17:02:14Z" level=info msg="grpc connection reset" action="sync catalogsource" name=olm-operators namespace=openshift-operator-lifecycle-manager\n
Apr 03 17:02:55.085 E ns/openshift-marketplace pod/community-operators-5c66cfd8b7-mthdr node/ip-10-0-157-20.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 17:02:55.103 E ns/openshift-marketplace pod/certified-operators-554d64bbb9-zvz6m node/ip-10-0-157-20.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 03 17:02:55.377 E ns/openshift-marketplace pod/redhat-operators-577cb8b8b9-xkr6p node/ip-10-0-130-176.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 03 17:03:13.703 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 17:03:46.100 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): C (now=2020-04-03 16:35:32.773549489 +0000 UTC))\nI0403 16:35:32.773597       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585930170" [] issuer="<self>" (2020-04-03 16:09:29 +0000 UTC to 2021-04-03 16:09:30 +0000 UTC (now=2020-04-03 16:35:32.773586209 +0000 UTC))\nI0403 16:35:32.773613       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 16:35:32.773713       1 serving.go:77] Starting DynamicLoader\nI0403 16:35:33.676228       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 16:35:33.776378       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 16:35:33.776418       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 16:54:30.667319       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19696 (29837)\nW0403 16:54:30.672645       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23902 (29837)\nW0403 16:57:57.952353       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19696 (32796)\nI0403 16:58:58.454871       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 17:00:52.031931       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19712 (35540)\nW0403 17:00:58.162912       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 29837 (35597)\nE0403 17:01:57.083722       1 server.go:259] lost master\nI0403 17:01:57.084749       1 serving.go:88] Shutting down DynamicLoader\nI0403 17:01:57.084998       1 secure_serving.go:180] Stopped listening on [::]:10251\n
Apr 03 17:03:56.167 E ns/openshift-apiserver pod/apiserver-pfjzj node/ip-10-0-146-226.us-west-2.compute.internal container=openshift-apiserver container exited with code 255 (Error): alancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 17:01:56.453434       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.453467       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 17:01:56.455401       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 17:01:56.462756       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 17:01:56.985239       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 17:01:56.985453       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 17:01:56.985536       1 serving.go:88] Shutting down DynamicLoader\nI0403 17:01:56.985541       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 17:01:56.985546       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 17:01:56.986391       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.986563       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.986757       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.986858       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.987581       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.987608       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 17:01:56.987631       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 17:03:56.949 E ns/openshift-multus pod/multus-srfvq node/ip-10-0-146-226.us-west-2.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 17:03:57.949 E ns/openshift-controller-manager pod/controller-manager-dgncs node/ip-10-0-146-226.us-west-2.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 17:03:58.550 E ns/openshift-dns pod/dns-default-8lv5g node/ip-10-0-146-226.us-west-2.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T16:45:48.458Z [INFO] CoreDNS-1.3.1\n2020-04-03T16:45:48.459Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T16:45:48.459Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 16:54:30.678333       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 23902 (29837)\nW0403 17:00:52.039045       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 19696 (35540)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 17:03:58.550 E ns/openshift-dns pod/dns-default-8lv5g node/ip-10-0-146-226.us-west-2.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (134) - No such process\n
Apr 03 17:03:59.349 E ns/openshift-machine-config-operator pod/machine-config-server-xlw8r node/ip-10-0-146-226.us-west-2.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 17:04:00.149 E ns/openshift-sdn pod/sdn-controller-jqcth node/ip-10-0-146-226.us-west-2.compute.internal container=sdn-controller container exited with code 255 (Error): I0403 16:44:16.871175       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 17:04:00.549 E ns/openshift-monitoring pod/node-exporter-9lzzv node/ip-10-0-146-226.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 17:04:00.549 E ns/openshift-monitoring pod/node-exporter-9lzzv node/ip-10-0-146-226.us-west-2.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 17:04:00.949 E ns/openshift-image-registry pod/node-ca-zz6rs node/ip-10-0-146-226.us-west-2.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 17:04:01.750 E ns/openshift-cluster-node-tuning-operator pod/tuned-hptkf node/ip-10-0-146-226.us-west-2.compute.internal container=tuned container exited with code 255 (Error): 9.606624   98733 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:01:19.608007   98733 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:01:19.711371   98733 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 17:01:19.711902   98733 openshift-tuned.go:435] Pod (openshift-machine-api/cluster-autoscaler-operator-7f4798bf6d-9kx6s) labels changed node wide: true\nI0403 17:01:24.606647   98733 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:01:24.607935   98733 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:01:24.712735   98733 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 17:01:25.449114   98733 openshift-tuned.go:435] Pod (openshift-service-ca-operator/service-ca-operator-556c8547c4-pvqjk) labels changed node wide: true\nI0403 17:01:29.606825   98733 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:01:29.608080   98733 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:01:29.706830   98733 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 17:01:47.248940   98733 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-96fc784f6-5gw8d) labels changed node wide: true\nI0403 17:01:49.606631   98733 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 17:01:49.608148   98733 openshift-tuned.go:326] Getting recommended profile...\nI0403 17:01:49.706085   98733 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 17:01:56.252981   98733 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-ccf89ff64-dctsc) labels changed node wide: true\n
Apr 03 17:04:16.750 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): 0ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 17:01:57.607685 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 17:01:57.607727 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 17:01:57.607735 E | rafthttp: failed to read c00ffbf0b05a7c on stream Message (context canceled)\n2020-04-03 17:01:57.607740 I | rafthttp: peer c00ffbf0b05a7c became inactive (message send to peer failed)\n2020-04-03 17:01:57.607746 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 17:01:57.607752 I | rafthttp: stopped peer c00ffbf0b05a7c\n2020-04-03 17:01:57.607757 I | rafthttp: stopping peer 37c7a7a17da4d42f...\n2020-04-03 17:01:57.607988 I | rafthttp: closed the TCP streaming connection with peer 37c7a7a17da4d42f (stream MsgApp v2 writer)\n2020-04-03 17:01:57.608004 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (writer)\n2020-04-03 17:01:57.608192 I | rafthttp: closed the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message writer)\n2020-04-03 17:01:57.608204 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (writer)\n2020-04-03 17:01:57.608291 I | rafthttp: stopped HTTP pipelining with peer 37c7a7a17da4d42f\n2020-04-03 17:01:57.608354 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 17:01:57.608368 E | rafthttp: failed to read 37c7a7a17da4d42f on stream MsgApp v2 (context canceled)\n2020-04-03 17:01:57.608373 I | rafthttp: peer 37c7a7a17da4d42f became inactive (message send to peer failed)\n2020-04-03 17:01:57.608380 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 17:01:57.608439 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 17:01:57.608468 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 17:01:57.608480 I | rafthttp: stopped peer 37c7a7a17da4d42f\n
Apr 03 17:04:16.750 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 17:01:03.522321 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 17:01:03.525524 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 17:01:03.533008 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 17:01:03 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.226:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 17:01:04.548330 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 17:04:17.149 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 16:36:46.130242       1 observer_polling.go:106] Starting file observer\nI0403 16:36:46.131062       1 certsync_controller.go:269] Starting CertSyncer\nW0403 16:43:07.147136       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20598 (25395)\nW0403 16:50:37.150990       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25665 (28394)\nW0403 17:00:24.155040       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28574 (34832)\n
Apr 03 17:04:17.149 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): esource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers"]\nE0403 17:01:57.001089       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0403 17:01:57.006481       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nE0403 17:01:57.006750       1 reflector.go:237] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0403 17:01:57.019449       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)\nE0403 17:01:57.104447       1 controllermanager.go:282] leaderelection lost\nI0403 17:01:57.104477       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 17:04:21.150 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 16:35:13.302732       1 certsync_controller.go:269] Starting CertSyncer\nI0403 16:35:13.303021       1 observer_polling.go:106] Starting file observer\nW0403 16:40:17.764318       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23425 (24351)\nW0403 16:48:05.768427       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24663 (27627)\nW0403 16:55:36.772384       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27824 (30847)\n
Apr 03 17:04:21.150 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): losed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991115       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991165       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991637       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991695       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991843       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.991899       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.992089       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.992168       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.992488       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\nI0403 17:01:56.992550       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=4019, ErrCode=NO_ERROR, debug=""\n
Apr 03 17:04:21.549 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): C (now=2020-04-03 16:35:32.773549489 +0000 UTC))\nI0403 16:35:32.773597       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585930170" [] issuer="<self>" (2020-04-03 16:09:29 +0000 UTC to 2021-04-03 16:09:30 +0000 UTC (now=2020-04-03 16:35:32.773586209 +0000 UTC))\nI0403 16:35:32.773613       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 16:35:32.773713       1 serving.go:77] Starting DynamicLoader\nI0403 16:35:33.676228       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 16:35:33.776378       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 16:35:33.776418       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 16:54:30.667319       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19696 (29837)\nW0403 16:54:30.672645       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 23902 (29837)\nW0403 16:57:57.952353       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19696 (32796)\nI0403 16:58:58.454871       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 17:00:52.031931       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19712 (35540)\nW0403 17:00:58.162912       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 29837 (35597)\nE0403 17:01:57.083722       1 server.go:259] lost master\nI0403 17:01:57.084749       1 serving.go:88] Shutting down DynamicLoader\nI0403 17:01:57.084998       1 secure_serving.go:180] Stopped listening on [::]:10251\n
Apr 03 17:04:22.353 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=etcd-member container exited with code 255 (Error): 0ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 17:01:57.607685 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream MsgApp v2 reader)\n2020-04-03 17:01:57.607727 W | rafthttp: lost the TCP streaming connection with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 17:01:57.607735 E | rafthttp: failed to read c00ffbf0b05a7c on stream Message (context canceled)\n2020-04-03 17:01:57.607740 I | rafthttp: peer c00ffbf0b05a7c became inactive (message send to peer failed)\n2020-04-03 17:01:57.607746 I | rafthttp: stopped streaming with peer c00ffbf0b05a7c (stream Message reader)\n2020-04-03 17:01:57.607752 I | rafthttp: stopped peer c00ffbf0b05a7c\n2020-04-03 17:01:57.607757 I | rafthttp: stopping peer 37c7a7a17da4d42f...\n2020-04-03 17:01:57.607988 I | rafthttp: closed the TCP streaming connection with peer 37c7a7a17da4d42f (stream MsgApp v2 writer)\n2020-04-03 17:01:57.608004 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (writer)\n2020-04-03 17:01:57.608192 I | rafthttp: closed the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message writer)\n2020-04-03 17:01:57.608204 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (writer)\n2020-04-03 17:01:57.608291 I | rafthttp: stopped HTTP pipelining with peer 37c7a7a17da4d42f\n2020-04-03 17:01:57.608354 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 17:01:57.608368 E | rafthttp: failed to read 37c7a7a17da4d42f on stream MsgApp v2 (context canceled)\n2020-04-03 17:01:57.608373 I | rafthttp: peer 37c7a7a17da4d42f became inactive (message send to peer failed)\n2020-04-03 17:01:57.608380 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream MsgApp v2 reader)\n2020-04-03 17:01:57.608439 W | rafthttp: lost the TCP streaming connection with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 17:01:57.608468 I | rafthttp: stopped streaming with peer 37c7a7a17da4d42f (stream Message reader)\n2020-04-03 17:01:57.608480 I | rafthttp: stopped peer 37c7a7a17da4d42f\n
Apr 03 17:04:22.353 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 17:01:03.522321 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 17:01:03.525524 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 17:01:03.533008 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 17:01:03 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.226:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-1v3y6khs-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 17:01:04.548330 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 17:04:23.150 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 16:36:46.130242       1 observer_polling.go:106] Starting file observer\nI0403 16:36:46.131062       1 certsync_controller.go:269] Starting CertSyncer\nW0403 16:43:07.147136       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 20598 (25395)\nW0403 16:50:37.150990       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25665 (28394)\nW0403 17:00:24.155040       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28574 (34832)\n
Apr 03 17:04:23.150 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-226.us-west-2.compute.internal node/ip-10-0-146-226.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): esource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers"]\nE0403 17:01:57.001089       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0403 17:01:57.006481       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nE0403 17:01:57.006750       1 reflector.go:237] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0403 17:01:57.019449       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)\nE0403 17:01:57.104447       1 controllermanager.go:282] leaderelection lost\nI0403 17:01:57.104477       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 17:04:51.619 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-ccf89ff64-l654f node/ip-10-0-142-162.us-west-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated