Result: FAILURE
Tests: 3 failed / 19 succeeded
Started: 2020-04-03 10:37
Elapsed: 1h21m
Work namespace: ci-op-91k9431k
Refs: release-4.1:514189df, 812:8d0c3f82
Pod: 0cfec9a4-7597-11ea-bd85-0a58ac10de6c
Repo: openshift/cluster-kube-apiserver-operator
Revision: 1

Test Failures


openshift-tests: Monitor cluster while tests execute (42m17s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
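
The focus expression above selects only the monitor test. A minimal local reproduction sketch, assuming a checkout of the repository that provides hack/e2e.go and a KUBECONFIG pointing at a disposable test cluster (the kubeconfig path below is hypothetical):

# Sketch only: run the focused monitor test against an existing cluster.
export KUBECONFIG=$HOME/.kube/ci-cluster.kubeconfig   # hypothetical path, not from this job
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'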
235 error-level events were detected during this test run:

Apr 03 11:13:53.935 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-78d77fbbf4-v9btc node/ip-10-0-146-242.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error):  revision 4 to 6\nI0403 11:10:18.106692       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"8ede25c6-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0403 11:10:18.851996       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"8ede25c6-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0403 11:10:22.060284       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"8ede25c6-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-146-242.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nW0403 11:13:22.491076       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14893 (16770)\nW0403 11:13:35.492228       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15515 (16833)\nI0403 11:13:52.815851       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:13:52.815955       1 leaderelection.go:65] leaderelection lost\nF0403 11:13:52.821193       1 builder.go:217] server exited\n
Apr 03 11:15:23.244 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5b5546955-69tpf node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:15:35.227 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7464fdb98f-27jfc node/ip-10-0-146-242.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): nfigMap ended with: too old resource version: 14889 (15264)\nW0403 11:10:01.168077       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 13357 (14191)\nW0403 11:10:01.168163       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13352 (14191)\nW0403 11:10:01.168259       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 12730 (14267)\nW0403 11:10:01.168288       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13352 (14191)\nW0403 11:10:01.201770       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Pod ended with: too old resource version: 14182 (14191)\nW0403 11:10:01.231286       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeScheduler ended with: too old resource version: 13941 (14271)\nW0403 11:10:01.248042       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 12731 (14193)\nW0403 11:10:01.345754       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 12727 (14193)\nW0403 11:10:01.345935       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 10860 (14191)\nW0403 11:10:01.346049       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 12535 (14191)\nI0403 11:15:34.138866       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:15:34.138940       1 leaderelection.go:65] leaderelection lost\nI0403 11:15:34.149258       1 secure_serving.go:156] Stopped listening on 0.0.0.0:8443\n
Apr 03 11:17:01.550 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-676fc89f6-hbt8g node/ip-10-0-146-242.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:17:19.551 E ns/openshift-machine-api pod/machine-api-operator-dbc547dc7-lncl9 node/ip-10-0-146-242.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 03 11:18:24.875 E ns/openshift-apiserver pod/apiserver-gkg7d node/ip-10-0-131-184.us-west-1.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:19:31.181 E ns/openshift-machine-api pod/machine-api-controllers-7f789b46bd-5gjpk node/ip-10-0-135-18.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 03 11:19:31.181 E ns/openshift-machine-api pod/machine-api-controllers-7f789b46bd-5gjpk node/ip-10-0-135-18.us-west-1.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 03 11:19:49.122 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator monitoring is still updating\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (94 of 350)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (162 of 350)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (199 of 350)
Apr 03 11:19:51.206 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7c8d565b47-mnt4d node/ip-10-0-131-184.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.DaemonSet ended with: too old resource version: 14193 (15421)\nW0403 11:17:06.790228       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ClusterRole ended with: too old resource version: 14193 (15421)\nW0403 11:17:06.790430       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 17458 (18053)\nW0403 11:17:06.804031       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Tuned ended with: too old resource version: 14266 (18265)\nI0403 11:17:07.702051       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 11:17:07.702081       1 status.go:26] syncOperatorStatus()\nI0403 11:17:07.716341       1 tuned_controller.go:187] syncServiceAccount()\nI0403 11:17:07.716514       1 tuned_controller.go:215] syncClusterRole()\nI0403 11:17:07.874480       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 11:17:07.970995       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 11:17:07.976986       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 11:17:07.982785       1 tuned_controller.go:315] syncDaemonSet()\nI0403 11:17:07.992654       1 tuned_controller.go:419] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0403 11:17:07.992678       1 status.go:26] syncOperatorStatus()\nI0403 11:17:08.002525       1 tuned_controller.go:187] syncServiceAccount()\nI0403 11:17:08.002667       1 tuned_controller.go:215] syncClusterRole()\nI0403 11:17:08.078573       1 tuned_controller.go:246] syncClusterRoleBinding()\nI0403 11:17:08.132791       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 11:17:08.139486       1 tuned_controller.go:277] syncClusterConfigMap()\nI0403 11:17:08.144416       1 tuned_controller.go:315] syncDaemonSet()\nF0403 11:19:50.312507       1 main.go:85] <nil>\n
Apr 03 11:19:53.589 E ns/openshift-monitoring pod/node-exporter-dhmb8 node/ip-10-0-135-18.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:19:53.589 E ns/openshift-monitoring pod/node-exporter-dhmb8 node/ip-10-0-135-18.us-west-1.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:01.144 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 03 11:20:01.144 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 03 11:20:01.144 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 03 11:20:08.157 E ns/openshift-cluster-node-tuning-operator pod/tuned-4twzr node/ip-10-0-148-57.us-west-1.compute.internal container=tuned container exited with code 143 (Error): to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=9, ErrCode=NO_ERROR, debug=""\nE0403 11:17:06.414404    2427 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:17:06.414420    2427 openshift-tuned.go:722] Increasing resyncPeriod to 130\nI0403 11:19:16.414686    2427 openshift-tuned.go:187] Extracting tuned profiles\nI0403 11:19:16.416771    2427 openshift-tuned.go:623] Resync period to pull node/pod labels: 130 [s]\nI0403 11:19:16.431522    2427 openshift-tuned.go:435] Pod (openshift-image-registry/node-ca-29wlz) labels changed node wide: true\nI0403 11:19:21.428743    2427 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:19:21.430094    2427 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 11:19:21.431111    2427 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:19:21.544070    2427 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:19:59.394851    2427 openshift-tuned.go:435] Pod (openshift-monitoring/kube-state-metrics-7cb8cb886c-gm2zr) labels changed node wide: true\nI0403 11:20:01.428768    2427 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:01.430419    2427 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:01.543872    2427 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:20:02.423257    2427 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0403 11:20:06.428818    2427 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:06.430374    2427 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:06.540854    2427 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 11:20:08.677 E ns/openshift-monitoring pod/node-exporter-fr965 node/ip-10-0-128-95.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 11:20:21.089 E ns/openshift-monitoring pod/telemeter-client-7d94599679-tblk5 node/ip-10-0-128-95.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Apr 03 11:20:21.089 E ns/openshift-monitoring pod/telemeter-client-7d94599679-tblk5 node/ip-10-0-128-95.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:22.967 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:26.295 E ns/openshift-cluster-node-tuning-operator pod/tuned-7xp7f node/ip-10-0-131-184.us-west-1.compute.internal container=tuned container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:29.997 E ns/openshift-monitoring pod/node-exporter-zbsfx node/ip-10-0-138-39.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 11:20:34.334 E ns/openshift-monitoring pod/grafana-5b55f97fd9-gxvbt node/ip-10-0-148-57.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Apr 03 11:20:34.850 E ns/openshift-cluster-node-tuning-operator pod/tuned-s7wlh node/ip-10-0-128-95.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 20-04-03 11:06:05,761 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-03 11:06:05,889 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-03 11:06:05,906 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-03 11:06:05,915 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0403 11:07:58.464292    2591 openshift-tuned.go:691] Lowering resyncPeriod to 59\nE0403 11:10:01.054592    2591 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=9, ErrCode=NO_ERROR, debug=""\nE0403 11:10:01.058135    2591 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:10:01.058155    2591 openshift-tuned.go:722] Increasing resyncPeriod to 118\nI0403 11:11:59.058377    2591 openshift-tuned.go:187] Extracting tuned profiles\nI0403 11:11:59.060446    2591 openshift-tuned.go:623] Resync period to pull node/pod labels: 118 [s]\nI0403 11:11:59.072529    2591 openshift-tuned.go:435] Pod (openshift-ingress/router-default-695c897c44-q4j7h) labels changed node wide: true\nI0403 11:12:04.070434    2591 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:12:04.071914    2591 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 11:12:04.073048    2591 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:12:04.187419    2591 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:13:57.064407    2591 openshift-tuned.go:691] Lowering resyncPeriod to 59\nE0403 11:18:56.568441    2591 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=15, ErrCode=NO_ERROR, debug=""\nE0403 11:18:56.573076    2591 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:18:56.573094    2591 openshift-tuned.go:722] Increasing resyncPeriod to 118\n
Apr 03 11:20:36.567 E ns/openshift-monitoring pod/prometheus-adapter-b6ff58669-w7blm node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 11:20:39.448 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-d6f48b88f-5cww9 node/ip-10-0-131-184.us-west-1.compute.internal container=cluster-samples-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:42.916 E ns/openshift-image-registry pod/image-registry-6c465548cb-j656r node/ip-10-0-128-95.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:42.930 E ns/openshift-ingress pod/router-default-695c897c44-q4j7h node/ip-10-0-128-95.us-west-1.compute.internal container=router container exited with code 2 (Error): 11:19:33.838867       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:19:38.837579       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:19:43.839315       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:19:48.840866       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:19:53.842593       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:00.155089       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:05.146859       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:10.151576       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:15.155727       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:20.151974       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:26.845472       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:31.844289       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:20:41.253903       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 11:20:46.526 E ns/openshift-authentication-operator pod/authentication-operator-64d845b755-9ggzp node/ip-10-0-135-18.us-west-1.compute.internal container=operator container exited with code 255 (Error):     1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 15593 (18431)\nW0403 11:20:44.257557       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 15601 (18445)\nW0403 11:20:44.257748       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 15606 (18434)\nW0403 11:20:44.262918       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19126 (20369)\nW0403 11:20:44.294424       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 18484 (19990)\nW0403 11:20:44.294790       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 15641 (18441)\nW0403 11:20:44.295063       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19149 (20395)\nW0403 11:20:44.296959       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 15635 (18435)\nW0403 11:20:44.321074       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 15634 (18433)\nW0403 11:20:44.321220       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19214 (20395)\nI0403 11:20:45.613544       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:20:45.613601       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:20:47.626 E ns/openshift-marketplace pod/redhat-operators-85fc7f8985-6lxrs node/ip-10-0-128-95.us-west-1.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:20:50.899 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-dbc9964bc-fwm6g node/ip-10-0-146-242.us-west-1.compute.internal container=operator container exited with code 2 (Error): ift-controller-manager/roles/prometheus-k8s\nI0403 11:20:00.273025       1 request.go:530] Throttling request took 196.499937ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0403 11:20:00.372400       1 status_controller.go:160] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T10:56:42Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T11:20:00Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T10:57:42Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T10:56:42Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0403 11:20:00.386141       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"8f3d841e-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator openshift-controller-manager changed: Progressing changed from True to False ("")\nI0403 11:20:00.496359       1 wrap.go:47] GET /metrics: (4.725869ms) 200 [Prometheus/2.7.2 10.129.2.5:55254]\nI0403 11:20:02.350721       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0403 11:20:02.397030       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 11:20:20.073096       1 request.go:530] Throttling request took 158.541149ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0403 11:20:20.273026       1 request.go:530] Throttling request took 196.144384ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\n
Apr 03 11:20:51.596 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-59c9rl7jn node/ip-10-0-135-18.us-west-1.compute.internal container=operator container exited with code 2 (Error):  event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=193, ErrCode=NO_ERROR, debug=""\nI0403 11:20:43.920904       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Deployment total 0 items received\nI0403 11:20:43.921058       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Namespace total 0 items received\nI0403 11:20:43.921118       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ServiceAccount total 0 items received\nW0403 11:20:44.229652       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 16109 (18301)\nW0403 11:20:44.229734       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 18261 (19990)\nW0403 11:20:44.256589       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19251 (20369)\nW0403 11:20:44.310612       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 16105 (18301)\nW0403 11:20:44.319482       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 16051 (18299)\nI0403 11:20:45.229870       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.230348       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.256818       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.310981       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.319707       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\n
Apr 03 11:20:52.403 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-8f6cc899c-ljjs2 node/ip-10-0-131-184.us-west-1.compute.internal container=operator container exited with code 2 (Error): onfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.108191       1 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.170267       1 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.170376       1 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.178175       1 reflector.go:169] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.178518       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.194210       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.198262       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.260328       1 reflector.go:169] Listing and watching *v1.ServiceCatalogAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0403 11:20:45.260739       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0403 11:20:45.405793       1 request.go:530] Throttling request took 207.423647ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver/services?limit=500&resourceVersion=0\nI0403 11:20:45.605746       1 request.go:530] Throttling request took 403.677724ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0403 11:20:45.808226       1 request.go:530] Throttling request took 531.223554ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver/serviceaccounts?limit=500&resourceVersion=0\nI0403 11:20:46.005755       1 request.go:530] Throttling request took 394.175656ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\n
Apr 03 11:20:57.294 E ns/openshift-controller-manager pod/controller-manager-c9s5l node/ip-10-0-146-242.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 11:20:59.894 E ns/openshift-operator-lifecycle-manager pod/olm-operator-66584dd98-csdln node/ip-10-0-146-242.us-west-1.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:21:01.102 E ns/openshift-monitoring pod/node-exporter-g8ltm node/ip-10-0-146-242.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 11:21:04.717 E ns/openshift-cluster-node-tuning-operator pod/tuned-lht68 node/ip-10-0-135-18.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ctive and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:20:10.998358   14288 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-7b99db7558-dvv27) labels changed node wide: true\nI0403 11:20:11.759127   14288 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:11.761143   14288 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:12.043051   14288 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:20:12.106677   14288 openshift-tuned.go:435] Pod (openshift-cluster-samples-operator/cluster-samples-operator-cc5bc65ff-fvqbt) labels changed node wide: true\nI0403 11:20:16.759210   14288 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:16.761428   14288 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:16.879510   14288 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:20:38.441697   14288 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-67ffd5856b-z9ww4) labels changed node wide: true\nI0403 11:20:41.759161   14288 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:41.782165   14288 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:41.989085   14288 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0403 11:20:43.902382   14288 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=15, ErrCode=NO_ERROR, debug=""\nE0403 11:20:43.910450   14288 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:20:43.910469   14288 openshift-tuned.go:722] Increasing resyncPeriod to 104\n
Apr 03 11:21:09.452 E ns/openshift-console-operator pod/console-operator-cb4884755-b9cz6 node/ip-10-0-131-184.us-west-1.compute.internal container=console-operator container exited with code 255 (Error):  console status"\ntime="2020-04-03T11:20:46Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T11:20:46Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T11:20:46Z" level=info msg="finished syncing operator \"cluster\" (42.961µs) \n\n"\ntime="2020-04-03T11:20:46Z" level=info msg="started syncing operator \"cluster\" (2020-04-03 11:20:46.218413408 +0000 UTC m=+952.145035813)"\ntime="2020-04-03T11:20:46Z" level=info msg="console is in a managed state."\ntime="2020-04-03T11:20:46Z" level=info msg="running sync loop 4.0.0"\ntime="2020-04-03T11:20:46Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T11:20:47Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-04-03T11:20:47Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T11:20:47Z" level=info msg=-----------------------\ntime="2020-04-03T11:20:47Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-04-03T11:20:47Z" level=info msg=-----------------------\ntime="2020-04-03T11:20:47Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-04-03T11:20:47Z" level=info msg="sync_v400: updating console status"\ntime="2020-04-03T11:20:47Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-04-03T11:20:47Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-04-03T11:20:47Z" level=info msg="finished syncing operator \"cluster\" (36.941µs) \n\n"\nI0403 11:21:08.784919       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:21:08.785069       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:21:09.660 E ns/openshift-ingress pod/router-default-695c897c44-ccgb4 node/ip-10-0-148-57.us-west-1.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:21:15.465 E ns/openshift-monitoring pod/node-exporter-vfdc6 node/ip-10-0-131-184.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Apr 03 11:21:17.114 E ns/openshift-operator-lifecycle-manager pod/packageserver-5558dd444f-qsgzd node/ip-10-0-135-18.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:21:19.155 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-39.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 11:21:21.135 E ns/openshift-cluster-node-tuning-operator pod/tuned-k64p4 node/ip-10-0-138-39.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""\nE0403 11:18:56.578375    2560 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:18:56.578393    2560 openshift-tuned.go:722] Increasing resyncPeriod to 104\nI0403 11:20:40.578671    2560 openshift-tuned.go:187] Extracting tuned profiles\nI0403 11:20:40.580794    2560 openshift-tuned.go:623] Resync period to pull node/pod labels: 104 [s]\nI0403 11:20:40.598949    2560 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-replicaset-upgrade-g5nxf/rs-w9lnh) labels changed node wide: true\nI0403 11:20:45.596949    2560 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:45.598618    2560 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 11:20:45.599820    2560 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:45.733027    2560 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:20:51.548258    2560 openshift-tuned.go:435] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0403 11:20:55.596967    2560 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:20:55.604067    2560 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:20:55.782092    2560 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:20:58.102212    2560 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 11:21:00.596979    2560 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:21:00.600267    2560 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:21:00.733005    2560 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Apr 03 11:21:25.333 E ns/openshift-console pod/downloads-5b557f599f-4pskt node/ip-10-0-138-39.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 11:21:37.052 E ns/openshift-marketplace pod/certified-operators-6ff9499c5b-5gt5w node/ip-10-0-128-95.us-west-1.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:21:43.879 E ns/openshift-controller-manager pod/controller-manager-mjw4n node/ip-10-0-135-18.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 11:21:45.024 E ns/openshift-controller-manager pod/controller-manager-7fk88 node/ip-10-0-135-18.us-west-1.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:21:47.061 E ns/openshift-marketplace pod/community-operators-7c789b9b65-mdd6b node/ip-10-0-128-95.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 03 11:22:14.616 E ns/openshift-service-ca pod/configmap-cabundle-injector-6f8fd87f8f-pp4gd node/ip-10-0-131-184.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 11:22:16.591 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7c9549566c-8xn5w node/ip-10-0-131-184.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 11:22:35.709 E ns/openshift-controller-manager pod/controller-manager-nm9k2 node/ip-10-0-131-184.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 11:22:44.227 E ns/openshift-console pod/console-6977c7785-8zgpv node/ip-10-0-146-242.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/04/3 11:07:24 cmd/main: cookies are secure!\n2020/04/3 11:07:24 cmd/main: Binding to 0.0.0.0:8443...\n2020/04/3 11:07:24 cmd/main: using TLS\n
Apr 03 11:22:59.864 E ns/openshift-console pod/console-6977c7785-4kzk4 node/ip-10-0-131-184.us-west-1.compute.internal container=console container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:23:19.329 E ns/openshift-controller-manager pod/controller-manager-nq9q6 node/ip-10-0-146-242.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 03 11:25:07.274 E ns/openshift-dns pod/dns-default-tcqnf node/ip-10-0-148-57.us-west-1.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:25:07.274 E ns/openshift-dns pod/dns-default-tcqnf node/ip-10-0-148-57.us-west-1.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:25:22.871 E ns/openshift-dns pod/dns-default-7fc8s node/ip-10-0-138-39.us-west-1.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:25:22.871 E ns/openshift-dns pod/dns-default-7fc8s node/ip-10-0-138-39.us-west-1.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:25:31.223 E ns/openshift-sdn pod/sdn-controller-2q79x node/ip-10-0-131-184.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 10:55:57.101222       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 11:25:32.678 E ns/openshift-sdn pod/ovs-2zfmb node/ip-10-0-146-242.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 11:22:43.690Z|00381|connmgr|INFO|br0<->unix#913: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:22:43.731Z|00382|connmgr|INFO|br0<->unix#916: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:22:43.762Z|00383|bridge|INFO|bridge br0: deleted interface veth8eda62e0 on port 42\n2020-04-03T11:22:49.946Z|00384|bridge|INFO|bridge br0: added interface vethad4df352 on port 61\n2020-04-03T11:22:49.979Z|00385|connmgr|INFO|br0<->unix#919: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:22:50.018Z|00386|connmgr|INFO|br0<->unix#922: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:23:19.009Z|00387|connmgr|INFO|br0<->unix#928: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:23:19.048Z|00388|connmgr|INFO|br0<->unix#931: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:23:19.071Z|00389|bridge|INFO|bridge br0: deleted interface veth28c79cd6 on port 57\n2020-04-03T11:23:32.828Z|00390|bridge|INFO|bridge br0: added interface veth2e5e490b on port 62\n2020-04-03T11:23:32.857Z|00391|connmgr|INFO|br0<->unix#934: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:23:32.894Z|00392|connmgr|INFO|br0<->unix#937: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:24:41.006Z|00393|bridge|INFO|bridge br0: added interface veth19bb1abf on port 63\n2020-04-03T11:24:41.051Z|00394|connmgr|INFO|br0<->unix#950: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:24:41.108Z|00395|connmgr|INFO|br0<->unix#953: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:24:46.932Z|00396|connmgr|INFO|br0<->unix#956: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:24:46.961Z|00397|connmgr|INFO|br0<->unix#959: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:24:46.982Z|00398|bridge|INFO|bridge br0: deleted interface veth597a9cfe on port 14\n2020-04-03T11:24:56.362Z|00399|bridge|INFO|bridge br0: added interface veth4183ffbd on port 64\n2020-04-03T11:24:56.393Z|00400|connmgr|INFO|br0<->unix#962: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:24:56.430Z|00401|connmgr|INFO|br0<->unix#965: 2 flow_mods in the last 0 s (2 deletes)\n
Apr 03 11:25:40.716 E ns/openshift-sdn pod/sdn-rk5nf node/ip-10-0-146-242.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.350479    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.450472    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.550482    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.650495    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.750463    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.850470    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:39.950483    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:40.050514    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:40.150847    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:40.250526    2772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:25:40.362205    2772 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:25:40.362260    2772 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:25:46.338 E ns/openshift-dns pod/dns-default-gggh8 node/ip-10-0-131-184.us-west-1.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:25:46.338 E ns/openshift-dns pod/dns-default-gggh8 node/ip-10-0-131-184.us-west-1.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:26:05.696 E ns/openshift-dns pod/dns-default-29kvj node/ip-10-0-128-95.us-west-1.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:26:05.696 E ns/openshift-dns pod/dns-default-29kvj node/ip-10-0-128-95.us-west-1.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:26:11.694 E ns/openshift-sdn pod/ovs-p4m29 node/ip-10-0-128-95.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 3T11:21:36.087Z|00167|connmgr|INFO|br0<->unix#410: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:21:36.141Z|00168|connmgr|INFO|br0<->unix#413: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:21:36.179Z|00169|bridge|INFO|bridge br0: deleted interface veth8b7a77fe on port 13\n2020-04-03T11:21:46.783Z|00170|connmgr|INFO|br0<->unix#416: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:21:46.817Z|00171|connmgr|INFO|br0<->unix#419: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:21:46.840Z|00172|bridge|INFO|bridge br0: deleted interface veth7881671f on port 12\n2020-04-03T11:25:42.795Z|00173|connmgr|INFO|br0<->unix#451: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T11:25:42.872Z|00174|connmgr|INFO|br0<->unix#457: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:25:42.895Z|00175|connmgr|INFO|br0<->unix#460: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:25:43.182Z|00176|connmgr|INFO|br0<->unix#463: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:25:43.213Z|00177|connmgr|INFO|br0<->unix#466: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:25:43.234Z|00178|connmgr|INFO|br0<->unix#469: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:25:43.256Z|00179|connmgr|INFO|br0<->unix#472: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:25:43.279Z|00180|connmgr|INFO|br0<->unix#475: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:25:43.306Z|00181|connmgr|INFO|br0<->unix#478: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:25:43.334Z|00182|connmgr|INFO|br0<->unix#481: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:25:43.362Z|00183|connmgr|INFO|br0<->unix#484: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:25:43.390Z|00184|connmgr|INFO|br0<->unix#487: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:25:43.413Z|00185|connmgr|INFO|br0<->unix#490: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:04.598Z|00186|connmgr|INFO|br0<->unix#493: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:26:04.622Z|00187|bridge|INFO|bridge br0: deleted interface vethc1d8c05e on port 3\n
Apr 03 11:26:12.336 E ns/openshift-multus pod/multus-g6mbt node/ip-10-0-148-57.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 11:26:12.822 E ns/openshift-sdn pod/sdn-controller-fx5mz node/ip-10-0-146-242.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0403 10:55:57.466550       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 03 11:26:19.742 E ns/openshift-sdn pod/sdn-29brr node/ip-10-0-128-95.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:18.562047   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:18.662157   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:18.762079   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:18.862111   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:18.962024   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.062136   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.162261   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.262100   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.361998   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.462133   64066 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:19.566408   64066 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:26:19.566466   64066 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:26:48.877 E ns/openshift-sdn pod/sdn-controller-6q2jn node/ip-10-0-135-18.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 12473 (14890)\nW0403 11:09:36.884557       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 13200 (15295)\nW0403 11:09:36.897689       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 13200 (15295)\nI0403 11:11:04.063644       1 vnids.go:115] Allocated netid 4334928 for namespace "e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-lct5j"\nI0403 11:11:04.074648       1 vnids.go:115] Allocated netid 16001144 for namespace "e2e-tests-sig-apps-deployment-upgrade-c8f76"\nI0403 11:11:04.084762       1 vnids.go:115] Allocated netid 3445484 for namespace "e2e-tests-sig-apps-replicaset-upgrade-g5nxf"\nI0403 11:11:04.093879       1 vnids.go:115] Allocated netid 14326441 for namespace "e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-mttbq"\nI0403 11:11:04.137979       1 vnids.go:115] Allocated netid 16653001 for namespace "e2e-tests-sig-apps-job-upgrade-nwtbt"\nI0403 11:11:04.167606       1 vnids.go:115] Allocated netid 12580470 for namespace "e2e-tests-sig-apps-daemonset-upgrade-kd7kw"\nI0403 11:11:04.175847       1 vnids.go:115] Allocated netid 10647707 for namespace "e2e-tests-service-upgrade-wwtgw"\nW0403 11:18:24.110661       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 16051 (18299)\nW0403 11:18:24.142371       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 15821 (19011)\nW0403 11:18:24.147223       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 15295 (19011)\n
Apr 03 11:26:53.265 E ns/openshift-operator-lifecycle-manager pod/packageserver-5558dd444f-scqwm node/ip-10-0-135-18.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:27:02.070 E ns/openshift-sdn pod/sdn-p4rpz node/ip-10-0-135-18.us-west-1.compute.internal container=sdn container exited with code 255 (Error): nvswitch/db.sock: database connection failed (Connection refused)\nI0403 11:26:55.384210   75548 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\nI0403 11:26:55.420714   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:55.520595   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:55.620600   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:55.720612   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:55.820728   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:55.920633   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:56.020591   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:56.120781   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:56.220590   75548 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:26:56.325914   75548 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:26:56.325993   75548 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:27:02.972 E ns/openshift-multus pod/multus-2thbl node/ip-10-0-146-242.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 11:27:08.022 E ns/openshift-machine-api pod/cluster-autoscaler-operator-5b66bcf78-7cpk2 node/ip-10-0-135-18.us-west-1.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 03 11:27:32.541 E ns/openshift-sdn pod/ovs-hwwqq node/ip-10-0-148-57.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 0-04-03T11:21:08.716Z|00150|connmgr|INFO|br0<->unix#374: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:21:08.744Z|00151|bridge|INFO|bridge br0: deleted interface veth6b4fdfea on port 5\n2020-04-03T11:25:05.939Z|00152|connmgr|INFO|br0<->unix#403: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:25:05.975Z|00153|connmgr|INFO|br0<->unix#406: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:25:06.001Z|00154|bridge|INFO|bridge br0: deleted interface veth715cead9 on port 3\n2020-04-03T11:25:16.098Z|00155|bridge|INFO|bridge br0: added interface veth762b951b on port 25\n2020-04-03T11:25:16.129Z|00156|connmgr|INFO|br0<->unix#409: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:25:16.166Z|00157|connmgr|INFO|br0<->unix#412: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:26:38.147Z|00158|connmgr|INFO|br0<->unix#431: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T11:26:38.287Z|00159|connmgr|INFO|br0<->unix#437: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:38.313Z|00160|connmgr|INFO|br0<->unix#440: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:38.662Z|00161|connmgr|INFO|br0<->unix#443: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:38.682Z|00162|connmgr|INFO|br0<->unix#446: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:38.704Z|00163|connmgr|INFO|br0<->unix#449: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:38.727Z|00164|connmgr|INFO|br0<->unix#452: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:38.755Z|00165|connmgr|INFO|br0<->unix#455: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:38.779Z|00166|connmgr|INFO|br0<->unix#458: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:38.807Z|00167|connmgr|INFO|br0<->unix#461: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:38.830Z|00168|connmgr|INFO|br0<->unix#464: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:38.859Z|00169|connmgr|INFO|br0<->unix#467: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:38.882Z|00170|connmgr|INFO|br0<->unix#470: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 11:27:33.053 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7d44c8f476-ww8qz node/ip-10-0-135-18.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 03 11:27:39.104 E ns/openshift-service-ca pod/service-serving-cert-signer-5d6d98c46b-n98jr node/ip-10-0-135-18.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 03 11:27:41.656 E ns/openshift-sdn pod/sdn-cbgbg node/ip-10-0-148-57.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:39.602757   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:39.702727   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:39.802704   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:39.902712   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.002733   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.102706   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.202747   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.302721   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.402751   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.502802   41983 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:27:40.608782   41983 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:27:40.608854   41983 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:27:42.561 E ns/openshift-operator-lifecycle-manager pod/packageserver-564f6c9fbb-d8g79 node/ip-10-0-131-184.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): 
Apr 03 11:27:50.113 E ns/openshift-multus pod/multus-tm5qb node/ip-10-0-138-39.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 11:27:56.122 E ns/openshift-operator-lifecycle-manager pod/packageserver-564f6c9fbb-h5l2n node/ip-10-0-146-242.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): 
Apr 03 11:28:12.672 E ns/openshift-sdn pod/ovs-87vtv node/ip-10-0-131-184.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): T11:26:26.455Z|00318|connmgr|INFO|br0<->unix#816: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.480Z|00319|connmgr|INFO|br0<->unix#819: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.506Z|00320|connmgr|INFO|br0<->unix#822: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.533Z|00321|connmgr|INFO|br0<->unix#825: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.561Z|00322|connmgr|INFO|br0<->unix#828: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.587Z|00323|connmgr|INFO|br0<->unix#831: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:26.775Z|00324|connmgr|INFO|br0<->unix#834: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:26.814Z|00325|connmgr|INFO|br0<->unix#837: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:26.852Z|00326|connmgr|INFO|br0<->unix#840: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:26.893Z|00327|connmgr|INFO|br0<->unix#843: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:26.934Z|00328|connmgr|INFO|br0<->unix#846: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:26.974Z|00329|connmgr|INFO|br0<->unix#849: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:27.014Z|00330|connmgr|INFO|br0<->unix#852: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:27.039Z|00331|connmgr|INFO|br0<->unix#855: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:27.069Z|00332|connmgr|INFO|br0<->unix#858: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:27.100Z|00333|connmgr|INFO|br0<->unix#861: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:27:17.723Z|00334|bridge|INFO|bridge br0: added interface veth38a6229e on port 50\n2020-04-03T11:27:17.754Z|00335|connmgr|INFO|br0<->unix#867: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:27:17.792Z|00336|connmgr|INFO|br0<->unix#870: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:27:41.769Z|00337|connmgr|INFO|br0<->unix#876: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:27:41.799Z|00338|bridge|INFO|bridge br0: deleted interface veth2e36121f on port 9\n
Apr 03 11:28:21.672 E ns/openshift-sdn pod/sdn-g9724 node/ip-10-0-131-184.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.115063   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.215056   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.314989   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.414990   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.515000   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.614984   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.714997   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.814992   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:20.915039   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:21.015008   63558 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:21.120919   63558 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:28:21.120982   63558 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:28:41.299 E ns/openshift-multus pod/multus-b4b97 node/ip-10-0-135-18.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 11:28:52.239 E ns/openshift-sdn pod/ovs-7ss7m node/ip-10-0-138-39.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 31: receive error: Connection reset by peer\n2020-04-03T11:25:21.603Z|00028|reconnect|WARN|unix#231: connection dropped (Connection reset by peer)\n2020-04-03T11:25:21.608Z|00029|jsonrpc|WARN|unix#232: receive error: Connection reset by peer\n2020-04-03T11:25:21.609Z|00030|reconnect|WARN|unix#232: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T11:25:36.356Z|00135|bridge|INFO|bridge br0: added interface vethead9974c on port 21\n2020-04-03T11:25:36.386Z|00136|connmgr|INFO|br0<->unix#373: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:25:36.422Z|00137|connmgr|INFO|br0<->unix#376: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:26:10.790Z|00138|connmgr|INFO|br0<->unix#389: 2 flow_mods in the last 0 s (2 adds)\n2020-04-03T11:26:10.885Z|00139|connmgr|INFO|br0<->unix#395: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:10.918Z|00140|connmgr|INFO|br0<->unix#398: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:10.954Z|00141|connmgr|INFO|br0<->unix#401: 1 flow_mods in the last 0 s (1 deletes)\n2020-04-03T11:26:11.246Z|00142|connmgr|INFO|br0<->unix#404: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:11.280Z|00143|connmgr|INFO|br0<->unix#407: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:11.311Z|00144|connmgr|INFO|br0<->unix#410: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:11.340Z|00145|connmgr|INFO|br0<->unix#413: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:11.364Z|00146|connmgr|INFO|br0<->unix#416: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:11.389Z|00147|connmgr|INFO|br0<->unix#419: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:11.412Z|00148|connmgr|INFO|br0<->unix#422: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:11.439Z|00149|connmgr|INFO|br0<->unix#425: 1 flow_mods in the last 0 s (1 adds)\n2020-04-03T11:26:11.469Z|00150|connmgr|INFO|br0<->unix#428: 3 flow_mods in the last 0 s (3 adds)\n2020-04-03T11:26:11.498Z|00151|connmgr|INFO|br0<->unix#431: 1 flow_mods in the last 0 s (1 adds)\n
Apr 03 11:29:01.259 E ns/openshift-sdn pod/sdn-t7g2x node/ip-10-0-138-39.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:28:59.902685   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.002671   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.102734   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.202713   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.302711   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.402678   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.502779   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.602672   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.702743   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.802693   35954 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0403 11:29:00.910837   35954 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0403 11:29:00.910897   35954 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 03 11:29:23.824 E ns/openshift-multus pod/multus-zmmqf node/ip-10-0-131-184.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 03 11:29:48.407 E ns/openshift-machine-config-operator pod/machine-config-operator-5fc464b968-v8bhr node/ip-10-0-146-242.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 11:32:48.089 E ns/openshift-machine-config-operator pod/machine-config-controller-666694db77-kd5qk node/ip-10-0-135-18.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 11:34:44.639 E ns/openshift-machine-config-operator pod/machine-config-server-vplfz node/ip-10-0-131-184.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Apr 03 11:34:55.278 E ns/openshift-monitoring pod/prometheus-adapter-7f686c7d8-kn6mv node/ip-10-0-138-39.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 11:34:55.869 E ns/openshift-ingress pod/router-default-558997fbf7-hz4sw node/ip-10-0-138-39.us-west-1.compute.internal container=router container exited with code 2 (Error): pt(s).\nI0403 11:27:07.074006       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:27:12.072481       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:27:25.478168       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:27:30.475510       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:27:41.610366       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:27:47.215812       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0403 11:28:15.318598       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0403 11:28:21.690043       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:28:27.552753       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:29:03.043064       1 logs.go:49] http: TLS handshake error from 10.129.2.1:54544: EOF\nI0403 11:29:03.066220       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:29:09.655163       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0403 11:34:53.741543       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 03 11:34:57.468 E ns/openshift-monitoring pod/grafana-6569dc4bff-82dk5 node/ip-10-0-138-39.us-west-1.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:34:57.468 E ns/openshift-monitoring pod/grafana-6569dc4bff-82dk5 node/ip-10-0-138-39.us-west-1.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:35:00.051 E ns/openshift-machine-config-operator pod/machine-config-operator-778f89fb49-7b6mb node/ip-10-0-146-242.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 03 11:35:01.256 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-d769cswks node/ip-10-0-146-242.us-west-1.compute.internal container=operator container exited with code 2 (Error): 01.276715       1 reflector.go:215] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0403 11:32:01.279986       1 reflector.go:215] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0403 11:32:10.186777       1 wrap.go:47] GET /metrics: (7.197922ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:32:10.187007       1 wrap.go:47] GET /metrics: (7.002179ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:32:40.186537       1 wrap.go:47] GET /metrics: (7.137007ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:32:40.186892       1 wrap.go:47] GET /metrics: (7.08321ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:33:10.187565       1 wrap.go:47] GET /metrics: (8.050161ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:33:10.187645       1 wrap.go:47] GET /metrics: (7.698514ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:33:40.186596       1 wrap.go:47] GET /metrics: (7.192113ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:33:40.186955       1 wrap.go:47] GET /metrics: (7.010545ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:34:03.279648       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.ServiceAccount total 0 items received\nI0403 11:34:10.186202       1 wrap.go:47] GET /metrics: (6.780414ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:34:10.186410       1 wrap.go:47] GET /metrics: (6.419309ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:34:40.185961       1 wrap.go:47] GET /metrics: (6.578432ms) 200 [Prometheus/2.7.2 10.128.2.20:36896]\nI0403 11:34:40.186117       1 wrap.go:47] GET /metrics: (6.117275ms) 200 [Prometheus/2.7.2 10.129.2.18:39026]\nI0403 11:34:54.706047       1 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch close - *v1.Service total 0 items received\nW0403 11:34:54.715660       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 21561 (27996)\n
Apr 03 11:35:05.452 E ns/openshift-authentication-operator pod/authentication-operator-74d7786755-2g5ts node/ip-10-0-146-242.us-west-1.compute.internal container=operator container exited with code 255 (Error): 858       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T11:04:16Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T11:22:28Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T11:10:05Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T11:00:21Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 11:28:28.076711       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"965d2109-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to ""\nW0403 11:28:47.974459       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22697 (25697)\nW0403 11:28:49.989681       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22697 (25708)\nW0403 11:30:22.072245       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0403 11:33:57.993180       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25691 (27408)\nW0403 11:34:54.711459       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 21561 (27996)\nI0403 11:34:56.414444       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:34:56.414654       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:35:37.953 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-95.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 11:35:38.528 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 11:35:46.913 E ns/openshift-monitoring pod/grafana-6569dc4bff-slg9l node/ip-10-0-128-95.us-west-1.compute.internal container=grafana-proxy container exited with code 1 (Error): 
Apr 03 11:36:48.276 E ns/openshift-machine-config-operator pod/machine-config-server-p5k9h node/ip-10-0-146-242.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 11:36:48.298 E ns/openshift-apiserver pod/apiserver-86j9f node/ip-10-0-146-242.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): Wrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:34:54.556937       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:34:54.569653       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:35:09.088627       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0403 11:35:09.088797       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:35:09.088804       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:35:09.088923       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:35:09.088954       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:35:09.106705       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 11:35:09.153484       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 11:35:09.153784       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 11:35:09.153886       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 11:35:09.153999       1 serving.go:88] Shutting down DynamicLoader\nI0403 11:35:09.153874       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 11:35:09.153895       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nE0403 11:35:09.154646       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nE0403 11:35:09.154863       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\n
Apr 03 11:36:48.515 E ns/openshift-cluster-node-tuning-operator pod/tuned-mw699 node/ip-10-0-146-242.us-west-1.compute.internal container=tuned container exited with code 255 (Error): 1:34:53.731570   58610 openshift-tuned.go:435] Pod (openshift-etcd/etcd-member-ip-10-0-146-242.us-west-1.compute.internal) labels changed node wide: true\nI0403 11:34:54.505556   58610 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:34:54.507152   58610 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:34:54.657141   58610 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:34:55.008616   58610 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-146-242.us-west-1.compute.internal) labels changed node wide: false\nI0403 11:34:55.399403   58610 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-6-ip-10-0-146-242.us-west-1.compute.internal) labels changed node wide: false\nI0403 11:34:55.597967   58610 openshift-tuned.go:435] Pod (openshift-cluster-version/version--bvrwk-8w4dg) labels changed node wide: true\nI0403 11:34:59.505546   58610 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:34:59.507050   58610 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:34:59.640975   58610 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:35:00.219803   58610 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-operator-778f89fb49-7b6mb) labels changed node wide: true\nI0403 11:35:04.505543   58610 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:35:04.506863   58610 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:35:04.617283   58610 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:35:05.625700   58610 openshift-tuned.go:435] Pod (openshift-authentication-operator/authentication-operator-74d7786755-2g5ts) labels changed node wide: true\n
Apr 03 11:36:50.624 E ns/openshift-monitoring pod/node-exporter-gg9ln node/ip-10-0-146-242.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:36:50.624 E ns/openshift-monitoring pod/node-exporter-gg9ln node/ip-10-0-146-242.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:36:56.905 E ns/openshift-image-registry pod/node-ca-5w8gt node/ip-10-0-146-242.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:36:57.706 E ns/openshift-controller-manager pod/controller-manager-qnnvp node/ip-10-0-146-242.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 11:36:58.106 E ns/openshift-dns pod/dns-default-fth4j node/ip-10-0-146-242.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 11:36:58.106 E ns/openshift-dns pod/dns-default-fth4j node/ip-10-0-146-242.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:25:01.962Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:25:01.962Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:25:01.962Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:36:58.507 E ns/openshift-sdn pod/ovs-8l5lr node/ip-10-0-146-242.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): r|INFO|br0<->unix#199: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:34:54.982Z|00150|connmgr|INFO|br0<->unix#202: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:55.017Z|00151|bridge|INFO|bridge br0: deleted interface vethf4842ce3 on port 17\n2020-04-03T11:34:55.150Z|00152|connmgr|INFO|br0<->unix#205: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:55.183Z|00153|bridge|INFO|bridge br0: deleted interface veth59e7c6fa on port 13\n2020-04-03T11:34:55.542Z|00154|connmgr|INFO|br0<->unix#208: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:55.568Z|00155|bridge|INFO|bridge br0: deleted interface vethc97f4c7e on port 3\n2020-04-03T11:34:56.070Z|00156|connmgr|INFO|br0<->unix#211: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:56.099Z|00157|bridge|INFO|bridge br0: deleted interface vethad4df352 on port 8\n2020-04-03T11:34:56.680Z|00158|connmgr|INFO|br0<->unix#214: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:56.702Z|00159|bridge|INFO|bridge br0: deleted interface veth19bb1abf on port 15\n2020-04-03T11:34:56.743Z|00160|connmgr|INFO|br0<->unix#217: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:56.778Z|00161|bridge|INFO|bridge br0: deleted interface veth953bb67e on port 14\n2020-04-03T11:34:57.298Z|00162|connmgr|INFO|br0<->unix#223: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:57.320Z|00163|bridge|INFO|bridge br0: deleted interface veth27e2fbf9 on port 16\n2020-04-03T11:34:59.489Z|00164|connmgr|INFO|br0<->unix#226: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:59.514Z|00165|bridge|INFO|bridge br0: deleted interface vethd9f1f95e on port 9\nTerminated\novs-vswitchd is not running.\n2020-04-03T11:35:09Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/ovsdb-server.67607.ctl: receive error: Connection reset by peer\n2020-04-03T11:35:09Z|00002|unixctl|WARN|error communicating with unix:/var/run/openvswitch/ovsdb-server.67607.ctl: Connection reset by peer\novs-appctl: /var/run/openvswitch/ovsdb-server.67607.ctl: transaction error (Connection reset by peer)\n
Apr 03 11:36:58.911 E ns/openshift-machine-config-operator pod/machine-config-daemon-ck27v node/ip-10-0-146-242.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 11:37:02.509 E ns/openshift-multus pod/multus-c6sw2 node/ip-10-0-146-242.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:37:06.957 E ns/openshift-monitoring pod/node-exporter-2qkpr node/ip-10-0-138-39.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:37:06.957 E ns/openshift-monitoring pod/node-exporter-2qkpr node/ip-10-0-138-39.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:37:06.971 E ns/openshift-cluster-node-tuning-operator pod/tuned-bjlmt node/ip-10-0-138-39.us-west-1.compute.internal container=tuned container exited with code 255 (Error): -tuned.go:326] Getting recommended profile...\nI0403 11:32:04.934420   29629 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:34:53.865778   29629 openshift-tuned.go:435] Pod (e2e-tests-service-upgrade-wwtgw/service-test-cmqtz) labels changed node wide: true\nI0403 11:34:54.821636   29629 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:34:54.832422   29629 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:34:55.069765   29629 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:34:58.440743   29629 openshift-tuned.go:435] Pod (openshift-ingress/router-default-558997fbf7-hz4sw) labels changed node wide: true\nI0403 11:34:59.816722   29629 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:34:59.818294   29629 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:34:59.929797   29629 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:35:00.436907   29629 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-adapter-7f686c7d8-kn6mv) labels changed node wide: true\nI0403 11:35:04.816749   29629 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:35:04.819147   29629 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:35:04.936785   29629 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:35:09.372071   29629 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 11:35:09.375373   29629 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:35:09.375391   29629 openshift-tuned.go:722] Increasing resyncPeriod to 118\nI0403 11:35:26.981711   29629 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 11:37:07.188 E ns/openshift-image-registry pod/node-ca-tp59r node/ip-10-0-138-39.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:37:12.094 E ns/openshift-dns pod/dns-default-dxnr8 node/ip-10-0-138-39.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:25:41.524Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:25:41.525Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:25:41.525Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:37:12.094 E ns/openshift-dns pod/dns-default-dxnr8 node/ip-10-0-138-39.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (87) - No such process\n
Apr 03 11:37:12.968 E ns/openshift-sdn pod/sdn-t7g2x node/ip-10-0-138-39.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 1936 for service "openshift-ingress/router-internal-default:metrics"\nI0403 11:35:19.363821   39909 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.128.2.25:443 10.131.0.26:443]\nI0403 11:35:19.363827   39909 roundrobin.go:240] Delete endpoint 10.128.2.25:443 for service "openshift-ingress/router-internal-default:https"\nI0403 11:35:19.520843   39909 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:35:19.520869   39909 proxier.go:346] userspace syncProxyRules took 53.079305ms\nI0403 11:35:19.697169   39909 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:35:19.697199   39909 proxier.go:346] userspace syncProxyRules took 53.355529ms\nI0403 11:35:22.444924   39909 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.50:8443 10.129.0.67:8443]\nI0403 11:35:22.444961   39909 roundrobin.go:240] Delete endpoint 10.128.0.50:8443 for service "openshift-console/console:https"\nI0403 11:35:22.614302   39909 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:35:22.614328   39909 proxier.go:346] userspace syncProxyRules took 53.531362ms\ninterrupt: Gracefully shutting down ...\nE0403 11:35:27.051642   39909 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 11:35:27.051774   39909 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:35:27.152559   39909 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:35:27.256498   39909 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:35:27.352131   39909 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 11:37:13.599 E ns/openshift-multus pod/multus-zhxmk node/ip-10-0-138-39.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:37:13.927 E ns/openshift-sdn pod/ovs-4gggf node/ip-10-0-138-39.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error):  in the last 0 s (4 deletes)\n2020-04-03T11:34:54.533Z|00127|bridge|INFO|bridge br0: deleted interface veth233fb87b on port 13\n2020-04-03T11:34:54.589Z|00128|connmgr|INFO|br0<->unix#146: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:54.648Z|00129|bridge|INFO|bridge br0: deleted interface veth74f544f5 on port 12\n2020-04-03T11:34:54.702Z|00130|connmgr|INFO|br0<->unix#149: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:54.750Z|00131|bridge|INFO|bridge br0: deleted interface vethcb8143ee on port 9\n2020-04-03T11:34:54.809Z|00132|connmgr|INFO|br0<->unix#152: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:54.859Z|00133|bridge|INFO|bridge br0: deleted interface veth2888fafc on port 7\n2020-04-03T11:34:54.950Z|00134|connmgr|INFO|br0<->unix#155: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:55.015Z|00135|bridge|INFO|bridge br0: deleted interface vetha08eed1f on port 10\n2020-04-03T11:34:55.083Z|00136|connmgr|INFO|br0<->unix#158: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:34:55.142Z|00137|bridge|INFO|bridge br0: deleted interface veth088d5e7d on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T11:34:54.987Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 353 seconds (most recently, 353 seconds ago) due to excessive rate\n2020-04-03T11:34:54.987Z|00018|jsonrpc|WARN|unix#125: receive error: Connection reset by peer\n2020-04-03T11:34:54.987Z|00019|reconnect|WARN|unix#125: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-04-03T11:35:24.173Z|00138|connmgr|INFO|br0<->unix#165: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:35:24.195Z|00139|bridge|INFO|bridge br0: deleted interface vethd87a7da3 on port 6\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-04-03T11:35:24.184Z|00020|jsonrpc|WARN|unix#137: receive error: Connection reset by peer\n2020-04-03T11:35:24.184Z|00021|reconnect|WARN|unix#137: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 11:37:14.265 E ns/openshift-machine-config-operator pod/machine-config-daemon-77ts4 node/ip-10-0-138-39.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 11:37:18.638 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6684474c66-7ws2l node/ip-10-0-131-184.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): pruner-6-ip-10-0-146-242.us-west-1.compute.internal -n openshift-kube-scheduler because it was missing\nI0403 11:37:04.412344       1 status_controller.go:164] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T11:03:20Z","message":"StaticPodsDegraded: nodes/ip-10-0-146-242.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-146-242.us-west-1.compute.internal container=\"scheduler\" is not ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T11:20:44Z","message":"Progressing: 3 nodes are at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T10:59:10Z","message":"Available: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T10:56:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 11:37:04.420872       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"8ef7031a-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "" to "StaticPodsDegraded: nodes/ip-10-0-146-242.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-146-242.us-west-1.compute.internal container=\"scheduler\" is not ready"\nW0403 11:37:14.647101       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.RoleBinding ended with: too old resource version: 18306 (29583)\nW0403 11:37:14.717054       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 18306 (29601)\nI0403 11:37:14.763656       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:37:14.763811       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:37:24.109 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 11:35:07.735857 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 11:35:07.736660 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 11:35:07.737259 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 11:35:07 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.242:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 11:35:08.750784 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 11:37:24.109 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): stream MsgApp v2 reader)\n2020-04-03 11:35:09.192226 E | rafthttp: failed to read 2955895d6efeb0c2 on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.192235 I | rafthttp: peer 2955895d6efeb0c2 became inactive (message send to peer failed)\n2020-04-03 11:35:09.192245 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream MsgApp v2 reader)\n2020-04-03 11:35:09.192320 W | rafthttp: lost the TCP streaming connection with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192352 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192363 I | rafthttp: stopped peer 2955895d6efeb0c2\n2020-04-03 11:35:09.192371 I | rafthttp: stopping peer 624bc80c8aeae76f...\n2020-04-03 11:35:09.192683 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 writer)\n2020-04-03 11:35:09.192697 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193084 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream Message writer)\n2020-04-03 11:35:09.193099 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193229 I | rafthttp: stopped HTTP pipelining with peer 624bc80c8aeae76f\n2020-04-03 11:35:09.193298 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193310 E | rafthttp: failed to read 624bc80c8aeae76f on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.193318 I | rafthttp: peer 624bc80c8aeae76f became inactive (message send to peer failed)\n2020-04-03 11:35:09.193327 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193388 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193403 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193413 I | rafthttp: stopped peer 624bc80c8aeae76f\n
Apr 03 11:37:26.307 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21605 (28000)\nW0403 11:34:54.881096       1 cacher.go:125] Terminating all watchers from cacher *scheduling.PriorityClass\nW0403 11:34:54.885020       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PriorityClass ended with: too old resource version: 21605 (28010)\nI0403 11:35:04.293688       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0403 11:35:05.927070       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0403 11:35:07.488093       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0403 11:35:09.156661       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 11:35:09.156693       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nE0403 11:35:09.158626       1 reflector.go:237] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nI0403 11:35:09.211168       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nE0403 11:35:09.214980       1 controller.go:114] loading OpenAPI spec for "v1.project.openshift.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'dial tcp 10.130.0.52:8443: connect: connection refused'\nTrying to reach: 'https://10.130.0.52:8443/openapi/v2', Header: map[]\nI0403 11:35:09.214999       1 controller.go:127] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue.\nI0403 11:35:09.267938       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 11:35:09.293610       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.131.184 10.0.135.18]\n
Apr 03 11:37:26.307 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 11:20:48.058484       1 observer_polling.go:106] Starting file observer\nI0403 11:20:48.063139       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:26:20.941438       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22018 (24431)\nW0403 11:32:02.948682       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24724 (26748)\n
Apr 03 11:37:26.706 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.898413       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.899358       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.899585       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.900512       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.748695       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 11:20:52.753337       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 11:29:40.763738       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21750 (26032)\n
Apr 03 11:37:26.706 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): suer="<self>" (2020-04-03 10:40:24 +0000 UTC to 2021-04-03 10:40:24 +0000 UTC (now=2020-04-03 11:17:28.277707282 +0000 UTC))\nI0403 11:17:28.282578       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 11:17:28.283900       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:00 +0000 UTC to 2022-04-03 10:57:01 +0000 UTC (now=2020-04-03 11:17:28.283874637 +0000 UTC))\nI0403 11:17:28.283934       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:17:28.283918498 +0000 UTC))\nI0403 11:17:28.283963       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 11:17:28.284170       1 serving.go:77] Starting DynamicLoader\nI0403 11:17:28.284671       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 11:20:44.103130       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.763046       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nI0403 11:35:09.145117       1 serving.go:88] Shutting down DynamicLoader\nE0403 11:35:09.145097       1 controllermanager.go:282] leaderelection lost\n
Apr 03 11:37:27.107 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): 18635       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.967958       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0403 11:20:59.594647       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:21:06.072909       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:34:53.850186       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:34:53.866360       1 factory.go:923] scheduler cache RemovePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not found in scheduler cache, so cannot be removed from it\nW0403 11:34:54.765607       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22363 (27997)\nW0403 11:34:54.768009       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 21604 (27997)\nW0403 11:34:54.768140       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21582 (27997)\nW0403 11:34:54.829078       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21605 (28000)\nE0403 11:35:09.211901       1 server.go:259] lost master\nI0403 11:35:09.212457       1 secure_serving.go:180] Stopped listening on [::]:10259\n
Apr 03 11:37:42.767 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-5c466b57cd-kdch8 node/ip-10-0-131-184.us-west-1.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:37:45.789 E ns/openshift-operator-lifecycle-manager pod/packageserver-6c46fcb4f7-n8l2v node/ip-10-0-131-184.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): el=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:34Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:34Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:37Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:37Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:39Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:39Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:40Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:40Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:42Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:42Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:43Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:43Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 11:37:47.508 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.898413       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.899358       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.899585       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.900512       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.748695       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 11:20:52.753337       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 11:29:40.763738       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21750 (26032)\n
Apr 03 11:37:47.508 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): suer="<self>" (2020-04-03 10:40:24 +0000 UTC to 2021-04-03 10:40:24 +0000 UTC (now=2020-04-03 11:17:28.277707282 +0000 UTC))\nI0403 11:17:28.282578       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 11:17:28.283900       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:00 +0000 UTC to 2022-04-03 10:57:01 +0000 UTC (now=2020-04-03 11:17:28.283874637 +0000 UTC))\nI0403 11:17:28.283934       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:17:28.283918498 +0000 UTC))\nI0403 11:17:28.283963       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 11:17:28.284170       1 serving.go:77] Starting DynamicLoader\nI0403 11:17:28.284671       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 11:20:44.103130       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.763046       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nI0403 11:35:09.145117       1 serving.go:88] Shutting down DynamicLoader\nE0403 11:35:09.145097       1 controllermanager.go:282] leaderelection lost\n
Apr 03 11:37:47.908 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 11:35:07.735857 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 11:35:07.736660 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 11:35:07.737259 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 11:35:07 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.242:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 11:35:08.750784 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 11:37:47.908 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): stream MsgApp v2 reader)\n2020-04-03 11:35:09.192226 E | rafthttp: failed to read 2955895d6efeb0c2 on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.192235 I | rafthttp: peer 2955895d6efeb0c2 became inactive (message send to peer failed)\n2020-04-03 11:35:09.192245 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream MsgApp v2 reader)\n2020-04-03 11:35:09.192320 W | rafthttp: lost the TCP streaming connection with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192352 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192363 I | rafthttp: stopped peer 2955895d6efeb0c2\n2020-04-03 11:35:09.192371 I | rafthttp: stopping peer 624bc80c8aeae76f...\n2020-04-03 11:35:09.192683 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 writer)\n2020-04-03 11:35:09.192697 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193084 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream Message writer)\n2020-04-03 11:35:09.193099 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193229 I | rafthttp: stopped HTTP pipelining with peer 624bc80c8aeae76f\n2020-04-03 11:35:09.193298 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193310 E | rafthttp: failed to read 624bc80c8aeae76f on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.193318 I | rafthttp: peer 624bc80c8aeae76f became inactive (message send to peer failed)\n2020-04-03 11:35:09.193327 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193388 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193403 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193413 I | rafthttp: stopped peer 624bc80c8aeae76f\n
Apr 03 11:37:50.799 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): 8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21605 (28000)\nW0403 11:34:54.881096       1 cacher.go:125] Terminating all watchers from cacher *scheduling.PriorityClass\nW0403 11:34:54.885020       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PriorityClass ended with: too old resource version: 21605 (28010)\nI0403 11:35:04.293688       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0403 11:35:05.927070       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0403 11:35:07.488093       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0403 11:35:09.156661       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0403 11:35:09.156693       1 log.go:172] suppressing panic for copyResponse error in test; copy error: unexpected EOF\nE0403 11:35:09.158626       1 reflector.go:237] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nI0403 11:35:09.211168       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nE0403 11:35:09.214980       1 controller.go:114] loading OpenAPI spec for "v1.project.openshift.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'dial tcp 10.130.0.52:8443: connect: connection refused'\nTrying to reach: 'https://10.130.0.52:8443/openapi/v2', Header: map[]\nI0403 11:35:09.214999       1 controller.go:127] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue.\nI0403 11:35:09.267938       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0403 11:35:09.293610       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.131.184 10.0.135.18]\n
Apr 03 11:37:50.799 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 11:20:48.058484       1 observer_polling.go:106] Starting file observer\nI0403 11:20:48.063139       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:26:20.941438       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22018 (24431)\nW0403 11:32:02.948682       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 24724 (26748)\n
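The recurring "watch of *v1.ConfigMap ended with: too old resource version" warnings come from client-go reflectors: when a watch resumes from a resourceVersion the apiserver has already compacted, the reflector logs the warning, re-lists, and re-watches on its own. A minimal sketch of the informer pattern that produces these messages, assuming in-cluster credentials and a hypothetical handler; this is illustrative, not the cert syncer's actual code.

// Minimal sketch of a namespaced ConfigMap informer. The reflector inside
// it is what emits the "too old resource version" warnings seen above when
// it has to re-list after falling behind; that retry is automatic.
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithNamespace("openshift-kube-apiserver"),
	)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(old, new interface{}) {
			cm := new.(*corev1.ConfigMap)
			log.Printf("observed update to configmap %s", cm.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	// Blocks until the initial list completes; later watch interruptions
	// are handled internally by the reflector.
	cache.WaitForCacheSync(stop, cmInformer.HasSynced)
	<-stop
}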
Apr 03 11:37:51.107 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): 18635       1 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.967958       1 leaderelection.go:270] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0403 11:20:59.594647       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:21:06.072909       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:34:53.850186       1 factory.go:832] scheduler cache UpdatePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not added to scheduler cache, so cannot be updated\nE0403 11:34:53.866360       1 factory.go:923] scheduler cache RemovePod failed: pod 2d2a7eb1-759d-11ea-ac74-06cae4339f2b is not found in scheduler cache, so cannot be removed from it\nW0403 11:34:54.765607       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StatefulSet ended with: too old resource version: 22363 (27997)\nW0403 11:34:54.768009       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 21604 (27997)\nW0403 11:34:54.768140       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 21582 (27997)\nW0403 11:34:54.829078       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 21605 (28000)\nE0403 11:35:09.211901       1 server.go:259] lost master\nI0403 11:35:09.212457       1 secure_serving.go:180] Stopped listening on [::]:10259\n
Apr 03 11:37:51.511 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.898413       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:45.899358       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.899585       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:46.900512       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.748695       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 11:20:52.753337       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 11:29:40.763738       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 21750 (26032)\n
Apr 03 11:37:51.511 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): suer="<self>" (2020-04-03 10:40:24 +0000 UTC to 2021-04-03 10:40:24 +0000 UTC (now=2020-04-03 11:17:28.277707282 +0000 UTC))\nI0403 11:17:28.282578       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 11:17:28.283900       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:00 +0000 UTC to 2022-04-03 10:57:01 +0000 UTC (now=2020-04-03 11:17:28.283874637 +0000 UTC))\nI0403 11:17:28.283934       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:17:28.283918498 +0000 UTC))\nI0403 11:17:28.283963       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 11:17:28.284170       1 serving.go:77] Starting DynamicLoader\nI0403 11:17:28.284671       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 11:20:44.103130       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0403 11:20:52.763046       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nI0403 11:35:09.145117       1 serving.go:88] Shutting down DynamicLoader\nE0403 11:35:09.145097       1 controllermanager.go:282] leaderelection lost\n
Apr 03 11:37:52.320 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 11:35:07.735857 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 11:35:07.736660 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 11:35:07.737259 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 11:35:07 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.146.242:9978: connect: connection refused"; Reconnecting to {etcd-1.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 11:35:08.750784 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 11:37:52.320 E ns/openshift-etcd pod/etcd-member-ip-10-0-146-242.us-west-1.compute.internal node/ip-10-0-146-242.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): stream MsgApp v2 reader)\n2020-04-03 11:35:09.192226 E | rafthttp: failed to read 2955895d6efeb0c2 on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.192235 I | rafthttp: peer 2955895d6efeb0c2 became inactive (message send to peer failed)\n2020-04-03 11:35:09.192245 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream MsgApp v2 reader)\n2020-04-03 11:35:09.192320 W | rafthttp: lost the TCP streaming connection with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192352 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:35:09.192363 I | rafthttp: stopped peer 2955895d6efeb0c2\n2020-04-03 11:35:09.192371 I | rafthttp: stopping peer 624bc80c8aeae76f...\n2020-04-03 11:35:09.192683 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 writer)\n2020-04-03 11:35:09.192697 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193084 I | rafthttp: closed the TCP streaming connection with peer 624bc80c8aeae76f (stream Message writer)\n2020-04-03 11:35:09.193099 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (writer)\n2020-04-03 11:35:09.193229 I | rafthttp: stopped HTTP pipelining with peer 624bc80c8aeae76f\n2020-04-03 11:35:09.193298 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193310 E | rafthttp: failed to read 624bc80c8aeae76f on stream MsgApp v2 (context canceled)\n2020-04-03 11:35:09.193318 I | rafthttp: peer 624bc80c8aeae76f became inactive (message send to peer failed)\n2020-04-03 11:35:09.193327 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:35:09.193388 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193403 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:35:09.193413 I | rafthttp: stopped peer 624bc80c8aeae76f\n
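The etcd-member logs above show the raft transport closing its peer streams as this member is restarted during the rollout. One way to confirm afterwards that the cluster regained all members and a leader is to query member status with the etcd v3 client; the following is a rough diagnostic sketch with a placeholder plaintext endpoint (the real cluster serves clients over TLS, using the certificates named in the etcd-metrics log), not anything the operator itself runs.

// Rough diagnostic sketch: list etcd members and report the current leader.
// Endpoint and TLS handling are placeholders for illustration only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	members, err := cli.MemberList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range members.Members {
		fmt.Printf("member %x %s %v\n", m.ID, m.Name, m.ClientURLs)
	}

	// Status of the endpoint we dialed tells us whether a leader exists.
	st, err := cli.Status(ctx, "http://127.0.0.1:2379")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("leader: %x, raft term: %d\n", st.Leader, st.RaftTerm)
}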
Apr 03 11:38:16.359 E ns/openshift-operator-lifecycle-manager pod/packageserver-6b946b798d-hvm9f node/ip-10-0-146-242.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:57Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:57Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:58Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:37:58Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:02Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:02Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:04Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:04Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:11Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:11Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:13Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:13Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\n
Apr 03 11:38:23.573 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-138-39.us-west-1.compute.internal container=prometheus-proxy container exited with code 1 (Error): 
Apr 03 11:38:53.528 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 03 11:39:18.618 E ns/openshift-operator-lifecycle-manager pod/packageserver-6c46fcb4f7-zb2p7 node/ip-10-0-135-18.us-west-1.compute.internal container=packageserver container exited with code 137 (Error):       1 log.go:172] http: TLS handshake error from 10.130.0.1:38900: remote error: tls: bad certificate\nI0403 11:38:47.463160       1 log.go:172] http: TLS handshake error from 10.130.0.1:38914: remote error: tls: bad certificate\nI0403 11:38:47.742165       1 log.go:172] http: TLS handshake error from 10.129.0.1:55480: remote error: tls: bad certificate\nI0403 11:38:47.903137       1 wrap.go:47] GET /: (4.334512ms) 200 [Go-http-client/2.0 10.129.0.1:48900]\nI0403 11:38:47.903576       1 wrap.go:47] GET /: (84.199µs) 200 [Go-http-client/2.0 10.130.0.1:33764]\nI0403 11:38:47.903812       1 wrap.go:47] GET /: (92.284µs) 200 [Go-http-client/2.0 10.130.0.1:33764]\nI0403 11:38:48.024642       1 secure_serving.go:156] Stopped listening on [::]:5443\ntime="2020-04-03T11:38:49Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:49Z" level=info msg="new grpc connection added" action="sync catalogsource" name=certified-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:49Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:49Z" level=info msg="new grpc connection added" action="sync catalogsource" name=redhat-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:57Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:57Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:58Z" level=info msg="attempting to add a new grpc connection" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\ntime="2020-04-03T11:38:58Z" level=info msg="new grpc connection added" action="sync catalogsource" name=community-operators namespace=openshift-marketplace\n
Apr 03 11:39:33.875 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): : type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-6b946b798d to 0\nI0403 11:37:45.715041       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-5cb95bbbc7, need 2, creating 1\nI0403 11:37:45.715645       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0d47e67e-759a-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"30643", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-5cb95bbbc7 to 2\nI0403 11:37:45.715685       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-6b946b798d", UID:"780590ed-759f-11ea-ac74-06cae4339f2b", APIVersion:"apps/v1", ResourceVersion:"30642", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-6b946b798d-hvm9f\nI0403 11:37:45.763765       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-5cb95bbbc7", UID:"78c93243-759f-11ea-ac74-06cae4339f2b", APIVersion:"apps/v1", ResourceVersion:"30646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-5cb95bbbc7-kcln2\nE0403 11:37:50.495031       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nE0403 11:37:50.496272       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0403 11:37:50.512480       1 controllermanager.go:282] leaderelection lost\nI0403 11:37:50.512510       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 11:39:33.875 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 11:18:05.242714       1 observer_polling.go:106] Starting file observer\nI0403 11:18:05.242854       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:24:20.260560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19360 (23523)\nW0403 11:34:01.266150       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23652 (27424)\n
Apr 03 11:39:40.525 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ionPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{}]'\nW0403 11:18:33.554661       1 authorization.go:47] Authorization is disabled\nW0403 11:18:33.554776       1 authentication.go:55] Authentication is disabled\nI0403 11:18:33.554791       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 11:18:33.556587       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:01 +0000 UTC to 2022-04-03 10:57:02 +0000 UTC (now=2020-04-03 11:18:33.556570421 +0000 UTC))\nI0403 11:18:33.556616       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:18:33.556606164 +0000 UTC))\nI0403 11:18:33.556636       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 11:18:33.559016       1 serving.go:77] Starting DynamicLoader\nI0403 11:18:34.458574       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 11:18:34.558796       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 11:18:34.558838       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 11:19:59.285911       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 11:37:14.620170       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18309 (29583)\nE0403 11:37:50.601441       1 server.go:259] lost master\nI0403 11:37:50.601786       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 11:39:42.723 E ns/openshift-apiserver pod/apiserver-2j6mb node/ip-10-0-131-184.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error):    1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:42.727001       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0403 11:37:42.748205       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0403 11:37:49.056196       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0403 11:37:50.485468       1 serving.go:88] Shutting down DynamicLoader\nI0403 11:37:50.486088       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 11:37:50.486273       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 11:37:50.486313       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 11:37:50.486355       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 11:37:50.489213       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.490741       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.490862       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.490987       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.491103       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.491205       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.491295       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:37:50.491392       1 secure_serving.go:180] Stopped listening on 0.0.0.0:8443\n
Apr 03 11:39:43.124 E ns/openshift-cluster-node-tuning-operator pod/tuned-ltpfr node/ip-10-0-131-184.us-west-1.compute.internal container=tuned container exited with code 255 (Error):  11:37:18.775704   51779 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:18.913035   51779 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:37:18.913692   51779 openshift-tuned.go:435] Pod (openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-6684474c66-7ws2l) labels changed node wide: true\nI0403 11:37:23.763293   51779 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:23.764938   51779 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:23.889106   51779 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:37:23.889707   51779 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-controllers-545f994475-wqppd) labels changed node wide: true\nI0403 11:37:28.763337   51779 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:28.764879   51779 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:28.901468   51779 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:37:46.762965   51779 openshift-tuned.go:435] Pod (openshift-operator-lifecycle-manager/packageserver-6c46fcb4f7-n8l2v) labels changed node wide: true\nI0403 11:37:48.763288   51779 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:48.764800   51779 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:48.884940   51779 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:37:50.154064   51779 openshift-tuned.go:435] Pod (openshift-machine-config-operator/etcd-quorum-guard-5c466b57cd-kdch8) labels changed node wide: true\nI0403 11:37:50.691461   51779 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 11:39:43.528 E ns/openshift-monitoring pod/node-exporter-r8b9f node/ip-10-0-131-184.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:39:43.528 E ns/openshift-monitoring pod/node-exporter-r8b9f node/ip-10-0-131-184.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:39:43.925 E ns/openshift-controller-manager pod/controller-manager-9td5p node/ip-10-0-131-184.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 11:39:44.723 E ns/openshift-machine-config-operator pod/machine-config-daemon-5ctd8 node/ip-10-0-131-184.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 11:39:46.722 E ns/openshift-machine-config-operator pod/machine-config-server-dqxmw node/ip-10-0-131-184.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 11:39:49.124 E ns/openshift-dns pod/dns-default-prnd7 node/ip-10-0-131-184.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 11:39:49.124 E ns/openshift-dns pod/dns-default-prnd7 node/ip-10-0-131-184.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:26:01.327Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:26:01.328Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:26:01.328Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:39:50.322 E ns/openshift-image-registry pod/node-ca-6l4tw node/ip-10-0-131-184.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:39:50.393 E ns/openshift-cluster-node-tuning-operator pod/tuned-rxhjf node/ip-10-0-148-57.us-west-1.compute.internal container=tuned container exited with code 255 (Error): ift-monitoring/prometheus-adapter-7f686c7d8-4jvsh) labels changed node wide: true\nI0403 11:34:54.151903   27588 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:34:54.155945   27588 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:34:54.294645   27588 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:37:33.447145   27588 openshift-tuned.go:435] Pod (e2e-tests-service-upgrade-wwtgw/service-test-hfg8b) labels changed node wide: true\nI0403 11:37:34.155607   27588 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:34.157756   27588 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:34.427160   27588 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:37:36.021721   27588 openshift-tuned.go:435] Pod (openshift-marketplace/redhat-operators-6cbf74f46-qhv97) labels changed node wide: true\nI0403 11:37:39.151040   27588 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:39.153748   27588 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:39.266034   27588 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:37:39.267228   27588 openshift-tuned.go:435] Pod (openshift-image-registry/image-registry-5d8f4d8455-mhpgk) labels changed node wide: true\nI0403 11:37:44.151038   27588 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:37:44.153197   27588 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:37:44.272254   27588 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:38:09.740432   27588 openshift-tuned.go:435] Pod (e2e-tests-sig-apps-job-upgrade-nwtbt/foo-grmw6) labels changed node wide: true\n
Apr 03 11:39:50.631 E ns/openshift-image-registry pod/node-ca-4kbx8 node/ip-10-0-148-57.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:39:54.124 E ns/openshift-sdn pod/sdn-g9724 node/ip-10-0-131-184.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 76955   66772 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.129.0.77:6443 10.130.0.76:6443]\nI0403 11:37:49.876993   66772 roundrobin.go:240] Delete endpoint 10.130.0.76:6443 for service "openshift-authentication/oauth-openshift:https"\nI0403 11:37:50.056840   66772 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:37:50.056886   66772 proxier.go:346] userspace syncProxyRules took 56.194306ms\nI0403 11:37:50.276660   66772 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.131.184:10257 10.0.135.18:10257 10.0.146.242:10257]\nI0403 11:37:50.276693   66772 roundrobin.go:240] Delete endpoint 10.0.146.242:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0403 11:37:50.488356   66772 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:37:50.488377   66772 proxier.go:346] userspace syncProxyRules took 92.496923ms\ninterrupt: Gracefully shutting down ...\nE0403 11:37:50.557584   66772 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 11:37:50.557709   66772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:37:50.660199   66772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:37:50.757980   66772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:37:50.859238   66772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:37:50.958877   66772 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 11:39:55.054 E ns/openshift-monitoring pod/node-exporter-bnrz9 node/ip-10-0-148-57.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:39:55.054 E ns/openshift-monitoring pod/node-exporter-bnrz9 node/ip-10-0-148-57.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:39:55.122 E ns/openshift-multus pod/multus-ckfwm node/ip-10-0-131-184.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:39:55.492 E ns/openshift-dns pod/dns-default-qk2mn node/ip-10-0-148-57.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 03 11:39:55.492 E ns/openshift-dns pod/dns-default-qk2mn node/ip-10-0-148-57.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:25:20.851Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:25:20.852Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:25:20.852Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nE0403 11:37:50.642115       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=18299&timeoutSeconds=329&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 11:37:50.642223       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=30768&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 11:37:50.642262       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to watch *v1.Service: Get https://172.30.0.1:443/api/v1/services?resourceVersion=30628&timeoutSeconds=346&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0403 11:37:51.650900       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: connection refused\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:39:55.928 E ns/openshift-multus pod/multus-9vl9s node/ip-10-0-148-57.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:39:56.326 E ns/openshift-sdn pod/sdn-cbgbg node/ip-10-0-148-57.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ithub.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 29583 (30777)\nW0403 11:37:50.943853   43890 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19011 (30777)\nW0403 11:37:50.954058   43890 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.EgressNetworkPolicy ended with: too old resource version: 18435 (30777)\nI0403 11:37:52.381162   43890 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.135.18:6443 10.0.146.242:6443]\nI0403 11:37:52.381205   43890 roundrobin.go:240] Delete endpoint 10.0.131.184:6443 for service "default/kubernetes:https"\nI0403 11:37:52.553460   43890 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:37:52.553484   43890 proxier.go:346] userspace syncProxyRules took 51.59886ms\ninterrupt: Gracefully shutting down ...\nE0403 11:38:10.592997   43890 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 11:38:10.593129   43890 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:38:10.724949   43890 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:38:10.795895   43890 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:38:10.895051   43890 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:38:10.993508   43890 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 11:39:56.727 E ns/openshift-sdn pod/ovs-xqgkd node/ip-10-0-148-57.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error):  on port 11\n2020-04-03T11:37:34.274Z|00139|connmgr|INFO|br0<->unix#190: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:37:34.318Z|00140|connmgr|INFO|br0<->unix#193: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.371Z|00141|bridge|INFO|bridge br0: deleted interface veth8a0be3a0 on port 15\n2020-04-03T11:37:34.431Z|00142|connmgr|INFO|br0<->unix#196: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:37:34.482Z|00143|connmgr|INFO|br0<->unix#199: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.532Z|00144|bridge|INFO|bridge br0: deleted interface vethc3496cd7 on port 16\n2020-04-03T11:37:34.589Z|00145|connmgr|INFO|br0<->unix#202: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.639Z|00146|bridge|INFO|bridge br0: deleted interface vetha4824514 on port 9\n2020-04-03T11:37:34.696Z|00147|connmgr|INFO|br0<->unix#205: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.736Z|00148|bridge|INFO|bridge br0: deleted interface veth6db8237e on port 12\n2020-04-03T11:37:34.808Z|00149|connmgr|INFO|br0<->unix#208: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.850Z|00150|bridge|INFO|bridge br0: deleted interface vethee314322 on port 8\n2020-04-03T11:37:34.912Z|00151|connmgr|INFO|br0<->unix#211: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:34.941Z|00152|bridge|INFO|bridge br0: deleted interface veth3f54b04c on port 7\n2020-04-03T11:37:35.006Z|00153|connmgr|INFO|br0<->unix#214: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:37:35.029Z|00154|bridge|INFO|bridge br0: deleted interface veth8c224076 on port 3\n2020-04-03T11:38:03.720Z|00155|connmgr|INFO|br0<->unix#221: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:38:03.742Z|00156|bridge|INFO|bridge br0: deleted interface veth8ebb7d9c on port 5\n2020-04-03T11:38:04.057Z|00157|connmgr|INFO|br0<->unix#224: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:38:04.077Z|00158|bridge|INFO|bridge br0: deleted interface veth4174da17 on port 4\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
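The sdn pods above fail their healthcheck once ovsdb-server has been stopped for the node restart: the check amounts to dialing the OVS database's unix socket. A minimal sketch of such a probe; the socket path is taken from the logs, while the retry loop and timings are illustrative assumptions, not openshift-sdn's actual backoff.

// Minimal sketch of an OVS reachability probe: try to dial the ovsdb-server
// unix socket, retry briefly, then give up. Retry policy is invented here.
package main

import (
	"log"
	"net"
	"time"
)

func ovsReachable(socketPath string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("unix", socketPath, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	const sock = "/var/run/openvswitch/db.sock"
	for i := 0; i < 5; i++ {
		if ovsReachable(sock, time.Second) {
			log.Printf("OVS server reachable at %s", sock)
			return
		}
		log.Printf("unable to reconnect to OVS server: dial unix %s failed", sock)
		time.Sleep(100 * time.Millisecond)
	}
	log.Fatal("giving up: ovsdb-server is not running")
}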
Apr 03 11:39:57.150 E ns/openshift-machine-config-operator pod/machine-config-daemon-sjwl5 node/ip-10-0-148-57.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 11:40:00.529 E kube-apiserver Kube API started failing: Get https://api.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
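Events like this one, and the "failed contacting the API" entries further down, come from the test monitor polling the external API endpoint with a short timeout while the masters reboot. An equivalent probe can be sketched as below, using current client-go signatures (the 4.1-era monitor differs in detail); the kubeconfig path and poll interval are assumptions.

// Sketch of an API-availability probe: GET the kube-system namespace with a
// short timeout, matching the ?timeout=3s request in the event above.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cfg.Timeout = 3 * time.Second
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		_, err := client.CoreV1().Namespaces().Get(ctx, "kube-system", metav1.GetOptions{})
		cancel()
		if err != nil {
			log.Printf("Kube API not responding: %v", err)
		}
		time.Sleep(time.Second)
	}
}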
Apr 03 11:40:07.081 E ns/openshift-multus pod/multus-9vl9s node/ip-10-0-148-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 03 11:40:17.451 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7b875bfcc6-7tntd node/ip-10-0-135-18.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): PIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.quota.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503" to "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503"\nI0403 11:39:28.253638       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"8ef9bd55-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.oauth.openshift.io is not ready: 503\nAvailable: v1.project.openshift.io is not ready: 503\nAvailable: v1.security.openshift.io is not ready: 503" to "Available: v1.build.openshift.io is not ready: 503\nAvailable: v1.image.openshift.io is not ready: 503\nAvailable: v1.quota.openshift.io is not ready: 503\nAvailable: v1.route.openshift.io is not ready: 503\nAvailable: v1.template.openshift.io is not ready: 503"\nI0403 11:39:28.630689       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"8ef9bd55-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0403 11:40:06.294889       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:40:06.294968       1 leaderelection.go:65] leaderelection lost\nF0403 11:40:06.305866       1 builder.go:217] server exited\n
Apr 03 11:40:21.454 E ns/openshift-service-ca-operator pod/service-ca-operator-77d958bfdd-f9vjs node/ip-10-0-135-18.us-west-1.compute.internal container=operator container exited with code 2 (Error): 
Apr 03 11:40:23.250 E ns/openshift-service-ca pod/service-serving-cert-signer-5d6d98c46b-n98jr node/ip-10-0-135-18.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:40:23.850 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7d44c8f476-ww8qz node/ip-10-0-135-18.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 03 11:40:25.051 E ns/openshift-authentication-operator pod/authentication-operator-74d7786755-n8pvq node/ip-10-0-135-18.us-west-1.compute.internal container=operator container exited with code 255 (Error):  UID:"965d2109-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)"\nI0403 11:39:31.255937       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-03T11:04:16Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-03T11:38:59Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-03T11:10:05Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-03T11:00:21Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0403 11:39:31.266150       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"965d2109-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)" to ""\nW0403 11:39:59.241389       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 19872 (32183)\nW0403 11:40:05.056491       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 22698 (32230)\nI0403 11:40:14.659416       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:40:14.659576       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:40:28.054 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-66ddd68666-77vb2 node/ip-10-0-135-18.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): lermanager.go:282] leaderelection lost\\nI0403 11:37:50.512510       1 serving.go:88] Shutting down DynamicLoader\\n\"\nStaticPodsDegraded: nodes/ip-10-0-131-184.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal container=\"kube-controller-manager-cert-syncer-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-131-184.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal container=\"kube-controller-manager-cert-syncer-6\" is terminated: \"Error\" - \"I0403 11:18:05.242714       1 observer_polling.go:106] Starting file observer\\nI0403 11:18:05.242854       1 certsync_controller.go:269] Starting CertSyncer\\nW0403 11:24:20.260560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19360 (23523)\\nW0403 11:34:01.266150       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23652 (27424)\\n\"" to "StaticPodsDegraded: nodes/ip-10-0-131-184.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal container=\"kube-controller-manager-6\" is not ready"\nI0403 11:40:05.401898       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8eda5aaf-7599-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-131-184.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal container=\"kube-controller-manager-6\" is not ready" to ""\nI0403 11:40:09.251489       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0403 11:40:09.251569       1 leaderelection.go:65] leaderelection lost\n
Apr 03 11:40:34.251 E ns/openshift-machine-config-operator pod/machine-config-controller-74c5445d6d-jf8hw node/ip-10-0-135-18.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 03 11:40:35.252 E ns/openshift-image-registry pod/cluster-image-registry-operator-7b99db7558-dvv27 node/ip-10-0-135-18.us-west-1.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:40:46.851 E ns/openshift-console pod/downloads-76bbf5f8bd-jm2fb node/ip-10-0-135-18.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Apr 03 11:40:48.790 E kube-apiserver failed contacting the API: Get https://api.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&resourceVersion=31858&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp 52.52.136.148:6443: connect: connection refused
Apr 03 11:40:48.792 E kube-apiserver failed contacting the API: Get https://api.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?resourceVersion=33321&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp 52.52.136.148:6443: connect: connection refused
Apr 03 11:41:04.536 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): : type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-6b946b798d to 0\nI0403 11:37:45.715041       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-5cb95bbbc7, need 2, creating 1\nI0403 11:37:45.715645       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0d47e67e-759a-11ea-8574-06a6b442c077", APIVersion:"apps/v1", ResourceVersion:"30643", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-5cb95bbbc7 to 2\nI0403 11:37:45.715685       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-6b946b798d", UID:"780590ed-759f-11ea-ac74-06cae4339f2b", APIVersion:"apps/v1", ResourceVersion:"30642", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-6b946b798d-hvm9f\nI0403 11:37:45.763765       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-5cb95bbbc7", UID:"78c93243-759f-11ea-ac74-06cae4339f2b", APIVersion:"apps/v1", ResourceVersion:"30646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-5cb95bbbc7-kcln2\nE0403 11:37:50.495031       1 reflector.go:237] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: the server is currently unable to handle the request (get rangeallocations.security.openshift.io)\nE0403 11:37:50.496272       1 reflector.go:237] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: the server is currently unable to handle the request (get brokertemplateinstances.template.openshift.io)\nE0403 11:37:50.512480       1 controllermanager.go:282] leaderelection lost\nI0403 11:37:50.512510       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 11:41:04.536 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 11:18:05.242714       1 observer_polling.go:106] Starting file observer\nI0403 11:18:05.242854       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:24:20.260560       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19360 (23523)\nW0403 11:34:01.266150       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23652 (27424)\n
Apr 03 11:41:04.936 E ns/openshift-etcd pod/etcd-member-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error):  (stream MsgApp v2 reader)\n2020-04-03 11:37:51.015095 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream MsgApp v2 reader)\n2020-04-03 11:37:51.015177 W | rafthttp: lost the TCP streaming connection with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:37:51.015190 E | rafthttp: failed to read 624bc80c8aeae76f on stream Message (context canceled)\n2020-04-03 11:37:51.015198 I | rafthttp: peer 624bc80c8aeae76f became inactive (message send to peer failed)\n2020-04-03 11:37:51.015206 I | rafthttp: stopped streaming with peer 624bc80c8aeae76f (stream Message reader)\n2020-04-03 11:37:51.015215 I | rafthttp: stopped peer 624bc80c8aeae76f\n2020-04-03 11:37:51.015222 I | rafthttp: stopping peer c9a123b23c13a550...\n2020-04-03 11:37:51.015619 I | rafthttp: closed the TCP streaming connection with peer c9a123b23c13a550 (stream MsgApp v2 writer)\n2020-04-03 11:37:51.015631 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (writer)\n2020-04-03 11:37:51.016029 I | rafthttp: closed the TCP streaming connection with peer c9a123b23c13a550 (stream Message writer)\n2020-04-03 11:37:51.016039 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (writer)\n2020-04-03 11:37:51.016181 I | rafthttp: stopped HTTP pipelining with peer c9a123b23c13a550\n2020-04-03 11:37:51.016253 W | rafthttp: lost the TCP streaming connection with peer c9a123b23c13a550 (stream MsgApp v2 reader)\n2020-04-03 11:37:51.016266 E | rafthttp: failed to read c9a123b23c13a550 on stream MsgApp v2 (context canceled)\n2020-04-03 11:37:51.016274 I | rafthttp: peer c9a123b23c13a550 became inactive (message send to peer failed)\n2020-04-03 11:37:51.016282 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (stream MsgApp v2 reader)\n2020-04-03 11:37:51.016350 W | rafthttp: lost the TCP streaming connection with peer c9a123b23c13a550 (stream Message reader)\n2020-04-03 11:37:51.016366 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (stream Message reader)\n2020-04-03 11:37:51.016375 I | rafthttp: stopped peer c9a123b23c13a550\n
Apr 03 11:41:04.936 E ns/openshift-etcd pod/etcd-member-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 11:37:26.093018 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 11:37:26.094103 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 11:37:26.095092 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 11:37:26 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.131.184:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 11:37:27.108582 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 11:41:05.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 11:17:08.155752       1 observer_polling.go:106] Starting file observer\nI0403 11:17:08.156775       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:23:45.739706       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22018 (23349)\nW0403 11:30:44.744988       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23505 (26402)\n
Apr 03 11:41:05.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492046       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492386       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492403       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492485       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492499       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492573       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492584       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492652       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.492662       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=5153, ErrCode=NO_ERROR, debug=""\nI0403 11:37:50.525144       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 03 11:41:05.735 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-184.us-west-1.compute.internal node/ip-10-0-131-184.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ionPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{}]'\nW0403 11:18:33.554661       1 authorization.go:47] Authorization is disabled\nW0403 11:18:33.554776       1 authentication.go:55] Authentication is disabled\nI0403 11:18:33.554791       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0403 11:18:33.556587       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:01 +0000 UTC to 2022-04-03 10:57:02 +0000 UTC (now=2020-04-03 11:18:33.556570421 +0000 UTC))\nI0403 11:18:33.556616       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:18:33.556606164 +0000 UTC))\nI0403 11:18:33.556636       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 11:18:33.559016       1 serving.go:77] Starting DynamicLoader\nI0403 11:18:34.458574       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 11:18:34.558796       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 11:18:34.558838       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nI0403 11:19:59.285911       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 11:37:14.620170       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 18309 (29583)\nE0403 11:37:50.601441       1 server.go:259] lost master\nI0403 11:37:50.601786       1 serving.go:88] Shutting down DynamicLoader\n
Apr 03 11:42:29.248 E ns/openshift-monitoring pod/prometheus-adapter-7f686c7d8-8m6pg node/ip-10-0-128-95.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 03 11:42:29.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 11:18:59.109346       1 certsync_controller.go:269] Starting CertSyncer\nI0403 11:18:59.109654       1 observer_polling.go:106] Starting file observer\nE0403 11:19:03.853779       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 11:19:03.853896       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 11:28:14.865369       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19360 (25508)\nW0403 11:36:07.871759       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25689 (28607)\n
Apr 03 11:42:29.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): icate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 10:40:25 +0000 UTC to 2021-04-03 10:40:25 +0000 UTC (now=2020-04-03 11:18:59.432819909 +0000 UTC))\nI0403 11:18:59.432857       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 10:40:24 +0000 UTC to 2021-04-03 10:40:24 +0000 UTC (now=2020-04-03 11:18:59.432843523 +0000 UTC))\nI0403 11:18:59.437530       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 11:18:59.438683       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:00 +0000 UTC to 2022-04-03 10:57:01 +0000 UTC (now=2020-04-03 11:18:59.438667352 +0000 UTC))\nI0403 11:18:59.438711       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:18:59.438702312 +0000 UTC))\nI0403 11:18:59.438737       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 11:18:59.438773       1 serving.go:77] Starting DynamicLoader\nI0403 11:18:59.439331       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 11:19:03.803611       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 11:40:48.057594       1 controllermanager.go:282] leaderelection lost\n
Apr 03 11:42:30.054 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:01 +0000 UTC to 2022-04-03 10:57:02 +0000 UTC (now=2020-04-03 11:19:45.240247095 +0000 UTC))\nI0403 11:19:45.241342       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:19:45.241313546 +0000 UTC))\nI0403 11:19:45.241378       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 11:19:45.247387       1 serving.go:77] Starting DynamicLoader\nI0403 11:19:46.143295       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 11:19:46.243579       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 11:19:46.243618       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 11:34:54.734476       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25021 (27996)\nW0403 11:37:15.025458       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19280 (29665)\nI0403 11:38:06.983635       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 11:39:59.115890       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19280 (32183)\nW0403 11:39:59.118521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19281 (32183)\nE0403 11:40:48.205066       1 server.go:259] lost master\n
Apr 03 11:42:30.846 E ns/openshift-operator-lifecycle-manager pod/olm-operators-78w2m node/ip-10-0-128-95.us-west-1.compute.internal container=configmap-registry-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Apr 03 11:42:38.437 E ns/openshift-apiserver pod/apiserver-8xpg9 node/ip-10-0-135-18.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): nn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.96.43:2379: connect: connection refused". Reconnecting...\nW0403 11:40:48.080895       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.96.43:2379: connect: connection refused". Reconnecting...\nI0403 11:40:48.090050       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:40:48.090224       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:40:48.348223       1 controller.go:87] Shutting down OpenAPI AggregationController\nI0403 11:40:48.349078       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0403 11:40:48.349817       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:40:48.354900       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0403 11:40:48.355055       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0403 11:40:48.356439       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0403 11:40:48.356460       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0403 11:40:48.356473       1 serving.go:88] Shutting down DynamicLoader\nI0403 11:40:48.365060       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:40:48.365215       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0403 11:40:48.365366       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 03 11:42:39.407 E ns/openshift-monitoring pod/node-exporter-s5zmb node/ip-10-0-135-18.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:42:39.407 E ns/openshift-monitoring pod/node-exporter-s5zmb node/ip-10-0-135-18.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:42:40.207 E ns/openshift-multus pod/multus-77lhx node/ip-10-0-135-18.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:42:42.605 E ns/openshift-cluster-node-tuning-operator pod/tuned-wkg9c node/ip-10-0-135-18.us-west-1.compute.internal container=tuned container exited with code 255 (Error): rue\nI0403 11:40:25.420007   68349 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:25.421812   68349 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:25.542178   68349 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:40:25.842950   68349 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-79644c545f-7kdzq) labels changed node wide: true\nI0403 11:40:30.420010   68349 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:30.421413   68349 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:30.576145   68349 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:40:30.576824   68349 openshift-tuned.go:435] Pod (openshift-machine-config-operator/machine-config-operator-778f89fb49-n5vsg) labels changed node wide: true\nI0403 11:40:35.420290   68349 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:35.421557   68349 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:35.552903   68349 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:40:35.553094   68349 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-7b99db7558-dvv27) labels changed node wide: true\nI0403 11:40:40.420029   68349 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:40.421564   68349 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:40.541279   68349 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0403 11:40:47.030013   68349 openshift-tuned.go:435] Pod (openshift-console/downloads-76bbf5f8bd-jm2fb) labels changed node wide: true\n
Apr 03 11:42:44.609 E ns/openshift-dns pod/dns-default-ldl49 node/ip-10-0-135-18.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:26:46.820Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:26:46.820Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:26:46.820Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nE0403 11:37:50.647937       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=30768&timeoutSeconds=544&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0403 11:37:50.897005       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 21585 (29614)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:42:44.609 E ns/openshift-dns pod/dns-default-ldl49 node/ip-10-0-135-18.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]\n
Apr 03 11:42:49.606 E ns/openshift-image-registry pod/node-ca-mb4wg node/ip-10-0-135-18.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:42:50.205 E ns/openshift-controller-manager pod/controller-manager-h7lsg node/ip-10-0-135-18.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Apr 03 11:42:50.606 E ns/openshift-sdn pod/sdn-controller-frntn node/ip-10-0-135-18.us-west-1.compute.internal container=sdn-controller container exited with code 255 (Error): n't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 11:37:47.603373       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0403 11:38:20.615427       1 memcache.go:141] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:38:23.687564       1 memcache.go:141] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:38:26.760522       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:38:29.832220       1 memcache.go:141] couldn't get resource list for project.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:38:32.903598       1 memcache.go:141] couldn't get resource list for security.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:39:03.623342       1 memcache.go:141] couldn't get resource list for apps.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:39:06.695368       1 memcache.go:141] couldn't get resource list for authorization.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:39:09.767639       1 memcache.go:141] couldn't get resource list for build.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:39:12.840418       1 memcache.go:141] couldn't get resource list for route.openshift.io/v1: the server is currently unable to handle the request\nE0403 11:39:15.911399       1 memcache.go:141] couldn't get resource list for security.openshift.io/v1: the server is currently unable to handle the request\nW0403 11:40:05.054029       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 24720 (32230)\n
Apr 03 11:42:51.404 E ns/openshift-machine-config-operator pod/machine-config-server-98bpv node/ip-10-0-135-18.us-west-1.compute.internal container=machine-config-server container exited with code 255 (Error): 
Apr 03 11:42:56.158 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-148-57.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): 
Apr 03 11:43:06.406 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): I0403 11:18:59.084498       1 observer_polling.go:106] Starting file observer\nI0403 11:18:59.085663       1 certsync_controller.go:269] Starting CertSyncer\nW0403 11:27:26.852808       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22018 (25056)\nW0403 11:32:56.858552       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25377 (27089)\n
Apr 03 11:43:06.406 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 255 (Error): r sent GOAWAY and closed the connection; LastStreamID=2927, ErrCode=NO_ERROR, debug=""\nI0403 11:40:48.359458       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=2927, ErrCode=NO_ERROR, debug=""\nI0403 11:40:48.359619       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2927, ErrCode=NO_ERROR, debug=""\nI0403 11:40:48.359696       1 log.go:172] suppressing panic for copyResponse error in test; copy error: http2: server sent GOAWAY and closed the connection; LastStreamID=2927, ErrCode=NO_ERROR, debug=""\nI0403 11:40:48.534986       1 healthz.go:184] [-]terminating failed: reason withheld\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/kube-apiserver-requestheader-reload ok\n[+]poststarthook/kube-apiserver-clientCA-reload ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]crd-informer-synced ok\n[+]crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n
Apr 03 11:43:06.809 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error): I0403 11:18:59.109346       1 certsync_controller.go:269] Starting CertSyncer\nI0403 11:18:59.109654       1 observer_polling.go:106] Starting file observer\nE0403 11:19:03.853779       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager"\nE0403 11:19:03.853896       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nW0403 11:28:14.865369       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19360 (25508)\nW0403 11:36:07.871759       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25689 (28607)\n
Apr 03 11:43:06.809 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): icate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-03 10:40:25 +0000 UTC to 2021-04-03 10:40:25 +0000 UTC (now=2020-04-03 11:18:59.432819909 +0000 UTC))\nI0403 11:18:59.432857       1 clientca.go:92] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-04-03 10:40:24 +0000 UTC to 2021-04-03 10:40:24 +0000 UTC (now=2020-04-03 11:18:59.432843523 +0000 UTC))\nI0403 11:18:59.437530       1 controllermanager.go:169] Version: v1.13.4+3040211\nI0403 11:18:59.438683       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:00 +0000 UTC to 2022-04-03 10:57:01 +0000 UTC (now=2020-04-03 11:18:59.438667352 +0000 UTC))\nI0403 11:18:59.438711       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:18:59.438702312 +0000 UTC))\nI0403 11:18:59.438737       1 secure_serving.go:136] Serving securely on [::]:10257\nI0403 11:18:59.438773       1 serving.go:77] Starting DynamicLoader\nI0403 11:18:59.439331       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0403 11:19:03.803611       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0403 11:40:48.057594       1 controllermanager.go:282] leaderelection lost\n
Apr 03 11:43:07.208 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=etcd-metrics container exited with code 255 (Error): 2020-04-03 11:40:18.166302 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-03 11:40:18.167546 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-03 11:40:18.168357 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/03 11:40:18 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.18:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-91k9431k-a4243.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2020-04-03 11:40:19.182381 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 03 11:43:07.208 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=etcd-member container exited with code 255 (Error): m MsgApp v2 (context canceled)\n2020-04-03 11:40:48.547776 I | rafthttp: peer 2955895d6efeb0c2 became inactive (message send to peer failed)\n2020-04-03 11:40:48.547786 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream MsgApp v2 reader)\n2020-04-03 11:40:48.547852 W | rafthttp: lost the TCP streaming connection with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:40:48.547868 I | rafthttp: stopped streaming with peer 2955895d6efeb0c2 (stream Message reader)\n2020-04-03 11:40:48.547878 I | rafthttp: stopped peer 2955895d6efeb0c2\n2020-04-03 11:40:48.547886 I | rafthttp: stopping peer c9a123b23c13a550...\n2020-04-03 11:40:48.548313 I | rafthttp: closed the TCP streaming connection with peer c9a123b23c13a550 (stream MsgApp v2 writer)\n2020-04-03 11:40:48.548326 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (writer)\n2020-04-03 11:40:48.548723 I | rafthttp: closed the TCP streaming connection with peer c9a123b23c13a550 (stream Message writer)\n2020-04-03 11:40:48.548735 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (writer)\n2020-04-03 11:40:48.548855 I | rafthttp: stopped HTTP pipelining with peer c9a123b23c13a550\n2020-04-03 11:40:48.548933 W | rafthttp: lost the TCP streaming connection with peer c9a123b23c13a550 (stream MsgApp v2 reader)\n2020-04-03 11:40:48.548953 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (stream MsgApp v2 reader)\n2020-04-03 11:40:48.549019 W | rafthttp: lost the TCP streaming connection with peer c9a123b23c13a550 (stream Message reader)\n2020-04-03 11:40:48.549031 E | rafthttp: failed to read c9a123b23c13a550 on stream Message (context canceled)\n2020-04-03 11:40:48.549040 I | rafthttp: peer c9a123b23c13a550 became inactive (message send to peer failed)\n2020-04-03 11:40:48.549049 I | rafthttp: stopped streaming with peer c9a123b23c13a550 (stream Message reader)\n2020-04-03 11:40:48.549058 I | rafthttp: stopped peer c9a123b23c13a550\n2020-04-03 11:40:48.575857 E | rafthttp: failed to find member c9a123b23c13a550 in cluster c3a84ca9804afc72\n
Apr 03 11:43:07.606 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:01 +0000 UTC to 2022-04-03 10:57:02 +0000 UTC (now=2020-04-03 11:19:45.240247095 +0000 UTC))\nI0403 11:19:45.241342       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:19:45.241313546 +0000 UTC))\nI0403 11:19:45.241378       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 11:19:45.247387       1 serving.go:77] Starting DynamicLoader\nI0403 11:19:46.143295       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 11:19:46.243579       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 11:19:46.243618       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 11:34:54.734476       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25021 (27996)\nW0403 11:37:15.025458       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19280 (29665)\nI0403 11:38:06.983635       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 11:39:59.115890       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19280 (32183)\nW0403 11:39:59.118521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19281 (32183)\nE0403 11:40:48.205066       1 server.go:259] lost master\n
Apr 03 11:43:12.406 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-18.us-west-1.compute.internal node/ip-10-0-135-18.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): ces/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1585911405" (2020-04-03 10:57:01 +0000 UTC to 2022-04-03 10:57:02 +0000 UTC (now=2020-04-03 11:19:45.240247095 +0000 UTC))\nI0403 11:19:45.241342       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1585911405" [] issuer="<self>" (2020-04-03 10:56:44 +0000 UTC to 2021-04-03 10:56:45 +0000 UTC (now=2020-04-03 11:19:45.241313546 +0000 UTC))\nI0403 11:19:45.241378       1 secure_serving.go:136] Serving securely on [::]:10259\nI0403 11:19:45.247387       1 serving.go:77] Starting DynamicLoader\nI0403 11:19:46.143295       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0403 11:19:46.243579       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0403 11:19:46.243618       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0403 11:34:54.734476       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 25021 (27996)\nW0403 11:37:15.025458       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19280 (29665)\nI0403 11:38:06.983635       1 leaderelection.go:214] successfully acquired lease openshift-kube-scheduler/kube-scheduler\nW0403 11:39:59.115890       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.PersistentVolume ended with: too old resource version: 19280 (32183)\nW0403 11:39:59.118521       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.StorageClass ended with: too old resource version: 19281 (32183)\nE0403 11:40:48.205066       1 server.go:259] lost master\n
Apr 03 11:43:12.413 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-machine-config-operator/etcd-quorum-guard" (315 of 350)
Apr 03 11:44:39.102 E ns/openshift-monitoring pod/node-exporter-h5tc4 node/ip-10-0-128-95.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 03 11:44:39.102 E ns/openshift-monitoring pod/node-exporter-h5tc4 node/ip-10-0-128-95.us-west-1.compute.internal container=node-exporter container exited with code 255 (Error): 
Apr 03 11:44:39.116 E ns/openshift-image-registry pod/node-ca-fvbln node/ip-10-0-128-95.us-west-1.compute.internal container=node-ca container exited with code 255 (Error): 
Apr 03 11:44:39.334 E ns/openshift-cluster-node-tuning-operator pod/tuned-4t94g node/ip-10-0-128-95.us-west-1.compute.internal container=tuned container exited with code 255 (Error): 348   48888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:37:50.644120   48888 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 11:37:50.647102   48888 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:37:50.647121   48888 openshift-tuned.go:722] Increasing resyncPeriod to 138\nI0403 11:40:08.647342   48888 openshift-tuned.go:187] Extracting tuned profiles\nI0403 11:40:08.649427   48888 openshift-tuned.go:623] Resync period to pull node/pod labels: 138 [s]\nI0403 11:40:08.666170   48888 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0403 11:40:13.662614   48888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:13.664227   48888 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0403 11:40:13.665332   48888 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:13.778699   48888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:40:14.411837   48888 openshift-tuned.go:435] Pod (openshift-console/downloads-76bbf5f8bd-gj9jn) labels changed node wide: true\nI0403 11:40:18.662622   48888 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0403 11:40:18.664145   48888 openshift-tuned.go:326] Getting recommended profile...\nI0403 11:40:18.778618   48888 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0403 11:40:48.582400   48888 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0403 11:40:48.585147   48888 openshift-tuned.go:720] Pod event watch channel closed.\nI0403 11:40:48.585173   48888 openshift-tuned.go:722] Increasing resyncPeriod to 276\nI0403 11:42:59.519602   48888 openshift-tuned.go:126] Received signal: terminated\n
Apr 03 11:44:43.482 E ns/openshift-multus pod/multus-5llwx node/ip-10-0-128-95.us-west-1.compute.internal container=kube-multus container exited with code 255 (Error): 
Apr 03 11:44:43.853 E ns/openshift-sdn pod/sdn-29brr node/ip-10-0-128-95.us-west-1.compute.internal container=sdn container exited with code 255 (Error): web"\nI0403 11:42:58.731235   66206 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-operated:mesh to [10.128.2.37:6783 10.129.2.33:6783]\nI0403 11:42:58.731248   66206 roundrobin.go:240] Delete endpoint 10.128.2.37:6783 for service "openshift-monitoring/alertmanager-operated:mesh"\nI0403 11:42:58.955640   66206 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:42:58.955668   66206 proxier.go:346] userspace syncProxyRules took 120.119084ms\nI0403 11:42:59.111672   66206 proxier.go:367] userspace proxy: processing 0 service events\nI0403 11:42:59.111697   66206 proxier.go:346] userspace syncProxyRules took 51.027583ms\nE0403 11:42:59.487427   66206 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0403 11:42:59.487547   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:42:59.588165   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\nI0403 11:42:59.687889   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:42:59.790234   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:42:59.888248   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:42:59.988568   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0403 11:43:00.087878   66206 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Apr 03 11:44:44.220 E ns/openshift-dns pod/dns-default-kzfd9 node/ip-10-0-128-95.us-west-1.compute.internal container=dns container exited with code 255 (Error): .:5353\n2020-04-03T11:26:26.724Z [INFO] CoreDNS-1.3.1\n2020-04-03T11:26:26.724Z [INFO] linux/amd64, go1.10.8, \nCoreDNS-1.3.1\nlinux/amd64, go1.10.8, \n2020-04-03T11:26:26.724Z [INFO] plugin/reload: Running configuration MD5 = 6dfacbfa08660b953611ad25ea5c84fc\nW0403 11:34:54.710755       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 25021 (27996)\nW0403 11:37:14.792965       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 19280 (29614)\nW0403 11:40:48.656015       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: watch of *v1.Namespace ended with: too old resource version: 29614 (31874)\nW0403 11:40:48.678857       1 reflector.go:341] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: watch of *v1.Service ended with: too old resource version: 30628 (31876)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Apr 03 11:44:44.220 E ns/openshift-dns pod/dns-default-kzfd9 node/ip-10-0-128-95.us-west-1.compute.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (144) - No such process\n
Apr 03 11:44:44.589 E ns/openshift-sdn pod/ovs-v5tw2 node/ip-10-0-128-95.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): ort 7\n2020-04-03T11:42:28.041Z|00168|connmgr|INFO|br0<->unix#299: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:28.089Z|00169|connmgr|INFO|br0<->unix#302: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:28.124Z|00170|bridge|INFO|bridge br0: deleted interface vethf307152e on port 17\n2020-04-03T11:42:28.171Z|00171|connmgr|INFO|br0<->unix#305: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:28.219Z|00172|connmgr|INFO|br0<->unix#308: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:28.257Z|00173|bridge|INFO|bridge br0: deleted interface vethda9d7fbd on port 18\n2020-04-03T11:42:28.342Z|00174|connmgr|INFO|br0<->unix#311: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:28.382Z|00175|bridge|INFO|bridge br0: deleted interface veth3491ca46 on port 4\n2020-04-03T11:42:28.445Z|00176|connmgr|INFO|br0<->unix#314: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:28.483Z|00177|connmgr|INFO|br0<->unix#317: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:28.509Z|00178|bridge|INFO|bridge br0: deleted interface veth35aa43b1 on port 19\n2020-04-03T11:42:37.850Z|00179|bridge|INFO|bridge br0: added interface veth51368786 on port 21\n2020-04-03T11:42:37.878Z|00180|connmgr|INFO|br0<->unix#320: 5 flow_mods in the last 0 s (5 adds)\n2020-04-03T11:42:37.914Z|00181|connmgr|INFO|br0<->unix#323: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:56.720Z|00182|connmgr|INFO|br0<->unix#329: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:56.760Z|00183|connmgr|INFO|br0<->unix#332: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:56.786Z|00184|bridge|INFO|bridge br0: deleted interface veth3bf94cf3 on port 20\n2020-04-03T11:42:56.856Z|00185|connmgr|INFO|br0<->unix#335: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-03T11:42:56.884Z|00186|connmgr|INFO|br0<->unix#338: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-03T11:42:56.905Z|00187|bridge|INFO|bridge br0: deleted interface vethfa878b95 on port 14\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Apr 03 11:44:44.956 E ns/openshift-machine-config-operator pod/machine-config-daemon-zh9q4 node/ip-10-0-128-95.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 03 11:44:45.326 E ns/openshift-operator-lifecycle-manager pod/olm-operators-mt6ls node/ip-10-0-128-95.us-west-1.compute.internal container=configmap-registry-server container exited with code 255 (Error): 
Apr 03 11:44:47.731 E ns/openshift-multus pod/multus-5llwx node/ip-10-0-128-95.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending