Result: FAILURE
Tests: 5 failed / 20 succeeded
Started: 2020-02-27 19:20
Elapsed: 2h13m
Work namespace: ci-op-zfmbybf1
Refs: openshift-4.5:29304dc2, 34:7800a949
Pod: 15505afc-5996-11ea-a557-0a58ac107de1
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 1h17m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 33s of 1h15m20s (1%):
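
For context only: the 1% figure appears to be the unreachable time divided by the length of the observation window, rounded to a whole percent (the rounding behaviour is an assumption; the raw numbers come from the line above). A minimal sketch of that arithmetic in Go:

package main

import "fmt"

func main() {
	// From the summary line above: unreachable for at least 33s out of a 1h15m20s window.
	const unavailableSeconds = 33.0
	const windowSeconds = 1*3600 + 15*60 + 20 // 4520 seconds
	pct := unavailableSeconds / windowSeconds * 100
	fmt.Printf("%.2f%% unavailable\n", pct) // ~0.73%, shown as 1% in the report
}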

Feb 27 20:05:59.914 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:05:59.914 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests on reused connections
Feb 27 20:06:00.893 - 999ms E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests on reused connections
Feb 27 20:06:00.893 - 999ms E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:01.964 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests on reused connections
Feb 27 20:06:01.965 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:02.931 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:03.893 - 4s    E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:08.969 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:09.901 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:10.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:10.951 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:13.907 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:14.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:14.954 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:16.960 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:17.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:17.949 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:19.944 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:20.894 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:21.016 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:23.988 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:24.893 - 999ms E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:26.125 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:27.021 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:27.893 - 2s    E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:30.051 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:34.014 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:34.893 - 999ms E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:36.082 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:44.003 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:44.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:44.999 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:46.941 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:47.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:48.044 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:06:51.010 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:06:51.893 - 1s    E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:06:53.129 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
Feb 27 20:07:01.090 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 20:07:01.893 E ns/e2e-k8s-service-lb-available-2273 svc/service-test Service is not responding to GET requests over new connections
Feb 27 20:07:02.152 I ns/e2e-k8s-service-lb-available-2273 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1582838539.xml



Cluster upgrade Cluster frontend ingress remain available 1h16m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 20s of 1h16m43s (0%):
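
(The same arithmetic appears to apply here: 1h16m43s is 4603s, and 20s/4603s is roughly 0.43%, which shows as the 0% above once rounded to a whole percent.)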

Feb 27 20:11:52.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:11:52.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 20:11:52.882 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 20:11:52.883 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 20:12:04.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 20:12:04.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:12:04.893 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 20:12:04.893 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 20:14:05.790 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 20:14:05.904 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 20:15:07.841 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 20:15:07.841 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 20:15:07.841 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:15:07.842 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 27 20:15:08.790 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 27 20:15:08.790 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Feb 27 20:15:08.790 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 27 20:15:08.790 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Feb 27 20:15:08.870 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 20:15:08.870 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 20:15:08.914 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 20:15:08.914 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 27 20:15:09.841 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 27 20:15:10.790 - 1s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 27 20:15:11.915 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 27 20:23:44.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 20:23:44.791 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:23:44.908 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 20:23:44.917 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 20:26:39.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:26:39.924 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 20:31:17.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 20:31:17.790 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 20:31:17.897 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 20:31:17.910 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1582838539.xml



openshift-tests Monitor cluster while tests execute 1h17m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
246 error level events were detected during this test run:

Feb 27 20:06:08.662 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-655d47cd5c" has successfully progressed.
Feb 27 20:06:27.754 E ns/openshift-etcd-operator pod/etcd-operator-6c548ddf8f-rn8t4 node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): 06:27.151038       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0227 20:06:27.151052       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0227 20:06:27.151062       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0227 20:06:27.151083       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0227 20:06:27.151091       1 base_controller.go:39] All RevisionController workers have been terminated\nI0227 20:06:27.151103       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0227 20:06:27.151108       1 base_controller.go:39] All PruneController workers have been terminated\nI0227 20:06:27.151123       1 base_controller.go:49] Shutting down worker of  controller ...\nI0227 20:06:27.151129       1 base_controller.go:39] All  workers have been terminated\nI0227 20:06:27.151143       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0227 20:06:27.151148       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0227 20:06:27.151158       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0227 20:06:27.151164       1 base_controller.go:39] All InstallerController workers have been terminated\nI0227 20:06:27.151174       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0227 20:06:27.151179       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nI0227 20:06:27.151189       1 base_controller.go:49] Shutting down worker of  controller ...\nI0227 20:06:27.151195       1 base_controller.go:39] All  workers have been terminated\nI0227 20:06:27.151204       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0227 20:06:27.151209       1 base_controller.go:39] All NodeController workers have been terminated\nF0227 20:06:27.151718       1 builder.go:209] server exited\n
Feb 27 20:06:42.822 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5f5496b6b5-xddwd node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): :281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"c410f6c9-717e-40f7-b546-d5141fc469dc", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-153-35.us-east-2.compute.internal" from revision 2 to 7 because static pod is ready\nI0227 20:00:26.415665       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"c410f6c9-717e-40f7-b546-d5141fc469dc", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7"\nI0227 20:00:28.597192       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"c410f6c9-717e-40f7-b546-d5141fc469dc", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0227 20:00:36.597074       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"c410f6c9-717e-40f7-b546-d5141fc469dc", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-153-35.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nI0227 20:06:42.237748       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:06:42.238530       1 builder.go:209] server exited\n
Feb 27 20:06:55.869 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-757c8d57c5-76n2l node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): 1c-435b-adc8-f449c45514c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-8-ip-10-0-136-188.us-east-2.compute.internal -n openshift-kube-controller-manager because it was missing\nI0227 20:06:55.290320       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:06:55.290952       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 20:06:55.292316       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:06:55.292395       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 20:06:55.292452       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 20:06:55.292508       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 20:06:55.292579       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 20:06:55.292631       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:06:55.292934       1 base_controller.go:74] Shutting down PruneController ...\nI0227 20:06:55.292991       1 base_controller.go:74] Shutting down  ...\nI0227 20:06:55.293043       1 base_controller.go:74] Shutting down NodeController ...\nI0227 20:06:55.293094       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 20:06:55.293142       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0227 20:06:55.293224       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 20:06:55.293272       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0227 20:06:55.293320       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0227 20:06:55.293365       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0227 20:06:55.293410       1 targetconfigcontroller.go:613] Shutting down TargetConfigController\nF0227 20:06:55.293491       1 builder.go:209] server exited\n
Feb 27 20:07:00.891 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-576c68fb99-nx472 node/ip-10-0-136-188.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): r of NodeController controller ...\nI0227 20:06:59.940666       1 base_controller.go:39] All NodeController workers have been terminated\nI0227 20:06:59.935204       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0227 20:06:59.940678       1 base_controller.go:39] All PruneController workers have been terminated\nI0227 20:06:59.935232       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 20:06:59.935245       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:06:59.935254       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 20:06:59.935266       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 20:06:59.935286       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0227 20:06:59.940750       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0227 20:06:59.935302       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0227 20:06:59.940767       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nI0227 20:06:59.935321       1 base_controller.go:49] Shutting down worker of  controller ...\nI0227 20:06:59.940780       1 base_controller.go:39] All  workers have been terminated\nI0227 20:06:59.935335       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 20:06:59.935354       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0227 20:06:59.940796       1 base_controller.go:39] All InstallerController workers have been terminated\nI0227 20:06:59.935386       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0227 20:06:59.935404       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nF0227 20:06:59.935431       1 builder.go:243] stopped\n
Feb 27 20:07:28.026 E ns/openshift-machine-api pod/machine-api-operator-7d5dd8dd5-d8cf4 node/ip-10-0-136-188.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 20:09:37.193 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7dcd97ccc-64xfw node/ip-10-0-136-188.us-east-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-02-27T19:46:31Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-02-27T19:46:31Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-02-27T19:46:31Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-02-27T19:46:29Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-02-27-192049"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0227 19:51:58.982193       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"22396089-704e-4922-8f03-b2585a348a20", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0227 20:09:36.198850       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:09:36.198909       1 leaderelection.go:66] leaderelection lost\n
Feb 27 20:10:59.558 E ns/openshift-cluster-machine-approver pod/machine-approver-857bf95d65-rs6rj node/ip-10-0-136-188.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): :machine-approver" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]\nI0227 19:54:58.975892       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 19:54:58.976536       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=15637&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0227 19:54:59.977362       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nI0227 19:56:21.294385       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 19:56:21.317919       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=17053&timeoutSeconds=346&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0227 19:56:22.318655       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nW0227 20:07:47.349708       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 18192 (23353)\n
Feb 27 20:11:10.724 E ns/openshift-operator-lifecycle-manager pod/olm-operator-5c98bbb8ff-ntcj8 node/ip-10-0-136-188.us-east-2.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:15.745 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6dfbcc6fcb-4vnvn node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error):  20:11:07.315861       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"87128941-1d23-4af1-942f-c2720cf0181f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObservedConfigChanged' Writing updated observed config:   map[string]interface{}{\n  	"build": map[string]interface{}{\n  		"buildDefaults": map[string]interface{}{"resources": map[string]interface{}{}},\n- 		"imageTemplateFormat": map[string]interface{}{\n- 			"format": string("registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"),\n- 		},\n+ 		"imageTemplateFormat": map[string]interface{}{\n+ 			"format": string("registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"),\n+ 		},\n  	},\n- 	"deployer": map[string]interface{}{\n- 		"imageTemplateFormat": map[string]interface{}{\n- 			"format": string("registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"),\n- 		},\n- 	},\n+ 	"deployer": map[string]interface{}{\n+ 		"imageTemplateFormat": map[string]interface{}{\n+ 			"format": string("registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"),\n+ 		},\n+ 	},\n  	"dockerPullSecret": map[string]interface{}{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")},\n  	"ingress":          map[string]interface{}{"ingressIPNetworkCIDR": string("")},\n  }\nI0227 20:11:10.966600       1 httplog.go:90] GET /metrics: (3.779243ms) 200 [Prometheus/2.15.2 10.129.2.11:53938]\nI0227 20:11:14.819862       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:11:14.820012       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 20:11:14.820158       1 builder.go:210] server exited\n
Feb 27 20:11:16.660 E ns/openshift-monitoring pod/node-exporter-mnr58 node/ip-10-0-151-30.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:11:21.169 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6f78nj8qg node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:24.131 E ns/openshift-monitoring pod/openshift-state-metrics-7b467f86cf-dn86r node/ip-10-0-140-113.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 27 20:11:31.000 E ns/openshift-monitoring pod/telemeter-client-684dd997b8-hszvs node/ip-10-0-141-156.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 27 20:11:31.000 E ns/openshift-monitoring pod/telemeter-client-684dd997b8-hszvs node/ip-10-0-141-156.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Feb 27 20:11:32.206 E ns/openshift-operator-lifecycle-manager pod/packageserver-755b75c58f-w8ts9 node/ip-10-0-130-82.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:36.082 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-30.us-east-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:36.082 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-30.us-east-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:36.082 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-30.us-east-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:47.117 E ns/openshift-controller-manager pod/controller-manager-zlf7b node/ip-10-0-153-35.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): ream error: stream ID 95; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:17.680454       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 93; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:17.680529       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 5; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:29.118128       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 107; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:29.118394       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 91; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:29.120957       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 41; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:09:29.121134       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 9; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 20:11:47.353 E ns/openshift-controller-manager pod/controller-manager-ckgbk node/ip-10-0-130-82.us-east-2.compute.internal container=controller-manager container exited with code 137 (OOMKilled): I0227 19:51:25.511697       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 19:51:25.513292       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 19:51:25.513375       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 19:51:25.513455       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 19:51:25.513516       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 20:11:49.919 E ns/openshift-monitoring pod/prometheus-adapter-6486d868f4-449xv node/ip-10-0-151-30.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 19:55:49.939269       1 adapter.go:93] successfully using in-cluster auth\nI0227 19:55:51.616149       1 secure_serving.go:116] Serving securely on [::]:6443\nW0227 20:00:07.419710       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: too old resource version: 19028 (19249)\n
Feb 27 20:11:50.304 E ns/openshift-monitoring pod/node-exporter-hlcr9 node/ip-10-0-130-82.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:11:50.347 E ns/openshift-operator-lifecycle-manager pod/packageserver-79db888957-2fdsh node/ip-10-0-130-82.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:52.055 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-156.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 19:56:11 Watching directory: "/etc/alertmanager/config"\n
Feb 27 20:11:52.055 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-156.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 19:56:11 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 19:56:11 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 19:56:11 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 19:56:11 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 19:56:11 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 19:56:11 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 19:56:11 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 19:56:11.286175       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 19:56:11 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 20:11:54.062 E ns/openshift-monitoring pod/prometheus-adapter-6486d868f4-fj9tm node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 19:55:50.198135       1 adapter.go:93] successfully using in-cluster auth\nI0227 19:55:50.768961       1 secure_serving.go:116] Serving securely on [::]:6443\nW0227 20:00:07.939730       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: too old resource version: 19028 (19249)\n
Feb 27 20:11:56.026 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-30.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T20:11:52.851Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T20:11:52.853Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T20:11:52.856Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T20:11:52.857Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T20:11:52.857Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T20:11:52.857Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T20:11:52.858Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T20:11:52.858Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T20:11:52.858Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T20:11:52.860Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T20:11:52.860Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:11:57.250 E ns/openshift-service-ca pod/service-ca-554b97dfcf-qstmp node/ip-10-0-153-35.us-east-2.compute.internal container=service-ca-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:58.041 E ns/openshift-monitoring pod/thanos-querier-766f4d5469-69rzz node/ip-10-0-151-30.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:58.041 E ns/openshift-monitoring pod/thanos-querier-766f4d5469-69rzz node/ip-10-0-151-30.us-east-2.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:58.041 E ns/openshift-monitoring pod/thanos-querier-766f4d5469-69rzz node/ip-10-0-151-30.us-east-2.compute.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:11:58.041 E ns/openshift-monitoring pod/thanos-querier-766f4d5469-69rzz node/ip-10-0-151-30.us-east-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:12:02.255 E ns/openshift-monitoring pod/node-exporter-55xw5 node/ip-10-0-140-113.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:10:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:12:04.264 E ns/openshift-ingress pod/router-default-7b7c5c7c87-mjp6q node/ip-10-0-140-113.us-east-2.compute.internal container=router container exited with code 2 (Error): p://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nE0227 20:11:14.032234       1 limiter.go:140] error reloading router: wait: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0227 20:11:19.061440       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:24.031324       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:29.036285       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:34.044787       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:39.049970       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:44.035531       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:49.038223       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:54.050998       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:59.045893       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 27 20:12:06.336 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-67c5874fd7" is progressing.
Feb 27 20:12:09.248 E ns/openshift-monitoring pod/grafana-868dc858fc-bkjhm node/ip-10-0-141-156.us-east-2.compute.internal container=grafana container exited with code 1 (Error): 
Feb 27 20:12:09.248 E ns/openshift-monitoring pod/grafana-868dc858fc-bkjhm node/ip-10-0-141-156.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 27 20:12:09.268 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T19:57:13.510Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T19:57:13.516Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T19:57:13.517Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T19:57:13.518Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T19:57:13.518Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T19:57:13.518Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T19:57:13.519Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T19:57:13.519Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:12:09.268 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_prometheus-k8s-0_47776558-8eef-4f43-b63c-16fe2b895fe9/prometheus-config-reloader/0.log": lstat /var/log/pods/openshift-monitoring_prometheus-k8s-0_47776558-8eef-4f43-b63c-16fe2b895fe9/prometheus-config-reloader/0.log: no such file or directory
Feb 27 20:12:11.296 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-113.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 19:56:16 Watching directory: "/etc/alertmanager/config"\n
Feb 27 20:12:11.296 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-113.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 19:56:17 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 19:56:17 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 19:56:17 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 19:56:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 19:56:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 19:56:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 19:56:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 19:56:17 http.go:107: HTTPS: listening on [::]:9095\nI0227 19:56:17.030608       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 20:12:19.069 E ns/openshift-ingress pod/router-default-7b7c5c7c87-w5qs7 node/ip-10-0-151-30.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:29.060945       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:34.060310       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:39.063733       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:44.170495       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:49.084508       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:54.034980       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:11:59.033684       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:12:04.032958       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:12:09.029227       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 20:12:14.033785       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 27 20:12:23.334 E ns/openshift-monitoring pod/thanos-querier-766f4d5469-bwrvr node/ip-10-0-141-156.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 19:56:41 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 19:56:41 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 19:56:41 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 19:56:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 19:56:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 19:56:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 19:56:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 19:56:41 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 19:56:41 http.go:107: HTTPS: listening on [::]:9091\nI0227 19:56:41.689315       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 20:12:25.324 E ns/openshift-monitoring pod/node-exporter-zp9mn node/ip-10-0-153-35.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:12Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:42Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:12:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:12:25.353 E ns/openshift-marketplace pod/redhat-marketplace-846cf5988f-gdwq9 node/ip-10-0-140-113.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 27 20:12:27.388 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T20:12:24.724Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T20:12:24.730Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T20:12:24.730Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T20:12:24.731Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T20:12:24.731Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T20:12:24.731Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T20:12:24.732Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T20:12:24.732Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:12:30.380 E ns/openshift-marketplace pod/redhat-operators-767bc95c78-4ldr9 node/ip-10-0-140-113.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 27 20:12:32.338 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-76599fc769-vmq65 node/ip-10-0-141-156.us-east-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:12:38.402 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-55bbfbdb86-6kt6c node/ip-10-0-140-113.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 27 20:12:43.420 E ns/openshift-marketplace pod/certified-operators-6796898f44-76v67 node/ip-10-0-140-113.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Feb 27 20:12:50.626 E ns/openshift-controller-manager pod/controller-manager-8mqxl node/ip-10-0-136-188.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): I0227 19:52:19.132194       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 19:52:19.134880       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 19:52:19.134949       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 19:52:19.135059       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 19:52:19.136146       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 20:12:50.769 E ns/openshift-service-ca-operator pod/service-ca-operator-75d5665cb-8mg86 node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 27 20:12:50.880 E ns/openshift-monitoring pod/node-exporter-cdx4t node/ip-10-0-136-188.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:11:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:12:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:12:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:12:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:13:43.741 E ns/openshift-console pod/console-586b55955c-4vjvn node/ip-10-0-153-35.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-02-27T19:57:24Z cmd/main: cookies are secure!\n2020-02-27T19:57:24Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-27T19:57:34Z cmd/main: Binding to [::]:8443...\n2020-02-27T19:57:34Z cmd/main: using TLS\n
Feb 27 20:14:05.042 E ns/openshift-sdn pod/sdn-controller-sfdjc node/ip-10-0-136-188.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 19:42:32.901685       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 19:48:18.391388       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\n
Feb 27 20:14:05.631 E ns/openshift-sdn pod/ovs-5xwbc node/ip-10-0-141-156.us-east-2.compute.internal container=openvswitch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:14:09.927 E ns/openshift-sdn pod/sdn-controller-lfg4f node/ip-10-0-130-82.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): ubnet ip-10-0-140-113.us-east-2.compute.internal (host: "ip-10-0-140-113.us-east-2.compute.internal", ip: "10.0.140.113", subnet: "10.131.0.0/23")\nI0227 19:51:50.505426       1 subnets.go:149] Created HostSubnet ip-10-0-151-30.us-east-2.compute.internal (host: "ip-10-0-151-30.us-east-2.compute.internal", ip: "10.0.151.30", subnet: "10.128.2.0/23")\nI0227 19:51:53.359768       1 subnets.go:149] Created HostSubnet ip-10-0-141-156.us-east-2.compute.internal (host: "ip-10-0-141-156.us-east-2.compute.internal", ip: "10.0.141.156", subnet: "10.129.2.0/23")\nI0227 20:04:35.465347       1 vnids.go:115] Allocated netid 15427542 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2643"\nI0227 20:04:35.478531       1 vnids.go:115] Allocated netid 7153627 for namespace "e2e-frontend-ingress-available-351"\nI0227 20:04:35.490663       1 vnids.go:115] Allocated netid 4329550 for namespace "e2e-control-plane-available-7441"\nI0227 20:04:35.504613       1 vnids.go:115] Allocated netid 4776176 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-2635"\nI0227 20:04:35.513723       1 vnids.go:115] Allocated netid 9783650 for namespace "e2e-k8s-sig-apps-job-upgrade-2999"\nI0227 20:04:35.529667       1 vnids.go:115] Allocated netid 11657931 for namespace "e2e-k8s-service-lb-available-2273"\nI0227 20:04:35.538918       1 vnids.go:115] Allocated netid 866306 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-9401"\nI0227 20:04:35.548515       1 vnids.go:115] Allocated netid 2984571 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-5276"\nI0227 20:04:35.565301       1 vnids.go:115] Allocated netid 8740194 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9779"\nE0227 20:12:49.267378       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://api-int.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28648&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 10.0.148.206:6443: connect: connection refused\n
Feb 27 20:14:13.634 E ns/openshift-sdn pod/sdn-9qrbh node/ip-10-0-141-156.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ndpoint 10.130.0.38:8443 for service "openshift-console/console:https"\nI0227 20:13:43.202740    2949 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:13:43.202765    2949 proxier.go:347] userspace syncProxyRules took 72.945038ms\nI0227 20:13:43.450466    2949 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:13:43.450491    2949 proxier.go:347] userspace syncProxyRules took 71.934852ms\nI0227 20:13:55.325784    2949 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.76:8443 10.129.0.32:8443 10.130.0.63:8443]\nI0227 20:13:55.360890    2949 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.76:8443 10.130.0.63:8443]\nI0227 20:13:55.360920    2949 roundrobin.go:217] Delete endpoint 10.129.0.32:8443 for service "openshift-console/console:https"\nI0227 20:13:55.577929    2949 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:13:55.577958    2949 proxier.go:347] userspace syncProxyRules took 73.044246ms\nI0227 20:13:55.825238    2949 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:13:55.825267    2949 proxier.go:347] userspace syncProxyRules took 73.43551ms\nI0227 20:14:02.380205    2949 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.130.0.3:6443]\nI0227 20:14:02.380242    2949 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 20:14:02.689822    2949 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:14:02.689880    2949 proxier.go:347] userspace syncProxyRules took 105.89195ms\nI0227 20:14:13.506049    2949 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 20:14:13.506091    2949 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 20:14:20.866 E ns/openshift-sdn pod/sdn-controller-wkq2j node/ip-10-0-153-35.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 19:42:32.766090       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 19:48:11.407976       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: leader changed\nE0227 19:50:03.240890       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Feb 27 20:14:33.427 E ns/openshift-multus pod/multus-vjfjm node/ip-10-0-151-30.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 20:14:52.267 E ns/openshift-sdn pod/ovs-5glhz node/ip-10-0-136-188.us-east-2.compute.internal container=openvswitch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:14:57.295 E ns/openshift-sdn pod/sdn-v97rc node/ip-10-0-136-188.us-east-2.compute.internal container=sdn container exited with code 255 (Error): s" at 172.30.196.20:443/TCP\nI0227 20:14:50.475581   10326 service.go:363] Adding new service port "openshift-marketplace/marketplace-operator-metrics:metrics" at 172.30.178.127:8383/TCP\nI0227 20:14:50.475601   10326 service.go:363] Adding new service port "openshift-cloud-credential-operator/cco-metrics:cco-metrics" at 172.30.36.184:2112/TCP\nI0227 20:14:50.475621   10326 service.go:363] Adding new service port "openshift-etcd-operator/metrics:https" at 172.30.201.12:443/TCP\nI0227 20:14:50.475892   10326 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0227 20:14:50.708904   10326 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:14:50.708931   10326 proxier.go:347] userspace syncProxyRules took 234.8075ms\nI0227 20:14:50.734531   10326 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:14:50.734560   10326 proxier.go:347] userspace syncProxyRules took 260.216356ms\nI0227 20:14:50.841723   10326 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-2273/service-test:" (:30820/tcp)\nI0227 20:14:50.841973   10326 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:30905/tcp)\nI0227 20:14:50.842116   10326 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30664/tcp)\nI0227 20:14:50.906437   10326 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32593\nI0227 20:14:51.357883   10326 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 20:14:51.357922   10326 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 20:14:51.358049   10326 cmd.go:177] openshift-sdn network plugin ready\nI0227 20:14:57.036993   10326 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 20:14:57.037029   10326 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 20:15:10.331 E ns/openshift-multus pod/multus-qvcfq node/ip-10-0-136-188.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 20:15:13.036 E ns/openshift-multus pod/multus-admission-controller-rw6sw node/ip-10-0-153-35.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 27 20:15:25.190 E ns/openshift-sdn pod/sdn-5x6fg node/ip-10-0-130-82.us-east-2.compute.internal container=sdn container exited with code 255 (Error): odePort for e2e-k8s-service-lb-available-2273/service-test:" (:30820/tcp)\nI0227 20:14:22.616833    3694 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32593\nI0227 20:14:22.628737    3694 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 20:14:22.628776    3694 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 20:14:22.628908    3694 cmd.go:177] openshift-sdn network plugin ready\nI0227 20:14:32.824225    3694 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-r2zk7\nI0227 20:14:37.670797    3694 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-kxj2s got IP 10.129.0.69, ofport 70\nI0227 20:14:42.024294    3694 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.129.0.69:6443 10.130.0.3:6443]\nI0227 20:14:42.041138    3694 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.129.0.69:6443]\nI0227 20:14:42.041171    3694 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 20:14:42.312086    3694 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:14:42.312111    3694 proxier.go:347] userspace syncProxyRules took 77.312513ms\nI0227 20:14:42.623891    3694 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:14:42.623919    3694 proxier.go:347] userspace syncProxyRules took 95.588522ms\nI0227 20:15:12.916695    3694 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:15:12.916723    3694 proxier.go:347] userspace syncProxyRules took 85.446837ms\nI0227 20:15:21.818966    3694 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0227 20:15:24.647227    3694 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 20:15:48.841 E ns/openshift-sdn pod/sdn-cgc27 node/ip-10-0-140-113.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 27 20:15:10.154587    5385 proxier.go:347] userspace syncProxyRules took 152.40766ms\nI0227 20:15:10.241397    5385 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-2273/service-test:" (:30820/tcp)\nI0227 20:15:10.241550    5385 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30664/tcp)\nI0227 20:15:10.241694    5385 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:30905/tcp)\nI0227 20:15:10.278680    5385 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32593\nI0227 20:15:10.287450    5385 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 20:15:10.287484    5385 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 20:15:10.287621    5385 cmd.go:177] openshift-sdn network plugin ready\nI0227 20:15:35.125600    5385 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.129.0.69:6443 10.130.0.64:6443]\nI0227 20:15:35.145495    5385 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.69:6443 10.130.0.64:6443]\nI0227 20:15:35.145531    5385 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 20:15:35.403137    5385 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:15:35.403175    5385 proxier.go:347] userspace syncProxyRules took 85.518115ms\nI0227 20:15:35.659084    5385 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:15:35.659108    5385 proxier.go:347] userspace syncProxyRules took 71.457348ms\nI0227 20:15:48.720952    5385 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 20:15:48.721003    5385 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 20:16:07.257 E ns/openshift-sdn pod/sdn-hdh9h node/ip-10-0-153-35.us-east-2.compute.internal container=sdn container exited with code 255 (Error): -lb-available-2273/service-test:" (:30820/tcp)\nI0227 20:15:32.726864   12813 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30664/tcp)\nI0227 20:15:32.727052   12813 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:30905/tcp)\nI0227 20:15:32.774199   12813 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32593\nI0227 20:15:32.790421   12813 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 20:15:32.790456   12813 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 20:15:32.790589   12813 cmd.go:177] openshift-sdn network plugin ready\nI0227 20:15:35.128053   12813 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.129.0.69:6443 10.130.0.64:6443]\nI0227 20:15:35.154844   12813 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.69:6443 10.130.0.64:6443]\nI0227 20:15:35.154886   12813 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 20:15:35.422612   12813 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:15:35.422637   12813 proxier.go:347] userspace syncProxyRules took 96.418728ms\nI0227 20:15:35.702353   12813 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:15:35.702383   12813 proxier.go:347] userspace syncProxyRules took 74.179386ms\nI0227 20:16:06.051687   12813 proxier.go:368] userspace proxy: processing 0 service events\nI0227 20:16:06.051716   12813 proxier.go:347] userspace syncProxyRules took 93.345238ms\nI0227 20:16:06.327308   12813 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 20:16:06.327393   12813 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
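Note on the repeated sdn exits above: each of the exit-code-255 events ends with the same F-level healthcheck.go:82 line, and the surrounding entries show the openvswitch containers being replaced and the OVSDB control socket under /var/run/openvswitch briefly disappearing, so the sdn pods are restarting themselves on purpose when they lose their OVS server. The snippet below is only an illustration of that probe, not the openshift-sdn health check itself; the socket path and the bridge name br0 are taken from the log lines.

    # probe the OVSDB control socket and the br0 bridge the way the log lines suggest
    if [ -S /var/run/openvswitch/db.sock ] && ovs-ofctl dump-flows br0 >/dev/null 2>&1; then
      echo "OVS reachable"
    else
      echo "OVS unreachable: sdn would log the healthcheck.go:82 fatal and exit"
    fi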
Feb 27 20:16:36.955 E ns/openshift-multus pod/multus-zzg9q node/ip-10-0-140-113.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 20:17:13.582 E ns/openshift-multus pod/multus-jnlsm node/ip-10-0-130-82.us-east-2.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 27 20:17:53.631 E ns/openshift-multus pod/multus-4gmbt node/ip-10-0-153-35.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 20:18:13.226 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 20:18:17.094 E ns/openshift-machine-config-operator pod/machine-config-operator-847b8d5bdd-7j4h9 node/ip-10-0-136-188.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): -cab1-4f89-80b6-b8170aeda311", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-02-27-192049}]\nE0227 19:46:24.998101       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0227 19:46:25.068424       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0227 19:46:26.003036       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0227 19:46:27.034025       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0227 19:46:30.506401       1 sync.go:61] [init mode] synced RenderConfig in 5.68459906s\nI0227 19:46:30.913821       1 sync.go:61] [init mode] synced MachineConfigPools in 407.369323ms\nI0227 19:47:21.343782       1 sync.go:61] [init mode] synced MachineConfigDaemon in 50.429890834s\nI0227 19:47:26.473964       1 sync.go:61] [init mode] synced MachineConfigController in 5.130130492s\nI0227 19:47:39.549245       1 sync.go:61] [init mode] synced MachineConfigServer in 13.075227909s\nI0227 19:47:48.562051       1 sync.go:61] [init mode] synced RequiredPools in 9.012762184s\nI0227 19:47:48.623323       1 sync.go:85] Initialization complete\n
Feb 27 20:18:21.336 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-machine-config-operator/machine-config-operator is progressing NewReplicaSetAvailable: ReplicaSet "machine-config-operator-847b8d5bdd" has successfully progressed.
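The clusteroperator/dns and clusterversion/version condition changes above come from cluster-scoped config resources, so the same conditions can be read back directly on a live cluster. A minimal sketch, assuming oc is logged in to the cluster under test; the jsonpath expressions are illustrative and not part of the CI tooling:

    # read back the conditions behind the two operator-level events above
    oc get clusteroperator dns \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'
    oc get clusterversion version \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'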
Feb 27 20:20:12.256 E ns/openshift-machine-config-operator pod/machine-config-daemon-4fsk8 node/ip-10-0-151-30.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:20:21.227 E ns/openshift-machine-config-operator pod/machine-config-daemon-54q2r node/ip-10-0-130-82.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:20:32.394 E ns/openshift-machine-config-operator pod/machine-config-daemon-2r7kz node/ip-10-0-141-156.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:20:37.533 E ns/openshift-machine-config-operator pod/machine-config-daemon-zrtrv node/ip-10-0-140-113.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:20:49.685 E ns/openshift-machine-config-operator pod/machine-config-daemon-ssqlv node/ip-10-0-136-188.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:20:59.252 E ns/openshift-machine-config-operator pod/machine-config-daemon-whwjt node/ip-10-0-153-35.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:23:24.282 E ns/openshift-machine-config-operator pod/machine-config-server-xfxlc node/ip-10-0-136-188.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 19:47:28.222749       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 19:47:28.223854       1 api.go:51] Launching server on :22624\nI0227 19:47:28.224007       1 api.go:51] Launching server on :22623\nI0227 19:49:42.691583       1 api.go:97] Pool worker requested by 10.0.142.187:49093\n
Feb 27 20:23:34.983 E ns/openshift-monitoring pod/prometheus-adapter-5cfbc4df5c-td7n4 node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 20:11:52.718483       1 adapter.go:93] successfully using in-cluster auth\nI0227 20:11:53.350071       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 27 20:23:36.018 E ns/openshift-monitoring pod/thanos-querier-7689dc997-xhhzn node/ip-10-0-141-156.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:36.018 E ns/openshift-monitoring pod/thanos-querier-7689dc997-xhhzn node/ip-10-0-141-156.us-east-2.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:36.018 E ns/openshift-monitoring pod/thanos-querier-7689dc997-xhhzn node/ip-10-0-141-156.us-east-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:36.018 E ns/openshift-monitoring pod/thanos-querier-7689dc997-xhhzn node/ip-10-0-141-156.us-east-2.compute.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:36.332 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-787brr26h node/ip-10-0-153-35.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:37.320 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-695bfc885b-77dsq node/ip-10-0-153-35.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): tor", Name:"openshift-kube-scheduler-operator", UID:"153f237a-2352-4793-9319-9b3e1e7c6a5c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-153-35.us-east-2.compute.internal\" not ready since 2020-02-27 20:17:32 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0227 20:23:36.288883       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:23:36.289385       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 20:23:36.290266       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:23:36.290424       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 20:23:36.290443       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:23:36.290478       1 base_controller.go:74] Shutting down NodeController ...\nI0227 20:23:36.290495       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 20:23:36.290520       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0227 20:23:36.290531       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0227 20:23:36.290557       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0227 20:23:36.290565       1 base_controller.go:39] All NodeController workers have been terminated\nI0227 20:23:36.290595       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0227 20:23:36.290604       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nF0227 20:23:36.290774       1 builder.go:243] stopped\n
Feb 27 20:23:37.355 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-54c4b7c9ff-kxwfh node/ip-10-0-153-35.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:23:38.327 E ns/openshift-machine-api pod/machine-api-controllers-85c44cd8f8-5w64k node/ip-10-0-153-35.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 27 20:23:40.356 E ns/openshift-machine-config-operator pod/machine-config-server-wf49w node/ip-10-0-153-35.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 19:47:37.764690       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 19:47:37.765841       1 api.go:51] Launching server on :22624\nI0227 19:47:37.765930       1 api.go:51] Launching server on :22623\n
Feb 27 20:23:40.376 E ns/openshift-machine-api pod/machine-api-operator-5c455b6d49-jn7lb node/ip-10-0-153-35.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 20:23:51.044 E ns/openshift-machine-config-operator pod/machine-config-server-8jskq node/ip-10-0-130-82.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 19:47:28.054945       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 19:47:28.055813       1 api.go:51] Launching server on :22624\nI0227 19:47:28.055856       1 api.go:51] Launching server on :22623\nI0227 19:48:16.511795       1 api.go:97] Pool worker requested by 10.0.142.187:15728\nI0227 19:49:39.393517       1 api.go:97] Pool worker requested by 10.0.142.187:56560\n
Feb 27 20:23:58.082 E ns/openshift-operator-lifecycle-manager pod/packageserver-6996b67b56-2v95g node/ip-10-0-136-188.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-27T20:23:57Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 27 20:24:00.091 E ns/openshift-operator-lifecycle-manager pod/packageserver-6996b67b56-2v95g node/ip-10-0-136-188.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-27T20:23:59Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 27 20:24:08.256 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-113.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T20:23:55.351Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T20:23:55.355Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T20:23:55.355Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T20:23:55.356Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T20:23:55.356Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T20:23:55.356Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T20:23:55.356Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T20:23:55.356Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T20:23:55.357Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T20:23:55.357Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T20:23:55.357Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T20:23:55.357Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T20:23:55.357Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T20:23:55.357Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T20:23:55.358Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T20:23:55.358Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:25:44.031 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 27 20:25:53.275 E kube-apiserver failed contacting the API: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=35510&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp 3.20.230.62:6443: connect: connection refused
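The "failed contacting the API ... connection refused" event above records a watch against the external API endpoint dropping, which is consistent with the kube-apiserver static-pod restarts logged a little later in this run. A rough way to observe that kind of momentary unavailability from outside the cluster is a readyz polling loop; this is illustrative only, with the endpoint taken from the log line above.

    # poll the external API endpoint once a second; curl prints 000 while the
    # connection is refused and 200 once /readyz reports healthy again
    while sleep 1; do
      curl -sk --max-time 2 -o /dev/null -w '%{http_code}\n' \
        https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/readyz
    done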
Feb 27 20:26:07.329 E ns/openshift-cluster-node-tuning-operator pod/tuned-vxhgd node/ip-10-0-153-35.us-east-2.compute.internal container=tuned container exited with code 143 (Error):  tuned profiles\nI0227 20:11:41.597303    1548 tuned.go:469] profile "ip-10-0-153-35.us-east-2.compute.internal" added, tuned profile requested: openshift-control-plane\nI0227 20:11:41.597687    1548 tuned.go:170] disabling system tuned...\nI0227 20:11:41.603849    1548 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 20:11:42.582598    1548 tuned.go:393] getting recommended profile...\nI0227 20:11:42.878205    1548 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0227 20:11:42.878546    1548 tuned.go:286] starting tuned...\n2020-02-27 20:11:43,058 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 20:11:43,070 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 20:11:43,070 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 20:11:43,071 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 20:11:43,072 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 20:11:43,133 INFO     tuned.daemon.controller: starting controller\n2020-02-27 20:11:43,134 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 20:11:43,156 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 20:11:43,157 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 20:11:43,161 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 20:11:43,168 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 20:11:43,171 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 20:11:43,342 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:11:43,356 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Feb 27 20:26:07.370 E ns/openshift-multus pod/multus-admission-controller-k4wzb node/ip-10-0-153-35.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 27 20:26:07.387 E ns/openshift-sdn pod/ovs-pbhkx node/ip-10-0-153-35.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): : 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:37.860Z|00099|connmgr|INFO|br0<->unix#429: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:37.882Z|00100|bridge|INFO|bridge br0: deleted interface vethcf050eae on port 54\n2020-02-27T20:23:39.758Z|00101|connmgr|INFO|br0<->unix#435: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:39.799Z|00102|connmgr|INFO|br0<->unix#438: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:39.836Z|00103|bridge|INFO|bridge br0: deleted interface vethaba76a98 on port 47\n2020-02-27T20:23:40.663Z|00104|connmgr|INFO|br0<->unix#442: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:40.709Z|00105|connmgr|INFO|br0<->unix#447: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:40.736Z|00106|bridge|INFO|bridge br0: deleted interface veth49c451ee on port 61\n2020-02-27T20:23:50.700Z|00107|bridge|INFO|bridge br0: added interface vethcda0348c on port 67\n2020-02-27T20:23:50.736Z|00108|connmgr|INFO|br0<->unix#456: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:23:50.797Z|00109|connmgr|INFO|br0<->unix#460: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T20:23:50.809Z|00110|connmgr|INFO|br0<->unix#462: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:51.193Z|00111|bridge|INFO|bridge br0: added interface veth4809a459 on port 68\n2020-02-27T20:23:51.242Z|00112|connmgr|INFO|br0<->unix#465: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:23:51.311Z|00113|connmgr|INFO|br0<->unix#469: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T20:23:51.316Z|00114|connmgr|INFO|br0<->unix#471: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:50.755Z|00017|jsonrpc|WARN|unix#390: receive error: Connection reset by peer\n2020-02-27T20:23:50.755Z|00018|reconnect|WARN|unix#390: connection dropped (Connection reset by peer)\n2020-02-27T20:23:51.210Z|00019|jsonrpc|WARN|unix#392: receive error: Connection reset by peer\n2020-02-27T20:23:51.210Z|00020|reconnect|WARN|unix#392: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\n
Feb 27 20:26:07.406 E ns/openshift-sdn pod/sdn-controller-7vxhh node/ip-10-0-153-35.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 20:14:31.093215       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 20:26:07.428 E ns/openshift-multus pod/multus-x697x node/ip-10-0-153-35.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:26:07.461 E ns/openshift-machine-config-operator pod/machine-config-daemon-dqh8m node/ip-10-0-153-35.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:26:07.476 E ns/openshift-machine-config-operator pod/machine-config-server-b68z7 node/ip-10-0-153-35.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 20:23:50.287923       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 20:23:50.288901       1 api.go:51] Launching server on :22624\nI0227 20:23:50.288991       1 api.go:51] Launching server on :22623\n
Feb 27 20:26:07.488 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): ion refused\nE0227 20:10:51.802049       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0227 20:10:51.802233       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0227 20:10:51.802567       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0227 20:10:51.802631       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0227 20:10:51.802674       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0227 20:10:51.802706       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 20:10:51.802733       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0227 20:10:51.802959       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0227 20:10:51.803007       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0227 20:10:51.803036       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0227 20:10:51.803130       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0227 20:10:51.803169       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\n
Feb 27 20:26:07.500 E ns/openshift-etcd pod/etcd-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 20:08:12.817502 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-35.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-35.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 20:08:12.818518 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 20:08:12.819146 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-35.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-35.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/02/27 20:08:12 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.153.35:9978: connect: connection refused". Reconnecting...\n2020-02-27 20:08:12.824348 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 27 20:26:07.530 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): beta1.metrics.k8s.io: Rate Limited Requeue.\n2020/02/27 20:23:34 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 20:23:34 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 20:23:34 httputil: ReverseProxy read error during body copy: unexpected EOF\nW0227 20:23:34.230242       1 reflector.go:326] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1131; INTERNAL_ERROR") has prevented the request from succeeding\nE0227 20:23:42.263644       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 20:23:42.340055       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0227 20:23:42.344935       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io\nI0227 20:23:42.344989       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io\nI0227 20:23:50.797449       1 controller.go:606] quota admission added evaluator for: machines.machine.openshift.io\nI0227 20:23:52.047575       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\nI0227 20:23:52.047557       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-153-35.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Feb 27 20:26:07.530 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): ubeControllerManagerClient"\nI0227 20:21:45.120398       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0227 20:21:45.121592       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0227 20:23:52.032879       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:23:52.034015       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0227 20:23:52.034127       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 20:23:52.034200       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 20:23:52.034257       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 20:23:52.034359       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 20:23:52.034411       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 20:23:52.034471       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 20:23:52.034510       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 20:23:52.034528       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0227 20:23:52.034546       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0227 20:23:52.034567       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0227 20:23:52.034577       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Feb 27 20:26:07.530 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 20:10:48.450218       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 20:26:07.530 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:23:49.092635       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:49.093072       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:23:51.271732       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:51.272053       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 20:26:07.551 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0227 20:08:16.269906       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0227 20:08:16.272976       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0227 20:08:16.273123       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 27 20:26:07.551 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:16.669931       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:16.670244       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:20.547822       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:20.548229       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:26.680370       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:26.680751       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:30.556595       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:30.558482       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:36.704512       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:36.704814       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:40.572210       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:40.572562       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:46.733480       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:46.733826       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:23:50.585797       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:23:50.586141       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 20:26:07.551 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 19:31:04 +0000 UTC to 2030-02-24 19:31:04 +0000 UTC (now=2020-02-27 20:08:15.509443195 +0000 UTC))\nI0227 20:08:15.509493       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 19:31:08 +0000 UTC to 2020-02-28 19:31:08 +0000 UTC (now=2020-02-27 20:08:15.509478083 +0000 UTC))\nI0227 20:08:15.509720       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582832794" (2020-02-27 19:46:47 +0000 UTC to 2022-02-26 19:46:48 +0000 UTC (now=2020-02-27 20:08:15.509705731 +0000 UTC))\nI0227 20:08:15.509924       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582834095" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582834095" (2020-02-27 19:08:14 +0000 UTC to 2021-02-26 19:08:14 +0000 UTC (now=2020-02-27 20:08:15.509913204 +0000 UTC))\nI0227 20:08:15.509967       1 secure_serving.go:178] Serving securely on [::]:10257\nI0227 20:08:15.510005       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0227 20:08:15.510161       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 27 20:26:07.551 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-35.us-east-2.compute.internal node/ip-10-0-153-35.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): 2       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 20:10:45.513333       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23641&timeout=9m39s&timeoutSeconds=579&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:10:45.513465       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23641&timeout=5m49s&timeoutSeconds=349&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:10:46.515037       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23641&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:10:46.519868       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23641&timeout=5m51s&timeoutSeconds=351&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:10:51.893500       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 20:10:51.893637       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0227 20:23:52.156580       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:23:52.156639       1 leaderelection.go:67] leaderelection lost\n
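Several of the exit-code-255 failures in this run, including the one above and the cert-regeneration controller later on, end with "leaderelection lost" logged at fatal level: controllers built on client-go's leader election deliberately exit when they can no longer renew their lease (for example while the local apiserver is restarting) and rely on the kubelet to restart them. A minimal sketch of that pattern using k8s.io/client-go; the lease name, namespace, and timings are assumptions, not any controller's real configuration:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
        "k8s.io/klog/v2"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            klog.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()

        // Lease-based lock; name and namespace are placeholders.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "example-controller-lock", Namespace: "default"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 60 * time.Second,
            RenewDeadline: 15 * time.Second,
            RetryPeriod:   5 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // Run the controller loop only while holding the lease.
                    <-ctx.Done()
                },
                OnStoppedLeading: func() {
                    // Mirrors the "leaderelection lost" fatal seen above:
                    // exit non-zero and let the kubelet restart the container.
                    klog.Fatalf("leaderelection lost")
                },
            },
        })
    }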
Feb 27 20:26:07.577 E ns/openshift-controller-manager pod/controller-manager-vd6l7 node/ip-10-0-153-35.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): I0227 20:12:02.557892       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 20:12:02.559499       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 20:12:02.559519       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 20:12:02.559582       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 20:12:02.559706       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 20:26:07.604 E ns/openshift-monitoring pod/node-exporter-f8txv node/ip-10-0-153-35.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:22:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:13Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:26:14.845 E ns/openshift-multus pod/multus-x697x node/ip-10-0-153-35.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:26:17.553 E ns/openshift-monitoring pod/node-exporter-v2g4h node/ip-10-0-141-156.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:23:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:24:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:24:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:24:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:26:17.582 E ns/openshift-cluster-node-tuning-operator pod/tuned-bgtw2 node/ip-10-0-141-156.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 3:33.233264    1081 tuned.go:219] extracting tuned profiles\nI0227 20:13:33.234475    1081 tuned.go:469] profile "ip-10-0-141-156.us-east-2.compute.internal" added, tuned profile requested: openshift-node\nI0227 20:13:33.234544    1081 tuned.go:170] disabling system tuned...\nI0227 20:13:33.240784    1081 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 20:13:34.218443    1081 tuned.go:393] getting recommended profile...\nI0227 20:13:34.352477    1081 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 20:13:34.352580    1081 tuned.go:286] starting tuned...\n2020-02-27 20:13:34,469 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 20:13:34,475 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 20:13:34,476 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 20:13:34,476 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 20:13:34,477 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 20:13:34,512 INFO     tuned.daemon.controller: starting controller\n2020-02-27 20:13:34,512 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 20:13:34,523 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 20:13:34,524 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 20:13:34,527 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 20:13:34,529 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 20:13:34,530 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 20:13:34,642 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:13:34,654 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Feb 27 20:26:17.597 E ns/openshift-sdn pod/ovs-shdm8 node/ip-10-0-141-156.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): ->unix#471: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:34.543Z|00075|connmgr|INFO|br0<->unix#474: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:34.590Z|00076|bridge|INFO|bridge br0: deleted interface veth3dde8152 on port 17\n2020-02-27T20:23:34.652Z|00077|connmgr|INFO|br0<->unix#477: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:34.701Z|00078|connmgr|INFO|br0<->unix#480: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:34.740Z|00079|bridge|INFO|bridge br0: deleted interface vethab8335d8 on port 16\n2020-02-27T20:23:34.830Z|00080|connmgr|INFO|br0<->unix#483: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:34.870Z|00081|connmgr|INFO|br0<->unix#486: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:34.898Z|00082|bridge|INFO|bridge br0: deleted interface veth29aed4aa on port 22\n2020-02-27T20:23:34.946Z|00083|connmgr|INFO|br0<->unix#489: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:23:34.989Z|00084|connmgr|INFO|br0<->unix#492: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:23:35.034Z|00085|bridge|INFO|bridge br0: deleted interface vethd0f04647 on port 19\n2020-02-27T20:23:34.883Z|00013|jsonrpc|WARN|unix#418: receive error: Connection reset by peer\n2020-02-27T20:23:34.883Z|00014|reconnect|WARN|unix#418: connection dropped (Connection reset by peer)\n2020-02-27T20:23:35.025Z|00015|jsonrpc|WARN|unix#424: receive error: Connection reset by peer\n2020-02-27T20:23:35.025Z|00016|reconnect|WARN|unix#424: connection dropped (Connection reset by peer)\n2020-02-27T20:24:19.095Z|00086|connmgr|INFO|br0<->unix#525: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:24:19.125Z|00087|connmgr|INFO|br0<->unix#528: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:24:19.147Z|00088|bridge|INFO|bridge br0: deleted interface vethb7fd55c0 on port 14\n2020-02-27T20:24:19.141Z|00017|jsonrpc|WARN|unix#458: receive error: Connection reset by peer\n2020-02-27T20:24:19.141Z|00018|reconnect|WARN|unix#458: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\n
Feb 27 20:26:17.623 E ns/openshift-multus pod/multus-hpxgr node/ip-10-0-141-156.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:26:17.653 E ns/openshift-machine-config-operator pod/machine-config-daemon-w6ngn node/ip-10-0-141-156.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:26:18.057 E ns/openshift-machine-config-operator pod/machine-config-daemon-dqh8m node/ip-10-0-153-35.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 20:26:20.443 E ns/openshift-multus pod/multus-hpxgr node/ip-10-0-141-156.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:26:25.981 E ns/openshift-machine-config-operator pod/machine-config-daemon-w6ngn node/ip-10-0-141-156.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 20:26:26.957 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 20:26:31.461 E ns/openshift-kube-apiserver pod/revision-pruner-9-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=pruner container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:26:32.238 E ns/openshift-cluster-machine-approver pod/machine-approver-84569754f7-x6rrq node/ip-10-0-130-82.us-east-2.compute.internal container=machine-approver-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:26:32.238 E ns/openshift-cluster-machine-approver pod/machine-approver-84569754f7-x6rrq node/ip-10-0-130-82.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:26:32.449 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-56c5f6777-5l6xb node/ip-10-0-130-82.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-56c5f6777-5l6xb_1dedeeac-932e-44b2-8fbf-93e601264bf6/openshift-apiserver-operator/0.log": lstat /var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-56c5f6777-5l6xb_1dedeeac-932e-44b2-8fbf-93e601264bf6/openshift-apiserver-operator/0.log: no such file or directory
Feb 27 20:26:39.860 E ns/openshift-service-ca pod/service-ca-c966cd6c4-fw9lg node/ip-10-0-130-82.us-east-2.compute.internal container=service-ca-controller container exited with code 255 (OOMKilled): 
Feb 27 20:26:39.971 E ns/openshift-service-ca-operator pod/service-ca-operator-586465b7b-zzvv9 node/ip-10-0-130-82.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 27 20:26:40.046 E ns/openshift-machine-config-operator pod/machine-config-controller-767c898dd8-lmrqt node/ip-10-0-130-82.us-east-2.compute.internal container=machine-config-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:26:41.911 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-c4bf98c9c-jfjwm node/ip-10-0-140-113.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 27 20:26:43.035 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-140-113.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 20:23:43 Watching directory: "/etc/alertmanager/config"\n
Feb 27 20:26:43.035 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-140-113.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 20:23:44 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:23:44 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 20:23:44 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 20:23:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 20:23:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 20:23:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:23:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 20:23:44 http.go:107: HTTPS: listening on [::]:9095\nI0227 20:23:44.278707       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 20:26:43.136 E ns/openshift-marketplace pod/community-operators-7fb89788bd-pnljb node/ip-10-0-140-113.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 27 20:26:43.249 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-113.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 20:12:19 Watching directory: "/etc/alertmanager/config"\n
Feb 27 20:26:43.249 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-113.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 20:12:21 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:12:21 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 20:12:21 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 20:12:21 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 20:12:21 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 20:12:21 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:12:21 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 20:12:21.273565       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 20:12:21 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 20:26:44.339 E ns/openshift-marketplace pod/redhat-marketplace-59f7c44dcd-8bpg2 node/ip-10-0-140-113.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 27 20:27:04.276 E kube-apiserver Kube API started failing: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
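The event above comes from the test's API-availability monitor, which polls the apiserver endpoint with a short per-request timeout and records an outage whenever the call exceeds it. A rough Go equivalent of such a probe: the URL is taken from the event and the 5-second timeout matches its ?timeout=5s parameter, but everything else (polling interval, skipped TLS verification, missing authentication) is an illustrative assumption rather than the monitor's real code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Per-request deadline mirroring the ?timeout=5s in the failing request above.
            Timeout: 5 * time.Second,
            // The real monitor authenticates and trusts the cluster CA; skipping
            // verification here only keeps the sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        url := "https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s"
        for {
            start := time.Now()
            resp, err := client.Get(url)
            if err != nil {
                // Same failure shape as the 20:27:04 event:
                // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
                fmt.Printf("%s E kube-apiserver unreachable: %v\n", start.Format(time.RFC3339), err)
            } else {
                resp.Body.Close()
                fmt.Printf("%s I kube-apiserver responded: %s\n", start.Format(time.RFC3339), resp.Status)
            }
            time.Sleep(time.Second)
        }
    }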
Feb 27 20:27:06.287 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 27 20:27:26.367 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-156.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T20:27:24.687Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T20:27:24.692Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T20:27:24.694Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T20:27:24.695Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T20:27:24.695Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T20:27:24.695Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T20:27:24.696Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T20:27:24.696Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:28:17.635 E ns/openshift-machine-api pod/machine-api-controllers-85c44cd8f8-rpzjh node/ip-10-0-153-35.us-east-2.compute.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Feb 27 20:28:17.787 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-675dc9759d-x8jj7 node/ip-10-0-153-35.us-east-2.compute.internal container=manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-02-27T20:27:11Z" level=debug msg="debug logging enabled"\ntime="2020-02-27T20:27:11Z" level=info msg="setting up client for manager"\ntime="2020-02-27T20:27:11Z" level=info msg="setting up manager"\ntime="2020-02-27T20:27:11Z" level=fatal msg="unable to set up overall controller manager" error="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 27 20:28:17.944 E ns/openshift-operator-lifecycle-manager pod/packageserver-d499996c7-4287s node/ip-10-0-153-35.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-27T20:27:16Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 27 20:28:27.735 E ns/openshift-operator-lifecycle-manager pod/packageserver-6fc49bb4b7-jw2hx node/ip-10-0-136-188.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:29:13.081 E ns/openshift-controller-manager pod/controller-manager-wjv5z node/ip-10-0-130-82.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): ed EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.199793       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.199803       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.203155       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0227 20:26:29.737657       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 49; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.737790       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 83; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.737893       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 39; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.737998       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 27; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.738120       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 45; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 20:29:13.167 E ns/openshift-cluster-node-tuning-operator pod/tuned-m574n node/ip-10-0-130-82.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ins.base: instance net: assigning devices ens3\n2020-02-27 20:13:19,450 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:13:19,460 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 20:23:52.354530     665 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:23:52.357155     665 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 20:23:52.371549     665 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: Failed to watch *v1.Profile: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/profiles?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dip-10-0-130-82.us-east-2.compute.internal&resourceVersion=26256&timeoutSeconds=506&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 20:23:52.378079     665 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=26256&timeoutSeconds=427&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 20:23:53.396163     665 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=26256&timeoutSeconds=319&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0227 20:25:53.224760     665 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.225143     665 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 27 20:29:13.205 E ns/openshift-sdn pod/sdn-controller-xd57v node/ip-10-0-130-82.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 20:14:19.498981       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 20:14:19.521680       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"fdfbddee-4bed-4797-a24b-2b52c593286e", ResourceVersion:"29746", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718429351, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-130-82\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-27T19:42:31Z\",\"renewTime\":\"2020-02-27T20:14:19Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-130-82 became leader'\nI0227 20:14:19.521761       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0227 20:14:19.526770       1 master.go:51] Initializing SDN master\nI0227 20:14:19.585738       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 27 20:29:13.241 E ns/openshift-multus pod/multus-admission-controller-kxj2s node/ip-10-0-130-82.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 27 20:29:13.262 E ns/openshift-sdn pod/ovs-2fd5h node/ip-10-0-130-82.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): :26:41.283Z|00172|connmgr|INFO|br0<->unix#719: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T20:26:44.294Z|00173|connmgr|INFO|br0<->unix#724: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:26:44.363Z|00174|connmgr|INFO|br0<->unix#727: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:26:44.468Z|00175|bridge|INFO|bridge br0: deleted interface veth9d6f7ef8 on port 80\n2020-02-27T20:26:45.193Z|00176|connmgr|INFO|br0<->unix#730: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:26:45.224Z|00177|connmgr|INFO|br0<->unix#733: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:26:45.250Z|00178|bridge|INFO|bridge br0: deleted interface veth0c1af8a5 on port 79\n2020-02-27T20:26:45.243Z|00029|jsonrpc|WARN|Dropped 4 log messages in last 9 seconds (most recently, 8 seconds ago) due to excessive rate\n2020-02-27T20:26:45.243Z|00030|jsonrpc|WARN|unix#620: receive error: Connection reset by peer\n2020-02-27T20:26:45.243Z|00031|reconnect|WARN|unix#620: connection dropped (Connection reset by peer)\n2020-02-27T20:26:46.072Z|00179|bridge|INFO|bridge br0: added interface veth07c2a798 on port 81\n2020-02-27T20:26:46.107Z|00180|connmgr|INFO|br0<->unix#737: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:26:46.155Z|00181|connmgr|INFO|br0<->unix#741: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:26:46.157Z|00182|connmgr|INFO|br0<->unix#743: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T20:26:49.203Z|00183|connmgr|INFO|br0<->unix#748: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:26:49.236Z|00184|connmgr|INFO|br0<->unix#751: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:26:49.263Z|00185|bridge|INFO|bridge br0: deleted interface veth07c2a798 on port 81\n2020-02-27T20:26:49.257Z|00032|reconnect|WARN|unix#633: connection dropped (Connection reset by peer)\n2020-02-27T20:26:52.260Z|00033|reconnect|WARN|unix#637: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Feb 27 20:29:13.282 E ns/openshift-monitoring pod/node-exporter-9rt2c node/ip-10-0-130-82.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:25:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:12Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:29:13.291 E ns/openshift-multus pod/multus-q9zsn node/ip-10-0-130-82.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:29:13.338 E ns/openshift-machine-config-operator pod/machine-config-daemon-n9zct node/ip-10-0-130-82.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:29:13.350 E ns/openshift-machine-config-operator pod/machine-config-server-vc7vn node/ip-10-0-130-82.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 20:23:58.118325       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 20:23:58.119621       1 api.go:51] Launching server on :22624\nI0227 20:23:58.119712       1 api.go:51] Launching server on :22623\n
Feb 27 20:29:13.374 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): 27 20:26:45.818687       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-vnv9v: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 20:26:47.819667       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-679df69fd9-gk78l: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 20:26:48.819468       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-vnv9v: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 20:26:53.820418       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-vnv9v: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 20:26:55.644700       1 scheduler.go:751] pod openshift-monitoring/prometheus-k8s-0 is bound successfully on node "ip-10-0-141-156.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 20:26:55.678302       1 scheduler.go:751] pod openshift-monitoring/alertmanager-main-0 is bound successfully on node "ip-10-0-141-156.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 20:26:55.773429       1 scheduler.go:751] pod openshift-monitoring/alertmanager-main-1 is bound successfully on node "ip-10-0-151-30.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 20:26:56.505029       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-d499996c7-4287s is bound successfully on node "ip-10-0-153-35.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\n
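The repeated "Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-...: no fit" lines above are the expected side effect of required pod anti-affinity during a rolling master reboot: the quorum guard spreads its replicas across control-plane nodes, so with two masters unschedulable there is no remaining node that satisfies the constraint. A hedged sketch of that kind of constraint using the Kubernetes Go API types; the label key and value are assumptions, not the actual etcd-quorum-guard manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Required anti-affinity on the hostname topology key: no two replicas
        // carrying this label may land on the same node, so when only one master
        // is schedulable the remaining replicas stay Pending ("no fit").
        affinity := &corev1.Affinity{
            PodAntiAffinity: &corev1.PodAntiAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"name": "etcd-quorum-guard"}, // assumed label
                    },
                    TopologyKey: "kubernetes.io/hostname",
                }},
            },
        }
        fmt.Printf("%+v\n", affinity)
    }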
Feb 27 20:29:13.387 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): "transport is closing"\nI0227 20:26:58.052983       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053122       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053275       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053412       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053413       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053577       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053713       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053830       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.053921       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054138       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054177       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054306       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054424       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054513       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 20:26:58.054729       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Feb 27 20:29:13.387 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 20:25:55.626962       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 20:29:13.387 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:26:42.820342       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:42.820725       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:26:52.829277       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:52.829636       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 20:29:13.387 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0227 20:25:55.240481       1 cmd.go:200] Using insecure, self-signed certificates\nI0227 20:25:55.240786       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1582835155 cert, and key in /tmp/serving-cert-181353936/serving-signer.crt, /tmp/serving-cert-181353936/serving-signer.key\nI0227 20:25:56.631380       1 observer_polling.go:155] Starting file observer\nI0227 20:26:00.150155       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0227 20:26:58.062704       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:26:58.062739       1 leaderelection.go:67] leaderelection lost\n
Feb 27 20:29:13.403 E ns/openshift-etcd pod/etcd-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 20:07:21.231488 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-130-82.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-130-82.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 20:07:21.232162 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 20:07:21.232518 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-130-82.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-130-82.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/02/27 20:07:21 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.130.82:9978: connect: connection refused". Reconnecting...\n2020-02-27 20:07:21.234938 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 27 20:29:13.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): d to watch *v1.Namespace: unknown (get namespaces)\nE0227 20:25:59.999176       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 20:25:59.999239       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: unknown (get daemonsets.apps)\nE0227 20:25:59.999318       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: unknown (get buildconfigs.build.openshift.io)\nE0227 20:26:00.039826       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server could not find the requested resource (get builds.build.openshift.io)\nE0227 20:26:00.066398       1 reflector.go:307] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: the server could not find the requested resource (get deploymentconfigs.apps.openshift.io)\nW0227 20:26:29.738380       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 137; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.738476       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 191; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:26:29.738535       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 149; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 20:29:13.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:20.755396       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:20.755758       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:27.696514       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:27.696898       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:30.787163       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:30.788597       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:37.729274       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:37.729656       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:40.817005       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:40.820089       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:47.729737       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:47.730163       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:50.819873       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:50.820204       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:26:57.739890       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:26:57.740308       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 20:29:13.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): strap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 19:31:04 +0000 UTC to 2030-02-24 19:31:04 +0000 UTC (now=2020-02-27 20:10:29.225561958 +0000 UTC))\nI0227 20:10:29.225610       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 19:31:08 +0000 UTC to 2020-02-28 19:31:08 +0000 UTC (now=2020-02-27 20:10:29.225595888 +0000 UTC))\nI0227 20:10:29.225272       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0227 20:10:29.225945       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582832794" (2020-02-27 19:46:47 +0000 UTC to 2022-02-26 19:46:48 +0000 UTC (now=2020-02-27 20:10:29.225922328 +0000 UTC))\nI0227 20:10:29.226326       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582834229" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582834228" (2020-02-27 19:10:28 +0000 UTC to 2021-02-26 19:10:28 +0000 UTC (now=2020-02-27 20:10:29.226304725 +0000 UTC))\nI0227 20:10:29.226417       1 secure_serving.go:178] Serving securely on [::]:10257\nI0227 20:10:29.226508       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0227 20:10:29.226678       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 27 20:29:13.417 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-82.us-east-2.compute.internal node/ip-10-0-130-82.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): 590       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 20:25:53.156148       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=35513&timeout=8m41s&timeoutSeconds=521&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:25:53.156238       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=34999&timeout=8m1s&timeoutSeconds=481&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:25:54.157239       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=35513&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:25:54.158085       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=34999&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:25:59.800686       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 20:25:59.800785       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0227 20:26:58.071836       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:26:58.071881       1 leaderelection.go:67] leaderelection lost\n
Feb 27 20:29:15.018 E ns/openshift-monitoring pod/node-exporter-vtkrr node/ip-10-0-140-113.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:26:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:27:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:27:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:29:15.032 E ns/openshift-cluster-node-tuning-operator pod/tuned-zjd6n node/ip-10-0-140-113.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ned.go:170] disabling system tuned...\nI0227 20:13:47.643695     876 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 20:13:48.626240     876 tuned.go:393] getting recommended profile...\nI0227 20:13:48.747713     876 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 20:13:48.747811     876 tuned.go:286] starting tuned...\n2020-02-27 20:13:48,858 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 20:13:48,864 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 20:13:48,864 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 20:13:48,865 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 20:13:48,866 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 20:13:48,899 INFO     tuned.daemon.controller: starting controller\n2020-02-27 20:13:48,899 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 20:13:48,911 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 20:13:48,911 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 20:13:48,914 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 20:13:48,916 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 20:13:48,917 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 20:13:49,066 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:13:49,073 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 20:25:53.215110     876 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.215236     876 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 27 20:29:15.064 E ns/openshift-sdn pod/ovs-jjfqk node/ip-10-0-140-113.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): ->unix#647: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:26:42.780Z|00133|connmgr|INFO|br0<->unix#650: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:26:42.802Z|00134|bridge|INFO|bridge br0: deleted interface vetha8aa6887 on port 34\n2020-02-27T20:26:42.565Z|00013|jsonrpc|WARN|unix#545: receive error: Connection reset by peer\n2020-02-27T20:26:42.565Z|00014|reconnect|WARN|unix#545: connection dropped (Connection reset by peer)\n2020-02-27T20:26:42.696Z|00015|jsonrpc|WARN|unix#550: receive error: Connection reset by peer\n2020-02-27T20:26:42.696Z|00016|reconnect|WARN|unix#550: connection dropped (Connection reset by peer)\n2020-02-27T20:27:03.457Z|00135|connmgr|INFO|br0<->unix#668: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:27:03.494Z|00136|connmgr|INFO|br0<->unix#671: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:27:03.525Z|00137|bridge|INFO|bridge br0: deleted interface veth9afd161e on port 26\n2020-02-27T20:27:03.563Z|00138|connmgr|INFO|br0<->unix#675: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:27:03.605Z|00139|connmgr|INFO|br0<->unix#678: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:27:03.628Z|00140|bridge|INFO|bridge br0: deleted interface veth7d119ea8 on port 25\n2020-02-27T20:27:20.207Z|00017|jsonrpc|WARN|unix#594: receive error: Connection reset by peer\n2020-02-27T20:27:20.207Z|00018|reconnect|WARN|unix#594: connection dropped (Connection reset by peer)\n2020-02-27T20:27:25.912Z|00141|connmgr|INFO|br0<->unix#698: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:27:25.940Z|00142|connmgr|INFO|br0<->unix#701: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:27:25.962Z|00143|bridge|INFO|bridge br0: deleted interface veth3dd2b597 on port 27\n2020-02-27T20:27:25.956Z|00019|jsonrpc|WARN|unix#601: receive error: Connection reset by peer\n2020-02-27T20:27:25.956Z|00020|reconnect|WARN|unix#601: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Feb 27 20:29:15.075 E ns/openshift-multus pod/multus-fdqj4 node/ip-10-0-140-113.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:29:15.100 E ns/openshift-machine-config-operator pod/machine-config-daemon-6wjxx node/ip-10-0-140-113.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:29:17.657 E ns/openshift-monitoring pod/node-exporter-9rt2c node/ip-10-0-130-82.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:29:17.828 E ns/openshift-multus pod/multus-fdqj4 node/ip-10-0-140-113.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:29:18.700 E ns/openshift-multus pod/multus-q9zsn node/ip-10-0-130-82.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:29:21.820 E ns/openshift-multus pod/multus-q9zsn node/ip-10-0-130-82.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:29:23.518 E ns/openshift-machine-config-operator pod/machine-config-daemon-6wjxx node/ip-10-0-140-113.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 20:29:32.867 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-5fcdb5c887-4rjht node/ip-10-0-151-30.us-east-2.compute.internal container=operator container exited with code 255 (Error): 19.379709039\nI0227 20:26:37.957247       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T19:51:52Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T20:26:37Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-27T20:26:37Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T19:51:56Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 20:26:37.960744       1 operator.go:147] Finished syncing operator at 39.831962ms\nI0227 20:26:37.960791       1 operator.go:145] Starting syncing operator at 2020-02-27 20:26:37.960785004 +0000 UTC m=+919.419589092\nI0227 20:26:37.965460       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"b1e1ff48-5201-4c6c-ac63-1dabbbab716b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0227 20:26:37.999610       1 operator.go:147] Finished syncing operator at 38.819742ms\nI0227 20:27:07.645048       1 operator.go:145] Starting syncing operator at 2020-02-27 20:27:07.64504027 +0000 UTC m=+949.103844266\nI0227 20:27:07.686392       1 operator.go:147] Finished syncing operator at 41.345546ms\nI0227 20:27:07.686435       1 operator.go:145] Starting syncing operator at 2020-02-27 20:27:07.686429022 +0000 UTC m=+949.145233153\nI0227 20:27:07.729504       1 operator.go:147] Finished syncing operator at 43.06682ms\nI0227 20:29:30.872943       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:29:30.873287       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 20:29:30.873438       1 builder.go:210] server exited\n
Feb 27 20:29:32.942 E ns/openshift-monitoring pod/thanos-querier-7689dc997-d49qs node/ip-10-0-151-30.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 20:11:49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 20:11:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 20:11:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 20:11:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 20:11:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 20:11:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 20:11:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 20:11:49 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 20:11:49 http.go:107: HTTPS: listening on [::]:9091\nI0227 20:11:49.907115       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 20:29:33.534 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 20:29:34.042 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-30.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 20:11:50 Watching directory: "/etc/alertmanager/config"\n
Feb 27 20:29:34.042 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-30.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 20:11:50 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:11:50 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 20:11:50 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 20:11:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 20:11:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 20:11:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 20:11:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 20:11:50 http.go:107: HTTPS: listening on [::]:9095\nI0227 20:11:50.536665       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 20:29:34.128 E ns/openshift-ingress pod/router-default-6fd876f4dd-sdk2j node/ip-10-0-151-30.us-east-2.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:29:53.936 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-140-113.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T20:29:51.767Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T20:29:51.778Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T20:29:51.778Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T20:29:51.779Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T20:29:51.780Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T20:29:51.780Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T20:29:51.780Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T20:29:51.781Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T20:29:51.781Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 20:29:55.384 E ns/openshift-operator-lifecycle-manager pod/packageserver-d499996c7-5twk2 node/ip-10-0-130-82.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:30:02.046 E kube-apiserver failed contacting the API: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=39605&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp 3.21.64.37:6443: connect: connection refused
Feb 27 20:30:02.046 E kube-apiserver failed contacting the API: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=41196&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 3.21.64.37:6443: connect: connection refused
Feb 27 20:30:03.697 E ns/openshift-insights pod/insights-operator-666567b569-nljp6 node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 2 (Error): rastructure with fingerprint=\nI0227 20:28:45.442661       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0227 20:28:45.445522       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0227 20:28:45.448475       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0227 20:28:45.451432       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0227 20:28:45.454396       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0227 20:28:45.457165       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0227 20:28:45.465751       1 diskrecorder.go:170] Writing 47 records to /var/lib/insights-operator/insights-2020-02-27-202845.tar.gz\nI0227 20:28:45.470643       1 diskrecorder.go:134] Wrote 47 records to disk in 5ms\nI0227 20:28:45.470676       1 periodic.go:151] Periodic gather config completed in 159ms\nI0227 20:28:56.475684       1 diskrecorder.go:303] Found files to send: [/var/lib/insights-operator/insights-2020-02-27-202845.tar.gz]\nI0227 20:28:56.475806       1 insightsuploader.go:126] Uploading latest report since 2020-02-27T20:13:36Z\nI0227 20:28:56.496382       1 insightsclient.go:163] Uploading application/vnd.redhat.openshift.periodic to https://cloud.redhat.com/api/ingress/v1/upload\nI0227 20:28:56.648953       1 insightsclient.go:213] Successfully reported id=2020-02-27T20:28:56Z x-rh-insights-request-id=53ac797be5b840f79e6bced46c136057, wrote=20708\nI0227 20:28:56.649083       1 insightsuploader.go:150] Uploaded report successfully in 173.283251ms\nI0227 20:28:56.652599       1 status.go:298] The operator is healthy\nI0227 20:29:01.598655       1 httplog.go:90] GET /metrics: (6.511631ms) 200 [Prometheus/2.15.2 10.129.2.22:41546]\nI0227 20:29:11.059577       1 httplog.go:90] GET /metrics: (3.900397ms) 200 [Prometheus/2.15.2 10.128.2.26:55168]\nI0227 20:29:31.682841       1 httplog.go:90] GET /metrics: (89.945606ms) 200 [Prometheus/2.15.2 10.129.2.22:41546]\nI0227 20:29:34.019830       1 status.go:298] The operator is healthy\n
Feb 27 20:30:05.267 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-54c4b7c9ff-mc6l6 node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): reated Pod/revision-pruner-9-ip-10-0-136-188.us-east-2.compute.internal -n openshift-kube-controller-manager because it was missing\nI0227 20:29:45.360916       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:29:45.361113       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0227 20:29:45.363293       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:29:45.363324       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 20:29:45.363340       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0227 20:29:45.363358       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0227 20:29:45.363385       1 base_controller.go:74] Shutting down NodeController ...\nI0227 20:29:45.363413       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 20:29:45.363427       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0227 20:29:45.363445       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 20:29:45.363460       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 20:29:45.363474       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0227 20:29:45.363497       1 base_controller.go:74] Shutting down PruneController ...\nI0227 20:29:45.363517       1 base_controller.go:74] Shutting down  ...\nI0227 20:29:45.363535       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:29:45.363552       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 20:29:45.363567       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 20:29:45.363589       1 targetconfigcontroller.go:613] Shutting down TargetConfigController\nF0227 20:29:45.364129       1 builder.go:243] stopped\nF0227 20:29:45.396487       1 builder.go:209] server exited\n
Feb 27 20:30:05.722 E ns/openshift-console pod/console-767f8d7f56-6gsx4 node/ip-10-0-136-188.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-02-27T20:13:37Z cmd/main: cookies are secure!\n2020-02-27T20:13:37Z cmd/main: Binding to [::]:8443...\n2020-02-27T20:13:37Z cmd/main: using TLS\n2020-02-27T20:23:39Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-02-27T20:23:39Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Feb 27 20:30:05.876 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5f9ccc5bbb-vmjm4 node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): opped\nI0227 20:29:40.387457       1 base_controller.go:74] Shutting down RevisionController ...\nF0227 20:29:40.387464       1 builder.go:209] server exited\nI0227 20:29:40.387469       1 certrotationtime_upgradeable.go:103] Shutting down CertRotationTimeUpgradeableController\nI0227 20:29:40.387482       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:29:40.387495       1 base_controller.go:74] Shutting down PruneController ...\nI0227 20:29:40.387508       1 base_controller.go:74] Shutting down  ...\nI0227 20:29:40.387522       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0227 20:29:40.387550       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0227 20:29:40.387563       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 20:29:40.387573       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0227 20:29:40.387711       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0227 20:29:40.387772       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0227 20:29:40.388301       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0227 20:29:40.388323       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0227 20:29:40.388364       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0227 20:29:40.388375       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 20:29:40.388386       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0227 20:29:40.388634       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0227 20:29:40.389241       1 base_controller.go:39] All NodeController workers have been terminated\n
Feb 27 20:30:09.863 E ns/openshift-machine-config-operator pod/machine-config-operator-798fd954f7-pdmnt node/ip-10-0-136-188.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 99e3-eedcd4c34d7b became leader'\nI0227 20:20:11.106896       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0227 20:20:11.640653       1 operator.go:264] Starting MachineConfigOperator\nI0227 20:20:11.645424       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"6ceb1c41-cab1-4f89-80b6-b8170aeda311", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-02-27-192049}] to [{operator 0.0.1-2020-02-27-192713}]\nE0227 20:23:52.373893       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=25142&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 20:23:52.373987       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-apiserver-to-kubelet-client-ca&resourceVersion=33013&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0227 20:23:52.374074       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.DaemonSet ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nW0227 20:23:52.374149       1 reflector.go:326] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: very short watch: k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 27 20:30:09.940 E ns/openshift-service-ca pod/service-ca-c966cd6c4-plt5m node/ip-10-0-136-188.us-east-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Feb 27 20:30:10.126 E ns/openshift-machine-config-operator pod/machine-config-controller-767c898dd8-hqfjw node/ip-10-0-136-188.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): Unschedulable\nI0227 20:29:25.508164       1 node_controller.go:442] Pool worker: node ip-10-0-140-113.us-east-2.compute.internal has completed update to rendered-worker-bba991debd56363740aa5acdfa16ecee\nI0227 20:29:25.523064       1 node_controller.go:435] Pool worker: node ip-10-0-140-113.us-east-2.compute.internal is now reporting ready\nI0227 20:29:29.402122       1 node_controller.go:758] Setting node ip-10-0-151-30.us-east-2.compute.internal to desired config rendered-worker-bba991debd56363740aa5acdfa16ecee\nI0227 20:29:29.428559       1 node_controller.go:452] Pool worker: node ip-10-0-151-30.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-bba991debd56363740aa5acdfa16ecee\nI0227 20:29:30.433315       1 node_controller.go:452] Pool worker: node ip-10-0-151-30.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0227 20:29:30.485543       1 node_controller.go:433] Pool worker: node ip-10-0-151-30.us-east-2.compute.internal is now reporting unready: node ip-10-0-151-30.us-east-2.compute.internal is reporting Unschedulable\nI0227 20:29:33.094534       1 node_controller.go:435] Pool master: node ip-10-0-130-82.us-east-2.compute.internal is now reporting ready\nI0227 20:29:34.095896       1 node_controller.go:758] Setting node ip-10-0-136-188.us-east-2.compute.internal to desired config rendered-master-c6b32aafb47c7076c0b5ce99578a9ab2\nI0227 20:29:34.139353       1 node_controller.go:452] Pool master: node ip-10-0-136-188.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-c6b32aafb47c7076c0b5ce99578a9ab2\nI0227 20:29:35.166790       1 node_controller.go:452] Pool master: node ip-10-0-136-188.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0227 20:29:35.186137       1 node_controller.go:433] Pool master: node ip-10-0-136-188.us-east-2.compute.internal is now reporting unready: node ip-10-0-136-188.us-east-2.compute.internal is reporting Unschedulable\n
Feb 27 20:30:10.205 E ns/openshift-machine-api pod/machine-api-operator-5c455b6d49-gn2fc node/ip-10-0-136-188.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 20:30:10.240 E ns/openshift-authentication-operator pod/authentication-operator-ff68f57b8-jr7kl node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): o:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T19:52:15Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T20:29:50Z","message":"Progressing: not all deployment replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-02-27T20:04:10Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T19:46:29Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 20:29:55.056640       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"63dd1209-f0c3-4ddc-ab4c-abf3293f5da8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0227 20:30:03.319070       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:03.323372       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0227 20:30:03.323434       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0227 20:30:03.323966       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:30:03.323986       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0227 20:30:03.324000       1 controller.go:70] Shutting down AuthenticationOperator2\nI0227 20:30:03.324023       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0227 20:30:03.324036       1 ingress_state_controller.go:157] Shutting down IngressStateController\nF0227 20:30:03.324061       1 builder.go:210] server exited\n
Feb 27 20:30:10.267 E ns/openshift-etcd-operator pod/etcd-operator-7569d5796f-8b2z5 node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): ing"\nI0227 20:30:03.726607       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:03.727568       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0227 20:30:03.727753       1 host_endpoints_controller.go:357] Shutting down HostEtcdEndpointsController\nI0227 20:30:03.727765       1 targetconfigcontroller.go:269] Shutting down TargetConfigController\nI0227 20:30:03.727778       1 scriptcontroller.go:144] Shutting down ScriptControllerController\nI0227 20:30:03.727788       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 20:30:03.727797       1 clustermembercontroller.go:104] Shutting down ClusterMemberController\nI0227 20:30:03.727809       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:30:03.727824       1 base_controller.go:74] Shutting down NodeController ...\nI0227 20:30:03.727839       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 20:30:03.727852       1 base_controller.go:74] Shutting down PruneController ...\nI0227 20:30:03.727865       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 20:30:03.727883       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0227 20:30:03.727892       1 bootstrap_teardown_controller.go:212] Shutting down BootstrapTeardownController\nI0227 20:30:03.727902       1 base_controller.go:74] Shutting down  ...\nI0227 20:30:03.727916       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:30:03.727929       1 base_controller.go:74] Shutting down  ...\nI0227 20:30:03.727940       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0227 20:30:03.727956       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 20:30:03.727970       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 20:30:03.728006       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0227 20:30:03.728017       1 builder.go:243] stopped\n
Feb 27 20:30:10.365 E ns/openshift-console-operator pod/console-operator-847659b6b4-mvr8p node/ip-10-0-136-188.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): 689       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-02-27-192713\nI0227 20:29:53.330629       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T19:51:03Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T20:13:55Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-27T20:29:53Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T19:51:03Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 20:29:53.349499       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"eec7b74a-63a9-45aa-8a37-6e17ca258eb4", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nI0227 20:30:03.716899       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:03.717737       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 20:30:03.718192       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 20:30:03.718216       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0227 20:30:03.718230       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0227 20:30:03.718246       1 controller.go:70] Shutting down Console\nI0227 20:30:03.718305       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 20:30:03.718323       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 20:30:03.718338       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0227 20:30:03.718354       1 status_controller.go:212] Shutting down StatusSyncer-console\nF0227 20:30:03.718766       1 builder.go:243] stopped\n
Feb 27 20:30:10.425 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7f8ff47f4d-dp5ff node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): /informers/factory.go:135: watch of *v1.Service ended with: too old resource version: 34845 (39629)\nI0227 20:30:02.436917       1 reflector.go:324] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 35145 (41189)\nI0227 20:30:03.430397       1 reflector.go:185] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:135\nI0227 20:30:03.430560       1 reflector.go:185] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:135\nI0227 20:30:03.431059       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0227 20:30:03.431350       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0227 20:30:03.437530       1 reflector.go:185] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:135\nI0227 20:30:03.437657       1 reflector.go:185] Listing and watching *v1.Network from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0227 20:30:03.437845       1 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135\nI0227 20:30:03.437988       1 reflector.go:185] Listing and watching *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0227 20:30:03.662651       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:03.664164       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 20:30:03.664934       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 20:30:03.664966       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0227 20:30:03.665006       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0227 20:30:03.665144       1 builder.go:243] stopped\n
Feb 27 20:30:10.538 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-787bjbmmv node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): s/factory.go:101: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nI0227 20:30:02.017613       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Proxy total 0 items received\nI0227 20:30:02.017640       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 0 items received\nI0227 20:30:02.017664       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received\nI0227 20:30:02.490112       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 35737 (39702)\nI0227 20:30:02.490780       1 reflector.go:297] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 35145 (41189)\nI0227 20:30:02.913148       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 35735 (39862)\nI0227 20:30:02.921264       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 38190 (39622)\nI0227 20:30:03.490881       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0227 20:30:03.490981       1 reflector.go:158] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0227 20:30:03.829112       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 35489 (39799)\nI0227 20:30:03.829320       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 34845 (39629)\nI0227 20:30:03.874289       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:30:03.874344       1 leaderelection.go:66] leaderelection lost\n
Feb 27 20:30:10.580 E ns/openshift-service-ca-operator pod/service-ca-operator-586465b7b-4266p node/ip-10-0-136-188.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 27 20:30:10.648 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-695bfc885b-bhdcl node/ip-10-0-136-188.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): -2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: The master nodes not ready: node \"ip-10-0-130-82.us-east-2.compute.internal\" not ready since 2020-02-27 20:29:12 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "StaticPodsDegraded: nodes/ip-10-0-130-82.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-82.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0227 20:29:33.260433       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"153f237a-2352-4793-9319-9b3e1e7c6a5c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-130-82.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-82.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: The master nodes not ready: node \"ip-10-0-130-82.us-east-2.compute.internal\" not ready since 2020-02-27 20:29:12 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "StaticPodsDegraded: nodes/ip-10-0-130-82.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-82.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0227 20:30:03.473096       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:03.473727       1 base_controller.go:74] Shutting down NodeController ...\nF0227 20:30:03.474104       1 builder.go:243] stopped\nF0227 20:30:03.474119       1 builder.go:209] server exited\n
Feb 27 20:30:10.835 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-7df56bdb8-cdbj2 node/ip-10-0-136-188.us-east-2.compute.internal container=cluster-samples-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:30:10.835 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-7df56bdb8-cdbj2 node/ip-10-0-136-188.us-east-2.compute.internal container=cluster-samples-operator-watch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:30:15.001 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): ost:6443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=38190&timeout=6m33s&timeoutSeconds=393&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:05.254325       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=38190&timeout=5m12s&timeoutSeconds=312&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:05.267025       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies?allowWatchBookmarks=true&resourceVersion=38190&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:05.276610       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/projects?allowWatchBookmarks=true&resourceVersion=38190&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:05.283721       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:6443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=41219&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:14.000977       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: context deadline exceeded\nI0227 20:30:14.001055       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0227 20:30:14.001154       1 controllermanager.go:291] leaderelection lost\n
Feb 27 20:30:32.034 E ns/openshift-authentication pod/oauth-openshift-6f445dd587-n4vn4 node/ip-10-0-130-82.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:30:40.563 E ns/openshift-authentication pod/oauth-openshift-848fd66cb7-xrbv4 node/ip-10-0-153-35.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:30:43.187 E ns/openshift-authentication pod/oauth-openshift-65ccb74ff5-7h26m node/ip-10-0-130-82.us-east-2.compute.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0227 20:30:43.015741       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0227 20:30:43.016035       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0227 20:30:43.017396       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 27 20:30:49.873 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-7bb7454df4-l9mww node/ip-10-0-140-113.us-east-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 20:31:51.298 E ns/openshift-marketplace pod/community-operators-7fb89788bd-qwcnh node/ip-10-0-141-156.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 27 20:32:03.688 E ns/openshift-monitoring pod/node-exporter-v68fl node/ip-10-0-151-30.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:48Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:32:03.718 E ns/openshift-cluster-node-tuning-operator pod/tuned-xnw8h node/ip-10-0-151-30.us-east-2.compute.internal container=tuned container exited with code 143 (Error): l platform\n2020-02-27 20:13:41,420 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 20:13:41,422 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 20:13:41,423 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 20:13:41,534 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:13:41,543 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 20:23:52.359077     657 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:23:52.360020     657 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 20:23:52.369821     657 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=26256&timeoutSeconds=427&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 20:23:52.370116     657 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: Failed to watch *v1.Profile: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/profiles?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dip-10-0-151-30.us-east-2.compute.internal&resourceVersion=26256&timeoutSeconds=506&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0227 20:30:01.955489     657 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:30:01.955515     657 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:30:18.855584     657 tuned.go:115] received signal: terminated\nI0227 20:30:18.855638     657 tuned.go:327] sending TERM to PID 747\n
Feb 27 20:32:03.743 E ns/openshift-sdn pod/ovs-zdq8p node/ip-10-0-151-30.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): >unix#755: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:29:32.500Z|00100|bridge|INFO|bridge br0: deleted interface veth3bcacb06 on port 19\n2020-02-27T20:29:32.594Z|00101|connmgr|INFO|br0<->unix#758: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:29:32.632Z|00102|connmgr|INFO|br0<->unix#761: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:29:32.677Z|00103|bridge|INFO|bridge br0: deleted interface veth6cc4955e on port 32\n2020-02-27T20:29:32.734Z|00104|connmgr|INFO|br0<->unix#764: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:29:32.781Z|00105|connmgr|INFO|br0<->unix#767: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:29:32.835Z|00106|bridge|INFO|bridge br0: deleted interface veth3db79b77 on port 26\n2020-02-27T20:29:32.909Z|00107|connmgr|INFO|br0<->unix#772: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:29:32.979Z|00108|connmgr|INFO|br0<->unix#775: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:29:33.013Z|00109|bridge|INFO|bridge br0: deleted interface vethfb388027 on port 29\n2020-02-27T20:29:33.057Z|00110|connmgr|INFO|br0<->unix#778: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:29:33.114Z|00015|jsonrpc|WARN|unix#682: receive error: Connection reset by peer\n2020-02-27T20:29:33.115Z|00016|reconnect|WARN|unix#682: connection dropped (Connection reset by peer)\n2020-02-27T20:29:33.092Z|00111|connmgr|INFO|br0<->unix#781: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:29:33.122Z|00112|bridge|INFO|bridge br0: deleted interface vethd9f9ca7b on port 27\n2020-02-27T20:29:37.082Z|00017|jsonrpc|WARN|unix#686: receive error: Connection reset by peer\n2020-02-27T20:29:37.082Z|00018|reconnect|WARN|unix#686: connection dropped (Connection reset by peer)\n2020-02-27T20:30:16.314Z|00113|connmgr|INFO|br0<->unix#815: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:16.342Z|00114|connmgr|INFO|br0<->unix#818: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:30:16.364Z|00115|bridge|INFO|bridge br0: deleted interface vethd4aa6fbd on port 31\ninfo: Saving flows ...\n
Feb 27 20:32:03.755 E ns/openshift-multus pod/multus-7t9gq node/ip-10-0-151-30.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:32:03.782 E ns/openshift-machine-config-operator pod/machine-config-daemon-w44fz node/ip-10-0-151-30.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:32:06.535 E ns/openshift-multus pod/multus-7t9gq node/ip-10-0-151-30.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:32:10.338 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady::NodeInstaller_InstallerPodFailed: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found\nNodeControllerDegraded: The master nodes not ready: node "ip-10-0-136-188.us-east-2.compute.internal" not ready since 2020-02-27 20:31:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Feb 27 20:32:12.151 E ns/openshift-machine-config-operator pod/machine-config-daemon-w44fz node/ip-10-0-151-30.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 20:32:52.508 E ns/openshift-cluster-node-tuning-operator pod/tuned-lb7xh node/ip-10-0-136-188.us-east-2.compute.internal container=tuned container exited with code 143 (Error): starting tuned...\n2020-02-27 20:13:11,315 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 20:13:11,326 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 20:13:11,327 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 20:13:11,328 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 20:13:11,329 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 20:13:11,427 INFO     tuned.daemon.controller: starting controller\n2020-02-27 20:13:11,428 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 20:13:11,447 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 20:13:11,448 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 20:13:11,455 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 20:13:11,475 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 20:13:11,482 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 20:13:11,903 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 20:13:11,912 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 20:25:53.193335    5693 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:25:53.208844    5693 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:30:01.956055    5693 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:30:01.956556    5693 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 20:30:37.025672    5693 tuned.go:115] received signal: terminated\nI0227 20:30:37.025855    5693 tuned.go:327] sending TERM to PID 5892\n
Feb 27 20:32:52.557 E ns/openshift-monitoring pod/node-exporter-ld8ht node/ip-10-0-136-188.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:29:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T20:30:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 20:32:52.577 E ns/openshift-controller-manager pod/controller-manager-cr94w node/ip-10-0-136-188.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error):  controller\nI0227 20:28:10.166378       1 templateinstance_controller.go:288] Starting TemplateInstance controller\nI0227 20:28:10.171995       1 shared_informer.go:204] Caches are synced for DefaultRoleBindingController \nI0227 20:28:10.185333       1 shared_informer.go:204] Caches are synced for service account \nI0227 20:28:10.189823       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0227 20:28:10.454821       1 build_controller.go:474] Starting build controller\nI0227 20:28:10.454843       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nI0227 20:28:10.506752       1 deleted_dockercfg_secrets.go:74] caches synced\nI0227 20:28:10.506779       1 deleted_token_secrets.go:69] caches synced\nI0227 20:28:10.506962       1 docker_registry_service.go:154] caches synced\nI0227 20:28:10.507007       1 create_dockercfg_secrets.go:218] urls found\nI0227 20:28:10.507020       1 create_dockercfg_secrets.go:224] caches synced\nI0227 20:28:10.507141       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.24.226:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.24.226:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nW0227 20:29:46.831144       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 427; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 20:29:46.832153       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 431; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 20:32:52.650 E ns/openshift-sdn pod/ovs-fxsm6 node/ip-10-0-136-188.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): _mods in the last 0 s (4 deletes)\n2020-02-27T20:30:09.220Z|00268|bridge|INFO|bridge br0: deleted interface vethd121044d on port 82\n2020-02-27T20:30:12.910Z|00269|bridge|INFO|bridge br0: added interface veth5979cc13 on port 102\n2020-02-27T20:30:12.948Z|00270|connmgr|INFO|br0<->unix#1105: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:30:13.004Z|00271|connmgr|INFO|br0<->unix#1109: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T20:30:13.009Z|00272|connmgr|INFO|br0<->unix#1111: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:14.368Z|00273|bridge|INFO|bridge br0: added interface veth0683db2a on port 103\n2020-02-27T20:30:14.435Z|00274|connmgr|INFO|br0<->unix#1114: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:30:14.555Z|00275|connmgr|INFO|br0<->unix#1117: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:16.770Z|00276|connmgr|INFO|br0<->unix#1123: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:16.822Z|00277|connmgr|INFO|br0<->unix#1126: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:30:16.920Z|00278|bridge|INFO|bridge br0: deleted interface veth5979cc13 on port 102\n2020-02-27T20:30:19.363Z|00279|connmgr|INFO|br0<->unix#1129: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:19.435Z|00280|connmgr|INFO|br0<->unix#1132: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:30:19.494Z|00281|bridge|INFO|bridge br0: deleted interface veth0683db2a on port 103\n2020-02-27T20:30:20.966Z|00282|bridge|INFO|bridge br0: added interface veth87e7e022 on port 104\n2020-02-27T20:30:21.088Z|00283|connmgr|INFO|br0<->unix#1138: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T20:30:21.205Z|00284|connmgr|INFO|br0<->unix#1141: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:24.417Z|00285|connmgr|INFO|br0<->unix#1144: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T20:30:24.470Z|00286|connmgr|INFO|br0<->unix#1147: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T20:30:24.519Z|00287|bridge|INFO|bridge br0: deleted interface veth87e7e022 on port 104\ninfo: Saving flows ...\n
Feb 27 20:32:52.668 E ns/openshift-multus pod/multus-b782s node/ip-10-0-136-188.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 20:32:52.713 E ns/openshift-multus pod/multus-admission-controller-l9bbn node/ip-10-0-136-188.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 20:32:52.745 E ns/openshift-sdn pod/sdn-controller-zttc8 node/ip-10-0-136-188.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 20:14:08.449025       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 20:32:52.760 E ns/openshift-etcd pod/etcd-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 20:08:44.165407 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-136-188.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-136-188.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 20:08:44.166760 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 20:08:44.167287 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-136-188.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-136-188.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 20:08:44.170203 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/27 20:08:44 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.136.188:9978: connect: connection refused". Reconnecting...\n
Feb 27 20:32:52.772 E ns/openshift-machine-config-operator pod/machine-config-server-4zbxp node/ip-10-0-136-188.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 20:23:39.102070       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 20:23:39.103593       1 api.go:51] Launching server on :22624\nI0227 20:23:39.103987       1 api.go:51] Launching server on :22623\n
Feb 27 20:32:52.794 E ns/openshift-machine-config-operator pod/machine-config-daemon-dtlmg node/ip-10-0-136-188.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 20:32:52.810 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}]\nI0227 20:30:34.465346       1 store.go:1342] Monitoring ingresses.config.openshift.io count at <storage-prefix>//config.openshift.io/ingresses\nI0227 20:30:34.524194       1 client.go:361] parsed scheme: "endpoint"\nI0227 20:30:34.524239       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-0.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>} {https://etcd-2.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>} {https://etcd-1.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}]\nI0227 20:30:34.537839       1 store.go:1342] Monitoring authentications.operator.openshift.io count at <storage-prefix>//operator.openshift.io/authentications\nI0227 20:30:35.544606       1 aggregator.go:229] Finished OpenAPI spec generation after 1.731297816s\nI0227 20:30:35.979755       1 aggregator.go:226] Updating OpenAPI spec because v1.project.openshift.io is updated\nI0227 20:30:36.594579       1 client.go:361] parsed scheme: "endpoint"\nI0227 20:30:36.594724       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-0.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>} {https://etcd-2.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>} {https://etcd-1.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}]\nI0227 20:30:36.616693       1 store.go:1342] Monitoring networks.config.openshift.io count at <storage-prefix>//config.openshift.io/networks\nI0227 20:30:36.971271       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-136-188.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0227 20:30:36.971516       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 27 20:32:52.810 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 20:30:10.369077       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 20:32:52.810 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:30:20.364488       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:20.364872       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 20:30:28.901487       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:28.901844       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 20:32:52.810 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0227 20:30:08.757558       1 cmd.go:200] Using insecure, self-signed certificates\nI0227 20:30:08.758045       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1582835408 cert, and key in /tmp/serving-cert-020646497/serving-signer.crt, /tmp/serving-cert-020646497/serving-signer.key\nI0227 20:30:10.189610       1 observer_polling.go:155] Starting file observer\nI0227 20:30:14.254388       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0227 20:30:37.020842       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 20:30:37.020982       1 leaderelection.go:67] leaderelection lost\n
Feb 27 20:32:52.836 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): ceVersion=41384&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.140775       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=41394&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.140908       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=42037&timeout=5m20s&timeoutSeconds=320&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.141016       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=41762&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.141133       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:6443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=42039&timeout=7m35s&timeoutSeconds=455&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.141231       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=41368&timeout=9m36s&timeoutSeconds=576&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:37.141395       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=41397&timeout=9m50s&timeoutSeconds=590&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Feb 27 20:32:52.836 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:15.087633       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:15.088090       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:24.002861       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:24.003187       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:24.853082       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:24.853536       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:33.563470       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:33.563948       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:34.016096       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:34.016478       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:34.865670       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:34.867394       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:34.976304       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:34.976666       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 20:30:36.162911       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 20:30:36.163328       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 20:32:52.836 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): ] loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 19:31:04 +0000 UTC to 2030-02-24 19:31:04 +0000 UTC (now=2020-02-27 20:30:16.579939524 +0000 UTC))\nI0227 20:30:16.580025       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 19:31:08 +0000 UTC to 2020-02-28 19:31:08 +0000 UTC (now=2020-02-27 20:30:16.580009438 +0000 UTC))\nI0227 20:30:16.580611       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582832794" (2020-02-27 19:46:47 +0000 UTC to 2022-02-26 19:46:48 +0000 UTC (now=2020-02-27 20:30:16.58058653 +0000 UTC))\nI0227 20:30:16.581017       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582835416" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582835416" (2020-02-27 19:30:15 +0000 UTC to 2021-02-26 19:30:15 +0000 UTC (now=2020-02-27 20:30:16.58099589 +0000 UTC))\nI0227 20:30:16.581125       1 secure_serving.go:178] Serving securely on [::]:10257\nI0227 20:30:16.581218       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0227 20:30:16.581581       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 27 20:32:52.836 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): _amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=40595&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:05.061326       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=33999&timeout=6m1s&timeoutSeconds=361&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 20:30:13.848578       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 20:30:13.848742       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 20:30:13.848810       1 csrcontroller.go:121] key failed with : configmaps "csr-signer-ca" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager-operator"\nE0227 20:30:13.849248       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 20:30:13.849334       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 20:30:13.849479       1 reflector.go:156] runtime/asm_amd64.s:1357: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nI0227 20:30:36.994398       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 20:30:36.995133       1 csrcontroller.go:83] Shutting down CSR controller\nI0227 20:30:36.995207       1 csrcontroller.go:85] CSR controller shut down\nF0227 20:30:36.995376       1 builder.go:209] server exited\n
Feb 27 20:32:52.854 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-136-188.us-east-2.compute.internal node/ip-10-0-136-188.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): ion refused\nE0227 20:30:13.223106       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 20:30:13.230909       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0227 20:30:13.231008       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0227 20:30:13.338993       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0227 20:30:13.339189       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0227 20:30:13.339297       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0227 20:30:13.339802       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0227 20:30:13.339841       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0227 20:30:13.339877       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0227 20:30:13.339913       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0227 20:30:13.339950       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0227 20:30:13.340079       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\n
Feb 27 20:32:59.024 E ns/openshift-multus pod/multus-b782s node/ip-10-0-136-188.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:33:01.126 E ns/openshift-multus pod/multus-b782s node/ip-10-0-136-188.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 20:33:05.143 E ns/openshift-machine-config-operator pod/machine-config-daemon-dtlmg node/ip-10-0-136-188.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 20:37:55.458 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found
Feb 27 20:44:10.458 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found
Feb 27 20:50:40.458 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found
Feb 27 20:57:59.157 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found
Feb 27 21:07:07.117 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found
Feb 27 21:16:16.882 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator kube-apiserver is reporting a failure: NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: pods "installer-9-ip-10-0-136-188.us-east-2.compute.internal" not found