Result: SUCCESS
Tests: 2 failed / 23 succeeded
Started: 2020-02-27 21:41
Elapsed: 1h26m
Work namespace: ci-op-zfmbybf1
Refs: openshift-4.5:29304dc2, 34:7800a949
Pod: eb5d8480-59a9-11ea-a557-0a58ac107de1
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Cluster frontend ingress remain available (35m54s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 27s of 35m53s (1%):

Feb 27 22:29:45.166 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 22:29:45.167 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 22:29:45.390 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 22:29:45.395 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 22:30:46.167 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 22:30:46.464 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 22:42:17.166 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 27 22:42:17.167 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 22:42:17.468 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 27 22:42:17.475 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 22:45:13.167 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 22:45:14.166 - 3s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 27 22:45:18.504 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 22:45:41.684 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 22:45:42.165 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Feb 27 22:45:52.166 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 22:45:52.388 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 22:45:52.393 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 22:48:20.071 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 22:48:20.165 - 3s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 27 22:48:23.415 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 22:48:28.167 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 22:48:28.167 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 22:48:28.393 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 22:48:28.394 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
				from junit_upgrade_1582844100.xml
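
The failing test above continuously issues GET requests against the cluster's frontend routes (console and oauth-openshift), over both freshly dialed and reused connections, and reports any window in which a route stops answering, plus the cumulative disruption time in the summary line. The sketch below shows one way such an availability probe could be written; it is a simplified illustration, not the openshift-tests implementation, and the route URL, poll interval, and TLS handling are assumptions.

// routeprobe is a minimal sketch of a frontend-availability poller.
// It is NOT the openshift-tests implementation; the route URL and the
// one-second poll interval are illustrative assumptions.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	// Hypothetical route under test; substitute a real route host.
	url := "https://console-openshift-console.apps.example.com/healthz"

	// Reused-connection client: keeps connections alive between polls.
	reused := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	up := true
	for range time.Tick(1 * time.Second) {
		// New-connection client: keep-alives disabled so every poll dials fresh.
		fresh := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				DisableKeepAlives: true,
				TLSClientConfig:   &tls.Config{InsecureSkipVerify: true},
			},
		}

		ok := probe(reused, url) && probe(fresh, url)
		switch {
		case up && !ok:
			log.Printf("E route stopped responding to GET requests")
			up = false
		case !up && ok:
			log.Printf("I route started responding to GET requests")
			up = true
		}
	}
}

// probe returns true when the route answers a GET with a non-error status.
func probe(c *http.Client, url string) bool {
	resp, err := c.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode < 400
}

The per-poll client with DisableKeepAlives approximates the "over new connections" checks in the output above, while the long-lived client approximates the "on reused connections" checks.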


openshift-tests Monitor cluster while tests execute (37m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
219 error-level events were detected during this test run:

Feb 27 22:20:34.484 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-6cf76c5df7" has successfully progressed.
Feb 27 22:21:30.426 E ns/openshift-etcd-operator pod/etcd-operator-74d9ff8bf9-6r6wd node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error):      1 clientconn.go:577] ClientConn switching balancer to "pick_first"\nI0227 22:21:15.249885       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0014cef30, CONNECTING\nI0227 22:21:15.261187       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0014cef30, READY\nI0227 22:21:15.262034       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0227 22:21:15.262201       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.0.91:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.91:2379: operation was canceled". Reconnecting...\nI0227 22:21:15.262239       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 22:21:15.262311       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 22:21:15.262467       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 22:21:28.947156       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"ad7467fc-c887-4503-8f27-eb69f92f84d0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator:\ncause by changes in data.ca-bundle.crt\nI0227 22:21:29.099332       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"ad7467fc-c887-4503-8f27-eb69f92f84d0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/etcd-client -n openshift-etcd-operator because it changed\nI0227 22:21:29.398284       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:21:29.398757       1 builder.go:209] server exited\n
Feb 27 22:21:48.498 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-55f977646-ndfbx node/ip-10-0-138-5.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): o:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 22:21:47.520081       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 22:21:47.520103       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 22:21:47.520149       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 22:21:47.520190       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 22:21:47.520213       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 22:21:47.520240       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0227 22:21:47.520256       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0227 22:21:47.520274       1 base_controller.go:74] Shutting down NodeController ...\nI0227 22:21:47.520291       1 base_controller.go:74] Shutting down  ...\nI0227 22:21:47.520308       1 base_controller.go:74] Shutting down PruneController ...\nI0227 22:21:47.520324       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 22:21:47.520339       1 certrotationtime_upgradeable.go:103] Shutting down CertRotationTimeUpgradeableController\nI0227 22:21:47.520354       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0227 22:21:47.520370       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0227 22:21:47.520386       1 base_controller.go:74] Shutting down  ...\nI0227 22:21:47.520401       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 22:21:47.520414       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0227 22:21:47.520431       1 base_controller.go:74] Shutting down RevisionController ...\nF0227 22:21:47.520435       1 builder.go:243] stopped\n
Feb 27 22:22:09.565 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-645c9f55cb-c55x9 node/ip-10-0-138-5.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): erator", Name:"kube-controller-manager-operator", UID:"d7e83db6-010d-4384-86d8-8444b5823ac1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-138-5.us-west-1.compute.internal" from revision 7 to 8 because static pod is ready\nI0227 22:16:01.331237       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"d7e83db6-010d-4384-86d8-8444b5823ac1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8"\nI0227 22:16:02.322875       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"d7e83db6-010d-4384-86d8-8444b5823ac1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-8 -n openshift-kube-controller-manager:\ncause by changes in data.status\nI0227 22:16:06.922828       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"d7e83db6-010d-4384-86d8-8444b5823ac1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-8-ip-10-0-138-5.us-west-1.compute.internal -n openshift-kube-controller-manager because it was missing\nI0227 22:22:08.663936       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:22:08.664224       1 builder.go:209] server exited\n
Feb 27 22:22:20.613 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-5bdd555dd4-ljlqv node/ip-10-0-138-5.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error):  watch stream event decoding: unexpected EOF\nI0227 22:11:30.153101       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153110       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153121       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153403       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153414       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153424       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153433       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153442       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153451       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153470       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153480       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153489       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153501       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:11:30.153512       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:22:19.542244       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:22:19.543176       1 base_controller.go:74] Shutting down PruneController ...\nI0227 22:22:19.543209       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nF0227 22:22:19.543209       1 builder.go:243] stopped\n
Feb 27 22:24:08.438 E ns/openshift-machine-api pod/machine-api-operator-bc65d4fc6-dc7zh node/ip-10-0-138-5.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 22:26:46.286 E ns/openshift-machine-api pod/machine-api-controllers-85957b58b6-c6lw4 node/ip-10-0-157-72.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 27 22:27:19.514 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5647df4679-zrtr2 node/ip-10-0-138-5.us-west-1.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-02-27T22:00:52Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-02-27T22:00:52Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-02-27T22:00:52Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-02-27T22:00:51Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-02-27-214225"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0227 22:06:09.399569       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"a6ea80f5-c647-44fe-b3b0-8eeea375d14b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0227 22:27:18.502637       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:27:18.502686       1 leaderelection.go:66] leaderelection lost\n
Feb 27 22:28:49.959 E ns/openshift-cluster-machine-approver pod/machine-approver-857bf95d65-9496p node/ip-10-0-138-5.us-west-1.compute.internal container=machine-approver-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:28:49.959 E ns/openshift-cluster-machine-approver pod/machine-approver-857bf95d65-9496p node/ip-10-0-138-5.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:28:52.545 E ns/openshift-kube-storage-version-migrator pod/migrator-5b6866fd88-5gvdm node/ip-10-0-132-239.us-west-1.compute.internal container=migrator container exited with code 2 (Error): I0227 22:11:30.079341       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.599259       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 27 22:29:03.577 E ns/openshift-monitoring pod/node-exporter-hff7f node/ip-10-0-132-239.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:29:06.043 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-65bff6f755-bxwsb node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error):  10.128.2.18:53070]\nI0227 22:28:30.460068       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 22:28:35.195877       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0227 22:28:35.206437       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0227 22:28:40.467782       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 22:28:50.485880       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 22:28:54.435058       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0227 22:28:54.435089       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0227 22:28:54.436457       1 httplog.go:90] GET /metrics: (5.982023ms) 200 [Prometheus/2.15.2 10.129.2.10:60840]\nI0227 22:28:55.197346       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0227 22:28:55.214870       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0227 22:28:57.994012       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0227 22:28:57.994136       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0227 22:28:57.995661       1 httplog.go:90] GET /metrics: (1.792107ms) 200 [Prometheus/2.15.2 10.128.2.18:53070]\nI0227 22:29:00.519509       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 22:29:05.102270       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:29:05.111206       1 builder.go:217] server exited\n
Feb 27 22:29:09.630 E ns/openshift-monitoring pod/kube-state-metrics-6f8f76f5bd-z2j4d node/ip-10-0-132-239.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 27 22:29:10.056 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-68588c4rq node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error): o old resource version: 20708 (25429)\nI0227 22:28:08.299593       1 reflector.go:158] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0227 22:28:08.342925       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 24350 (26064)\nI0227 22:28:08.343179       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 21393 (25378)\nI0227 22:28:08.343269       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 21370 (25378)\nI0227 22:28:08.552743       1 reflector.go:158] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134\nI0227 22:28:09.192866       1 reflector.go:158] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0227 22:28:09.348168       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0227 22:28:09.348321       1 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134\nI0227 22:28:09.348338       1 reflector.go:158] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134\nI0227 22:28:14.166169       1 httplog.go:90] GET /metrics: (9.930122ms) 200 [Prometheus/2.15.2 10.128.2.18:38296]\nI0227 22:28:15.040476       1 httplog.go:90] GET /metrics: (20.800459ms) 200 [Prometheus/2.15.2 10.129.2.10:58264]\nI0227 22:28:44.162984       1 httplog.go:90] GET /metrics: (6.74884ms) 200 [Prometheus/2.15.2 10.128.2.18:38296]\nI0227 22:28:45.021199       1 httplog.go:90] GET /metrics: (1.739409ms) 200 [Prometheus/2.15.2 10.129.2.10:58264]\nI0227 22:29:09.090229       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:29:09.090279       1 leaderelection.go:66] leaderelection lost\n
Feb 27 22:29:11.561 E ns/openshift-monitoring pod/node-exporter-2vf4d node/ip-10-0-129-131.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:42Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:29:14.526 E ns/openshift-image-registry pod/image-registry-649ffb6b87-k8wk8 node/ip-10-0-129-131.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:16.071 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-129-131.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:16.071 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-129-131.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:16.071 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-129-131.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:20.119 E ns/openshift-authentication-operator pod/authentication-operator-6cb9d889d6-qt6xz node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error): 28       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605537       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605545       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605555       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605563       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605595       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605665       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605720       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605932       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605946       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605957       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605967       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605977       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:28:06.605219       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:29:19.296105       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:29:19.296256       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nF0227 22:29:19.296473       1 builder.go:210] server exited\nI0227 22:29:19.296567       1 controller.go:70] Shutting down AuthenticationOperator2\n
Feb 27 22:29:20.172 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-cccc795b5-vk8xz node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error): bserved generation is 3, desired generation is 4.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-02-27T22:06:07Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T22:00:51Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0227 22:29:07.128860       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"375d8a78-5d96-4e12-953b-d848892d5927", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0227 22:29:18.504516       1 httplog.go:90] GET /metrics: (14.211492ms) 200 [Prometheus/2.15.2 10.128.2.18:50832]\nI0227 22:29:19.199337       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:29:19.199758       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 22:29:19.200581       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 22:29:19.200609       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0227 22:29:19.200646       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0227 22:29:19.200675       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0227 22:29:19.200691       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0227 22:29:19.200704       1 builder.go:243] stopped\nF0227 22:29:19.200717       1 builder.go:210] server exited\n
Feb 27 22:29:22.274 E ns/openshift-monitoring pod/prometheus-adapter-7c99f4b5d7-bqmpp node/ip-10-0-148-181.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 22:12:01.363705       1 adapter.go:93] successfully using in-cluster auth\nI0227 22:12:02.112911       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 27 22:29:24.123 E ns/openshift-service-ca-operator pod/service-ca-operator-fdf678c59-gd9n8 node/ip-10-0-138-5.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Feb 27 22:29:24.251 E ns/openshift-operator-lifecycle-manager pod/packageserver-76c69db65-blm56 node/ip-10-0-132-186.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:24.600 E ns/openshift-monitoring pod/node-exporter-f8g9r node/ip-10-0-132-186.us-west-1.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:24.600 E ns/openshift-monitoring pod/node-exporter-f8g9r node/ip-10-0-132-186.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:28.811 E ns/openshift-monitoring pod/grafana-868dc858fc-57jdm node/ip-10-0-129-131.us-west-1.compute.internal container=grafana container exited with code 1 (Error): 
Feb 27 22:29:28.811 E ns/openshift-monitoring pod/grafana-868dc858fc-57jdm node/ip-10-0-129-131.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 27 22:29:35.225 E ns/openshift-monitoring pod/telemeter-client-d5c677d4-rs6fl node/ip-10-0-129-131.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Feb 27 22:29:35.225 E ns/openshift-monitoring pod/telemeter-client-d5c677d4-rs6fl node/ip-10-0-129-131.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 27 22:29:35.796 E ns/openshift-monitoring pod/prometheus-adapter-7c99f4b5d7-qwjp6 node/ip-10-0-129-131.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 22:12:01.577861       1 adapter.go:93] successfully using in-cluster auth\nI0227 22:12:02.939861       1 secure_serving.go:116] Serving securely on [::]:6443\nW0227 22:17:21.652858       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: too old resource version: 19283 (19922)\n
Feb 27 22:29:38.191 E ns/openshift-controller-manager pod/controller-manager-s6pnw node/ip-10-0-138-5.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0227 22:08:14.962933       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 22:08:14.965566       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 22:08:14.965669       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 22:08:14.965769       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 22:08:14.966017       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nE0227 22:13:34.580057       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n
Feb 27 22:29:38.221 E ns/openshift-controller-manager pod/controller-manager-xh599 node/ip-10-0-157-72.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0227 22:07:28.038498       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 22:07:28.040871       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 22:07:28.040899       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 22:07:28.041010       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 22:07:28.041062       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 22:29:38.590 E ns/openshift-controller-manager pod/controller-manager-j6pxp node/ip-10-0-132-186.us-west-1.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:40.610 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-181.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:40.610 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-181.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:40.610 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-181.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:46.310 E ns/openshift-authentication pod/oauth-openshift-757c59b656-zsc65 node/ip-10-0-138-5.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:29:58.909 E ns/openshift-marketplace pod/redhat-marketplace-5ddfb4bccf-zp696 node/ip-10-0-132-239.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 27 22:29:59.340 E ns/openshift-monitoring pod/node-exporter-q6fds node/ip-10-0-138-5.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:28:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:30:01.084 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-239.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 22:12:44 Watching directory: "/etc/alertmanager/config"\n
Feb 27 22:30:01.084 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-239.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 22:12:44 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:12:44 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:12:44 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:12:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 22:12:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:12:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:12:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 22:12:44.994647       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:12:44 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 22:30:02.590 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:29:51.387Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:29:51.391Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:29:51.391Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:29:51.394Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:29:51.394Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:30:05.860 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-131.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 22:14:37 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 22:30:05.860 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-131.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 22:14:37 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 22:14:37 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:14:37 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:14:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 22:14:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:14:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 22:14:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 22:14:38 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 22:14:38 http.go:107: HTTPS: listening on [::]:9091\nI0227 22:14:38.009644       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:29:29 oauthproxy.go:774: basicauth: 10.131.0.27:46982 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 22:30:05.860 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-131.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T22:14:37.314413174Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T22:14:37.315874488Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T22:14:42.448716204Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T22:14:42.448790092Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 22:30:08.623 E ns/openshift-monitoring pod/thanos-querier-77fc4474b9-94twd node/ip-10-0-148-181.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 22:12:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:12:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:12:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:12:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 22:12:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:12:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:12:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 22:12:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 22:12:55 http.go:107: HTTPS: listening on [::]:9091\nI0227 22:12:55.577945       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 22:30:09.618 E ns/openshift-ingress pod/router-default-7b7c5c7c87-khnvk node/ip-10-0-148-181.us-west-1.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:23.635680       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:28.659050       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:33.632894       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:38.653550       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:43.661720       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:48.779655       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:53.633403       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:29:58.641834       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:30:03.629333       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 22:30:08.635102       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 27 22:30:17.541 E ns/openshift-monitoring pod/node-exporter-p2k78 node/ip-10-0-157-72.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:29:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:30:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:30:22.270 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-239.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:30:15.575Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:30:15.578Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:30:15.579Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:30:15.579Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:30:15.579Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:30:15.579Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:30:15.580Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:30:15.580Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:30:15.580Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:30:15.580Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:30:26.657 E ns/openshift-marketplace pod/certified-operators-66c4c649c8-bcbj9 node/ip-10-0-148-181.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Feb 27 22:30:34.617 E ns/openshift-service-ca pod/service-ca-554b97dfcf-g5ttd node/ip-10-0-157-72.us-west-1.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Feb 27 22:30:36.633 E ns/openshift-console pod/console-54f86db86d-cnvkj node/ip-10-0-157-72.us-west-1.compute.internal container=console container exited with code 2 (Error): uth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020-02-27T22:11:09Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020-02-27T22:11:19Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020-02-27T22:11:29Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020-02-27T22:11:39Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020-02-27T22:11:49Z cmd/main: Binding to [::]:8443...\n2020-02-27T22:11:49Z cmd/main: using TLS\n2020-02-27T22:29:40Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Feb 27 22:32:18.968 E ns/openshift-sdn pod/sdn-controller-7vvp7 node/ip-10-0-157-72.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 21:57:13.205500       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 22:03:59.388151       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\n
Feb 27 22:32:29.376 E ns/openshift-sdn pod/sdn-mtswp node/ip-10-0-132-186.us-west-1.compute.internal container=sdn container exited with code 255 (Error): tting endpoints for openshift-console/console:https to [10.129.0.23:8443 10.129.0.61:8443 10.130.0.71:8443]\nI0227 22:30:35.823870    2887 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.129.0.61:8443 10.130.0.71:8443]\nI0227 22:30:35.824045    2887 roundrobin.go:217] Delete endpoint 10.129.0.23:8443 for service "openshift-console/console:https"\nI0227 22:30:36.184846    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:30:36.184874    2887 proxier.go:347] userspace syncProxyRules took 106.771122ms\nI0227 22:30:36.396394    2887 pod.go:503] CNI_ADD openshift-service-ca/service-ca-c966cd6c4-4pmcj got IP 10.128.0.76, ofport 77\nI0227 22:30:36.530873    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:30:36.530921    2887 proxier.go:347] userspace syncProxyRules took 78.043383ms\nI0227 22:31:06.837988    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:31:06.838123    2887 proxier.go:347] userspace syncProxyRules took 94.9468ms\nI0227 22:31:37.121500    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:31:37.121528    2887 proxier.go:347] userspace syncProxyRules took 78.553844ms\nI0227 22:32:07.416037    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:07.416063    2887 proxier.go:347] userspace syncProxyRules took 78.829336ms\nI0227 22:32:16.237326    2887 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.3:6443]\nI0227 22:32:16.237514    2887 roundrobin.go:217] Delete endpoint 10.130.0.19:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:32:16.536652    2887 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:16.536688    2887 proxier.go:347] userspace syncProxyRules took 82.747217ms\nF0227 22:32:28.338959    2887 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 22:32:40.855 E ns/openshift-sdn pod/sdn-controller-vw2rw node/ip-10-0-138-5.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 21:57:13.844128       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 22:03:59.377415       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0227 22:04:17.899435       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\n
Feb 27 22:32:46.880 E ns/openshift-multus pod/multus-admission-controller-rf8sz node/ip-10-0-138-5.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (OOMKilled): 
Feb 27 22:32:47.152 E ns/openshift-multus pod/multus-rpdf4 node/ip-10-0-129-131.us-west-1.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 27 22:32:50.616 E ns/openshift-sdn pod/sdn-852br node/ip-10-0-132-239.us-west-1.compute.internal container=sdn container exited with code 255 (Error): :https to [10.129.0.61:8443 10.130.0.71:8443]\nI0227 22:30:35.820284    2718 roundrobin.go:217] Delete endpoint 10.129.0.23:8443 for service "openshift-console/console:https"\nI0227 22:30:36.044279    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:30:36.044306    2718 proxier.go:347] userspace syncProxyRules took 73.734524ms\nI0227 22:30:36.289061    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:30:36.289088    2718 proxier.go:347] userspace syncProxyRules took 72.374475ms\nI0227 22:31:06.540281    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:31:06.540305    2718 proxier.go:347] userspace syncProxyRules took 72.583698ms\nI0227 22:31:36.826496    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:31:36.826525    2718 proxier.go:347] userspace syncProxyRules took 72.718254ms\nI0227 22:32:07.077247    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:07.077275    2718 proxier.go:347] userspace syncProxyRules took 73.65563ms\nI0227 22:32:16.236980    2718 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.3:6443]\nI0227 22:32:16.237020    2718 roundrobin.go:217] Delete endpoint 10.130.0.19:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:32:16.514986    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:16.515009    2718 proxier.go:347] userspace syncProxyRules took 71.438308ms\nI0227 22:32:46.797635    2718 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:46.797663    2718 proxier.go:347] userspace syncProxyRules took 72.869619ms\nI0227 22:32:50.337764    2718 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 22:32:50.337803    2718 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 22:33:15.059 E ns/openshift-sdn pod/sdn-w64wj node/ip-10-0-148-181.us-west-1.compute.internal container=sdn container exited with code 255 (Error): syncProxyRules took 78.579928ms\nI0227 22:32:07.189882    2767 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:07.189910    2767 proxier.go:347] userspace syncProxyRules took 73.107423ms\nI0227 22:32:16.237543    2767 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.3:6443]\nI0227 22:32:16.237589    2767 roundrobin.go:217] Delete endpoint 10.130.0.19:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:32:16.489481    2767 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:16.489510    2767 proxier.go:347] userspace syncProxyRules took 73.978823ms\nI0227 22:32:46.753445    2767 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:46.753470    2767 proxier.go:347] userspace syncProxyRules took 73.613937ms\nI0227 22:32:53.882224    2767 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.3:6443 10.130.0.72:6443]\nI0227 22:32:53.907480    2767 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.130.0.72:6443]\nI0227 22:32:53.907545    2767 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:32:54.133719    2767 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:54.133753    2767 proxier.go:347] userspace syncProxyRules took 74.160727ms\nI0227 22:32:54.390175    2767 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:54.390202    2767 proxier.go:347] userspace syncProxyRules took 72.831655ms\nI0227 22:33:14.914759    2767 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 22:33:14.914807    2767 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 22:33:25.249 E ns/openshift-multus pod/multus-admission-controller-qt8z4 node/ip-10-0-157-72.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:33:31.045 E ns/openshift-multus pod/multus-lh2r8 node/ip-10-0-138-5.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 22:33:42.268 E ns/openshift-sdn pod/sdn-z9hrr node/ip-10-0-129-131.us-west-1.compute.internal container=sdn container exited with code 255 (Error): lugin ready\nI0227 22:32:53.882484    6418 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.3:6443 10.130.0.72:6443]\nI0227 22:32:53.909141    6418 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.130.0.72:6443]\nI0227 22:32:53.909172    6418 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:32:54.126017    6418 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:54.126039    6418 proxier.go:347] userspace syncProxyRules took 70.887971ms\nI0227 22:32:54.363925    6418 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:32:54.363950    6418 proxier.go:347] userspace syncProxyRules took 69.815911ms\nI0227 22:33:24.600430    6418 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:24.600452    6418 proxier.go:347] userspace syncProxyRules took 70.385721ms\nI0227 22:33:41.204155    6418 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.62:6443 10.130.0.72:6443]\nI0227 22:33:41.224521    6418 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.62:6443 10.130.0.72:6443]\nI0227 22:33:41.224553    6418 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:33:41.457157    6418 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:41.457179    6418 proxier.go:347] userspace syncProxyRules took 70.115626ms\nI0227 22:33:41.692829    6418 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:41.692851    6418 proxier.go:347] userspace syncProxyRules took 69.774036ms\nF0227 22:33:41.846940    6418 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 22:34:01.304 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6545546bf6-7pjcq node/ip-10-0-129-131.us-west-1.compute.internal container=snapshot-controller container exited with code 255 (Error): 
Feb 27 22:34:08.195 E ns/openshift-sdn pod/sdn-7hq89 node/ip-10-0-138-5.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 0227 22:33:17.345100    8429 proxier.go:347] userspace syncProxyRules took 194.615006ms\nI0227 22:33:17.459279    8429 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32446/tcp)\nI0227 22:33:17.459560    8429 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30508/tcp)\nI0227 22:33:17.459624    8429 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-771/service-test:" (:30774/tcp)\nI0227 22:33:17.510734    8429 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31786\nI0227 22:33:17.519373    8429 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 22:33:17.519417    8429 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 22:33:17.519535    8429 cmd.go:177] openshift-sdn network plugin ready\nI0227 22:33:41.201149    8429 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.129.0.62:6443 10.130.0.72:6443]\nI0227 22:33:41.221481    8429 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.62:6443 10.130.0.72:6443]\nI0227 22:33:41.221516    8429 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 22:33:41.490211    8429 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:41.490236    8429 proxier.go:347] userspace syncProxyRules took 90.447907ms\nI0227 22:33:41.778493    8429 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:41.778518    8429 proxier.go:347] userspace syncProxyRules took 92.747342ms\nI0227 22:34:07.209914    8429 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 22:34:07.209954    8429 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 22:34:37.424 E ns/openshift-sdn pod/sdn-pzjfj node/ip-10-0-157-72.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 29314    7833 service.go:363] Adding new service port "openshift-console/console:https" at 172.30.79.1:443/TCP\nI0227 22:33:44.529538    7833 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0227 22:33:44.763640    7833 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:33:44.763777    7833 proxier.go:347] userspace syncProxyRules took 233.899157ms\nI0227 22:33:44.833461    7833 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30508/tcp)\nI0227 22:33:44.833567    7833 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-771/service-test:" (:30774/tcp)\nI0227 22:33:44.833845    7833 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32446/tcp)\nI0227 22:33:44.881471    7833 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31786\nI0227 22:33:44.892291    7833 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 22:33:44.892321    7833 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 22:33:44.892419    7833 cmd.go:177] openshift-sdn network plugin ready\nI0227 22:34:14.695827    7833 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:34:14.695852    7833 proxier.go:347] userspace syncProxyRules took 77.340384ms\nI0227 22:34:31.805479    7833 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.62:6443 10.130.0.72:6443]\nI0227 22:34:32.078889    7833 proxier.go:368] userspace proxy: processing 0 service events\nI0227 22:34:32.078917    7833 proxier.go:347] userspace syncProxyRules took 81.115072ms\nI0227 22:34:36.365560    7833 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 22:34:36.365635    7833 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
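Several of the sdn pod exits above end with the same fatal healthcheck.go message, "SDN healthcheck detected OVS server change, restarting". As a rough illustration only (not the openshift-sdn implementation; the socket path, probe interval and failure threshold below are assumptions), the pattern amounts to probing the local OVS socket on a timer and exiting non-zero so the kubelet restarts the container once the probe keeps failing:

package main

import (
	"log"
	"net"
	"time"
)

const ovsSocket = "/var/run/openvswitch/db.sock" // assumed socket path

func main() {
	failures := 0
	ticker := time.NewTicker(5 * time.Second) // assumed probe interval
	defer ticker.Stop()
	for range ticker.C {
		conn, err := net.DialTimeout("unix", ovsSocket, 2*time.Second)
		if err != nil {
			failures++
			log.Printf("OVS probe failed (%d): %v", failures, err)
			if failures >= 3 { // assumed threshold
				// log.Fatal exits with a non-zero status, so the container is restarted.
				log.Fatal("healthcheck: OVS unreachable, restarting")
			}
			continue
		}
		conn.Close()
		failures = 0
	}
}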
Feb 27 22:34:47.235 E ns/openshift-multus pod/multus-276cf node/ip-10-0-148-181.us-west-1.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 27 22:35:32.678 E ns/openshift-multus pod/multus-2jj78 node/ip-10-0-157-72.us-west-1.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:36:11.261 E ns/openshift-multus pod/multus-44ct8 node/ip-10-0-132-186.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 22:36:37.691 E ns/openshift-machine-config-operator pod/machine-config-operator-5b4d9759d5-7c6m4 node/ip-10-0-138-5.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 9       1 operator.go:227] Couldn't find machineconfigpool CRD, in cluster bringup mode\nI0227 22:00:44.887150       1 operator.go:264] Starting MachineConfigOperator\nI0227 22:00:44.908864       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"d45b59c2-3162-4e52-ad97-fc985fc4faaf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-02-27-214225}]\nE0227 22:00:45.597146       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0227 22:00:45.606207       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0227 22:00:46.603915       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0227 22:00:50.209734       1 sync.go:61] [init mode] synced RenderConfig in 5.280545986s\nI0227 22:00:50.316292       1 sync.go:61] [init mode] synced MachineConfigPools in 106.108438ms\nI0227 22:01:43.953130       1 sync.go:61] [init mode] synced MachineConfigDaemon in 53.636796799s\nI0227 22:01:47.006105       1 sync.go:61] [init mode] synced MachineConfigController in 3.052916529s\nI0227 22:01:59.083778       1 sync.go:61] [init mode] synced MachineConfigServer in 12.077627793s\nI0227 22:02:18.090648       1 sync.go:61] [init mode] synced RequiredPools in 19.006830388s\nI0227 22:02:18.118815       1 sync.go:85] Initialization complete\n
Feb 27 22:38:50.143 E ns/openshift-machine-config-operator pod/machine-config-daemon-kptgj node/ip-10-0-138-5.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:39:03.009 E ns/openshift-machine-config-operator pod/machine-config-daemon-flgjw node/ip-10-0-132-186.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:39:23.185 E ns/openshift-machine-config-operator pod/machine-config-daemon-ss8hm node/ip-10-0-132-239.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:39:37.917 E ns/openshift-machine-config-operator pod/machine-config-daemon-lw4dk node/ip-10-0-129-131.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:39:48.465 E ns/openshift-machine-config-operator pod/machine-config-controller-67d7f988cf-4njhh node/ip-10-0-157-72.us-west-1.compute.internal container=machine-config-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:41:57.937 E ns/openshift-machine-config-operator pod/machine-config-server-vsj8m node/ip-10-0-157-72.us-west-1.compute.internal container=machine-config-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:42:08.842 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-239.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 22:30:20 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 22:42:08.842 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-239.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 22:30:21 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 22:30:21 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:30:21 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:30:21 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 22:30:21 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:30:21 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 22:30:21 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 22:30:21 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0227 22:30:21.321509       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:30:21 http.go:107: HTTPS: listening on [::]:9091\n2020/02/27 22:34:00 oauthproxy.go:774: basicauth: 10.131.0.27:51910 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 22:38:30 oauthproxy.go:774: basicauth: 10.131.0.27:57130 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 22:42:08.842 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-239.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T22:30:20.472633345Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T22:30:20.474329345Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T22:30:25.650285306Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T22:30:25.650386923Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
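The prometheus-config-reloader excerpt above shows a plain retry-on-tick pattern: POST to Prometheus' /-/reload lifecycle endpoint and keep retrying while the server is still coming up ("connection refused"), then log "Prometheus reload triggered" once it succeeds. A minimal sketch of that pattern (the URL and interval are assumptions; this is not the reloader's actual code, and /-/reload only works with --web.enable-lifecycle):

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const reloadURL = "http://localhost:9090/-/reload" // assumed Prometheus lifecycle endpoint
	ticker := time.NewTicker(5 * time.Second)          // assumed retry interval
	defer ticker.Stop()
	for range ticker.C {
		resp, err := http.Post(reloadURL, "", nil)
		if err != nil {
			log.Printf("trigger reload failed, retrying in next tick: %v", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			log.Printf("reload returned %s, retrying in next tick", resp.Status)
			continue
		}
		log.Print("Prometheus reload triggered")
		return
	}
}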
Feb 27 22:42:09.005 E ns/openshift-ingress pod/router-default-6fd876f4dd-k7bvm node/ip-10-0-132-239.us-west-1.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:42:09.335 E ns/openshift-apiserver pod/apiserver-57cb98c5df-bgmk4 node/ip-10-0-138-5.us-west-1.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:42:09.542 E ns/openshift-machine-config-operator pod/machine-config-server-hcq9c node/ip-10-0-138-5.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 22:01:48.241995       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 22:01:48.243252       1 api.go:51] Launching server on :22624\nI0227 22:01:48.243321       1 api.go:51] Launching server on :22623\n
Feb 27 22:42:12.856 E ns/openshift-machine-config-operator pod/machine-config-server-9srjp node/ip-10-0-132-186.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 22:01:58.325174       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 22:01:58.326390       1 api.go:51] Launching server on :22624\nI0227 22:01:58.326921       1 api.go:51] Launching server on :22623\nI0227 22:05:16.648769       1 api.go:97] Pool worker requested by 10.0.136.84:29014\n
Feb 27 22:42:14.082 E ns/openshift-machine-config-operator pod/machine-config-operator-cf4cdb74d-bcwv8 node/ip-10-0-138-5.us-west-1.compute.internal container=machine-config-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:42:46.665 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-131.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:42:41.648Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:42:41.653Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:42:41.653Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:42:41.654Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:42:41.654Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:42:41.654Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:42:41.655Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:42:41.655Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:44:36.751 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0227 22:25:56.396429       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0227 22:25:56.399809       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0227 22:25:56.399894       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 27 22:44:36.751 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:41:44.223710       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:41:44.224243       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:41:45.427892       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:41:45.428274       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:41:54.234235       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:41:54.234622       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:41:55.437839       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:41:55.438282       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:42:04.243526       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:04.243837       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:42:05.447047       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:05.447379       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:42:14.255423       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:14.255721       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:42:15.459557       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:15.459910       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 22:44:36.751 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): strap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 21:44:58 +0000 UTC to 2030-02-24 21:44:58 +0000 UTC (now=2020-02-27 22:25:51.818908178 +0000 UTC))\nI0227 22:25:51.818988       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 21:45:02 +0000 UTC to 2020-02-28 21:45:02 +0000 UTC (now=2020-02-27 22:25:51.818972884 +0000 UTC))\nI0227 22:25:51.819498       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582840854" (2020-02-27 22:01:11 +0000 UTC to 2022-02-26 22:01:12 +0000 UTC (now=2020-02-27 22:25:51.819474909 +0000 UTC))\nI0227 22:25:51.819946       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582842351" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582842351" (2020-02-27 21:25:51 +0000 UTC to 2021-02-26 21:25:51 +0000 UTC (now=2020-02-27 22:25:51.819924488 +0000 UTC))\nI0227 22:25:51.820033       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0227 22:25:51.820102       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0227 22:25:51.820063       1 secure_serving.go:178] Serving securely on [::]:10257\nI0227 22:25:51.820254       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\n
Feb 27 22:44:36.751 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 21:44:58 +0000 UTC to 2030-02-24 21:44:58 +0000 UTC (now=2020-02-27 22:25:58.307353269 +0000 UTC))\nI0227 22:25:58.307376       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582840855" [] issuer="kubelet-signer" (2020-02-27 22:00:54 +0000 UTC to 2020-02-28 21:45:06 +0000 UTC (now=2020-02-27 22:25:58.307368781 +0000 UTC))\nI0227 22:25:58.307412       1 tlsconfig.go:157] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 21:45:02 +0000 UTC to 2020-02-28 21:45:02 +0000 UTC (now=2020-02-27 22:25:58.307389442 +0000 UTC))\nI0227 22:25:58.307649       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-022334708/tls.crt::/tmp/serving-cert-022334708/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1582842356" (2020-02-27 22:25:56 +0000 UTC to 2020-03-28 22:25:57 +0000 UTC (now=2020-02-27 22:25:58.307638468 +0000 UTC))\nI0227 22:25:58.307848       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582842358" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582842357" (2020-02-27 21:25:57 +0000 UTC to 2021-02-26 21:25:57 +0000 UTC (now=2020-02-27 22:25:58.307837632 +0000 UTC))\nI0227 22:42:18.544156       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:42:18.544200       1 leaderelection.go:67] leaderelection lost\n
Feb 27 22:44:36.784 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): .138.5:2379: connect: connection refused". Reconnecting...\nW0227 22:42:19.021174       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.138.5:2379: connect: connection refused". Reconnecting...\nE0227 22:42:19.031172       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.031297       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.032393       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.032527       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.032604       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.032690       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.032698       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.033120       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.033240       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.033506       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.033617       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.033803       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.081459       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:42:19.100550       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Feb 27 22:44:36.784 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 22:24:10.096258       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 22:44:36.784 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:42:08.092340       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:08.092738       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:42:18.102056       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:42:18.102396       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 22:44:36.784 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0227 22:24:09.556535       1 cmd.go:200] Using insecure, self-signed certificates\nI0227 22:24:09.557152       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1582842249 cert, and key in /tmp/serving-cert-090374271/serving-signer.crt, /tmp/serving-cert-090374271/serving-signer.key\nI0227 22:24:10.622934       1 observer_polling.go:155] Starting file observer\nI0227 22:24:15.162510       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0227 22:42:18.557179       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:42:18.557245       1 leaderelection.go:67] leaderelection lost\n
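The kube-apiserver log above is dominated by "watch chan error: etcdserver: mvcc: required revision has been compacted", which etcd returns when a watch is resumed from a revision older than the last compaction; the client then has to restart the watch from a revision that still exists. A minimal sketch of that recovery with the etcd clientv3 API (the endpoint, key prefix and resume policy are assumptions, the import path varies by etcd release, and this is not the apiserver's watch-cache logic):

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3" // import path differs across etcd releases
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd-2.example:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	rev := int64(1) // a stale revision triggers ErrCompacted once etcd has compacted past it
	for {
		wch := cli.Watch(context.Background(), "/registry/", clientv3.WithPrefix(), clientv3.WithRev(rev))
		for resp := range wch {
			if err := resp.Err(); err != nil {
				if resp.CompactRevision != 0 {
					// "required revision has been compacted": resume from the oldest retained revision.
					log.Printf("watch compacted, resuming at rev %d: %v", resp.CompactRevision, err)
					rev = resp.CompactRevision
					break
				}
				log.Printf("watch error: %v", err)
				break
			}
			for _, ev := range resp.Events {
				rev = ev.Kv.ModRevision + 1
				log.Printf("%s %q", ev.Type, ev.Kv.Key)
			}
		}
	}
}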
Feb 27 22:44:36.794 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 27 22:44:36.827 E ns/openshift-etcd pod/etcd-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 22:24:08.758935 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-138-5.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-138-5.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 22:24:08.759833 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 22:24:08.760403 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-138-5.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-138-5.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 22:24:08.764586 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 27 22:44:36.848 E ns/openshift-monitoring pod/node-exporter-8cwh9 node/ip-10-0-138-5.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:44:36.866 E ns/openshift-multus pod/multus-admission-controller-mjfmb node/ip-10-0-138-5.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 22:44:36.905 E ns/openshift-multus pod/multus-zvb74 node/ip-10-0-138-5.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:44:36.957 E ns/openshift-cluster-node-tuning-operator pod/tuned-twmbc node/ip-10-0-138-5.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 21] tuned "rendered" added\nI0227 22:30:11.872827    1253 tuned.go:219] extracting tuned profiles\nI0227 22:30:11.876649    1253 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 22:30:12.861232    1253 tuned.go:393] getting recommended profile...\nI0227 22:30:12.988285    1253 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0227 22:30:12.988374    1253 tuned.go:286] starting tuned...\n2020-02-27 22:30:13,112 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:30:13,119 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:30:13,120 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:30:13,120 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 22:30:13,121 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 22:30:13,163 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:30:13,163 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:30:13,175 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:30:13,175 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:30:13,179 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:30:13,180 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:30:13,182 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:30:13,297 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:30:13,306 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 22:42:18.592311    1253 tuned.go:115] received signal: terminated\nI0227 22:42:18.592386    1253 tuned.go:327] sending TERM to PID 1370\n
Feb 27 22:44:36.972 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-5.us-west-1.compute.internal node/ip-10-0-138-5.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): s evaluated, 1 nodes were found feasible.\nI0227 22:42:12.371618       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57cb98c5df-bjwwm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:42:12.656598       1 scheduler.go:751] pod openshift-machine-config-operator/machine-config-operator-cf4cdb74d-r7c9k is bound successfully on node "ip-10-0-132-186.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 22:42:12.853641       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-245t2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nE0227 22:42:12.870349       1 factory.go:494] pod is already present in the activeQ\nI0227 22:42:12.882222       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-245t2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:42:13.082525       1 scheduler.go:751] pod openshift-network-operator/network-operator-56b9d6c9b5-8nm5c is bound successfully on node "ip-10-0-157-72.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 22:42:17.288050       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57cb98c5df-bjwwm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:42:17.295468       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-245t2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 27 22:44:36.992 E ns/openshift-machine-config-operator pod/machine-config-server-42qkc node/ip-10-0-138-5.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 22:42:11.015511       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 22:42:11.016648       1 api.go:51] Launching server on :22624\nI0227 22:42:11.016716       1 api.go:51] Launching server on :22623\n
Feb 27 22:44:37.008 E ns/openshift-controller-manager pod/controller-manager-4n7gf node/ip-10-0-138-5.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): I0227 22:29:52.802485       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 22:29:52.805000       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 22:29:52.805043       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 22:29:52.805089       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 22:29:52.805160       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 22:44:37.049 E ns/openshift-machine-config-operator pod/machine-config-daemon-49mzr node/ip-10-0-138-5.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:44:37.097 E ns/openshift-sdn pod/sdn-controller-kptwz node/ip-10-0-138-5.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 22:32:49.326442       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 22:44:43.983 E ns/openshift-multus pod/multus-zvb74 node/ip-10-0-138-5.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:44:46.155 E ns/openshift-monitoring pod/node-exporter-8z8sn node/ip-10-0-132-239.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:41:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:42:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:44:46.194 E ns/openshift-cluster-node-tuning-operator pod/tuned-jstzp node/ip-10-0-132-239.us-west-1.compute.internal container=tuned container exited with code 143 (Error): mpute.internal" added, tuned profile requested: openshift-node\nI0227 22:30:26.632731    2182 tuned.go:170] disabling system tuned...\nI0227 22:30:26.649712    2182 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 22:30:27.619112    2182 tuned.go:393] getting recommended profile...\nI0227 22:30:27.762762    2182 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 22:30:27.762897    2182 tuned.go:286] starting tuned...\n2020-02-27 22:30:27,890 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:30:27,904 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:30:27,905 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:30:27,906 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 22:30:27,908 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 22:30:27,973 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:30:27,973 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:30:27,989 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:30:27,990 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:30:27,993 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:30:27,995 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:30:27,996 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:30:28,136 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:30:28,153 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 22:43:03.045896    2182 tuned.go:115] received signal: terminated\nI0227 22:43:03.045934    2182 tuned.go:327] sending TERM to PID 2333\n
Feb 27 22:44:46.251 E ns/openshift-multus pod/multus-r9mnc node/ip-10-0-132-239.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:44:46.268 E ns/openshift-machine-config-operator pod/machine-config-daemon-tw7vp node/ip-10-0-132-239.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:44:47.106 E ns/openshift-multus pod/multus-zvb74 node/ip-10-0-138-5.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:44:50.340 E ns/openshift-multus pod/multus-r9mnc node/ip-10-0-132-239.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:44:54.230 E ns/openshift-machine-config-operator pod/machine-config-daemon-49mzr node/ip-10-0-138-5.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 22:44:56.528 E ns/openshift-machine-config-operator pod/machine-config-daemon-tw7vp node/ip-10-0-132-239.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 22:44:57.347 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 22:45:05.079 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:29:51.387Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:29:51.391Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:29:51.391Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:29:51.392Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:29:51.392Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:29:51.394Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:29:51.394Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:45:05.079 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-181.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 22:29:57 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 22:45:05.079 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-181.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T22:29:54.566754193Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T22:29:54.568356862Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-27T22:29:59.568361582Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T22:30:04.842889389Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T22:30:04.842995263Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 22:45:05.154 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-181.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 22:29:58 Watching directory: "/etc/alertmanager/config"\n
Feb 27 22:45:05.154 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-181.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 22:29:58 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:29:58 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:29:58 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:29:58 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 22:29:58 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:29:58 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:29:58 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 22:29:58.797788       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:29:58 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 22:45:05.391 E ns/openshift-monitoring pod/thanos-querier-d9766c89-psgj7 node/ip-10-0-148-181.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 22:29:54 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:29:54 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:29:54 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:29:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 22:29:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:29:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:29:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 22:29:54 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0227 22:29:54.488927       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:29:54 http.go:107: HTTPS: listening on [::]:9091\n
Feb 27 22:45:05.595 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7c8dp6w4z node/ip-10-0-157-72.us-west-1.compute.internal container=operator container exited with code 255 (Error): 5.2 10.131.0.32:59872]\nI0227 22:42:08.085800       1 httplog.go:90] GET /metrics: (2.210181ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:42:20.217127       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 0 items received\nI0227 22:42:20.537915       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 25378 (36262)\nI0227 22:42:21.538336       1 reflector.go:158] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134\nI0227 22:42:38.091803       1 httplog.go:90] GET /metrics: (8.267263ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:43:01.909505       1 httplog.go:90] GET /metrics: (8.068353ms) 200 [Prometheus/2.15.2 10.129.2.35:54836]\nI0227 22:43:08.085002       1 httplog.go:90] GET /metrics: (1.460168ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:43:22.208168       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 1 items received\nI0227 22:43:31.899306       1 httplog.go:90] GET /metrics: (7.325397ms) 200 [Prometheus/2.15.2 10.129.2.35:54836]\nI0227 22:43:38.084886       1 httplog.go:90] GET /metrics: (1.327917ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:44:01.904324       1 httplog.go:90] GET /metrics: (12.430576ms) 200 [Prometheus/2.15.2 10.129.2.35:54836]\nI0227 22:44:08.084799       1 httplog.go:90] GET /metrics: (1.354574ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:44:31.899149       1 httplog.go:90] GET /metrics: (7.245681ms) 200 [Prometheus/2.15.2 10.129.2.35:54836]\nI0227 22:44:38.085032       1 httplog.go:90] GET /metrics: (1.512363ms) 200 [Prometheus/2.15.2 10.128.2.28:36940]\nI0227 22:45:01.900549       1 httplog.go:90] GET /metrics: (8.116019ms) 200 [Prometheus/2.15.2 10.129.2.35:54836]\nI0227 22:45:04.723283       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:45:04.723409       1 leaderelection.go:66] leaderelection lost\n
Feb 27 22:45:08.650 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-5cdb64c6f5-gwtmn node/ip-10-0-157-72.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): workPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-138-5.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-138-5.us-west-1.compute.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-138-5.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-138-5.us-west-1.compute.internal container=\"scheduler\" is not ready"\nI0227 22:45:07.698799       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:45:07.698960       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 22:45:07.699108       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 22:45:07.699172       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 22:45:07.699225       1 base_controller.go:74] Shutting down  ...\nI0227 22:45:07.699277       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 22:45:07.699328       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 22:45:07.699376       1 target_config_reconciler.go:124] Shutting down TargetConfigReconciler\nI0227 22:45:07.699435       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 22:45:07.699485       1 base_controller.go:74] Shutting down PruneController ...\nI0227 22:45:07.699533       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 22:45:07.699913       1 base_controller.go:74] Shutting down NodeController ...\nI0227 22:45:07.699983       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0227 22:45:07.700039       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 22:45:07.700089       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 22:45:07.700137       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0227 22:45:07.700362       1 builder.go:243] stopped\n
Feb 27 22:45:09.749 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-575fdb5f85-q78bt node/ip-10-0-157-72.us-west-1.compute.internal container=operator container exited with code 255 (Error): theus-k8s\nI0227 22:44:03.288500       1 httplog.go:90] GET /metrics: (8.268376ms) 200 [Prometheus/2.15.2 10.129.2.35:60246]\nI0227 22:44:15.690369       1 httplog.go:90] GET /metrics: (9.837214ms) 200 [Prometheus/2.15.2 10.128.2.28:53370]\nI0227 22:44:20.622950       1 request.go:565] Throttling request took 133.174208ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:44:20.822967       1 request.go:565] Throttling request took 195.035156ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:44:33.288417       1 httplog.go:90] GET /metrics: (8.255865ms) 200 [Prometheus/2.15.2 10.129.2.35:60246]\nI0227 22:44:40.623384       1 request.go:565] Throttling request took 136.909946ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:44:40.823443       1 request.go:565] Throttling request took 195.739627ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:44:45.690497       1 httplog.go:90] GET /metrics: (9.90926ms) 200 [Prometheus/2.15.2 10.128.2.28:53370]\nI0227 22:45:00.626808       1 request.go:565] Throttling request took 114.818511ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:45:00.826740       1 request.go:565] Throttling request took 189.380579ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:45:03.446431       1 httplog.go:90] GET /metrics: (166.248414ms) 200 [Prometheus/2.15.2 10.129.2.35:60246]\nI0227 22:45:08.500775       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:45:08.501489       1 builder.go:243] stopped\n
Feb 27 22:45:11.173 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Feb 27 22:45:12.674 E ns/openshift-operator-lifecycle-manager pod/packageserver-5b4fd6dd4f-x86jk node/ip-10-0-138-5.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:45:19.743 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-239.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:45:17.862Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:45:17.867Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:45:17.868Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:45:17.869Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:45:17.869Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:45:17.869Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:45:17.870Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:45:17.870Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:47:42.941 E ns/openshift-cluster-node-tuning-operator pod/tuned-dv59j node/ip-10-0-148-181.us-west-1.compute.internal container=tuned container exited with code 143 (Error): : Unit file tuned.service does not exist.\nI0227 22:29:42.096380    4648 tuned.go:393] getting recommended profile...\nI0227 22:29:42.233780    4648 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 22:29:42.233881    4648 tuned.go:286] starting tuned...\n2020-02-27 22:29:42,362 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:29:42,369 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:29:42,370 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:29:42,370 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 22:29:42,371 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 22:29:42,434 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:29:42,435 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:29:42,447 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:29:42,447 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:29:42,452 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:29:42,455 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:29:42,459 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:29:42,586 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:29:42,596 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 22:45:27.011647    4648 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:45:27.011647    4648 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:45:55.282673    4648 tuned.go:115] received signal: terminated\nI0227 22:45:55.282723    4648 tuned.go:327] sending TERM to PID 4912\n
Feb 27 22:47:42.954 E ns/openshift-monitoring pod/node-exporter-h2dz6 node/ip-10-0-148-181.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:47:42.977 E ns/openshift-sdn pod/ovs-68n8p node/ip-10-0-148-181.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): INFO|br0<->unix#614: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:04.236Z|00096|bridge|INFO|bridge br0: deleted interface veth321aac5c on port 35\n2020-02-27T22:45:04.280Z|00097|connmgr|INFO|br0<->unix#617: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:04.337Z|00098|connmgr|INFO|br0<->unix#620: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:04.370Z|00099|bridge|INFO|bridge br0: deleted interface vethd8431730 on port 29\n2020-02-27T22:45:04.448Z|00100|connmgr|INFO|br0<->unix#623: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:04.486Z|00101|connmgr|INFO|br0<->unix#626: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:04.515Z|00102|bridge|INFO|bridge br0: deleted interface vethdc2a9562 on port 22\n2020-02-27T22:45:04.563Z|00103|connmgr|INFO|br0<->unix#629: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:04.607Z|00104|connmgr|INFO|br0<->unix#632: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:04.635Z|00105|bridge|INFO|bridge br0: deleted interface veth48e4dfc2 on port 24\n2020-02-27T22:45:04.680Z|00106|connmgr|INFO|br0<->unix#635: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:04.721Z|00107|connmgr|INFO|br0<->unix#638: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:04.770Z|00108|bridge|INFO|bridge br0: deleted interface veth3b2b24f6 on port 31\n2020-02-27T22:45:32.903Z|00109|connmgr|INFO|br0<->unix#663: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:32.933Z|00110|connmgr|INFO|br0<->unix#666: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:32.955Z|00111|bridge|INFO|bridge br0: deleted interface vethb00bb854 on port 34\n2020-02-27T22:45:48.293Z|00112|connmgr|INFO|br0<->unix#678: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:48.320Z|00113|connmgr|INFO|br0<->unix#681: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:48.341Z|00114|bridge|INFO|bridge br0: deleted interface veth5c518871 on port 21\ninfo: Saving flows ...\n2020-02-27T22:45:55Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Feb 27 22:47:43.010 E ns/openshift-multus pod/multus-r8zrw node/ip-10-0-148-181.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:47:43.023 E ns/openshift-machine-config-operator pod/machine-config-daemon-lvgp8 node/ip-10-0-148-181.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:47:44.024 E ns/openshift-controller-manager pod/controller-manager-gwrkq node/ip-10-0-157-72.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): -registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nE0227 22:30:55.553424       1 imagestream_controller.go:135] Error syncing image stream "openshift/installer-artifacts": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "installer-artifacts": the object has been modified; please apply your changes to the latest version and try again\nW0227 22:42:07.461740       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 559; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:42:07.462340       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 565; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:45:10.431620       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 509; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:45:10.433085       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 715; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:45:10.433269       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 571; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 22:47:44.055 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0227 22:24:47.697907       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0227 22:24:47.701013       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0227 22:24:47.701049       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 27 22:47:44.055 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:44:54.454979       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:44:54.455412       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:44:55.661825       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:44:55.662226       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:04.467105       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:04.467419       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:05.675688       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:05.676091       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:14.511670       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:14.512019       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:15.696559       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:15.697170       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:24.521491       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:24.521894       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:45:25.719318       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:25.719789       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 22:47:44.055 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-27 21:44:58 +0000 UTC to 2030-02-24 21:44:58 +0000 UTC (now=2020-02-27 22:24:42.879594763 +0000 UTC))\nI0227 22:24:42.879620       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 21:45:02 +0000 UTC to 2020-02-28 21:45:02 +0000 UTC (now=2020-02-27 22:24:42.879613499 +0000 UTC))\nI0227 22:24:42.879841       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582840854" (2020-02-27 22:01:11 +0000 UTC to 2022-02-26 22:01:12 +0000 UTC (now=2020-02-27 22:24:42.879827844 +0000 UTC))\nI0227 22:24:42.880029       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582842282" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582842282" (2020-02-27 21:24:42 +0000 UTC to 2021-02-26 21:24:42 +0000 UTC (now=2020-02-27 22:24:42.880018761 +0000 UTC))\nI0227 22:24:42.880074       1 secure_serving.go:178] Serving securely on [::]:10257\nI0227 22:24:42.880105       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0227 22:24:42.880269       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 27 22:47:44.055 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error):       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0227 22:28:13.236275       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 22:28:13.236345       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 22:28:13.266684       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 22:28:13.266736       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 22:28:13.266765       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 22:28:13.266785       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 22:28:13.266806       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 22:28:13.266829       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 22:28:13.266882       1 csrcontroller.go:121] key failed with : configmaps "csr-signer-ca" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager-operator"\nE0227 22:28:13.266920       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nI0227 22:45:26.957884       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:45:26.958602       1 csrcontroller.go:83] Shutting down CSR controller\nI0227 22:45:26.958618       1 csrcontroller.go:85] CSR controller shut down\nF0227 22:45:26.958861       1 builder.go:209] server exited\n
Feb 27 22:47:44.082 E ns/openshift-monitoring pod/node-exporter-rmw4x node/ip-10-0-157-72.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:48Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:44:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:45:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:47:44.096 E ns/openshift-sdn pod/sdn-controller-n2z56 node/ip-10-0-157-72.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 22:32:23.617852       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 22:47:44.108 E ns/openshift-multus pod/multus-admission-controller-4dg8v node/ip-10-0-157-72.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 22:47:44.135 E ns/openshift-cluster-node-tuning-operator pod/tuned-l7gjr node/ip-10-0-157-72.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ested: openshift-control-plane\nI0227 22:30:33.923965     921 tuned.go:170] disabling system tuned...\nI0227 22:30:33.926868     921 tuned.go:521] tuned "rendered" added\nI0227 22:30:33.926897     921 tuned.go:219] extracting tuned profiles\nI0227 22:30:33.930425     921 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 22:30:34.908635     921 tuned.go:393] getting recommended profile...\nI0227 22:30:35.049168     921 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0227 22:30:35.049322     921 tuned.go:286] starting tuned...\n2020-02-27 22:30:35,177 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:30:35,185 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:30:35,185 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:30:35,186 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 22:30:35,187 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 22:30:35,229 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:30:35,229 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:30:35,243 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:30:35,245 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:30:35,249 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:30:35,251 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:30:35,254 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:30:35,407 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:30:35,415 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Feb 27 22:47:44.193 E ns/openshift-sdn pod/ovs-cm5x7 node/ip-10-0-157-72.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): 0: deleted interface vethe284061e on port 56\n2020-02-27T22:45:11.233Z|00101|connmgr|INFO|br0<->unix#569: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:11.306Z|00102|connmgr|INFO|br0<->unix#572: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:11.365Z|00103|bridge|INFO|bridge br0: deleted interface veth31bff6ef on port 45\n2020-02-27T22:45:11.453Z|00104|connmgr|INFO|br0<->unix#575: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:11.521Z|00105|connmgr|INFO|br0<->unix#578: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:11.589Z|00106|bridge|INFO|bridge br0: deleted interface vethe6d656a8 on port 59\n2020-02-27T22:45:11.766Z|00107|connmgr|INFO|br0<->unix#582: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:11.833Z|00108|connmgr|INFO|br0<->unix#585: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:11.881Z|00109|bridge|INFO|bridge br0: deleted interface vethfd1f8c9a on port 53\n2020-02-27T22:45:13.778Z|00110|bridge|INFO|bridge br0: added interface veth626d4091 on port 66\n2020-02-27T22:45:13.858Z|00111|connmgr|INFO|br0<->unix#590: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T22:45:13.952Z|00112|connmgr|INFO|br0<->unix#594: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:13.959Z|00113|connmgr|INFO|br0<->unix#596: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T22:45:17.287Z|00019|jsonrpc|WARN|unix#518: receive error: Connection reset by peer\n2020-02-27T22:45:17.288Z|00020|reconnect|WARN|unix#518: connection dropped (Connection reset by peer)\n2020-02-27T22:45:17.293Z|00021|jsonrpc|WARN|unix#519: receive error: Connection reset by peer\n2020-02-27T22:45:17.293Z|00022|reconnect|WARN|unix#519: connection dropped (Connection reset by peer)\n2020-02-27T22:45:17.239Z|00114|connmgr|INFO|br0<->unix#602: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:45:17.275Z|00115|connmgr|INFO|br0<->unix#605: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:45:17.300Z|00116|bridge|INFO|bridge br0: deleted interface veth626d4091 on port 66\ninfo: Saving flows ...\n
Feb 27 22:47:44.206 E ns/openshift-multus pod/multus-sgrwb node/ip-10-0-157-72.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:47:44.281 E ns/openshift-machine-config-operator pod/machine-config-daemon-mlkc9 node/ip-10-0-157-72.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:47:44.304 E ns/openshift-machine-config-operator pod/machine-config-server-6dq7t node/ip-10-0-157-72.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 22:42:06.125884       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 22:42:06.127743       1 api.go:51] Launching server on :22624\nI0227 22:42:06.127836       1 api.go:51] Launching server on :22623\n
Feb 27 22:47:44.356 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): configmaps)\nE0227 22:28:13.214399       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0227 22:28:13.214833       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0227 22:28:13.214984       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 22:28:13.215073       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0227 22:28:13.215138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0227 22:28:13.215213       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0227 22:28:13.215278       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0227 22:28:13.215341       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0227 22:28:13.215407       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0227 22:28:13.215465       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0227 22:28:13.215527       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0227 22:28:13.215596       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\n
Feb 27 22:47:44.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): shift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1853; INTERNAL_ERROR") has prevented the request from succeeding\nE0227 22:45:11.129296       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 22:45:11.224243       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0227 22:45:11.248396       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io\nE0227 22:45:11.856191       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 22:45:11.966797       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0227 22:45:26.910642       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0227 22:45:26.911024       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 27 22:47:44.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 22:28:08.980485       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 22:47:44.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:45:08.320666       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:08.321078       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:45:18.343720       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:45:18.345166       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 22:47:44.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): source "pods" in API group "" in the namespace "openshift-kube-apiserver"\nI0227 22:28:13.309808       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0227 22:28:13.322616       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: configmaps "cert-regeneration-controller-lock" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:image-puller" not found]\nI0227 22:45:26.927389       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 22:45:26.927428       1 leaderelection.go:67] leaderelection lost\n
Feb 27 22:47:47.084 E ns/openshift-multus pod/multus-r8zrw node/ip-10-0-148-181.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:47:48.320 E ns/openshift-etcd pod/etcd-ip-10-0-157-72.us-west-1.compute.internal node/ip-10-0-157-72.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 22:23:04.428773 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-157-72.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-157-72.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 22:23:04.429775 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 22:23:04.430216 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-157-72.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-157-72.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/02/27 22:23:04 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.157.72:9978: connect: connection refused". Reconnecting...\n2020-02-27 22:23:04.432539 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 27 22:47:50.611 E ns/openshift-multus pod/multus-sgrwb node/ip-10-0-157-72.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:47:54.256 E ns/openshift-machine-config-operator pod/machine-config-daemon-lvgp8 node/ip-10-0-148-181.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 22:47:54.305 E ns/openshift-machine-config-operator pod/machine-config-daemon-mlkc9 node/ip-10-0-157-72.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 22:47:54.344 E ns/openshift-multus pod/multus-sgrwb node/ip-10-0-157-72.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:48:02.472 E ns/openshift-monitoring pod/openshift-state-metrics-7957757848-bd67d node/ip-10-0-129-131.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 27 22:48:02.516 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6545546bf6-7pjcq node/ip-10-0-129-131.us-west-1.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:02.532 E ns/openshift-marketplace pod/redhat-operators-cb8cb6f44-rr96t node/ip-10-0-129-131.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 27 22:48:03.567 E ns/openshift-monitoring pod/thanos-querier-d9766c89-7h6pv node/ip-10-0-129-131.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 22:42:13 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:42:13 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:42:13 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:42:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 22:42:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:42:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 22:42:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 22:42:13 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 22:42:13 http.go:107: HTTPS: listening on [::]:9091\nI0227 22:42:13.142651       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 22:48:03.708 E ns/openshift-console pod/downloads-67c5874fd7-jplm7 node/ip-10-0-129-131.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:03.832 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 22:48:04.587 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-131.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 22:42:38 Watching directory: "/etc/alertmanager/config"\n
Feb 27 22:48:04.587 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-131.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 22:42:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:42:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:42:38 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:42:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 22:42:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:42:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:42:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 22:42:38.259611       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:42:38 http.go:107: HTTPS: listening on [::]:9095\n2020/02/27 22:42:53 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Feb 27 22:48:04.726 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-129-131.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 22:29:31 Watching directory: "/etc/alertmanager/config"\n
Feb 27 22:48:04.726 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-129-131.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 22:29:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:29:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 22:29:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 22:29:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 22:29:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 22:29:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 22:29:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 22:29:31.843657       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 22:29:31 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 22:48:12.378 E ns/openshift-machine-api pod/machine-api-operator-77f66c9644-9xcxw node/ip-10-0-132-186.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 22:48:14.801 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7d5cbf56f-9stvl node/ip-10-0-132-186.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:17.809 E ns/openshift-operator-lifecycle-manager pod/olm-operator-547b6f4557-w498t node/ip-10-0-132-186.us-west-1.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:18.538 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7d58d58ddc-bm6fx node/ip-10-0-132-186.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:18.538 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7d58d58ddc-bm6fx node/ip-10-0-132-186.us-west-1.compute.internal container=cluster-autoscaler-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:20.383 E ns/openshift-etcd-operator pod/etcd-operator-588c65f8cf-lj6ls node/ip-10-0-132-186.us-west-1.compute.internal container=operator container exited with code 255 (Error): own controller.\nI0227 22:48:16.623359       1 clustermembercontroller.go:104] Shutting down ClusterMemberController\nI0227 22:48:16.623462       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 22:48:16.623522       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 22:48:16.623601       1 scriptcontroller.go:144] Shutting down ScriptControllerController\nI0227 22:48:16.623691       1 targetconfigcontroller.go:269] Shutting down TargetConfigController\nI0227 22:48:16.623819       1 host_endpoints_controller.go:357] Shutting down HostEtcdEndpointsController\nI0227 22:48:16.623905       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0227 22:48:16.623972       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 22:48:16.624023       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 22:48:16.624103       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 22:48:16.624162       1 base_controller.go:74] Shutting down  ...\nI0227 22:48:16.624224       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0227 22:48:16.624277       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 22:48:16.624321       1 bootstrap_teardown_controller.go:212] Shutting down BootstrapTeardownController\nI0227 22:48:16.624391       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 22:48:16.624438       1 base_controller.go:74] Shutting down  ...\nI0227 22:48:16.624483       1 base_controller.go:74] Shutting down PruneController ...\nI0227 22:48:16.624548       1 base_controller.go:74] Shutting down NodeController ...\nI0227 22:48:16.624592       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 22:48:16.624648       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0227 22:48:16.625153       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0227 22:48:16.625241       1 builder.go:243] stopped\n
Feb 27 22:48:21.688 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-148-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T22:48:19.465Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T22:48:19.472Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T22:48:19.472Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T22:48:19.473Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T22:48:19.473Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T22:48:19.473Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T22:48:19.473Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T22:48:19.473Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T22:48:19.474Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T22:48:19.474Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T22:48:19.474Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T22:48:19.474Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T22:48:19.474Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T22:48:19.474Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T22:48:19.475Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T22:48:19.475Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 22:48:22.743 E ns/openshift-cluster-machine-approver pod/machine-approver-84569754f7-lglnx node/ip-10-0-132-186.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0227 22:29:12.025426       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0227 22:29:12.025497       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0227 22:29:12.025753       1 main.go:236] Starting Machine Approver\nI0227 22:29:12.126018       1 main.go:146] CSR csr-nw8sn added\nI0227 22:29:12.126175       1 main.go:149] CSR csr-nw8sn is already approved\nI0227 22:29:12.126207       1 main.go:146] CSR csr-r8s6v added\nI0227 22:29:12.126218       1 main.go:149] CSR csr-r8s6v is already approved\nI0227 22:29:12.126231       1 main.go:146] CSR csr-t9t4m added\nI0227 22:29:12.126241       1 main.go:149] CSR csr-t9t4m is already approved\nI0227 22:29:12.126269       1 main.go:146] CSR csr-w2m54 added\nI0227 22:29:12.126288       1 main.go:149] CSR csr-w2m54 is already approved\nI0227 22:29:12.126303       1 main.go:146] CSR csr-26tbz added\nI0227 22:29:12.126345       1 main.go:149] CSR csr-26tbz is already approved\nI0227 22:29:12.126359       1 main.go:146] CSR csr-9rpnx added\nI0227 22:29:12.126368       1 main.go:149] CSR csr-9rpnx is already approved\nI0227 22:29:12.126381       1 main.go:146] CSR csr-bj5lg added\nI0227 22:29:12.126391       1 main.go:149] CSR csr-bj5lg is already approved\nI0227 22:29:12.126404       1 main.go:146] CSR csr-hdqpw added\nI0227 22:29:12.126415       1 main.go:149] CSR csr-hdqpw is already approved\nI0227 22:29:12.126433       1 main.go:146] CSR csr-k5kdk added\nI0227 22:29:12.126444       1 main.go:149] CSR csr-k5kdk is already approved\nI0227 22:29:12.126457       1 main.go:146] CSR csr-rnb6f added\nI0227 22:29:12.126467       1 main.go:149] CSR csr-rnb6f is already approved\nI0227 22:29:12.126509       1 main.go:146] CSR csr-wxw84 added\nI0227 22:29:12.126523       1 main.go:149] CSR csr-wxw84 is already approved\nI0227 22:29:12.126536       1 main.go:146] CSR csr-4n5zk added\nI0227 22:29:12.126545       1 main.go:149] CSR csr-4n5zk is already approved\n
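Editor's note: the machine-approver log above is its steady-state loop: every CSR it re-lists on startup is already approved, so nothing is actioned before the container is terminated. A rough equivalent of that "is already approved" check, sketched against the certificates/v1 client-go API (illustrative only; the operator uses its own controller code, and the cluster at this version served v1beta1):

package main

import (
	"context"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	csrs, err := client.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, csr := range csrs.Items {
		approved := false
		for _, cond := range csr.Status.Conditions {
			if cond.Type == certificatesv1.CertificateApproved {
				approved = true
			}
		}
		if approved {
			// Mirrors the "CSR csr-xxxxx is already approved" lines in the log above.
			fmt.Printf("CSR %s is already approved\n", csr.Name)
		}
	}
}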
Feb 27 22:48:23.837 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6b546ddc6f-ln7fd node/ip-10-0-132-186.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error):  not ready: node \"ip-10-0-157-72.us-west-1.compute.internal\" not ready since 2020-02-27 22:47:43 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-157-72.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal container=\"kube-apiserver\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-157-72.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal container=\"kube-apiserver\" is not ready"\nI0227 22:48:09.380790       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"d462febd-a217-4ddd-ac37-840489077670", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-157-72.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-157-72.us-west-1.compute.internal container=\"kube-apiserver\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0227 22:48:16.724766       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"d462febd-a217-4ddd-ac37-840489077670", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-10-ip-10-0-132-186.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nI0227 22:48:21.699226       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:48:21.699548       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 22:48:21.699618       1 builder.go:209] server exited\n
Feb 27 22:48:23.937 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-575fdb5f85-bzdqx node/ip-10-0-132-186.us-west-1.compute.internal container=operator container exited with code 255 (Error): /rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:47:25.697575       1 request.go:565] Throttling request took 194.467702ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:47:41.116866       1 httplog.go:90] GET /metrics: (6.409227ms) 200 [Prometheus/2.15.2 10.131.0.17:36992]\nI0227 22:47:44.946505       1 httplog.go:90] GET /metrics: (5.390072ms) 200 [Prometheus/2.15.2 10.129.2.35:47034]\nI0227 22:47:45.498259       1 request.go:565] Throttling request took 168.135505ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:47:45.697797       1 request.go:565] Throttling request took 196.102465ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:48:05.497233       1 request.go:565] Throttling request took 164.760872ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 22:48:05.697163       1 request.go:565] Throttling request took 197.144901ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 22:48:11.140820       1 httplog.go:90] GET /metrics: (27.998179ms) 200 [Prometheus/2.15.2 10.131.0.17:36992]\nI0227 22:48:20.243252       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:48:20.244014       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 22:48:20.244038       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0227 22:48:20.244710       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0227 22:48:20.244801       1 builder.go:243] stopped\nF0227 22:48:20.309926       1 builder.go:210] server exited\n
Feb 27 22:48:24.115 E ns/openshift-console pod/console-7d57879bb9-p2g72 node/ip-10-0-132-186.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-02-27T22:42:14Z cmd/main: cookies are secure!\n2020-02-27T22:42:17Z cmd/main: Binding to [::]:8443...\n2020-02-27T22:42:17Z cmd/main: using TLS\n2020-02-27T22:45:47Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Feb 27 22:48:24.207 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7cbc894c94-c9ml6 node/ip-10-0-132-186.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:48:24.298 E ns/openshift-console-operator pod/console-operator-5d87689bcc-8n2hz node/ip-10-0-132-186.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): dy at version 0.0.1-2020-02-27-214230\nI0227 22:48:18.764999       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T22:06:24Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T22:30:35Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-27T22:48:18Z","message":"DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-02-27-214230","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-02-27T22:06:24Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 22:48:18.857942       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"b988978e-1294-4774-8d82-02030ff70d32", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-02-27-214230")\nI0227 22:48:19.477705       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:48:19.479743       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 22:48:19.479855       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0227 22:48:19.479947       1 controller.go:70] Shutting down Console\nI0227 22:48:19.480015       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 22:48:19.480078       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0227 22:48:19.480201       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 22:48:19.480261       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0227 22:48:19.480329       1 management_state_controller.go:112] Shutting down management-state-controller-console\nF0227 22:48:19.481209       1 builder.go:243] stopped\n
Feb 27 22:48:24.323 E ns/openshift-insights pod/insights-operator-647ddcbcf8-q5dqj node/ip-10-0-132-186.us-west-1.compute.internal container=operator container exited with code 2 (Error): er.go:63] Recording events/openshift-kube-storage-version-migrator with fingerprint=\nI0227 22:48:16.735049       1 diskrecorder.go:63] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=\nI0227 22:48:16.735249       1 diskrecorder.go:63] Recording events/openshift-marketplace with fingerprint=\nI0227 22:48:16.743093       1 diskrecorder.go:63] Recording events/openshift-config with fingerprint=\nI0227 22:48:16.743119       1 diskrecorder.go:63] Recording events/openshift-config-managed with fingerprint=\nI0227 22:48:16.743150       1 diskrecorder.go:63] Recording events/openshift-apiserver-operator with fingerprint=\nI0227 22:48:16.743616       1 diskrecorder.go:63] Recording config/pod/openshift-apiserver/apiserver-57cb98c5df-m9dcj with fingerprint=\nI0227 22:48:16.743824       1 diskrecorder.go:63] Recording events/openshift-apiserver with fingerprint=\nI0227 22:48:16.839673       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0227 22:48:16.839818       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0227 22:48:16.887343       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0227 22:48:17.044768       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0227 22:48:17.082504       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0227 22:48:17.118449       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0227 22:48:17.147314       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0227 22:48:17.211818       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0227 22:48:17.263561       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0227 22:48:17.356587       1 diskrecorder.go:170] Writing 52 records to /var/lib/insights-operator/insights-2020-02-27-224817.tar.gz\nI0227 22:48:17.439850       1 diskrecorder.go:134] Wrote 52 records to disk in 108ms\nI0227 22:48:17.450243       1 periodic.go:151] Periodic gather config completed in 2.742s\n
Feb 27 22:48:43.677 E kube-apiserver Kube API started failing: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
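Editor's note: the API-availability event above is a GET against /api/v1/namespaces/kube-system with a 5s client-side timeout; "context deadline exceeded" means no response arrived within that window while the masters rolled. A minimal sketch of an equivalent probe (standalone and hypothetical, assuming the API host and a bearer token are supplied by the caller; this is not the disruption monitor's actual code):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeAPI issues the same style of request the event above records:
// GET https://<apiHost>:6443/api/v1/namespaces/kube-system with a 5s deadline.
func probeAPI(apiHost, token string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	url := fmt.Sprintf("https://%s:6443/api/v1/namespaces/kube-system?timeout=5s", apiHost)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	client := &http.Client{Transport: &http.Transport{
		// Sketch only; a real probe would verify the cluster CA instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		// e.g. "context deadline exceeded" or "connection refused" during the rollout.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}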
Feb 27 22:48:45.827 E ns/openshift-authentication pod/oauth-openshift-7bd9866b9f-rh9lq node/ip-10-0-157-72.us-west-1.compute.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0227 22:48:42.163434       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0227 22:48:42.164036       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0227 22:48:42.166612       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 27 22:48:46.755 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6545546bf6-4cjxr node/ip-10-0-148-181.us-west-1.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:49:18.861 E ns/openshift-operator-lifecycle-manager pod/packageserver-6bd6bd869c-b29h9 node/ip-10-0-138-5.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 22:50:02.963 E ns/openshift-marketplace pod/redhat-operators-cb8cb6f44-chf6z node/ip-10-0-148-181.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 27 22:50:15.639 E ns/openshift-marketplace pod/community-operators-5d7d558b94-xnkpl node/ip-10-0-132-239.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 27 22:50:24.825 E ns/openshift-monitoring pod/node-exporter-9j9l7 node/ip-10-0-129-131.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:50:24.854 E ns/openshift-cluster-node-tuning-operator pod/tuned-gb7ww node/ip-10-0-129-131.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ommended profile (openshift-node)\nI0227 22:29:59.585272    1365 tuned.go:286] starting tuned...\n2020-02-27 22:29:59,697 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:29:59,704 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:29:59,704 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:29:59,705 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 22:29:59,706 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 22:29:59,746 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:29:59,746 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:29:59,761 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:29:59,762 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:29:59,765 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:29:59,767 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:29:59,768 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:29:59,891 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:29:59,897 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 22:48:38.738357    1365 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:48:38.738657    1365 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0227 22:48:38.757338    1365 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: watch of *v1.Profile ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 27 22:50:24.896 E ns/openshift-multus pod/multus-x7m8b node/ip-10-0-129-131.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:50:24.897 E ns/openshift-sdn pod/ovs-plg82 node/ip-10-0-129-131.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): e br0: deleted interface veth0944a442 on port 8\n2020-02-27T22:48:02.886Z|00021|jsonrpc|WARN|unix#757: receive error: Connection reset by peer\n2020-02-27T22:48:02.886Z|00022|reconnect|WARN|unix#757: connection dropped (Connection reset by peer)\n2020-02-27T22:48:02.939Z|00203|connmgr|INFO|br0<->unix#883: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:02.989Z|00204|connmgr|INFO|br0<->unix#886: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:03.016Z|00205|bridge|INFO|bridge br0: deleted interface vethb557189f on port 21\n2020-02-27T22:48:03.105Z|00206|connmgr|INFO|br0<->unix#889: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:03.144Z|00207|connmgr|INFO|br0<->unix#892: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:03.167Z|00208|bridge|INFO|bridge br0: deleted interface vethd92bab40 on port 14\n2020-02-27T22:48:03.212Z|00209|connmgr|INFO|br0<->unix#895: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:03.252Z|00210|connmgr|INFO|br0<->unix#898: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:03.307Z|00211|bridge|INFO|bridge br0: deleted interface veth83e019ff on port 23\n2020-02-27T22:48:03.346Z|00212|connmgr|INFO|br0<->unix#901: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:03.396Z|00213|connmgr|INFO|br0<->unix#904: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:03.455Z|00214|bridge|INFO|bridge br0: deleted interface vethe815bf33 on port 9\n2020-02-27T22:48:17.425Z|00023|jsonrpc|WARN|unix#789: receive error: Connection reset by peer\n2020-02-27T22:48:17.425Z|00024|reconnect|WARN|unix#789: connection dropped (Connection reset by peer)\n2020-02-27T22:48:31.242Z|00215|connmgr|INFO|br0<->unix#926: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:31.270Z|00216|connmgr|INFO|br0<->unix#929: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:31.291Z|00217|bridge|INFO|bridge br0: deleted interface vethba6148d0 on port 22\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Feb 27 22:50:24.923 E ns/openshift-machine-config-operator pod/machine-config-daemon-vf92k node/ip-10-0-129-131.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:50:28.684 E ns/openshift-multus pod/multus-x7m8b node/ip-10-0-129-131.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:50:35.710 E ns/openshift-machine-config-operator pod/machine-config-daemon-vf92k node/ip-10-0-129-131.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 22:50:54.392 E ns/openshift-etcd pod/etcd-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 22:22:26.677015 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-186.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-186.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 22:22:26.677844 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 22:22:26.678207 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-186.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-186.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 22:22:26.679970 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/27 22:22:26 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.132.186:9978: connect: connection refused". Reconnecting...\n
Feb 27 22:50:54.464 E ns/openshift-monitoring pod/node-exporter-nmbcs node/ip-10-0-132-186.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:42Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:47:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T22:48:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 22:50:54.483 E ns/openshift-controller-manager pod/controller-manager-fd2fh node/ip-10-0-132-186.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): I0227 22:29:50.951745       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 22:29:50.954003       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:ca6b8e042a8fb0d9eb3539e4b6544fd7aae53da85fe037ce9a92e59ea19cd786"\nI0227 22:29:50.954027       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-zfmbybf1/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 22:29:50.954161       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 22:29:50.954203       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 22:50:54.517 E ns/openshift-sdn pod/ovs-czgvr node/ip-10-0-132-186.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): s (2 deletes)\n2020-02-27T22:48:23.625Z|00223|connmgr|INFO|br0<->unix#1027: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:23.668Z|00224|bridge|INFO|bridge br0: deleted interface veth9260f026 on port 64\n2020-02-27T22:48:23.729Z|00225|bridge|INFO|bridge br0: added interface vethaf40196f on port 90\n2020-02-27T22:48:23.850Z|00226|connmgr|INFO|br0<->unix#1030: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T22:48:23.956Z|00227|connmgr|INFO|br0<->unix#1034: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:23.977Z|00228|connmgr|INFO|br0<->unix#1036: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T22:48:25.924Z|00229|bridge|INFO|bridge br0: added interface vethd48eed06 on port 91\n2020-02-27T22:48:25.980Z|00230|connmgr|INFO|br0<->unix#1042: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T22:48:26.037Z|00231|connmgr|INFO|br0<->unix#1046: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T22:48:26.040Z|00232|connmgr|INFO|br0<->unix#1048: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:25.990Z|00040|jsonrpc|WARN|Dropped 15 log messages in last 10 seconds (most recently, 4 seconds ago) due to excessive rate\n2020-02-27T22:48:25.990Z|00041|jsonrpc|WARN|unix#881: send error: Broken pipe\n2020-02-27T22:48:25.990Z|00042|reconnect|WARN|unix#881: connection dropped (Broken pipe)\n2020-02-27T22:48:26.247Z|00233|connmgr|INFO|br0<->unix#1051: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:26.278Z|00234|connmgr|INFO|br0<->unix#1054: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:26.314Z|00235|bridge|INFO|bridge br0: deleted interface vethaf40196f on port 90\n2020-02-27T22:48:29.253Z|00236|connmgr|INFO|br0<->unix#1057: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T22:48:29.301Z|00237|connmgr|INFO|br0<->unix#1061: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T22:48:29.329Z|00238|bridge|INFO|bridge br0: deleted interface vethd48eed06 on port 91\n2020-02-27T22:48:29.321Z|00043|reconnect|WARN|unix#893: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\n
Feb 27 22:50:54.525 E ns/openshift-sdn pod/sdn-controller-c7snj node/ip-10-0-132-186.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 22:32:35.798327       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 22:32:35.828143       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"b0417304-0346-4464-a347-0173628466a4", ResourceVersion:"30927", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718437430, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-132-186\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-27T21:57:10Z\",\"renewTime\":\"2020-02-27T22:32:35Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-132-186 became leader'\nI0227 22:32:35.828252       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0227 22:32:35.836411       1 master.go:51] Initializing SDN master\nI0227 22:32:35.889453       1 network_controller.go:61] Started OpenShift Network Controller\n
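Editor's note: the sdn-controller error above ("no kind is registered for the type v1.ConfigMap in scheme ...") comes from building an ObjectReference for the leader-election event against a scheme that has no core/v1 types registered; the election itself succeeds and only the event is dropped. A minimal sketch of that distinction, using a throwaway scheme (illustrative, not the controller's code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/reference"
)

func main() {
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
		Name:      "openshift-network-controller",
		Namespace: "openshift-sdn",
	}}

	// An empty scheme reproduces the logged failure.
	empty := runtime.NewScheme()
	if _, err := reference.GetReference(empty, cm); err != nil {
		fmt.Println("without registration:", err) // "no kind is registered for the type v1.ConfigMap ..."
	}

	// Registering core/v1 types lets the reference (and hence the event) be constructed.
	registered := runtime.NewScheme()
	if err := corev1.AddToScheme(registered); err != nil {
		panic(err)
	}
	if ref, err := reference.GetReference(registered, cm); err == nil {
		fmt.Println("with registration:", ref.Kind, ref.Namespace+"/"+ref.Name)
	}
}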
Feb 27 22:50:54.558 E ns/openshift-multus pod/multus-admission-controller-vvs5d node/ip-10-0-132-186.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 22:50:54.579 E ns/openshift-multus pod/multus-nlx8f node/ip-10-0-132-186.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 22:50:54.593 E ns/openshift-machine-config-operator pod/machine-config-server-fs8vl node/ip-10-0-132-186.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 22:42:36.530348       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-303-g38b43e66-dirty (38b43e66bab4746757f4388b82e7feb1eea7a0b2)\nI0227 22:42:36.531620       1 api.go:51] Launching server on :22624\nI0227 22:42:36.531749       1 api.go:51] Launching server on :22623\n
Feb 27 22:50:54.609 E ns/openshift-cluster-node-tuning-operator pod/tuned-kf8tw node/ip-10-0-132-186.us-west-1.compute.internal container=tuned container exited with code 143 (Error): .\nI0227 22:29:11.494125    2020 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 22:29:12.445253    2020 tuned.go:393] getting recommended profile...\nI0227 22:29:12.693758    2020 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0227 22:29:12.693986    2020 tuned.go:286] starting tuned...\n2020-02-27 22:29:12,992 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 22:29:13,012 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 22:29:13,013 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 22:29:13,015 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 22:29:13,019 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 22:29:13,155 INFO     tuned.daemon.controller: starting controller\n2020-02-27 22:29:13,156 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 22:29:13,186 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 22:29:13,188 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 22:29:13,194 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 22:29:13,199 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 22:29:13,205 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 22:29:13,489 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 22:29:13,498 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 22:45:27.027079    2020 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 22:45:27.027556    2020 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 27 22:50:54.661 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): .186:2379: connect: connection refused". Reconnecting...\nW0227 22:48:38.381267       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.132.186:2379: connect: connection refused". Reconnecting...\nE0227 22:48:38.452573       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.486758       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.486863       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.487124       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.487165       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.487358       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.487572       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.487726       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.515767       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.516174       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.516262       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.516352       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.517197       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0227 22:48:38.517453       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Feb 27 22:50:54.661 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 22:26:05.559124       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 22:50:54.661 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:48:18.666016       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:18.666526       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 22:48:28.677518       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:28.684307       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 22:50:54.661 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): lancer.go:26] syncing external loadbalancer hostnames: api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0227 22:46:57.094662       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0227 22:48:38.472000       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:48:38.472751       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0227 22:48:38.474767       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0227 22:48:38.476042       1 cabundlesyncer.go:86] CA bundle controller shut down\nI0227 22:48:38.474791       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 22:48:38.474811       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 22:48:38.474827       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 22:48:38.474841       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 22:48:38.474853       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 22:48:38.474862       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 22:48:38.474872       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 22:48:38.474902       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0227 22:48:38.474914       1 certrotationcontroller.go:560] Shutting down CertRotation\nF0227 22:48:38.518139       1 leaderelection.go:67] leaderelection lost\n
Feb 27 22:50:54.699 E ns/openshift-machine-config-operator pod/machine-config-daemon-cc4ht node/ip-10-0-132-186.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 22:50:54.741 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): : stream error: stream ID 665; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:45:10.436934       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 671; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:45:10.437002       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 689; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:48:17.818030       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 847; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:48:17.822035       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 713; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:48:17.822114       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 849; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 22:48:17.822174       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 851; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 22:50:54.741 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:00.986178       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:00.986652       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:05.909394       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:05.909784       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:11.005108       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:11.010738       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:15.945782       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:15.946326       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:21.028782       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:21.030288       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:25.970959       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:25.971331       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:31.036278       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:31.036718       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 22:48:35.979102       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 22:48:35.979711       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 22:50:54.741 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): cycle-manager/packageserver-7bc8476f4c, need 1, creating 1\nI0227 22:48:34.182233       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"286b644e-895d-4916-be62-d7434935beb8", APIVersion:"apps/v1", ResourceVersion:"42265", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-7bc8476f4c to 1\nI0227 22:48:34.225131       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0227 22:48:34.262429       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-7bc8476f4c", UID:"927899f4-90da-4407-8862-9a03416be362", APIVersion:"apps/v1", ResourceVersion:"42266", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-7bc8476f4c-hn49j\nI0227 22:48:35.184350       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: 6454f84d-c396-4ae2-ba4f-bde3d686e749]\nE0227 22:48:35.290136       1 memcache.go:199] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0227 22:48:35.320423       1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0227 22:48:35.325067       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: 6454f84d-c396-4ae2-ba4f-bde3d686e749] with propagation policy Background\nW0227 22:48:37.698373       1 garbagecollector.go:639] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]\n
Feb 27 22:50:54.741 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): tchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24094&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 22:26:03.963319       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24094&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 22:26:03.964648       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24094&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0227 22:46:50.859429       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0227 22:46:50.864575       1 csrcontroller.go:81] Starting CSR controller\nI0227 22:46:50.864601       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0227 22:46:50.864644       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fa29da9b-8ea5-47d6-b4a8-ab46d7f63282", APIVersion:"v1", ResourceVersion:"39791", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' b0383837-dcf1-4372-a444-ffdf8c93ffba became leader\nI0227 22:46:50.964831       1 shared_informer.go:204] Caches are synced for CSRController \nI0227 22:48:38.476364       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 22:48:38.476641       1 csrcontroller.go:83] Shutting down CSR controller\nI0227 22:48:38.476663       1 csrcontroller.go:85] CSR controller shut down\nF0227 22:48:38.477109       1 builder.go:209] server exited\n
Feb 27 22:50:58.674 E kube-apiserver failed contacting the API: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=39294&timeout=7m4s&timeoutSeconds=424&watch=true: dial tcp 52.53.134.116:6443: connect: connection refused
Feb 27 22:50:58.674 E kube-apiserver failed contacting the API: Get https://api.ci-op-zfmbybf1-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=44029&timeout=6m5s&timeoutSeconds=365&watch=true: dial tcp 52.53.134.116:6443: connect: connection refused
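Editor's note: the two "failed contacting the API" lines above are long-poll watch requests (note the fieldSelector, resourceVersion, timeoutSeconds and watch=true query parameters) that died with "connection refused" while the apiserver behind the load balancer was restarting. A hedged sketch of an equivalent watch using the dynamic client, with the resource and parameters taken from the first logged URL and everything else illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusterversions"}
	timeout := int64(424) // mirrors timeoutSeconds=424 in the logged URL
	w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:       "metadata.name=version",
		ResourceVersion:     "39294", // resume point from the logged URL; a stale value would be rejected
		TimeoutSeconds:      &timeout,
		AllowWatchBookmarks: true,
	})
	if err != nil {
		panic(err) // e.g. "connect: connection refused" while the apiserver restarts
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
}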
Feb 27 22:50:59.370 E ns/openshift-monitoring pod/node-exporter-nmbcs node/ip-10-0-132-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:51:02.706 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-186.us-west-1.compute.internal node/ip-10-0-132-186.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): :25.936940       1 factory.go:494] pod is already present in the activeQ\nI0227 22:48:25.952773       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-6xvbk: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:48:27.373296       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57cb98c5df-cjbck: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:48:27.382392       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-6xvbk: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:48:30.373345       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-6xvbk: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:48:34.284300       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7bc8476f4c-hn49j is bound successfully on node "ip-10-0-157-72.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 22:48:35.374455       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6f55796db5-6xvbk: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 22:48:36.374971       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57cb98c5df-cjbck: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 27 22:51:03.024 E ns/openshift-multus pod/multus-nlx8f node/ip-10-0-132-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:51:03.702 E ns/openshift-multus pod/multus-nlx8f node/ip-10-0-132-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 22:51:06.662 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 27 22:51:11.444 E ns/openshift-machine-config-operator pod/machine-config-daemon-cc4ht node/ip-10-0-132-186.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error):