Result: FAILURE
Tests: 5 failed / 20 succeeded
Started: 2020-02-27 11:07
Elapsed: 1h15m
Work namespace: ci-op-z6y52xgr
Refs: openshift-4.5:29304dc2
      33:90dc402d
Pod: 39e3a775-5951-11ea-bfc2-0a58ac1062ea
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 33m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Feb 27 12:12:29.805: Service was unreachable during disruption for at least 40s of 31m50s (2%):

Feb 27 11:40:40.590 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests on reused connections
Feb 27 11:40:40.590 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:41.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests on reused connections
Feb 27 11:40:41.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:41.611 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests on reused connections
Feb 27 11:40:41.611 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:42.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:43.554 - 3s    E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:46.629 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:48.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:49.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:49.609 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:50.587 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:51.554 - 999ms E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:52.644 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:53.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:54.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:54.614 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:55.582 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:56.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:56.610 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:40:57.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:40:58.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:40:58.612 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:41:25.588 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:41:26.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:41:26.609 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:41:32.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:41:33.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:41:33.610 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:41:38.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:41:39.554 - 1s    E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:41:40.620 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:41:41.561 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:41:42.554 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:41:42.609 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 11:51:19.555 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 11:51:20.554 - 2s    E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 11:51:22.656 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections
Feb 27 12:07:52.555 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests on reused connections
Feb 27 12:07:52.634 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests on reused connections
Feb 27 12:07:55.555 E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service stopped responding to GET requests over new connections
Feb 27 12:07:56.554 - 9s    E ns/e2e-k8s-service-lb-available-7320 svc/service-test Service is not responding to GET requests over new connections
Feb 27 12:08:05.769 I ns/e2e-k8s-service-lb-available-7320 svc/service-test Service started responding to GET requests over new connections

github.com/openshift/origin/test/extended/util/disruption.ExpectNoDisruption(0xc0038bba40, 0x3f947ae147ae147b, 0x1bcc3c764a8, 0xc004d8ab00, 0x2c, 0x54, 0x56093ba, 0x29)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:226 +0x68b
github.com/openshift/origin/test/e2e/upgrade/service.(*UpgradeTest).Test(0xc004375fe0, 0xc0038bba40, 0xc00499aa80, 0x2)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/service/service.go:133 +0x540
github.com/openshift/origin/test/extended/util/disruption.(*chaosMonkeyAdapter).Test(0xc003161e40, 0xc004fc65a0)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:143 +0x38b
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc004fc65a0, 0xc004fb7b70)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgrade_1582805549.xml
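
The stack trace above ends in disruption.ExpectNoDisruption; its second argument, 0x3f947ae147ae147b, is the bit pattern of the float64 value 0.02, which suggests a 2% disruption budget, and 40s out of 31m50s is just over 2%. The Go sketch below is a hypothetical reconstruction of that kind of check, not the actual openshift/origin code: it sums the observed outage windows and fails when the total exceeds tolerance times the test duration.

package main

import (
	"fmt"
	"time"
)

// outage is one contiguous window during which the service did not answer GET requests.
type outage struct {
	start, end time.Time
}

// expectNoDisruption sums the outage windows and compares the total downtime
// against tolerance * testDuration (hypothetical reconstruction of the check
// named in the stack trace above, not the origin implementation).
func expectNoDisruption(tolerance float64, testDuration time.Duration, outages []outage) error {
	var down time.Duration
	for _, o := range outages {
		down += o.end.Sub(o.start)
	}
	allowed := time.Duration(tolerance * float64(testDuration))
	if down > allowed {
		return fmt.Errorf("unreachable for %s of %s, more than the %s allowed by a %.0f%% budget",
			down.Round(time.Second), testDuration, allowed.Round(time.Second), tolerance*100)
	}
	return nil
}

func main() {
	// Numbers taken from the failure message: ~40s of downtime over a 31m50s window, 2% budget.
	start := time.Now()
	windows := []outage{{start, start.Add(40 * time.Second)}}
	if err := expectNoDisruption(0.02, 31*time.Minute+50*time.Second, windows); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Run with the numbers from the failure message, this reports a failure, since 40s exceeds the roughly 38s that a 2% budget allows over 31m50s.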



Cluster upgrade Cluster frontend ingress remain available 33m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 8s of 33m5s (0%):

Feb 27 11:48:41.777 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 11:48:41.777 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 11:48:41.861 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 11:48:41.868 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 11:48:55.777 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 11:48:55.891 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 12:01:18.777 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 12:01:18.777 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 27 12:01:19.161 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 12:01:19.201 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 27 12:01:53.777 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 27 12:01:53.878 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 27 12:04:13.778 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 27 12:04:13.912 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 27 12:07:31.777 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 27 12:07:31.898 I ns/openshift-console route/console Route started responding to GET requests over new connections
				from junit_upgrade_1582805549.xml
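
The monitor distinguishes GETs "over new connections" from GETs "on reused connections" in the events above. A minimal, hypothetical sketch of probing a route in both modes (not the monitor's actual implementation) keeps one default keep-alive client for the reused-connection path and disables keep-alives on a second client so every request opens a fresh connection; example.com stands in for the real oauth-openshift and console routes.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues one GET with the given client and reports the outcome.
func probe(name string, client *http.Client, url string) {
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: error: %v\n", name, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("%s: %s\n", name, resp.Status)
}

func main() {
	// Hypothetical target; the real monitor polls the oauth-openshift and console routes.
	url := "https://example.com/"

	// Reused connections: the default transport keeps idle connections alive between polls.
	reused := &http.Client{Timeout: 5 * time.Second}

	// New connections: disabling keep-alives forces a fresh TCP/TLS connection per GET.
	fresh := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}

	for i := 0; i < 3; i++ {
		probe("reused connection", reused, url)
		probe("new connection", fresh, url)
		time.Sleep(time.Second)
	}
}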



openshift-tests Monitor cluster while tests execute 33m8s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
233 error level events were detected during this test run:

Feb 27 11:39:26.079 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:39:23.741Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:39:23.756Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:39:23.757Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 11:39:28.337 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 11:38:23 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 11:39:28.337 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 11:38:23 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:38:23 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:38:23 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:38:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:38:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:38:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:38:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:38:23 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:38:23 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:38:23.654364       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:39:20 oauthproxy.go:774: basicauth: 10.129.2.5:38472 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 11:39:28.337 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T11:38:22.735100694Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T11:38:22.736967237Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T11:38:27.870949187Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T11:38:27.871051936Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-02-27T11:38:28.006974772Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 27 11:39:35.396 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:39:33.551Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:39:33.557Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:39:33.558Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:33.559Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:33.559Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:39:33.559Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:39:33.560Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:39:33.560Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 11:41:10.740 E ns/openshift-etcd-operator pod/etcd-operator-7c8784647f-bcdbm node/ip-10-0-156-239.us-east-2.compute.internal container=operator container exited with code 255 (Error): own controller.\nI0227 11:41:09.898706       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 11:41:09.898740       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 11:41:09.898758       1 targetconfigcontroller.go:269] Shutting down TargetConfigController\nI0227 11:41:09.898772       1 clustermembercontroller.go:104] Shutting down ClusterMemberController\nI0227 11:41:09.898787       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0227 11:41:09.898801       1 host_endpoints_controller.go:357] Shutting down HostEtcdEndpointsController\nI0227 11:41:09.898816       1 scriptcontroller.go:144] Shutting down ScriptControllerController\nI0227 11:41:09.898835       1 base_controller.go:74] Shutting down PruneController ...\nI0227 11:41:09.898851       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 11:41:09.898866       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 11:41:09.898881       1 base_controller.go:74] Shutting down  ...\nI0227 11:41:09.898893       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0227 11:41:09.898909       1 base_controller.go:74] Shutting down NodeController ...\nI0227 11:41:09.898923       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 11:41:09.898939       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 11:41:09.898953       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 11:41:09.898965       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0227 11:41:09.898981       1 base_controller.go:74] Shutting down  ...\nI0227 11:41:09.899006       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 11:41:09.899069       1 bootstrap_teardown_controller.go:212] Shutting down BootstrapTeardownController\nI0227 11:41:09.899403       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0227 11:41:09.899472       1 builder.go:243] stopped\n
Feb 27 11:41:18.783 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-76c7bc6c46-fd28f node/ip-10-0-156-239.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): 1:11.903908       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"35cf806c-5610-4f9d-8a28-a3bd8f46b7a0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0227 11:41:14.909853       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"35cf806c-5610-4f9d-8a28-a3bd8f46b7a0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-131-12.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nI0227 11:41:16.786609       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"35cf806c-5610-4f9d-8a28-a3bd8f46b7a0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-12.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal container=\"kube-apiserver\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0227 11:41:18.023915       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 11:41:18.024012       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 11:41:18.024029       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 11:41:18.024043       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nF0227 11:41:18.024045       1 builder.go:209] server exited\n
Feb 27 11:41:31.838 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5d4b9d8ff6-t8knt node/ip-10-0-156-239.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error):  from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8"\nI0227 11:39:42.128577       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"b3737434-d593-4f5c-88f8-94b896a5dcf8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-12.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-131-12.us-east-2.compute.internal container=\"kube-controller-manager\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0227 11:39:42.706685       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"b3737434-d593-4f5c-88f8-94b896a5dcf8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-8 -n openshift-kube-controller-manager:\ncause by changes in data.status\nI0227 11:39:45.917837       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"b3737434-d593-4f5c-88f8-94b896a5dcf8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-8-ip-10-0-131-12.us-east-2.compute.internal -n openshift-kube-controller-manager because it was missing\nI0227 11:41:31.084551       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 11:41:31.084776       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 11:41:31.085005       1 builder.go:209] server exited\n
Feb 27 11:41:36.866 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7c64c848-6gk46 node/ip-10-0-156-239.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-scheduler:\ncause by changes in data.status\nI0227 11:35:30.285593       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"b1beb8b5-f72a-4c00-a9bc-21b5d5dcd5e1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-156-239.us-east-2.compute.internal -n openshift-kube-scheduler because it was missing\nI0227 11:41:35.822210       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 11:41:35.823480       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 11:41:35.823513       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 11:41:35.823532       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 11:41:35.823549       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 11:41:35.823564       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 11:41:35.823579       1 base_controller.go:74] Shutting down NodeController ...\nI0227 11:41:35.823595       1 base_controller.go:74] Shutting down  ...\nI0227 11:41:35.823609       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 11:41:35.823624       1 base_controller.go:74] Shutting down PruneController ...\nI0227 11:41:35.823640       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 11:41:35.823654       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0227 11:41:35.823671       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 11:41:35.823683       1 target_config_reconciler.go:124] Shutting down TargetConfigReconciler\nI0227 11:41:35.823700       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0227 11:41:35.823992       1 builder.go:243] stopped\n
Feb 27 11:42:47.030 E kube-apiserver Kube API started failing: Get https://api.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 27 11:43:03.904 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Definitions ConfigMaps failed: updating ConfigMap object failed: etcdserver: leader changed
Feb 27 11:43:11.078 E ns/openshift-machine-api pod/machine-api-operator-5d85c6f844-5nvkl node/ip-10-0-156-239.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 27 11:45:23.692 E ns/openshift-machine-api pod/machine-api-controllers-7fc446749d-8g6m8 node/ip-10-0-156-239.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 27 11:47:37.176 E ns/openshift-cluster-machine-approver pod/machine-approver-5c79757b7c-69q8s node/ip-10-0-156-239.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sr_check.go:418] retrieving serving cert from ip-10-0-141-90.us-east-2.compute.internal (10.0.141.90:10250)\nW0227 11:29:29.333836       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0227 11:29:29.334481       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-141-90.us-east-2.compute.internal\nI0227 11:29:29.361452       1 main.go:196] CSR csr-246p9 approved\nI0227 11:29:29.361500       1 main.go:146] CSR csr-dpb25 added\nI0227 11:29:29.361511       1 main.go:149] CSR csr-dpb25 is already approved\nI0227 11:29:29.361526       1 main.go:146] CSR csr-l57nc added\nI0227 11:29:29.361534       1 main.go:149] CSR csr-l57nc is already approved\nI0227 11:33:01.222825       1 main.go:146] CSR csr-8ck4f added\nI0227 11:33:01.259799       1 main.go:196] CSR csr-8ck4f approved\nI0227 11:33:13.920313       1 main.go:146] CSR csr-mr4bn added\nI0227 11:33:13.944751       1 csr_check.go:418] retrieving serving cert from ip-10-0-131-75.us-east-2.compute.internal (10.0.131.75:10250)\nW0227 11:33:13.946522       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0227 11:33:13.946615       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-131-75.us-east-2.compute.internal\nI0227 11:33:13.952932       1 main.go:196] CSR csr-mr4bn approved\nE0227 11:37:07.755853       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=15016&timeoutSeconds=582&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0227 11:37:08.756524       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Feb 27 11:47:41.797 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-67c745cf8d-mq42p node/ip-10-0-156-239.us-east-2.compute.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:47:47.675 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-59d5fccc9f-7tv49 node/ip-10-0-141-90.us-east-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:47:50.019 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-bb477hlwl node/ip-10-0-156-239.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:47:50.362 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-75.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 11:38:24 Watching directory: "/etc/alertmanager/config"\n
Feb 27 11:47:50.362 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-75.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 11:38:25 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:25 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:38:25 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:38:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 11:38:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:38:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 11:38:25.070794       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:38:25 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 11:47:54.472 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-74f88f6845-xzrv2 node/ip-10-0-156-239.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:47:56.529 E ns/openshift-service-ca-operator pod/service-ca-operator-67db7d78f5-7f9pl node/ip-10-0-156-239.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 27 11:47:58.897 E ns/openshift-authentication pod/oauth-openshift-55998bc5f8-7d74q node/ip-10-0-137-62.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:47:59.384 E ns/openshift-monitoring pod/node-exporter-r5fpm node/ip-10-0-131-75.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 11:48:02.847 E ns/openshift-monitoring pod/kube-state-metrics-b4648f69b-m26jh node/ip-10-0-141-90.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 27 11:48:03.285 E ns/openshift-monitoring pod/openshift-state-metrics-76bf5d77cf-nhk24 node/ip-10-0-155-97.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 27 11:48:03.372 E ns/openshift-console pod/downloads-54cf6c4b74-f88zm node/ip-10-0-155-97.us-east-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:04.681 E ns/openshift-image-registry pod/image-registry-d958fcddd-46jrj node/ip-10-0-141-90.us-east-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:05.939 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-586c7687f4-289ld node/ip-10-0-141-90.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 27 11:48:06.362 E ns/openshift-image-registry pod/image-registry-d958fcddd-hj8dt node/ip-10-0-131-75.us-east-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:08.654 E ns/openshift-monitoring pod/telemeter-client-84b5c865f5-6p7x2 node/ip-10-0-131-75.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 27 11:48:08.654 E ns/openshift-monitoring pod/telemeter-client-84b5c865f5-6p7x2 node/ip-10-0-131-75.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Feb 27 11:48:12.415 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:39:23.741Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:39:23.756Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:39:23.757Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:39:23.758Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:39:23.759Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:39:23.759Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 11:48:12.415 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T11:39:24.031472282Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T11:39:24.034528844Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T11:39:29.197001734Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T11:39:29.197094409Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 11:48:14.635 E ns/openshift-console-operator pod/console-operator-59d685b8f-4flq9 node/ip-10-0-137-62.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): t-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 9; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 11:44:02.537657       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 25; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 11:44:12.739291       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 433; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 11:44:12.739609       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 371; INTERNAL_ERROR") has prevented the request from succeeding\nI0227 11:48:13.727955       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 11:48:13.728176       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0227 11:48:13.728542       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0227 11:48:13.728590       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0227 11:48:13.728674       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0227 11:48:13.729720       1 secure_serving.go:222] Stopped listening on [::]:8443\nF0227 11:48:13.729788       1 builder.go:210] server exited\n
Feb 27 11:48:14.910 E ns/openshift-monitoring pod/thanos-querier-586f8bc447-8hnw7 node/ip-10-0-141-90.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 11:38:56 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:38:56 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:38:56 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:38:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:38:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:38:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:38:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:38:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0227 11:38:57.000817       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:38:57 http.go:107: HTTPS: listening on [::]:9091\n
Feb 27 11:48:15.234 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-90.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 11:38:34 Watching directory: "/etc/alertmanager/config"\n
Feb 27 11:48:15.234 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-90.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 11:38:34 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:34 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:38:34 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:38:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 11:38:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:38:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 11:38:34.951871       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:38:34 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 11:48:16.347 E ns/openshift-monitoring pod/prometheus-adapter-86c67fd6f-mnvr4 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 11:38:19.753214       1 adapter.go:93] successfully using in-cluster auth\nI0227 11:38:20.624080       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 27 11:48:19.426 E ns/openshift-monitoring pod/node-exporter-nfr98 node/ip-10-0-155-97.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 11:48:22.373 E ns/openshift-operator-lifecycle-manager pod/packageserver-fb4678fb8-rbt46 node/ip-10-0-137-62.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:29.582 E ns/openshift-marketplace pod/redhat-marketplace-57b7d4cbc5-xb5gm node/ip-10-0-155-97.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 27 11:48:29.603 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-155-97.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 11:38:41 Watching directory: "/etc/alertmanager/config"\n
Feb 27 11:48:29.603 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-155-97.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 11:38:42 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:42 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:38:42 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:38:42 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 11:38:42 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:38:42 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:38:42 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 11:38:42.161916       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:38:42 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 11:48:31.069 E ns/openshift-ingress pod/router-default-8ffccb959-zphfs node/ip-10-0-141-90.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:47:41.969380       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:47:47.024786       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:47:51.994856       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:47:56.998282       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:01.974312       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:06.974257       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:11.976175       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:16.954602       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:21.983194       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:26.959443       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 27 11:48:31.583 E ns/openshift-monitoring pod/node-exporter-9h56q node/ip-10-0-131-12.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 11:48:31.652 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:48:20.417Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:48:20.424Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:48:20.425Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:48:20.427Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:48:20.427Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 11:48:34.072 E ns/openshift-marketplace pod/redhat-operators-6745dbc4f6-6ghgv node/ip-10-0-141-90.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 27 11:48:36.769 E ns/openshift-monitoring pod/thanos-querier-586f8bc447-kk54p node/ip-10-0-155-97.us-east-2.compute.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:36.769 E ns/openshift-monitoring pod/thanos-querier-586f8bc447-kk54p node/ip-10-0-155-97.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:36.769 E ns/openshift-monitoring pod/thanos-querier-586f8bc447-kk54p node/ip-10-0-155-97.us-east-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:36.769 E ns/openshift-monitoring pod/thanos-querier-586f8bc447-kk54p node/ip-10-0-155-97.us-east-2.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:37.566 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 11:39:34 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 11:48:37.566 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 11:39:34 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:39:34 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:39:34 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:39:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:39:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:39:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:39:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:39:34 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:39:34 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:39:34.532301       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:43:54 oauthproxy.go:774: basicauth: 10.129.0.15:60138 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 11:47:50 oauthproxy.go:774: basicauth: 10.129.0.62:34798 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 11:48:04 oauthproxy.go:774: basicauth: 10.131.0.31:51702 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 11:48:37.566 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T11:39:33.737086048Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T11:39:33.738689264Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T11:39:38.873794431Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T11:39:38.873889108Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 11:48:39.749 E ns/openshift-marketplace pod/certified-operators-8c67bcccd-9bdvk node/ip-10-0-155-97.us-east-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:48:46.762 E ns/openshift-marketplace pod/community-operators-6d4d4fc768-mgzf2 node/ip-10-0-155-97.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 27 11:48:47.875 E ns/openshift-monitoring pod/node-exporter-xmbdt node/ip-10-0-137-62.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:47:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T11:48:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 11:48:53.817 E ns/openshift-ingress pod/router-default-8ffccb959-f7cf7 node/ip-10-0-155-97.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:01.986778       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:06.981971       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:11.972120       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:17.003146       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:21.982388       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:26.991303       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:31.994907       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:36.989366       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:41.979491       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0227 11:48:47.001265       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 27 11:48:54.162 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-90.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:48:49.174Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:48:49.178Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:48:49.179Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 11:49:07.388 E ns/openshift-image-registry pod/node-ca-vdtvn node/ip-10-0-141-90.us-east-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:51:06.915 E ns/openshift-sdn pod/sdn-ldkw2 node/ip-10-0-156-239.us-east-2.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:51:15.140 E ns/openshift-sdn pod/sdn-fccrp node/ip-10-0-155-97.us-east-2.compute.internal container=sdn container exited with code 255 (Error):  to [10.128.0.76:8443 10.129.0.64:8443 10.130.0.25:8443]\nI0227 11:48:56.599706    2641 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.76:8443 10.129.0.64:8443]\nI0227 11:48:56.599735    2641 roundrobin.go:217] Delete endpoint 10.130.0.25:8443 for service "openshift-console/console:https"\nI0227 11:48:56.684766    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:48:56.684787    2641 proxier.go:347] userspace syncProxyRules took 70.06619ms\nI0227 11:48:56.944856    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:48:56.944880    2641 proxier.go:347] userspace syncProxyRules took 72.298178ms\nI0227 11:49:27.209909    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:49:27.209944    2641 proxier.go:347] userspace syncProxyRules took 95.288267ms\nI0227 11:49:57.450238    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:49:57.450259    2641 proxier.go:347] userspace syncProxyRules took 70.823404ms\nI0227 11:50:27.704560    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:50:27.704590    2641 proxier.go:347] userspace syncProxyRules took 71.911252ms\nI0227 11:50:57.987735    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:50:57.987791    2641 proxier.go:347] userspace syncProxyRules took 73.310299ms\nI0227 11:51:02.556445    2641 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.18:6443 10.129.0.6:6443]\nI0227 11:51:02.556489    2641 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 11:51:02.820660    2641 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:51:02.820698    2641 proxier.go:347] userspace syncProxyRules took 88.309353ms\nF0227 11:51:14.579805    2641 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 11:51:22.208 E ns/openshift-sdn pod/sdn-controller-x9mmb node/ip-10-0-131-12.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 11:23:37.207328       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 11:51:33.510 E ns/openshift-multus pod/multus-admission-controller-7w7gq node/ip-10-0-137-62.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 27 11:51:33.532 E ns/openshift-multus pod/multus-g6wzd node/ip-10-0-137-62.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 11:51:37.992 E ns/openshift-sdn pod/sdn-controller-v5fcw node/ip-10-0-156-239.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 11:23:50.272684       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 11:51:51.307 E ns/openshift-sdn pod/sdn-ppvjj node/ip-10-0-131-12.us-east-2.compute.internal container=sdn container exited with code 255 (Error): .go:347] userspace syncProxyRules took 74.524621ms\nI0227 11:49:57.948434    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:49:57.948457    3030 proxier.go:347] userspace syncProxyRules took 82.418927ms\nI0227 11:50:28.231302    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:50:28.231325    3030 proxier.go:347] userspace syncProxyRules took 78.083279ms\nI0227 11:50:58.499934    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:50:58.499959    3030 proxier.go:347] userspace syncProxyRules took 80.937507ms\nI0227 11:51:02.554142    3030 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.18:6443 10.129.0.6:6443]\nI0227 11:51:02.554189    3030 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 11:51:02.833972    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:51:02.833995    3030 proxier.go:347] userspace syncProxyRules took 80.73542ms\nI0227 11:51:14.637729    3030 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.6:6443]\nI0227 11:51:14.637766    3030 roundrobin.go:217] Delete endpoint 10.128.0.18:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 11:51:14.920762    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:51:14.920791    3030 proxier.go:347] userspace syncProxyRules took 76.561691ms\nI0227 11:51:42.178319    3030 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0227 11:51:45.213606    3030 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:51:45.213635    3030 proxier.go:347] userspace syncProxyRules took 82.959603ms\nF0227 11:51:50.622928    3030 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 11:52:22.694 E ns/openshift-sdn pod/sdn-k5dvh node/ip-10-0-141-90.us-east-2.compute.internal container=sdn container exited with code 255 (Error): port "nodePort for openshift-ingress/router-default:https" (:32752/tcp)\nI0227 11:52:02.212773   13705 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31987/tcp)\nI0227 11:52:02.212908   13705 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-7320/service-test:" (:30651/tcp)\nI0227 11:52:02.250001   13705 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30575\nI0227 11:52:02.258030   13705 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 11:52:02.258064   13705 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 11:52:02.258185   13705 cmd.go:177] openshift-sdn network plugin ready\nI0227 11:52:04.120631   13705 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.6:6443]\nI0227 11:52:04.132686   13705 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443]\nI0227 11:52:04.132717   13705 roundrobin.go:217] Delete endpoint 10.129.0.6:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 11:52:04.360054   13705 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:04.360077   13705 proxier.go:347] userspace syncProxyRules took 72.014621ms\nI0227 11:52:04.615020   13705 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:04.615043   13705 proxier.go:347] userspace syncProxyRules took 73.835878ms\nI0227 11:52:10.534848   13705 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.130.0.65:6443]\nI0227 11:52:10.792408   13705 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:10.792437   13705 proxier.go:347] userspace syncProxyRules took 73.07365ms\nF0227 11:52:21.856486   13705 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 11:52:28.447 E ns/openshift-multus pod/multus-fmv9r node/ip-10-0-131-12.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 11:52:48.975 E ns/openshift-sdn pod/sdn-r4tbt node/ip-10-0-131-75.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 36:80/TCP\nI0227 11:52:21.825334    7714 service.go:363] Adding new service port "openshift-ingress/router-internal-default:https" at 172.30.84.36:443/TCP\nI0227 11:52:21.825351    7714 service.go:363] Adding new service port "openshift-machine-api/machine-api-operator:https" at 172.30.71.1:8443/TCP\nI0227 11:52:21.825367    7714 service.go:363] Adding new service port "openshift-kube-storage-version-migrator-operator/metrics:https" at 172.30.148.8:443/TCP\nI0227 11:52:21.825589    7714 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0227 11:52:22.010281    7714 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:22.010306    7714 proxier.go:347] userspace syncProxyRules took 184.641665ms\nI0227 11:52:22.092198    7714 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32752/tcp)\nI0227 11:52:22.092605    7714 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31987/tcp)\nI0227 11:52:22.093016    7714 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-7320/service-test:" (:30651/tcp)\nI0227 11:52:22.125008    7714 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30575\nI0227 11:52:22.131285    7714 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 11:52:22.131312    7714 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 11:52:22.131417    7714 cmd.go:177] openshift-sdn network plugin ready\nI0227 11:52:46.508312    7714 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.66:6443 10.130.0.65:6443]\nI0227 11:52:46.744188    7714 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:46.744218    7714 proxier.go:347] userspace syncProxyRules took 70.264595ms\nF0227 11:52:48.194227    7714 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 27 11:53:06.376 E ns/openshift-multus pod/multus-4lzxr node/ip-10-0-156-239.us-east-2.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 27 11:53:18.836 E ns/openshift-sdn pod/sdn-9j59p node/ip-10-0-137-62.us-east-2.compute.internal container=sdn container exited with code 255 (Error): nshift-dns/dns-default:dns -> 172.30.0.10\nI0227 11:52:40.086106   12006 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:40.086131   12006 proxier.go:347] userspace syncProxyRules took 248.665422ms\nI0227 11:52:40.148784   12006 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-7320/service-test:" (:30651/tcp)\nI0227 11:52:40.149254   12006 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32752/tcp)\nI0227 11:52:40.149395   12006 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31987/tcp)\nI0227 11:52:40.197800   12006 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30575\nI0227 11:52:40.212030   12006 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0227 11:52:40.212059   12006 cmd.go:173] openshift-sdn network plugin registering startup\nI0227 11:52:40.212161   12006 cmd.go:177] openshift-sdn network plugin ready\nI0227 11:52:46.504804   12006 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.66:6443 10.130.0.65:6443]\nI0227 11:52:47.006811   12006 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:47.006855   12006 proxier.go:347] userspace syncProxyRules took 199.274645ms\nI0227 11:53:09.299462   12006 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0227 11:53:17.464219   12006 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:53:17.464251   12006 proxier.go:347] userspace syncProxyRules took 106.208503ms\nI0227 11:53:17.948368   12006 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 11:53:17.948482   12006 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 11:53:39.515 E ns/openshift-sdn pod/sdn-mk7tz node/ip-10-0-156-239.us-east-2.compute.internal container=sdn container exited with code 255 (Error): :6443]\nI0227 11:52:04.133513    6278 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443]\nI0227 11:52:04.133538    6278 roundrobin.go:217] Delete endpoint 10.129.0.6:6443 for service "openshift-multus/multus-admission-controller:"\nI0227 11:52:04.386670    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:04.386694    6278 proxier.go:347] userspace syncProxyRules took 79.662615ms\nI0227 11:52:04.679022    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:04.679067    6278 proxier.go:347] userspace syncProxyRules took 108.6193ms\nI0227 11:52:10.535563    6278 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.130.0.65:6443]\nI0227 11:52:10.812236    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:10.812259    6278 proxier.go:347] userspace syncProxyRules took 87.231985ms\nI0227 11:52:41.080628    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:41.080652    6278 proxier.go:347] userspace syncProxyRules took 77.655408ms\nI0227 11:52:46.501057    6278 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.66:6443 10.130.0.65:6443]\nI0227 11:52:46.826546    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:52:46.826650    6278 proxier.go:347] userspace syncProxyRules took 134.249305ms\nI0227 11:53:17.101812    6278 proxier.go:368] userspace proxy: processing 0 service events\nI0227 11:53:17.101846    6278 proxier.go:347] userspace syncProxyRules took 83.839004ms\nI0227 11:53:38.884966    6278 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0227 11:53:38.885016    6278 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 27 11:53:53.913 E ns/openshift-multus pod/multus-kprzp node/ip-10-0-141-90.us-east-2.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 27 11:54:40.502 E ns/openshift-multus pod/multus-4sgxm node/ip-10-0-155-97.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 11:55:23.238 E ns/openshift-multus pod/multus-8pszt node/ip-10-0-131-75.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 27 11:55:45.138 E ns/openshift-dns pod/dns-default-64sf4 node/ip-10-0-131-12.us-east-2.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:55:45.138 E ns/openshift-dns pod/dns-default-64sf4 node/ip-10-0-131-12.us-east-2.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 11:55:50.972 E ns/openshift-machine-config-operator pod/machine-config-operator-6f757cfc7-29l8q node/ip-10-0-156-239.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error):  not find the requested resource (get machineconfigs.machineconfiguration.openshift.io)\nI0227 11:24:36.799548       1 operator.go:264] Starting MachineConfigOperator\nI0227 11:24:36.860249       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"583b3008-6098-424b-bfef-8065d1bcf307", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-02-27-110736}]\nE0227 11:24:37.333321       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0227 11:24:37.368429       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0227 11:24:38.337124       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0227 11:24:42.299412       1 sync.go:61] [init mode] synced RenderConfig in 5.421933708s\nI0227 11:24:42.655408       1 sync.go:61] [init mode] synced MachineConfigPools in 355.708372ms\nI0227 11:25:06.517457       1 sync.go:61] [init mode] synced MachineConfigDaemon in 23.862011962s\nI0227 11:25:10.671714       1 sync.go:61] [init mode] synced MachineConfigController in 4.154212961s\nI0227 11:25:15.925686       1 sync.go:61] [init mode] synced MachineConfigServer in 5.253921936s\nI0227 11:28:20.937818       1 sync.go:61] [init mode] synced RequiredPools in 3m5.012085215s\nI0227 11:28:20.992108       1 sync.go:85] Initialization complete\n
Feb 27 11:57:46.447 E ns/openshift-machine-config-operator pod/machine-config-daemon-dzlff node/ip-10-0-141-90.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 11:58:10.863 E ns/openshift-machine-config-operator pod/machine-config-daemon-8jttq node/ip-10-0-137-62.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 11:58:21.641 E ns/openshift-machine-config-operator pod/machine-config-daemon-2vpnw node/ip-10-0-131-12.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 11:58:36.568 E ns/openshift-machine-config-operator pod/machine-config-daemon-pb5bl node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 11:58:50.615 E ns/openshift-machine-config-operator pod/machine-config-daemon-pcmpj node/ip-10-0-131-75.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 11:58:59.648 E ns/openshift-machine-config-operator pod/machine-config-controller-c5cbd4659-6rzpd node/ip-10-0-156-239.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): node ip-10-0-137-62.us-east-2.compute.internal is now reporting unready: node ip-10-0-137-62.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:52:04.534973       1 node_controller.go:435] Pool master: node ip-10-0-137-62.us-east-2.compute.internal is now reporting ready\nI0227 11:52:05.143526       1 node_controller.go:433] Pool master: node ip-10-0-131-12.us-east-2.compute.internal is now reporting unready: node ip-10-0-131-12.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:52:25.185035       1 node_controller.go:435] Pool master: node ip-10-0-131-12.us-east-2.compute.internal is now reporting ready\nI0227 11:52:46.734342       1 node_controller.go:433] Pool master: node ip-10-0-156-239.us-east-2.compute.internal is now reporting unready: node ip-10-0-156-239.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:53:26.765874       1 node_controller.go:435] Pool master: node ip-10-0-156-239.us-east-2.compute.internal is now reporting ready\nI0227 11:53:32.992511       1 node_controller.go:433] Pool worker: node ip-10-0-141-90.us-east-2.compute.internal is now reporting unready: node ip-10-0-141-90.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:54:13.017475       1 node_controller.go:435] Pool worker: node ip-10-0-141-90.us-east-2.compute.internal is now reporting ready\nI0227 11:54:17.534591       1 node_controller.go:433] Pool worker: node ip-10-0-155-97.us-east-2.compute.internal is now reporting unready: node ip-10-0-155-97.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:54:57.560688       1 node_controller.go:435] Pool worker: node ip-10-0-155-97.us-east-2.compute.internal is now reporting ready\nI0227 11:55:04.762083       1 node_controller.go:433] Pool worker: node ip-10-0-131-75.us-east-2.compute.internal is now reporting unready: node ip-10-0-131-75.us-east-2.compute.internal is reporting NotReady=False\nI0227 11:55:44.788200       1 node_controller.go:435] Pool worker: node ip-10-0-131-75.us-east-2.compute.internal is now reporting ready\n
Feb 27 12:00:50.466 E ns/openshift-machine-config-operator pod/machine-config-server-bvlnz node/ip-10-0-137-62.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 11:27:55.336395       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-301-g27bac44b-dirty (27bac44b16d95ebec18855f42876e228fc1446d3)\nI0227 11:27:55.337640       1 api.go:51] Launching server on :22624\nI0227 11:27:55.337841       1 api.go:51] Launching server on :22623\n
Feb 27 12:00:54.140 E ns/openshift-machine-config-operator pod/machine-config-server-dh2bz node/ip-10-0-131-12.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 11:27:58.023174       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-301-g27bac44b-dirty (27bac44b16d95ebec18855f42876e228fc1446d3)\nI0227 11:27:58.025051       1 api.go:51] Launching server on :22624\nI0227 11:27:58.025150       1 api.go:51] Launching server on :22623\nI0227 11:31:02.167022       1 api.go:97] Pool worker requested by 10.0.140.184:19071\n
Feb 27 12:00:59.876 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-75.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 11:48:07 Watching directory: "/etc/alertmanager/config"\n
Feb 27 12:00:59.876 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-75.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 11:48:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:48:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 11:48:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:48:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 11:48:10.175149       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:48:10 http.go:107: HTTPS: listening on [::]:9095\n
Feb 27 12:01:02.411 E ns/openshift-cluster-machine-approver pod/machine-approver-6d4496bcb5-nldzq node/ip-10-0-137-62.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0227 11:48:03.522896       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0227 11:48:03.523021       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0227 11:48:03.523112       1 main.go:236] Starting Machine Approver\nI0227 11:48:03.623458       1 main.go:146] CSR csr-vv9g4 added\nI0227 11:48:03.623589       1 main.go:149] CSR csr-vv9g4 is already approved\nI0227 11:48:03.623649       1 main.go:146] CSR csr-x972f added\nI0227 11:48:03.623692       1 main.go:149] CSR csr-x972f is already approved\nI0227 11:48:03.623775       1 main.go:146] CSR csr-xspxs added\nI0227 11:48:03.623822       1 main.go:149] CSR csr-xspxs is already approved\nI0227 11:48:03.623867       1 main.go:146] CSR csr-zkp58 added\nI0227 11:48:03.623940       1 main.go:149] CSR csr-zkp58 is already approved\nI0227 11:48:03.624002       1 main.go:146] CSR csr-8ck4f added\nI0227 11:48:03.624046       1 main.go:149] CSR csr-8ck4f is already approved\nI0227 11:48:03.624124       1 main.go:146] CSR csr-l57nc added\nI0227 11:48:03.624168       1 main.go:149] CSR csr-l57nc is already approved\nI0227 11:48:03.625025       1 main.go:146] CSR csr-mr4bn added\nI0227 11:48:03.625089       1 main.go:149] CSR csr-mr4bn is already approved\nI0227 11:48:03.625203       1 main.go:146] CSR csr-q8vzk added\nI0227 11:48:03.625262       1 main.go:149] CSR csr-q8vzk is already approved\nI0227 11:48:03.625310       1 main.go:146] CSR csr-246p9 added\nI0227 11:48:03.625385       1 main.go:149] CSR csr-246p9 is already approved\nI0227 11:48:03.625438       1 main.go:146] CSR csr-dpb25 added\nI0227 11:48:03.625500       1 main.go:149] CSR csr-dpb25 is already approved\nI0227 11:48:03.625573       1 main.go:146] CSR csr-j276d added\nI0227 11:48:03.625619       1 main.go:149] CSR csr-j276d is already approved\nI0227 11:48:03.625672       1 main.go:146] CSR csr-m4jkv added\nI0227 11:48:03.625743       1 main.go:149] CSR csr-m4jkv is already approved\n
Feb 27 12:01:03.420 E ns/openshift-operator-lifecycle-manager pod/packageserver-55954545d8-thp97 node/ip-10-0-131-12.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:03.835 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-654bd8974-pfdfz node/ip-10-0-137-62.us-east-2.compute.internal container=operator container exited with code 255 (Error): est took 159.764342ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 12:00:12.803364       1 request.go:565] Throttling request took 195.935915ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 12:00:14.728741       1 httplog.go:90] GET /metrics: (1.932602ms) 200 [Prometheus/2.15.2 10.128.2.32:47650]\nI0227 12:00:32.603854       1 request.go:565] Throttling request took 157.92717ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 12:00:32.803813       1 request.go:565] Throttling request took 196.217683ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 12:00:36.179747       1 httplog.go:90] GET /metrics: (6.20848ms) 200 [Prometheus/2.15.2 10.131.0.36:33300]\nI0227 12:00:44.728757       1 httplog.go:90] GET /metrics: (1.851771ms) 200 [Prometheus/2.15.2 10.128.2.32:47650]\nI0227 12:00:52.603409       1 request.go:565] Throttling request took 157.291652ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0227 12:00:52.803405       1 request.go:565] Throttling request took 195.321062ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0227 12:01:01.374058       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:01:01.375274       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 12:01:01.375367       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0227 12:01:01.375510       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0227 12:01:01.375755       1 builder.go:243] stopped\n
Feb 27 12:01:03.996 E ns/openshift-etcd-operator pod/etcd-operator-fc7868687-xb66n node/ip-10-0-137-62.us-east-2.compute.internal container=operator container exited with code 255 (Error): transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:00.357601       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0024f2590, READY\nI0227 12:01:00.363385       1 client.go:361] parsed scheme: "passthrough"\nI0227 12:01:00.363430       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://10.0.137.62:2379 0  <nil>}] <nil>}\nI0227 12:01:00.363445       1 clientconn.go:577] ClientConn switching balancer to "pick_first"\nI0227 12:01:00.363508       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0024f27e0, CONNECTING\nI0227 12:01:00.363863       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:00.419576       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0024f27e0, READY\nW0227 12:01:00.430044       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.6.187:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.6.187:2379: operation was canceled". Reconnecting...\nI0227 12:01:00.430086       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:00.430219       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:00.439027       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:00.467411       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0227 12:01:01.671639       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:01:01.672676       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 12:01:01.672862       1 builder.go:209] server exited\nF0227 12:01:01.694695       1 leaderelection.go:67] leaderelection lost\n
Feb 27 12:01:04.102 E ns/openshift-image-registry pod/cluster-image-registry-operator-568f6494bb-xjv5d node/ip-10-0-137-62.us-east-2.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:04.102 E ns/openshift-image-registry pod/cluster-image-registry-operator-568f6494bb-xjv5d node/ip-10-0-137-62.us-east-2.compute.internal container=cluster-image-registry-operator-watch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:05.652 E ns/openshift-machine-config-operator pod/machine-config-server-8n76f node/ip-10-0-156-239.us-east-2.compute.internal container=machine-config-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:05.935 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-ddb94987d-95wd6 node/ip-10-0-137-62.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): orkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)"\nE0227 11:53:18.505218       1 webhook.go:109] Failed to make webhook authenticator request: context canceled\nE0227 11:53:18.505254       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, context canceled]\nI0227 11:53:26.800641       1 status_controller.go:176] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T11:33:16Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T11:45:49Z","message":"NodeInstallerProgressing: 3 nodes are at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-27T11:29:19Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T11:24:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 11:53:26.955693       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"b1beb8b5-f72a-4c00-a9bc-21b5d5dcd5e1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-156-239.us-east-2.compute.internal\" not ready since 2020-02-27 11:52:46 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0227 12:01:05.179061       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 12:01:05.181326       1 builder.go:243] stopped\n
Feb 27 12:01:09.356 E ns/openshift-operator-lifecycle-manager pod/olm-operator-7b5ff4c67-8mjrh node/ip-10-0-137-62.us-east-2.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:37.878 E ns/openshift-authentication pod/oauth-openshift-646776d9cf-scpb2 node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:01:54.720 E ns/openshift-authentication pod/oauth-openshift-54d7d68d69-nw2xf node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:03:26.369 E ns/openshift-monitoring pod/node-exporter-9nj49 node/ip-10-0-131-75.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:13Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:03:26.398 E ns/openshift-cluster-node-tuning-operator pod/tuned-tpjgv node/ip-10-0-131-75.us-east-2.compute.internal container=tuned container exited with code 143 (Error): : Unit file tuned.service does not exist.\nI0227 11:48:34.410571    1538 tuned.go:393] getting recommended profile...\nI0227 11:48:34.532153    1538 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 11:48:34.532253    1538 tuned.go:286] starting tuned...\n2020-02-27 11:48:34,640 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 11:48:34,646 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:34,647 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:34,647 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 11:48:34,648 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 11:48:34,680 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:34,680 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:34,691 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:34,692 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:34,696 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:34,697 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:34,698 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:34,802 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:34,811 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 12:01:18.516856    1538 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:01:18.516873    1538 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:01:41.326555    1538 tuned.go:115] received signal: terminated\nI0227 12:01:41.326590    1538 tuned.go:327] sending TERM to PID 1649\n
Feb 27 12:03:26.425 E ns/openshift-sdn pod/ovs-hpfdd node/ip-10-0-131-75.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): O|75236 kB peak resident set size after 10.0 seconds\n2020-02-27T11:53:02.044Z|00053|memory|INFO|handlers:2 ports:9 revalidators:2 rules:103 udpif keys:95\n2020-02-27T11:57:52.103Z|00054|connmgr|INFO|br0<->unix#256: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T11:57:52.135Z|00055|connmgr|INFO|br0<->unix#259: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T11:57:52.167Z|00056|bridge|INFO|bridge br0: deleted interface veth100a5800 on port 4\n2020-02-27T11:57:54.240Z|00057|bridge|INFO|bridge br0: added interface vethc08ef325 on port 18\n2020-02-27T11:57:54.268Z|00058|connmgr|INFO|br0<->unix#266: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T11:57:54.306Z|00059|connmgr|INFO|br0<->unix#269: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:00:59.562Z|00060|connmgr|INFO|br0<->unix#403: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:00:59.590Z|00061|connmgr|INFO|br0<->unix#406: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:00:59.612Z|00062|bridge|INFO|bridge br0: deleted interface veth9ad6662b on port 15\n2020-02-27T12:01:29.417Z|00063|connmgr|INFO|br0<->unix#430: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:29.446Z|00064|connmgr|INFO|br0<->unix#433: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:01:29.470Z|00065|bridge|INFO|bridge br0: deleted interface veth1524f7e4 on port 13\n2020-02-27T12:01:29.507Z|00066|connmgr|INFO|br0<->unix#436: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:29.550Z|00067|connmgr|INFO|br0<->unix#439: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:01:29.572Z|00068|bridge|INFO|bridge br0: deleted interface veth79d18bed on port 12\n2020-02-27T12:01:29.463Z|00013|jsonrpc|WARN|unix#370: receive error: Connection reset by peer\n2020-02-27T12:01:29.464Z|00014|reconnect|WARN|unix#370: connection dropped (Connection reset by peer)\n2020-02-27T12:01:29.566Z|00015|jsonrpc|WARN|unix#375: receive error: Connection reset by peer\n2020-02-27T12:01:29.566Z|00016|reconnect|WARN|unix#375: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\n
Feb 27 12:03:26.440 E ns/openshift-multus pod/multus-mvwtb node/ip-10-0-131-75.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:03:26.466 E ns/openshift-machine-config-operator pod/machine-config-daemon-7dfn8 node/ip-10-0-131-75.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:03:29.254 E ns/openshift-multus pod/multus-mvwtb node/ip-10-0-131-75.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:03:32.081 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 27 12:03:35.879 E ns/openshift-machine-config-operator pod/machine-config-daemon-7dfn8 node/ip-10-0-131-75.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:03:36.057 E ns/openshift-controller-manager pod/controller-manager-r59jg node/ip-10-0-137-62.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): I0227 11:48:22.077226       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 11:48:22.082013       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-z6y52xgr/stable@sha256:2cbab524497dd001c11e8ccba8a5bb3e9d764b97f35d725ebfbd3ea88b56b544"\nI0227 11:48:22.082095       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-z6y52xgr/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 11:48:22.082186       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0227 11:48:22.083498       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 27 12:03:36.268 E ns/openshift-sdn pod/sdn-controller-lkwrd node/ip-10-0-137-62.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 11:51:20.889343       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 11:51:20.914105       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"2d0059e4-0b41-4cb0-8d2f-b01f85837109", ResourceVersion:"29228", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718399417, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-137-62\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-27T11:23:37Z\",\"renewTime\":\"2020-02-27T11:51:20Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-137-62 became leader'\nI0227 11:51:20.914185       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0227 11:51:20.920236       1 master.go:51] Initializing SDN master\nI0227 11:51:20.976891       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 27 12:03:36.310 E ns/openshift-machine-config-operator pod/machine-config-daemon-n7n6f node/ip-10-0-137-62.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:03:36.342 E ns/openshift-machine-config-operator pod/machine-config-server-dlwl7 node/ip-10-0-137-62.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 12:00:52.501639       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-301-g27bac44b-dirty (27bac44b16d95ebec18855f42876e228fc1446d3)\nI0227 12:00:52.503045       1 api.go:51] Launching server on :22624\nI0227 12:00:52.503099       1 api.go:51] Launching server on :22623\n
Feb 27 12:03:36.414 E ns/openshift-etcd pod/etcd-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 11:42:54.412559 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-137-62.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-137-62.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:42:54.413275 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 11:42:54.413649 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-137-62.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-137-62.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:42:54.416672 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/27 11:42:54 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.137.62:9978: connect: connection refused". Reconnecting...\n
Feb 27 12:03:36.472 E ns/openshift-sdn pod/ovs-fh57z node/ip-10-0-137-62.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error):  s (4 deletes)\n2020-02-27T12:01:08.583Z|00123|bridge|INFO|bridge br0: deleted interface vethd7abbbfe on port 59\n2020-02-27T12:01:08.640Z|00124|connmgr|INFO|br0<->unix#484: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:08.720Z|00125|connmgr|INFO|br0<->unix#487: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:01:08.755Z|00126|bridge|INFO|bridge br0: deleted interface veth3cbe8598 on port 62\n2020-02-27T12:01:08.650Z|00027|reconnect|WARN|unix#410: connection dropped (Broken pipe)\n2020-02-27T12:01:14.643Z|00127|bridge|INFO|bridge br0: added interface veth898142da on port 68\n2020-02-27T12:01:14.679Z|00128|connmgr|INFO|br0<->unix#493: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T12:01:14.741Z|00129|connmgr|INFO|br0<->unix#497: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:14.743Z|00130|connmgr|INFO|br0<->unix#499: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T12:01:14.821Z|00131|bridge|INFO|bridge br0: added interface veth81cd886a on port 69\n2020-02-27T12:01:14.855Z|00132|connmgr|INFO|br0<->unix#502: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T12:01:14.903Z|00133|connmgr|INFO|br0<->unix#506: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:14.906Z|00134|connmgr|INFO|br0<->unix#508: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T12:01:14.860Z|00028|jsonrpc|WARN|Dropped 3 log messages in last 7 seconds (most recently, 6 seconds ago) due to excessive rate\n2020-02-27T12:01:14.860Z|00029|jsonrpc|WARN|unix#425: receive error: Connection reset by peer\n2020-02-27T12:01:14.860Z|00030|reconnect|WARN|unix#425: connection dropped (Connection reset by peer)\n2020-02-27T12:01:17.344Z|00135|connmgr|INFO|br0<->unix#513: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:01:17.381Z|00136|connmgr|INFO|br0<->unix#516: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:01:17.405Z|00137|bridge|INFO|bridge br0: deleted interface veth898142da on port 68\ninfo: Saving flows ...\nTerminated\n2020-02-27T12:01:18Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Feb 27 12:03:36.503 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:00:39.046162       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:00:39.046507       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:00:47.132536       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:00:47.132885       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:00:49.057207       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:00:49.057577       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:00:57.141582       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:00:57.141905       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:00:59.070249       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:00:59.070841       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:01:07.175610       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:01:07.177741       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:01:09.080861       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:01:09.081205       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:01:17.214650       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:01:17.215631       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 12:03:36.503 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): din:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004c5add0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"node-role.kubernetes.io/master":""}, ServiceAccountName:"machine-config-server", DeprecatedServiceAccount:"machine-config-server", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc007b22540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/etcd", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0051a1708)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc004c5ae0c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:2, ObservedGeneration:2, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "machine-config-server": the object has been modified; please apply your changes to the latest version and try again\n
Feb 27 12:03:36.503 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): g: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=23882&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 12:01:53.716484       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=34843&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 12:01:53.717698       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=23880&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 12:01:53.719236       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=34814&timeout=7m17s&timeoutSeconds=437&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 12:01:53.720349       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=34858&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 12:01:53.722486       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=23880&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0227 12:01:54.082508       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0227 12:01:54.082565       1 policy_controller.go:94] leaderelection lost\n
Feb 27 12:03:36.503 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): sed\nE0227 11:45:45.782085       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:45:45.782229       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:45:45.782394       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0227 11:45:45.829123       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 11:45:45.829824       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 11:45:45.829876       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:45:45.833046       1 csrcontroller.go:121] key failed with : configmaps "csr-signer-ca" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager-operator"\nE0227 11:45:45.833094       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 11:45:45.833149       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:45:45.833176       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:45:45.833209       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0227 11:45:45.844771       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0227 12:01:18.253449       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 12:01:18.253758       1 builder.go:209] server exited\n
Feb 27 12:03:36.564 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): configmaps)\nE0227 11:45:46.475099       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0227 11:45:46.476723       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0227 11:45:46.476931       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0227 11:45:46.477093       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0227 11:45:46.477219       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0227 11:45:46.477340       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0227 11:45:46.477448       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0227 11:45:46.477561       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0227 11:45:46.477667       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 11:45:46.477754       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0227 11:45:46.477859       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0227 11:45:46.478010       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\n
Feb 27 12:03:36.581 E ns/openshift-cluster-node-tuning-operator pod/tuned-98542 node/ip-10-0-137-62.us-east-2.compute.internal container=tuned container exited with code 143 (Error): .daemon.application: dynamic tuning is globally disabled\n2020-02-27 11:48:21,747 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:21,751 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:21,752 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 11:48:21,766 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 11:48:22,002 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:22,003 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:22,031 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:22,032 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:22,049 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:22,051 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:22,056 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:22,916 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:22,943 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 12:01:06.525508     756 tuned.go:494] profile "ip-10-0-137-62.us-east-2.compute.internal" changed, tuned profile requested: openshift-node\nI0227 12:01:06.589349     756 tuned.go:494] profile "ip-10-0-137-62.us-east-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0227 12:01:06.893415     756 tuned.go:393] getting recommended profile...\nI0227 12:01:07.563926     756 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0227 12:01:18.257144     756 tuned.go:115] received signal: terminated\nI0227 12:01:18.257184     756 tuned.go:327] sending TERM to PID 885\n
Feb 27 12:03:36.614 E ns/openshift-multus pod/multus-admission-controller-h94j8 node/ip-10-0-137-62.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 12:03:36.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): om": the object has been modified; please apply your changes to the latest version and try again\nE0227 12:01:02.346613       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 12:01:02.421793       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 12:01:02.444464       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0227 12:01:06.517022       1 controller.go:606] quota admission added evaluator for: profiles.tuned.openshift.io\nI0227 12:01:06.517165       1 controller.go:606] quota admission added evaluator for: profiles.tuned.openshift.io\nI0227 12:01:13.960096       1 trace.go:116] Trace[1104781632]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.156.239 (started: 2020-02-27 12:01:13.360776536 +0000 UTC m=+932.588096380) (total time: 599.282391ms):\nTrace[1104781632]: [599.279348ms] [598.173693ms] Writing http response done count:1164\nI0227 12:01:18.239998       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-137-62.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0227 12:01:18.240288       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 27 12:03:36.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): ubeControllerManagerClient"\nI0227 11:56:35.030028       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0227 11:56:35.033046       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0227 12:01:18.477344       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:01:18.477849       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 12:01:18.478015       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 12:01:18.478039       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 12:01:18.478054       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0227 12:01:18.478095       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 12:01:18.478105       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 12:01:18.478146       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0227 12:01:18.478166       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 12:01:18.478182       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0227 12:01:18.478180       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 12:01:18.478201       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0227 12:01:18.478208       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Feb 27 12:03:36.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 11:45:42.088644       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 12:03:36.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-62.us-east-2.compute.internal node/ip-10-0-137-62.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:01:06.489351       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:01:06.489887       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:01:16.502375       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:01:16.502922       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 12:03:36.649 E ns/openshift-monitoring pod/node-exporter-4bgx8 node/ip-10-0-137-62.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:00:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:01:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:03:36.668 E ns/openshift-multus pod/multus-7xfzp node/ip-10-0-137-62.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:03:40.436 E ns/openshift-monitoring pod/node-exporter-4bgx8 node/ip-10-0-137-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:03:41.566 E ns/openshift-multus pod/multus-7xfzp node/ip-10-0-137-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:03:45.851 E ns/openshift-multus pod/multus-7xfzp node/ip-10-0-137-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:03:46.870 E ns/openshift-machine-config-operator pod/machine-config-daemon-n7n6f node/ip-10-0-137-62.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:03:49.413 E ns/openshift-kube-storage-version-migrator pod/migrator-6585599bf7-js6bt node/ip-10-0-141-90.us-east-2.compute.internal container=migrator container exited with code 2 (Error): 
Feb 27 12:03:49.472 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-5c86bc66fc-ggnvx node/ip-10-0-141-90.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 27 12:03:49.519 E ns/openshift-monitoring pod/kube-state-metrics-6d5d6786c-8mdk9 node/ip-10-0-141-90.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 27 12:03:49.629 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-90.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:48:49.174Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:48:49.178Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:48:49.179Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:48:49.179Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:48:49.180Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:48:49.180Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 12:03:49.629 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-90.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 11:48:52 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 12:03:49.629 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-90.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 11:48:53 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:48:53 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:53 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:48:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:48:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:48:53 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:48:53 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:48:53.154045       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 11:52:34 oauthproxy.go:774: basicauth: 10.131.0.31:57060 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 11:57:04 oauthproxy.go:774: basicauth: 10.131.0.31:33934 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 12:01:34 oauthproxy.go:774: basicauth: 10.131.0.31:39224 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 12:03:49.629 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-90.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T11:48:52.216302706Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T11:48:52.218594006Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T11:48:57.363162953Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T11:48:57.363289695Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 12:03:50.415 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-90.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 11:48:27 Watching directory: "/etc/alertmanager/config"\n
Feb 27 12:03:50.415 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-90.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 11:48:27 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:48:27 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:27 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 11:48:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 11:48:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:48:27 http.go:107: HTTPS: listening on [::]:9095\nI0227 11:48:27.648311       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 12:03:50.484 E ns/openshift-monitoring pod/thanos-querier-76f44d48c9-d8cmv node/ip-10-0-141-90.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 11:48:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:48:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:48:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:48:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:48:05 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:48:05 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:48:05.810968       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 12:03:56.141 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 27 12:04:02.613 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-75.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T12:03:58.390Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T12:03:58.394Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T12:03:58.394Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T12:03:58.395Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T12:03:58.395Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T12:03:58.395Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T12:03:58.396Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T12:03:58.396Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T12:03:58.396Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T12:03:58.396Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 12:04:06.958 E ns/openshift-console-operator pod/console-operator-9f86b8fcb-bf7g2 node/ip-10-0-156-239.us-east-2.compute.internal container=console-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:04:10.630 E ns/openshift-marketplace pod/marketplace-operator-6d75d87d7c-7cfb6 node/ip-10-0-156-239.us-east-2.compute.internal container=marketplace-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:04:10.659 E ns/openshift-monitoring pod/thanos-querier-76f44d48c9-n6kdv node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/27 11:48:26 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:48:26 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:26 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:26 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:48:26 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:26 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/27 11:48:26 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:48:26 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:48:26 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:48:26.141747       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 27 12:04:11.670 E ns/openshift-etcd-operator pod/etcd-operator-fc7868687-fbbpd node/ip-10-0-156-239.us-east-2.compute.internal container=operator container exited with code 255 (Error): 144] Shutting down ScriptControllerController\nI0227 12:04:10.088438       1 base_controller.go:74] Shutting down NodeController ...\nI0227 12:04:10.088464       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 12:04:10.088489       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 12:04:10.088559       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0227 12:04:10.088598       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 12:04:10.088625       1 base_controller.go:74] Shutting down PruneController ...\nI0227 12:04:10.088662       1 base_controller.go:49] Shutting down worker of  controller ...\nI0227 12:04:10.088673       1 base_controller.go:39] All  workers have been terminated\nI0227 12:04:10.088687       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nI0227 12:04:10.088707       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0227 12:04:10.088718       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0227 12:04:10.088765       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0227 12:04:10.088785       1 base_controller.go:39] All NodeController workers have been terminated\nI0227 12:04:10.088803       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0227 12:04:10.088813       1 base_controller.go:39] All RevisionController workers have been terminated\nI0227 12:04:10.088830       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0227 12:04:10.088838       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0227 12:04:10.088860       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0227 12:04:10.088868       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nF0227 12:04:10.088951       1 builder.go:243] stopped\n
Feb 27 12:05:45.067 E ns/openshift-marketplace pod/redhat-marketplace-7b48d555b-jd6jf node/ip-10-0-155-97.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 27 12:06:26.139 E ns/openshift-cluster-node-tuning-operator pod/tuned-lp428 node/ip-10-0-141-90.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ommended profile (openshift-node)\nI0227 11:48:16.631714    1995 tuned.go:286] starting tuned...\n2020-02-27 11:48:16,743 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 11:48:16,750 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:16,750 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:16,751 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 11:48:16,752 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 11:48:16,784 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:16,784 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:16,795 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:16,796 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:16,799 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:16,800 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:16,802 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:16,921 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:16,930 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0227 12:04:29.312592    1995 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:04:29.312608    1995 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0227 12:04:29.324269    1995 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: watch of *v1.Profile ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:594: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 27 12:06:26.181 E ns/openshift-monitoring pod/node-exporter-svhj9 node/ip-10-0-141-90.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:03:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:06:26.326 E ns/openshift-sdn pod/ovs-j7b82 node/ip-10-0-141-90.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:03:49.322Z|00093|bridge|INFO|bridge br0: deleted interface vethc766352c on port 30\n2020-02-27T12:03:49.369Z|00094|connmgr|INFO|br0<->unix#590: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:03:49.406Z|00095|connmgr|INFO|br0<->unix#593: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:03:49.421Z|00009|jsonrpc|WARN|unix#516: receive error: Connection reset by peer\n2020-02-27T12:03:49.421Z|00010|reconnect|WARN|unix#516: connection dropped (Connection reset by peer)\n2020-02-27T12:03:49.434Z|00096|bridge|INFO|bridge br0: deleted interface vethba90d43b on port 28\n2020-02-27T12:03:49.478Z|00097|connmgr|INFO|br0<->unix#596: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:03:49.568Z|00098|connmgr|INFO|br0<->unix#599: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:03:49.603Z|00099|bridge|INFO|bridge br0: deleted interface veth3ec7bbda on port 27\n2020-02-27T12:03:49.680Z|00100|connmgr|INFO|br0<->unix#602: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:03:49.712Z|00101|connmgr|INFO|br0<->unix#605: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:03:49.739Z|00102|bridge|INFO|bridge br0: deleted interface vethf400fbaf on port 36\n2020-02-27T12:03:49.782Z|00103|connmgr|INFO|br0<->unix#608: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:03:49.824Z|00104|connmgr|INFO|br0<->unix#611: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:03:49.852Z|00105|bridge|INFO|bridge br0: deleted interface veth9a49939c on port 32\n2020-02-27T12:04:36.990Z|00011|jsonrpc|WARN|unix#567: receive error: Connection reset by peer\n2020-02-27T12:04:36.990Z|00012|reconnect|WARN|unix#567: connection dropped (Connection reset by peer)\n2020-02-27T12:04:36.952Z|00106|connmgr|INFO|br0<->unix#648: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:36.979Z|00107|connmgr|INFO|br0<->unix#651: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:04:37.000Z|00108|bridge|INFO|bridge br0: deleted interface veth189a7f18 on port 23\ninfo: Saving flows ...\nTerminated\n
Feb 27 12:06:26.384 E ns/openshift-multus pod/multus-qstkb node/ip-10-0-141-90.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:06:26.498 E ns/openshift-machine-config-operator pod/machine-config-daemon-j9wvl node/ip-10-0-141-90.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:06:29.253 E ns/openshift-multus pod/multus-qstkb node/ip-10-0-141-90.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:06:35.268 E ns/openshift-machine-config-operator pod/machine-config-daemon-j9wvl node/ip-10-0-141-90.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:06:43.700 E ns/openshift-etcd pod/etcd-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 11:41:48.237016 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-239.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-239.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:41:48.237814 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 11:41:48.238236 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-239.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-239.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/02/27 11:41:48 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.239:9978: connect: connection refused". Reconnecting...\n2020-02-27 11:41:48.240416 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 27 12:06:43.770 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=scheduler container exited with code 2 (Error):   1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0227 11:47:43.094199       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0227 11:47:43.094321       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0227 11:47:43.094402       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0227 11:47:43.094463       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0227 11:47:43.094522       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0227 11:47:43.094586       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0227 11:47:43.094650       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0227 11:47:43.094721       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0227 11:47:43.094790       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0227 11:47:43.136239       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:47:43.136384       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Feb 27 12:06:43.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error):  cacher.go:782] cacher (*core.Endpoints): 1 objects queued in incoming channel.\nW0227 12:04:00.932864       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.137.62:2379: i/o timeout". Reconnecting...\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:04:05 httputil: ReverseProxy read error during body copy: unexpected EOF\nW0227 12:04:05.348590       1 reflector.go:326] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1603; INTERNAL_ERROR") has prevented the request from succeeding\nI0227 12:04:29.043242       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-156-239.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0227 12:04:29.043654       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 27 12:06:43.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): ationController - "LocalhostRecoveryServing"\nI0227 12:02:22.813690       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "KubeControllerManagerClient"\nI0227 12:02:22.813696       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "KubeControllerManagerClient"\nI0227 12:02:22.813702       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "KubeControllerManagerClient"\nI0227 12:04:29.110310       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:04:29.110844       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0227 12:04:29.110868       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 12:04:29.110884       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 12:04:29.110902       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 12:04:29.110917       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 12:04:29.110933       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 12:04:29.110944       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 12:04:29.110957       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 12:04:29.110970       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0227 12:04:29.110984       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0227 12:04:29.110994       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0227 12:04:29.111003       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Feb 27 12:06:43.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 11:47:37.942828       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 12:06:43.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:04:12.904678       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:12.904978       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:04:22.913358       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:22.913695       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 12:06:43.837 E ns/openshift-controller-manager pod/controller-manager-9hb9d node/ip-10-0-156-239.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): I0227 11:48:23.762257       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0227 11:48:23.765441       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-z6y52xgr/stable@sha256:2cbab524497dd001c11e8ccba8a5bb3e9d764b97f35d725ebfbd3ea88b56b544"\nI0227 11:48:23.765483       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-z6y52xgr/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0227 11:48:23.765639       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0227 11:48:23.766226       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Feb 27 12:06:43.867 E ns/openshift-cluster-node-tuning-operator pod/tuned-87k8z node/ip-10-0-156-239.us-east-2.compute.internal container=tuned container exited with code 143 (Error): emon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:56,311 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:56,312 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 11:48:56,313 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 11:48:56,349 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:56,349 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:56,360 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:56,361 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:56,365 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:56,366 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:56,368 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:56,511 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:56,520 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 12:01:06.530486     621 tuned.go:494] profile "ip-10-0-156-239.us-east-2.compute.internal" changed, tuned profile requested: openshift-node\nI0227 12:01:06.609565     621 tuned.go:494] profile "ip-10-0-156-239.us-east-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0227 12:01:07.064392     621 tuned.go:393] getting recommended profile...\nI0227 12:01:07.834154     621 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0227 12:01:18.515253     621 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:01:18.515356     621 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 27 12:06:43.894 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0227 11:45:00.694381       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0227 11:45:00.696404       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0227 11:45:00.696477       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0227 11:47:35.812871       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Feb 27 12:06:43.894 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:03:51.387091       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:03:51.387469       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:03:55.916682       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:03:55.917037       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:01.397308       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:01.398045       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:05.927641       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:05.927983       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:11.409104       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:11.409605       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:15.938909       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:15.939362       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:21.420579       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:21.420988       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:04:25.951527       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:04:25.951977       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 12:06:43.894 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error):  the latest version and try again\nI0227 12:04:18.277293       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: b037ab32-1197-470b-88a0-4d4934d074e1]\nI0227 12:04:18.315787       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 8d1993bd-86b7-4635-b0ea-44b5f73d1d60]\nI0227 12:04:18.345959       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-network-operator, name: cluster-network-operator, uid: 8e5241af-adcb-4954-b77a-b6c093a33ed4]\nI0227 12:04:18.413154       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 8d1993bd-86b7-4635-b0ea-44b5f73d1d60] with propagation policy Background\nI0227 12:04:18.413799       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-network-operator, name: cluster-network-operator, uid: 8e5241af-adcb-4954-b77a-b6c093a33ed4] with propagation policy Background\nI0227 12:04:18.414000       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: b037ab32-1197-470b-88a0-4d4934d074e1] with propagation policy Background\nI0227 12:04:21.461041       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-5d7667544d, need 3, creating 1\nI0227 12:04:21.470234       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-5d7667544d", UID:"b26b79cb-fac1-487b-9f5f-deceb30c6cd5", APIVersion:"apps/v1", ResourceVersion:"37709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-5d7667544d-gdvxf\nI0227 12:04:22.196441       1 garbagecollector.go:404] processing item [v1/clusteroperator, namespace: , name: storage, uid: b03d256c-54ba-4958-bbd5-3449fcb1383a]\n
Feb 27 12:06:43.894 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-239.us-east-2.compute.internal node/ip-10-0-156-239.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error):  11:47:36.577340       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23520&timeout=7m39s&timeoutSeconds=459&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 11:47:36.577586       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23520&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0227 11:47:43.157973       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0227 11:47:43.158033       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0227 12:01:21.170038       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0227 12:01:21.172256       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"e4e08554-5c69-4009-89c4-926c52b99cc3", APIVersion:"v1", ResourceVersion:"34905", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 40ffb01f-7eec-439f-bcce-6798933db9a6 became leader\nI0227 12:01:21.183621       1 csrcontroller.go:81] Starting CSR controller\nI0227 12:01:21.183646       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0227 12:01:21.385471       1 shared_informer.go:204] Caches are synced for CSRController \nI0227 12:04:29.290729       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:04:29.291102       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0227 12:04:29.292333       1 builder.go:209] server exited\n
Feb 27 12:06:43.940 E ns/openshift-multus pod/multus-admission-controller-mtjzg node/ip-10-0-156-239.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 12:06:43.958 E ns/openshift-sdn pod/sdn-controller-8b2lh node/ip-10-0-156-239.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 11:51:48.947329       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0227 12:02:21.177171       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"2d0059e4-0b41-4cb0-8d2f-b01f85837109", ResourceVersion:"35520", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718399417, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-156-239\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-27T12:02:21Z\",\"renewTime\":\"2020-02-27T12:02:21Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-156-239 became leader'\nI0227 12:02:21.177375       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0227 12:02:21.182849       1 master.go:51] Initializing SDN master\nI0227 12:02:21.208043       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 27 12:06:43.977 E ns/openshift-multus pod/multus-b4npn node/ip-10-0-156-239.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:06:43.999 E ns/openshift-sdn pod/ovs-6hbth node/ip-10-0-156-239.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): -27T12:04:10.651Z|00210|bridge|INFO|bridge br0: deleted interface veth27b8724b on port 85\n2020-02-27T12:04:10.692Z|00211|connmgr|INFO|br0<->unix#782: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:10.752Z|00212|connmgr|INFO|br0<->unix#785: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:04:10.799Z|00213|bridge|INFO|bridge br0: deleted interface veth03daf79f on port 79\n2020-02-27T12:04:10.862Z|00214|connmgr|INFO|br0<->unix#788: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:10.910Z|00215|connmgr|INFO|br0<->unix#793: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:04:10.941Z|00216|bridge|INFO|bridge br0: deleted interface veth0b60d715 on port 87\n2020-02-27T12:04:10.993Z|00217|connmgr|INFO|br0<->unix#797: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:11.047Z|00218|connmgr|INFO|br0<->unix#800: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:04:11.073Z|00219|bridge|INFO|bridge br0: deleted interface veth9b8ac01e on port 70\n2020-02-27T12:04:11.172Z|00220|connmgr|INFO|br0<->unix#803: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:11.227Z|00221|connmgr|INFO|br0<->unix#806: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:04:11.255Z|00222|bridge|INFO|bridge br0: deleted interface veth27ce8cf3 on port 89\n2020-02-27T12:04:25.284Z|00039|jsonrpc|WARN|Dropped 4 log messages in last 17 seconds (most recently, 15 seconds ago) due to excessive rate\n2020-02-27T12:04:25.284Z|00040|jsonrpc|WARN|unix#685: receive error: Connection reset by peer\n2020-02-27T12:04:25.285Z|00041|reconnect|WARN|unix#685: connection dropped (Connection reset by peer)\n2020-02-27T12:04:25.235Z|00223|bridge|INFO|bridge br0: added interface vethae5eef33 on port 96\n2020-02-27T12:04:25.269Z|00224|connmgr|INFO|br0<->unix#818: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T12:04:25.317Z|00225|connmgr|INFO|br0<->unix#822: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:04:25.321Z|00226|connmgr|INFO|br0<->unix#824: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\ninfo: Saving flows ...\nTerminated\n
Feb 27 12:06:44.045 E ns/openshift-machine-config-operator pod/machine-config-server-2grgb node/ip-10-0-156-239.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 12:01:22.374818       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-301-g27bac44b-dirty (27bac44b16d95ebec18855f42876e228fc1446d3)\nI0227 12:01:22.380223       1 api.go:51] Launching server on :22624\nI0227 12:01:22.380400       1 api.go:51] Launching server on :22623\n
Feb 27 12:06:44.073 E ns/openshift-monitoring pod/node-exporter-t7zlf node/ip-10-0-156-239.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:03:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:03:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:03:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:04:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:06:44.148 E ns/openshift-machine-config-operator pod/machine-config-daemon-xjst8 node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:06:44.500 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T11:48:20.417Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T11:48:20.424Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T11:48:20.425Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T11:48:20.426Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T11:48:20.427Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T11:48:20.427Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 12:06:44.500 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/27 11:48:24 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 27 12:06:44.500 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/27 11:48:27 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:48:27 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 11:48:27 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 11:48:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/27 11:48:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 11:48:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/27 11:48:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/27 11:48:27 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/27 11:48:27 http.go:107: HTTPS: listening on [::]:9091\nI0227 11:48:27.308043       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 12:03:58 oauthproxy.go:774: basicauth: 10.129.2.13:57686 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/27 12:06:09 oauthproxy.go:774: basicauth: 10.129.0.62:44222 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 27 12:06:44.500 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-27T11:48:22.45830273Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-27T11:48:22.460860315Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-27T11:48:27.460272643Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-27T11:48:32.647664724Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-27T11:48:32.64773841Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 27 12:06:44.521 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:06:44.521 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-97.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:06:44.594 E ns/openshift-marketplace pod/redhat-operators-84f77b7f46-zdgh6 node/ip-10-0-155-97.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 27 12:06:44.608 E ns/openshift-monitoring pod/prometheus-adapter-756546bf6b-df25l node/ip-10-0-155-97.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0227 11:48:06.217220       1 adapter.go:93] successfully using in-cluster auth\nI0227 11:48:07.083243       1 secure_serving.go:116] Serving securely on [::]:6443\nW0227 12:01:18.628661       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received\nE0227 12:04:29.333125       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Node: Get https://172.30.0.1:443/api/v1/nodes?resourceVersion=37684&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 12:04:29.338089       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Pod: Get https://172.30.0.1:443/api/v1/pods?resourceVersion=37997&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 12:04:30.340215       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Pod: Get https://172.30.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 27 12:06:45.583 E ns/openshift-marketplace pod/community-operators-7c769955cb-7wzdf node/ip-10-0-155-97.us-east-2.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:06:45.620 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-c57dcc4f-b9vjq node/ip-10-0-155-97.us-east-2.compute.internal container=operator container exited with code 255 (Error):  pods")\nI0227 12:03:48.008410       1 operator.go:147] Finished syncing operator at 134.91471ms\nI0227 12:03:59.297997       1 operator.go:145] Starting syncing operator at 2020-02-27 12:03:59.297981266 +0000 UTC m=+974.631155510\nI0227 12:03:59.337072       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-27T11:30:17Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-27T12:03:59Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-27T12:03:59Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-27T11:30:20Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0227 12:03:59.337609       1 operator.go:147] Finished syncing operator at 39.619228ms\nI0227 12:03:59.337649       1 operator.go:145] Starting syncing operator at 2020-02-27 12:03:59.337640664 +0000 UTC m=+974.670814937\nI0227 12:03:59.352902       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"7147020c-796f-4409-851a-17bcd05f6667", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0227 12:03:59.370127       1 operator.go:147] Finished syncing operator at 32.479754ms\nI0227 12:06:42.984088       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:06:42.984730       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0227 12:06:42.984770       1 logging_controller.go:93] Shutting down LogLevelController\nI0227 12:06:42.984784       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0227 12:06:42.984866       1 builder.go:243] stopped\n
Feb 27 12:06:45.643 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-155-97.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/27 12:01:14 Watching directory: "/etc/alertmanager/config"\n
Feb 27 12:06:45.643 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-155-97.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/27 12:01:14 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 12:01:14 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/27 12:01:14 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/27 12:01:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/27 12:01:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/27 12:01:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/27 12:01:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0227 12:01:14.678496       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/27 12:01:14 http.go:107: HTTPS: listening on [::]:9095\nE0227 12:04:36.696460       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/02/27 12:04:36 oauthproxy.go:782: requestauth: 10.129.2.16:58896 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 12:04:45.356397       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/02/27 12:04:45 oauthproxy.go:782: requestauth: 10.128.2.32:60442 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 27 12:06:47.042 E ns/openshift-monitoring pod/node-exporter-t7zlf node/ip-10-0-156-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:06:47.315 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady::StaticPods_Error: StaticPodsDegraded: nodes/ip-10-0-156-239.us-east-2.compute.internal pods/etcd-ip-10-0-156-239.us-east-2.compute.internal container="etcd" is not ready\nStaticPodsDegraded: nodes/ip-10-0-156-239.us-east-2.compute.internal pods/etcd-ip-10-0-156-239.us-east-2.compute.internal container="etcd" is terminated: "Completed" - ""\nStaticPodsDegraded: nodes/ip-10-0-156-239.us-east-2.compute.internal pods/etcd-ip-10-0-156-239.us-east-2.compute.internal container="etcd-metrics" is not ready\nStaticPodsDegraded: nodes/ip-10-0-156-239.us-east-2.compute.internal pods/etcd-ip-10-0-156-239.us-east-2.compute.internal container="etcd-metrics" is terminated: "Error" - "2020-02-27 11:41:48.237016 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-239.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-239.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:41:48.237814 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 11:41:48.238236 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-239.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-239.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/02/27 11:41:48 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.156.239:9978: connect: connection refused\". Reconnecting...\n2020-02-27 11:41:48.240416 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n"\nNodeControllerDegraded: The master nodes not ready: node "ip-10-0-156-239.us-east-2.compute.internal" not ready since 2020-02-27 12:06:43 +0000 UTC because KubeletNotReady ([PLEG is not healthy: pleg has yet to be successful, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network])\nEtcdMembersDegraded: ip-10-0-156-239.us-east-2.compute.internal members are unhealthy,  members are unknown
Feb 27 12:06:49.492 E ns/openshift-multus pod/multus-b4npn node/ip-10-0-156-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:06:51.753 E ns/openshift-multus pod/multus-b4npn node/ip-10-0-156-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:06:51.763 E ns/openshift-sdn pod/sdn-mk7tz node/ip-10-0-156-239.us-east-2.compute.internal container=sdn container exited with code 255 (Error): I0227 12:06:50.559822    4138 cmd.go:123] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0227 12:06:50.566114    4138 feature_gate.go:243] feature gates: &{map[]}\nI0227 12:06:50.566172    4138 cmd.go:227] Watching config file /config/kube-proxy-config.yaml for changes\nI0227 12:06:50.566216    4138 cmd.go:227] Watching config file /config/..2020_02_27_11_51_07.464051222/kube-proxy-config.yaml for changes\nI0227 12:06:50.689145    4138 node.go:147] Initializing SDN node of type "redhat/openshift-ovs-networkpolicy" with configured hostname "ip-10-0-156-239.us-east-2.compute.internal" (IP "10.0.156.239")\nI0227 12:06:50.707264    4138 cmd.go:160] Starting node networking (unknown)\nI0227 12:06:50.707349    4138 node.go:385] Starting openshift-sdn network plugin\nI0227 12:06:51.138601    4138 sdn_controller.go:139] [SDN setup] full SDN setup required (local subnet gateway CIDR not found)\nI0227 12:06:51.496483    4138 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0: failed to connect to socket (Broken pipe)\nF0227 12:06:51.496589    4138 cmd.go:111] Failed to start sdn: node SDN setup failed: exit status 1\n
Feb 27 12:07:03.617 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-90.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-27T12:07:00.812Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-27T12:07:00.822Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-27T12:07:00.823Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-27T12:07:00.824Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-27T12:07:00.825Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-27T12:07:00.824Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-27T12:07:00.825Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-27T12:07:00.826Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-27T12:07:00.826Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-27
Feb 27 12:07:07.858 E ns/openshift-machine-config-operator pod/machine-config-daemon-xjst8 node/ip-10-0-156-239.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:07:09.114 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-5c86bc66fc-gzsjf node/ip-10-0-131-75.us-east-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:07:24.349 E ns/openshift-insights pod/insights-operator-69bd6d59bc-pfbn9 node/ip-10-0-131-12.us-east-2.compute.internal container=operator container exited with code 2 (Error): pod/openshift-apiserver/apiserver-7bb49cb8f9-5czgg with fingerprint=\nI0227 12:06:09.973061       1 diskrecorder.go:63] Recording events/openshift-apiserver with fingerprint=\nI0227 12:06:09.983823       1 diskrecorder.go:63] Recording config/node/ip-10-0-141-90.us-east-2.compute.internal with fingerprint=\nI0227 12:06:09.984549       1 diskrecorder.go:63] Recording config/node/ip-10-0-156-239.us-east-2.compute.internal with fingerprint=\nI0227 12:06:09.987483       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0227 12:06:09.987598       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0227 12:06:09.990735       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0227 12:06:09.993180       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0227 12:06:09.995977       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0227 12:06:09.999025       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0227 12:06:10.001659       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0227 12:06:10.008385       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0227 12:06:10.011122       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0227 12:06:10.017570       1 diskrecorder.go:170] Writing 47 records to /var/lib/insights-operator/insights-2020-02-27-120610.tar.gz\nI0227 12:06:10.022911       1 diskrecorder.go:134] Wrote 47 records to disk in 5ms\nI0227 12:06:10.022941       1 periodic.go:151] Periodic gather config completed in 147ms\nI0227 12:06:18.443870       1 httplog.go:90] GET /metrics: (6.291486ms) 200 [Prometheus/2.15.2 10.129.2.16:52884]\nI0227 12:06:29.375392       1 httplog.go:90] GET /metrics: (20.861235ms) 200 [Prometheus/2.15.2 10.128.2.32:35606]\nI0227 12:06:48.444275       1 httplog.go:90] GET /metrics: (6.786096ms) 200 [Prometheus/2.15.2 10.129.2.16:52884]\nI0227 12:07:18.444055       1 httplog.go:90] GET /metrics: (6.446436ms) 200 [Prometheus/2.15.2 10.129.2.16:52884]\n
Feb 27 12:07:26.425 E ns/openshift-machine-api pod/machine-api-controllers-6874f9566-5wd5h node/ip-10-0-131-12.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 27 12:07:27.508 E ns/openshift-machine-api pod/machine-api-operator-84468977c5-8h7n7 node/ip-10-0-131-12.us-east-2.compute.internal container=machine-api-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:07:27.508 E ns/openshift-machine-api pod/machine-api-operator-84468977c5-8h7n7 node/ip-10-0-131-12.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:07:29.686 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-8469b87b79-4q8gc node/ip-10-0-131-12.us-east-2.compute.internal container=operator container exited with code 255 (Error): handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0227 12:06:58.634937       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0227 12:06:58.636205       1 httplog.go:90] GET /metrics: (5.552674ms) 200 [Prometheus/2.15.2 10.129.2.16:55654]\nI0227 12:07:00.923924       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 12:07:07.455555       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 1 items received\nI0227 12:07:09.679593       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0227 12:07:09.687906       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0227 12:07:10.937077       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 12:07:18.687657       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 1 items received\nI0227 12:07:20.960361       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0227 12:07:23.677386       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 1 items received\nI0227 12:07:27.227425       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0227 12:07:27.227543       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0227 12:07:27.229792       1 httplog.go:90] GET /metrics: (39.487116ms) 200 [Prometheus/2.15.2 10.131.0.23:60908]\nI0227 12:07:28.414995       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0227 12:07:28.415094       1 leaderelection.go:66] leaderelection lost\n
Feb 27 12:07:29.715 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-744f6654d5-zvw7z node/ip-10-0-131-12.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): egraded: The master nodes not ready: node \"ip-10-0-156-239.us-east-2.compute.internal\" not ready since 2020-02-27 12:06:43 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0227 12:07:28.629972       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:07:28.631663       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0227 12:07:28.631758       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0227 12:07:28.631824       1 base_controller.go:74] Shutting down RevisionController ...\nI0227 12:07:28.631875       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0227 12:07:28.631929       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0227 12:07:28.631980       1 base_controller.go:74] Shutting down InstallerController ...\nI0227 12:07:28.632030       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0227 12:07:28.632079       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0227 12:07:28.632158       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0227 12:07:28.632214       1 base_controller.go:74] Shutting down PruneController ...\nI0227 12:07:28.632264       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0227 12:07:28.632314       1 base_controller.go:74] Shutting down  ...\nI0227 12:07:28.632366       1 base_controller.go:74] Shutting down NodeController ...\nI0227 12:07:28.632415       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0227 12:07:28.632470       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0227 12:07:28.632514       1 targetconfigcontroller.go:613] Shutting down TargetConfigController\nF0227 12:07:28.632956       1 builder.go:243] stopped\n
Feb 27 12:07:29.764 E ns/openshift-operator-lifecycle-manager pod/packageserver-5955fbf4d4-cdn4x node/ip-10-0-131-12.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 27 12:07:45.137 E ns/openshift-monitoring pod/prometheus-operator-579bdcfb4c-2pskv node/ip-10-0-156-239.us-east-2.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-02-27T12:07:44.58020736Z caller=main.go:208 msg="Starting Prometheus Operator version '0.36.0'."\nts=2020-02-27T12:07:44.666490707Z caller=main.go:98 msg="Staring insecure server on :8080"\nts=2020-02-27T12:07:44.677449839Z caller=main.go:304 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 27 12:09:28.133 E ns/openshift-monitoring pod/node-exporter-4xc72 node/ip-10-0-155-97.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:06:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:06:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:06:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:09:28.164 E ns/openshift-cluster-node-tuning-operator pod/tuned-gwjmv node/ip-10-0-155-97.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 2.compute.internal" added, tuned profile requested: openshift-node\nI0227 11:48:49.545197    2663 tuned.go:521] tuned "rendered" added\nI0227 11:48:49.545280    2663 tuned.go:170] disabling system tuned...\nI0227 11:48:49.545285    2663 tuned.go:219] extracting tuned profiles\nI0227 11:48:49.549646    2663 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0227 11:48:50.530068    2663 tuned.go:393] getting recommended profile...\nI0227 11:48:50.648960    2663 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0227 11:48:50.649033    2663 tuned.go:286] starting tuned...\n2020-02-27 11:48:50,757 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 11:48:50,763 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:50,764 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:50,764 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-27 11:48:50,765 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-27 11:48:50,800 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:50,801 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:50,812 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:50,813 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:50,816 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:50,818 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:50,819 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:50,933 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:50,942 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Feb 27 12:09:28.178 E ns/openshift-sdn pod/ovs-mdjjp node/ip-10-0-155-97.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error):  2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:06:45.178Z|00149|connmgr|INFO|br0<->unix#867: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:06:45.203Z|00150|bridge|INFO|bridge br0: deleted interface veth01f67ca5 on port 45\n2020-02-27T12:06:45.241Z|00151|connmgr|INFO|br0<->unix#870: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:06:45.310Z|00152|connmgr|INFO|br0<->unix#873: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:06:45.354Z|00153|bridge|INFO|bridge br0: deleted interface vethc36f2390 on port 34\n2020-02-27T12:07:12.622Z|00154|connmgr|INFO|br0<->unix#895: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:12.654Z|00155|connmgr|INFO|br0<->unix#898: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:12.684Z|00156|bridge|INFO|bridge br0: deleted interface vethd96e4dec on port 39\n2020-02-27T12:07:12.792Z|00157|connmgr|INFO|br0<->unix#901: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:12.823Z|00158|connmgr|INFO|br0<->unix#904: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:12.844Z|00159|bridge|INFO|bridge br0: deleted interface veth873b9d6f on port 38\n2020-02-27T12:07:12.833Z|00013|jsonrpc|WARN|unix#790: receive error: Connection reset by peer\n2020-02-27T12:07:12.833Z|00014|reconnect|WARN|unix#790: connection dropped (Connection reset by peer)\n2020-02-27T12:07:28.155Z|00160|connmgr|INFO|br0<->unix#916: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:28.194Z|00161|connmgr|INFO|br0<->unix#919: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:28.219Z|00162|bridge|INFO|bridge br0: deleted interface vethde9a1bd6 on port 22\n2020-02-27T12:07:28.199Z|00015|jsonrpc|WARN|unix#803: receive error: Connection reset by peer\n2020-02-27T12:07:28.199Z|00016|reconnect|WARN|unix#803: connection dropped (Connection reset by peer)\n2020-02-27T12:07:34.778Z|00017|jsonrpc|WARN|unix#813: receive error: Connection reset by peer\n2020-02-27T12:07:34.778Z|00018|reconnect|WARN|unix#813: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\nTerminated\n
Feb 27 12:09:28.204 E ns/openshift-multus pod/multus-xjxmx node/ip-10-0-155-97.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:09:28.218 E ns/openshift-machine-config-operator pod/machine-config-daemon-k2zvs node/ip-10-0-155-97.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:09:31.088 E ns/openshift-multus pod/multus-xjxmx node/ip-10-0-155-97.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:09:35.976 E ns/openshift-machine-config-operator pod/machine-config-daemon-k2zvs node/ip-10-0-155-97.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:09:56.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): ID 2447; INTERNAL_ERROR") has prevented the request from succeeding\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/27 12:07:20 httputil: ReverseProxy read error during body copy: unexpected EOF\nE0227 12:07:28.866196       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 12:07:28.931535       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0227 12:07:28.955772       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0227 12:07:42.875229       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\nI0227 12:07:42.875221       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Feb 27 12:09:56.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0227 11:43:37.374542       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 27 12:09:56.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:07:30.186474       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:30.186824       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0227 12:07:40.194336       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:40.194832       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 27 12:09:56.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): 205641       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "LocalhostServing"\nI0227 12:07:42.889203       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:07:42.889489       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0227 12:07:42.889622       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0227 12:07:42.890380       1 cabundlesyncer.go:86] CA bundle controller shut down\nI0227 12:07:42.890041       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0227 12:07:42.890061       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0227 12:07:42.890073       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0227 12:07:42.890086       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0227 12:07:42.890113       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0227 12:07:42.890124       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0227 12:07:42.890133       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0227 12:07:42.890147       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0227 12:07:42.890284       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nE0227 12:07:43.011558       1 leaderelection.go:307] Failed to release lock: Put https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: read tcp [::1]:37322->[::1]:6443: read: connection reset by peer\nF0227 12:07:43.011652       1 leaderelection.go:67] leaderelection lost\n
Feb 27 12:09:56.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): : stream error: stream ID 419; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:04:05.350891       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 449; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:07:20.832366       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 551; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:07:20.832921       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 427; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:07:20.834574       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 547; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:07:20.834975       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 463; INTERNAL_ERROR") has prevented the request from succeeding\nW0227 12:07:20.835066       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 549; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 27 12:09:56.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:05.488215       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:05.488851       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:06.098558       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:06.099038       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:15.503264       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:15.503714       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:16.105684       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:16.106289       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:25.515995       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:25.516463       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:26.115961       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:26.116687       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:35.528309       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:35.528650       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0227 12:07:36.123734       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0227 12:07:36.124413       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 27 12:09:56.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): ackageserver": the object has been modified; please apply your changes to the latest version and try again\nI0227 12:07:31.599639       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: 09b2d39d-4aa6-4c16-8c12-798d28825c57]\nE0227 12:07:31.733493       1 memcache.go:199] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nE0227 12:07:31.767430       1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0227 12:07:31.773876       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: 09b2d39d-4aa6-4c16-8c12-798d28825c57] with propagation policy Background\nI0227 12:07:37.825436       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-authentication/oauth-openshift-76b8d8c5f7, need 1, creating 1\nI0227 12:07:37.826724       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication", Name:"oauth-openshift", UID:"e730c761-81aa-4afe-8de7-88e5f7f7fec0", APIVersion:"apps/v1", ResourceVersion:"41307", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set oauth-openshift-76b8d8c5f7 to 1\nI0227 12:07:37.843405       1 deployment_controller.go:484] Error syncing deployment openshift-authentication/oauth-openshift: Operation cannot be fulfilled on deployments.apps "oauth-openshift": the object has been modified; please apply your changes to the latest version and try again\nI0227 12:07:37.865402       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-76b8d8c5f7", UID:"d97f7aee-f5ce-44dd-9d00-b1ca8ca00d94", APIVersion:"apps/v1", ResourceVersion:"41308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: oauth-openshift-76b8d8c5f7-tfl2b\n
Feb 27 12:09:56.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-27 11:12:24 +0000 UTC to 2020-02-28 11:12:24 +0000 UTC (now=2020-02-27 11:46:04.43441412 +0000 UTC))\nI0227 11:46:04.434654       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-466631808/tls.crt::/tmp/serving-cert-466631808/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1582803963" (2020-02-27 11:46:02 +0000 UTC to 2020-03-28 11:46:03 +0000 UTC (now=2020-02-27 11:46:04.434642989 +0000 UTC))\nI0227 11:46:04.434877       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582803964" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582803964" (2020-02-27 10:46:03 +0000 UTC to 2021-02-26 10:46:03 +0000 UTC (now=2020-02-27 11:46:04.434864327 +0000 UTC))\nI0227 12:05:35.394241       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0227 12:05:35.394847       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"e4e08554-5c69-4009-89c4-926c52b99cc3", APIVersion:"v1", ResourceVersion:"38728", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' e296c3fe-2a9e-48a8-aca3-5ed6dd7cfc3a became leader\nI0227 12:05:35.405834       1 csrcontroller.go:81] Starting CSR controller\nI0227 12:05:35.405857       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0227 12:05:35.507289       1 shared_informer.go:204] Caches are synced for CSRController \nI0227 12:07:42.846372       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0227 12:07:42.846755       1 csrcontroller.go:83] Shutting down CSR controller\nI0227 12:07:42.846775       1 csrcontroller.go:85] CSR controller shut down\nF0227 12:07:42.847004       1 builder.go:209] server exited\n
Feb 27 12:09:56.247 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): able: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 12:07:28.903901       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5d7667544d-6r9s9: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 12:07:29.091051       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-688b7d96fb-t5cmw is bound successfully on node "ip-10-0-156-239.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 12:07:31.506390       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5d7667544d-6r9s9: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 12:07:33.506801       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-7bb49cb8f9-fm6gm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 12:07:36.506519       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5d7667544d-6r9s9: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0227 12:07:37.877425       1 scheduler.go:751] pod openshift-authentication/oauth-openshift-76b8d8c5f7-tfl2b is bound successfully on node "ip-10-0-156-239.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0227 12:07:42.508137       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-7bb49cb8f9-fm6gm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 27 12:09:56.282 E ns/openshift-cluster-node-tuning-operator pod/tuned-9gfvg node/ip-10-0-131-12.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ended profile (openshift-control-plane)\nI0227 11:48:05.863696     732 tuned.go:286] starting tuned...\n2020-02-27 11:48:06,037 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-27 11:48:06,055 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-27 11:48:06,055 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-27 11:48:06,056 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-27 11:48:06,057 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-27 11:48:06,188 INFO     tuned.daemon.controller: starting controller\n2020-02-27 11:48:06,189 INFO     tuned.daemon.daemon: starting tuning\n2020-02-27 11:48:06,207 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-27 11:48:06,209 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-27 11:48:06,218 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-27 11:48:06,220 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-27 11:48:06,239 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-27 11:48:06,564 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-27 11:48:06,580 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0227 12:01:06.556522     732 tuned.go:494] profile "ip-10-0-131-12.us-east-2.compute.internal" changed, tuned profile requested: openshift-node\nI0227 12:01:06.638944     732 tuned.go:494] profile "ip-10-0-131-12.us-east-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0227 12:01:06.703335     732 tuned.go:393] getting recommended profile...\nI0227 12:01:07.082570     732 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Feb 27 12:09:56.296 E ns/openshift-controller-manager pod/controller-manager-mbdrl node/ip-10-0-131-12.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 35: Failed to watch *v1beta1.Ingress: Get https://172.30.0.1:443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=34891&timeout=7m24s&timeoutSeconds=444&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0227 12:07:43.041681       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.042029       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.051519       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.055609       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.055928       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.056244       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.056802       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.057182       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.057479       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0227 12:07:43.057760       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0227 12:07:43.067380       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://172.30.0.1:443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=40882&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0227 12:07:43.067796       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://172.30.0.1:443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=41091&timeout=9m49s&timeoutSeconds=589&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 27 12:09:56.313 E ns/openshift-monitoring pod/node-exporter-ff66f node/ip-10-0-131-12.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:06:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-27T12:07:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 27 12:09:56.337 E ns/openshift-sdn pod/sdn-controller-5hh5w node/ip-10-0-131-12.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0227 11:51:36.772447       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 27 12:09:56.351 E ns/openshift-sdn pod/ovs-4955w node/ip-10-0-131-12.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): Z|00164|bridge|INFO|bridge br0: deleted interface vetha4f7fcd1 on port 73\n2020-02-27T12:07:28.324Z|00165|connmgr|INFO|br0<->unix#919: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:28.409Z|00166|connmgr|INFO|br0<->unix#922: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:28.470Z|00167|bridge|INFO|bridge br0: deleted interface veth9c036097 on port 78\n2020-02-27T12:07:28.810Z|00168|connmgr|INFO|br0<->unix#925: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:28.861Z|00169|connmgr|INFO|br0<->unix#928: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:28.939Z|00170|bridge|INFO|bridge br0: deleted interface vethf4c9eeee on port 62\n2020-02-27T12:07:29.016Z|00171|connmgr|INFO|br0<->unix#931: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:29.087Z|00172|connmgr|INFO|br0<->unix#934: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:29.171Z|00173|bridge|INFO|bridge br0: deleted interface vethd9b1f4a2 on port 48\n2020-02-27T12:07:28.909Z|00025|jsonrpc|WARN|unix#784: send error: Broken pipe\n2020-02-27T12:07:28.909Z|00026|reconnect|WARN|unix#784: connection dropped (Broken pipe)\n2020-02-27T12:07:35.401Z|00174|bridge|INFO|bridge br0: added interface veth1966a85e on port 79\n2020-02-27T12:07:35.443Z|00175|connmgr|INFO|br0<->unix#940: 5 flow_mods in the last 0 s (5 adds)\n2020-02-27T12:07:35.502Z|00176|connmgr|INFO|br0<->unix#944: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-27T12:07:35.509Z|00177|connmgr|INFO|br0<->unix#946: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:35.463Z|00027|jsonrpc|WARN|unix#799: receive error: Connection reset by peer\n2020-02-27T12:07:35.463Z|00028|reconnect|WARN|unix#799: connection dropped (Connection reset by peer)\n2020-02-27T12:07:38.799Z|00178|connmgr|INFO|br0<->unix#955: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-27T12:07:38.836Z|00179|connmgr|INFO|br0<->unix#958: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-27T12:07:38.866Z|00180|bridge|INFO|bridge br0: deleted interface veth1966a85e on port 79\ninfo: Saving flows ...\n
Feb 27 12:09:56.394 E ns/openshift-multus pod/multus-w8rrb node/ip-10-0-131-12.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 27 12:09:56.466 E ns/openshift-machine-config-operator pod/machine-config-daemon-tbh62 node/ip-10-0-131-12.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 27 12:09:56.482 E ns/openshift-multus pod/multus-admission-controller-zsrxz node/ip-10-0-131-12.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 27 12:09:56.565 E ns/openshift-machine-config-operator pod/machine-config-server-jb26t node/ip-10-0-131-12.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0227 12:01:02.936939       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-301-g27bac44b-dirty (27bac44b16d95ebec18855f42876e228fc1446d3)\nI0227 12:01:02.938417       1 api.go:51] Launching server on :22624\nI0227 12:01:02.938520       1 api.go:51] Launching server on :22623\n
Feb 27 12:09:59.140 E ns/openshift-etcd pod/etcd-ip-10-0-131-12.us-east-2.compute.internal node/ip-10-0-131-12.us-east-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-27 11:42:18.957281 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-131-12.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-131-12.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:42:18.957940 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-27 11:42:18.958373 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-12.us-east-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-12.us-east-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-27 11:42:18.961300 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/27 11:42:18 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-z6y52xgr-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.131.12:9978: connect: connection refused". Reconnecting...\n
Feb 27 12:10:01.559 E ns/openshift-multus pod/multus-w8rrb node/ip-10-0-131-12.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:10:06.665 E ns/openshift-multus pod/multus-w8rrb node/ip-10-0-131-12.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 27 12:10:11.554 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 27 12:10:15.859 E ns/openshift-machine-config-operator pod/machine-config-daemon-tbh62 node/ip-10-0-131-12.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 27 12:11:19.211 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-5d7667544d-gdvxf node/ip-10-0-156-239.us-east-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated