Result: SUCCESS
Tests: 4 failed / 22 succeeded
Started: 2020-06-29 09:07
Elapsed: 1h28m
Work namespace: ci-op-m302gljz
Refs: release-4.4:515c49d1, 892:9571b6db
pod: e9006fd8-b9e7-11ea-86a5-0a580a8104ab
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 35m29s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 10s of 31m45s (1%):

Jun 29 10:15:17.501 E ns/e2e-k8s-service-lb-available-357 svc/service-test Service stopped responding to GET requests over new connections
Jun 29 10:15:18.501 E ns/e2e-k8s-service-lb-available-357 svc/service-test Service is not responding to GET requests over new connections
Jun 29 10:15:18.888 I ns/e2e-k8s-service-lb-available-357 svc/service-test Service started responding to GET requests over new connections
Jun 29 10:17:55.501 E ns/e2e-k8s-service-lb-available-357 svc/service-test Service stopped responding to GET requests on reused connections
Jun 29 10:17:56.501 E ns/e2e-k8s-service-lb-available-357 svc/service-test Service is not responding to GET requests on reused connections
Jun 29 10:17:56.554 I ns/e2e-k8s-service-lb-available-357 svc/service-test Service started responding to GET requests on reused connections
Jun 29 10:18:27.501 E ns/e2e-k8s-service-lb-available-357 svc/service-test Service stopped responding to GET requests over new connections
Jun 29 10:18:28.500 - 5s    E ns/e2e-k8s-service-lb-available-357 svc/service-test Service is not responding to GET requests over new connections
Jun 29 10:18:33.660 I ns/e2e-k8s-service-lb-available-357 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1593426343.xml
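The "over new connections" and "on reused connections" events above come from two parallel probes of the load-balancer service. Below is a minimal sketch of that kind of poller, not the openshift-tests implementation, and the target URL is a placeholder: one HTTP client disables keep-alives so every GET opens a fresh connection, the other keeps reusing an established one.

// poller.go: a sketch of probing a service over new vs. reused connections.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	target := "http://service-test.example.invalid/" // hypothetical LB endpoint

	// Fresh TCP connection per request ("new connections").
	newConnClient := &http.Client{
		Timeout:   1 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	// Default transport keeps connections alive ("reused connections").
	reusedConnClient := &http.Client{Timeout: 1 * time.Second}

	probe := func(name string, c *http.Client) {
		resp, err := c.Get(target)
		if err != nil {
			fmt.Printf("%s E Service is not responding to GET requests %s: %v\n",
				time.Now().Format(time.RFC3339), name, err)
			return
		}
		resp.Body.Close()
		fmt.Printf("%s I Service responded to GET requests %s: %d\n",
			time.Now().Format(time.RFC3339), name, resp.StatusCode)
	}

	for range time.Tick(1 * time.Second) {
		probe("over new connections", newConnClient)
		probe("on reused connections", reusedConnClient)
	}
}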



Cluster upgrade Kubernetes APIs remain available 34m58s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 34m58s (0%):

Jun 29 10:21:26.601 E kube-apiserver Kube API started failing: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 3.16.248.44:6443: connect: connection refused
Jun 29 10:21:27.576 - 1s    E kube-apiserver Kube API is not responding to GET requests
Jun 29 10:21:28.615 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1593426343.xml
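The event above records a roughly 2-second window in which GETs against /api/v1/namespaces/kube-system failed with "connection refused". A rough sketch of that style of availability probe follows; it is not the e2e monitor itself, and the API URL and bearer token are placeholders.

// kube-api-probe.go: a sketch of polling the kube-apiserver with a 15s
// timeout and reporting windows where it is unreachable.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	apiURL := "https://api.example.invalid:6443/api/v1/namespaces/kube-system?timeout=15s"
	token := "REPLACE_WITH_SA_TOKEN" // placeholder credential

	client := &http.Client{
		Timeout: 15 * time.Second,
		// Probe-style client; a real client would verify the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	var downSince time.Time
	for range time.Tick(1 * time.Second) {
		req, _ := http.NewRequest(http.MethodGet, apiURL, nil)
		req.Header.Set("Authorization", "Bearer "+token)

		resp, err := client.Do(req)
		now := time.Now()
		if err != nil {
			// e.g. "dial tcp ...:6443: connect: connection refused" as in the log
			if downSince.IsZero() {
				downSince = now
				fmt.Printf("%s E Kube API started failing: %v\n", now.Format(time.RFC3339), err)
			}
			continue
		}
		resp.Body.Close()
		if !downSince.IsZero() {
			fmt.Printf("%s I Kube API started responding again after %s\n",
				now.Format(time.RFC3339), now.Sub(downSince).Round(time.Second))
			downSince = time.Time{}
		}
	}
}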



Cluster upgrade OpenShift APIs remain available 34m58s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 34m58s (0%):

Jun 29 10:04:26.540 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jun 29 10:04:26.567 I openshift-apiserver OpenShift API started responding to GET requests
Jun 29 10:04:48.540 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jun 29 10:04:48.567 I openshift-apiserver OpenShift API started responding to GET requests
Jun 29 10:21:26.565 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 3.16.248.44:6443: connect: connection refused
Jun 29 10:21:27.539 E openshift-apiserver OpenShift API is not responding to GET requests
Jun 29 10:21:28.412 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1593426343.xml
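Each of the failures above is attributed to junit_upgrade_1593426343.xml. Below is a small sketch for listing the failed test cases from such a file with encoding/xml; it assumes a single <testsuite> root element and covers only the common junit attributes, so files wrapped in <testsuites> would need an extra outer type.

// junit-failures.go: a sketch that prints failed test cases from a junit file.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

type testSuite struct {
	XMLName   xml.Name   `xml:"testsuite"`
	TestCases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Time    string   `xml:"time,attr"`
	Failure *failure `xml:"failure"` // nil when the case passed
}

type failure struct {
	Message string `xml:",chardata"`
}

func main() {
	data, err := os.ReadFile("junit_upgrade_1593426343.xml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for _, tc := range suite.TestCases {
		if tc.Failure != nil {
			fmt.Printf("FAIL (%ss): %s\n", tc.Time, tc.Name)
		}
	}
}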



openshift-tests Monitor cluster while tests execute 35m31s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
202 error level events were detected during this test run:

Jun 29 09:50:14.289 E ns/openshift-monitoring pod/thanos-querier-5d476645d8-dtxsp node/ip-10-0-199-245.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/06/29 09:49:04 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:49:04 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:49:04 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:49:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:49:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:49:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:49:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:49:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 09:49:04 http.go:107: HTTPS: listening on [::]:9091\nI0629 09:49:04.393867       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:49:25 oauthproxy.go:774: basicauth: 10.128.0.4:40882 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 09:50:14.461 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/06/29 09:49:24 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jun 29 09:50:14.461 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/06/29 09:49:24 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/29 09:49:24 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:49:24 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:49:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:49:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:49:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/29 09:49:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:49:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0629 09:49:24.660999       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:49:24 http.go:107: HTTPS: listening on [::]:9091\n2020/06/29 09:50:05 oauthproxy.go:774: basicauth: 10.129.2.6:60548 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 09:50:14.461 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-06-29T09:49:23.988010238Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-06-29T09:49:23.988124762Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-06-29T09:49:23.990234798Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-06-29T09:49:29.139067899Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-06-29T09:49:29.258846971Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jun 29 09:50:24.438 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T09:50:22.582Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T09:50:22.588Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T09:50:22.589Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T09:50:22.590Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T09:50:22.590Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T09:50:22.590Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T09:50:22.591Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T09:50:22.591Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 09:53:34.543 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-5c4c9d459c" has successfully progressed.
Jun 29 09:54:04.647 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7b88655749-pv4dw node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): ller\nI0629 09:54:03.630356       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0629 09:54:03.630375       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0629 09:54:03.630394       1 state_controller.go:171] Shutting down EncryptionStateController\nI0629 09:54:03.630412       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0629 09:54:03.630444       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0629 09:54:03.630463       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0629 09:54:03.630488       1 base_controller.go:74] Shutting down RevisionController ...\nI0629 09:54:03.630509       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0629 09:54:03.630539       1 base_controller.go:74] Shutting down PruneController ...\nI0629 09:54:03.630558       1 base_controller.go:74] Shutting down NodeController ...\nI0629 09:54:03.630589       1 base_controller.go:74] Shutting down  ...\nI0629 09:54:03.630609       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0629 09:54:03.630637       1 base_controller.go:74] Shutting down InstallerController ...\nI0629 09:54:03.630654       1 certrotationtime_upgradeable.go:103] Shutting down CertRotationTimeUpgradeableController\nI0629 09:54:03.630685       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0629 09:54:03.630707       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0629 09:54:03.630723       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0629 09:54:03.630744       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0629 09:54:03.630780       1 base_controller.go:74] Shutting down  ...\nI0629 09:54:03.630797       1 termination_observer.go:154] Shutting down TerminationObserver\nI0629 09:54:03.630816       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0629 09:54:03.631268       1 builder.go:209] server exited\n
Jun 29 09:54:24.741 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7f68fb7454-pq5qf node/ip-10-0-216-157.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): o:84] Shutting down RemoveStaleConditions\nI0629 09:54:23.876270       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0629 09:54:23.876284       1 base_controller.go:39] All RevisionController workers have been terminated\nI0629 09:54:23.876367       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0629 09:54:23.876384       1 base_controller.go:39] All InstallerController workers have been terminated\nI0629 09:54:23.876408       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0629 09:54:23.876419       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0629 09:54:23.876448       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0629 09:54:23.876458       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0629 09:54:23.876486       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0629 09:54:23.876496       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0629 09:54:23.876517       1 base_controller.go:49] Shutting down worker of  controller ...\nI0629 09:54:23.876527       1 base_controller.go:39] All  workers have been terminated\nI0629 09:54:23.876549       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0629 09:54:23.876560       1 base_controller.go:39] All NodeController workers have been terminated\nI0629 09:54:23.876581       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0629 09:54:23.876592       1 base_controller.go:39] All PruneController workers have been terminated\nI0629 09:54:23.876612       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0629 09:54:23.876623       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nF0629 09:54:23.878543       1 builder.go:209] server exited\n
Jun 29 09:54:40.840 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-c85947dc5-9bpkn node/ip-10-0-216-157.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ances are unavailable"\nI0629 09:39:59.857509       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nE0629 09:43:01.461372       1 key_controller.go:383] key failed with: Get https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/secrets/encryption-config-0: unexpected EOF\nI0629 09:43:02.530568       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "" to "EncryptionKeyControllerDegraded: Get https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/secrets/encryption-config-0: unexpected EOF"\nI0629 09:43:09.464349       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "EncryptionKeyControllerDegraded: Get https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/secrets/encryption-config-0: unexpected EOF" to ""\nI0629 09:54:40.056566       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0629 09:54:40.056816       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0629 09:54:40.057745       1 builder.go:210] server exited\n
Jun 29 09:55:08.056 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:55:07.437588       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:55:07.442056       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:55:07.444651       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0629 09:55:07.444735       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0629 09:55:07.445328       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:55:12.154 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 09:55:10.463219       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:55:10.463229       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:55:10.463238       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:55:10.463248       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:55:10.463257       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:55:10.463267       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:55:10.463276       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:55:10.463286       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:55:10.463297       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:55:10.463308       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:55:10.463323       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:55:10.463334       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:55:10.463346       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:55:10.463356       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:55:10.463390       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 09:55:10.463554       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:55:10.463806       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:55:22.014 E clusteroperator/machine-config changed Degraded to True: MachineConfigDaemonFailed: Failed to resync 0.0.1-2020-06-29-090755 because: etcdserver: leader changed
Jun 29 09:55:24.212 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 09:55:23.212332       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:55:23.212337       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:55:23.212343       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:55:23.212348       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:55:23.212353       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:55:23.212359       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:55:23.212365       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:55:23.212370       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:55:23.212375       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:55:23.212382       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:55:23.212390       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:55:23.212406       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:55:23.212412       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:55:23.212419       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:55:23.212441       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 09:55:23.212580       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:55:23.212818       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:55:24.235 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:55:24.053581       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:55:24.056875       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:55:24.060386       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0629 09:55:24.060504       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0629 09:55:24.062711       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:55:49.476 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 09:55:49.048373       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:55:49.048386       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:55:49.048397       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:55:49.048409       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:55:49.048421       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:55:49.048433       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:55:49.048477       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:55:49.048488       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:55:49.048496       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:55:49.048507       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:55:49.048527       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:55:49.048541       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:55:49.048555       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:55:49.048569       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:55:49.048619       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 09:55:49.048947       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:55:49.049378       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:56:32.583 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:56:31.236008       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:56:31.238036       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:56:31.241508       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0629 09:56:31.242465       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:56:54.222 E ns/openshift-machine-api pod/machine-api-controllers-77db9fdd65-ljql2 node/ip-10-0-182-62.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jun 29 09:56:55.699 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:56:55.097819       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:56:55.099341       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:56:55.101058       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0629 09:56:55.101125       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0629 09:56:55.101770       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:57:28.009 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 09:57:26.260270       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:57:26.260281       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:57:26.260289       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:57:26.260327       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:57:26.260337       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:57:26.260346       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:57:26.260356       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:57:26.260365       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:57:26.260375       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:57:26.260385       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:57:26.260401       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:57:26.260413       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:57:26.260423       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:57:26.260436       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:57:26.260475       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 09:57:26.260707       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:57:26.261049       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:57:49.106 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 09:57:48.180635       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:57:48.180641       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:57:48.180646       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:57:48.180652       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:57:48.180659       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:57:48.180665       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:57:48.180671       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:57:48.180676       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:57:48.180681       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:57:48.180687       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:57:48.180696       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:57:48.180703       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:57:48.180710       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:57:48.180716       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:57:48.180766       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 09:57:48.181019       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:57:48.181263       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:58:04.625 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:58:03.372428       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:58:03.374609       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:58:03.375711       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0629 09:58:03.375823       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0629 09:58:03.377997       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:58:10.203 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 09:58:10.096631       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:58:10.096638       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:58:10.096643       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:58:10.096649       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:58:10.096654       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:58:10.096660       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:58:10.096666       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:58:10.096671       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:58:10.096676       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:58:10.096682       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:58:10.096690       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:58:10.096697       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:58:10.096704       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:58:10.096710       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:58:10.096760       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 09:58:10.096913       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:58:10.097147       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:58:27.717 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0629 09:58:27.244269       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0629 09:58:27.244855       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0629 09:58:27.246191       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0629 09:58:27.246270       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0629 09:58:27.246850       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 29 09:58:31.253 E ns/openshift-cluster-machine-approver pod/machine-approver-68c5f9746b-k9bht node/ip-10-0-216-157.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sed\nE0629 09:56:35.474471       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0629 09:56:36.475108       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0629 09:56:37.475950       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0629 09:56:38.476681       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0629 09:56:39.477344       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0629 09:56:44.511495       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\n
Jun 29 09:58:34.790 E ns/openshift-kube-storage-version-migrator pod/migrator-6c45f8bfcd-7sldc node/ip-10-0-179-214.us-east-2.compute.internal container=migrator container exited with code 2 (Error): I0629 09:56:18.525635       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jun 29 09:58:44.436 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error):  dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.235966       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/consoles?allowWatchBookmarks=true&resourceVersion=23826&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.236939       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/metal3.io/v1alpha1/provisionings?allowWatchBookmarks=true&resourceVersion=25089&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.238197       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: Get https://localhost:6443/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=26820&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.239854       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=22686&timeout=8m9s&timeoutSeconds=489&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.244130       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshotcontents?allowWatchBookmarks=true&resourceVersion=25036&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 09:58:44.251297       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0629 09:58:44.251391       1 controllermanager.go:291] leaderelection lost\n
Jun 29 09:58:45.448 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): xtension-apiserver-authentication&resourceVersion=25687&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.152375       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=21749&timeout=8m0s&timeoutSeconds=480&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.154630       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=27054&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.154891       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22715&timeout=9m8s&timeoutSeconds=548&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.156269       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=21745&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 09:58:44.157656       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=22686&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 09:58:44.972922       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0629 09:58:44.972974       1 server.go:257] leaderelection lost\n
Jun 29 09:58:49.830 E ns/openshift-monitoring pod/node-exporter-6sfwg node/ip-10-0-179-214.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -29T09:42:20Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-29T09:42:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 29 09:58:56.902 E ns/openshift-monitoring pod/openshift-state-metrics-85f45c7bcf-4x69r node/ip-10-0-179-214.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jun 29 09:58:56.945 E ns/openshift-monitoring pod/kube-state-metrics-d56846fcc-fz7jj node/ip-10-0-179-214.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jun 29 09:59:02.286 E ns/openshift-monitoring pod/prometheus-adapter-6787d7d8fb-szwrr node/ip-10-0-165-242.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0629 09:49:07.381270       1 adapter.go:93] successfully using in-cluster auth\nI0629 09:49:08.545073       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 29 09:59:06.279 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-165-242.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/06/29 09:50:19 Watching directory: "/etc/alertmanager/config"\n
Jun 29 09:59:06.279 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-165-242.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/06/29 09:50:20 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/29 09:50:20 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:50:20 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:50:20 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/29 09:50:20 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:50:20 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/29 09:50:20 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:50:20 http.go:107: HTTPS: listening on [::]:9095\nI0629 09:50:20.118386       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jun 29 09:59:06.451 E ns/openshift-monitoring pod/prometheus-adapter-6787d7d8fb-mqlfd node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0629 09:49:06.196579       1 adapter.go:93] successfully using in-cluster auth\nI0629 09:49:06.692347       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 29 09:59:06.476 E ns/openshift-monitoring pod/thanos-querier-7c87bd9844-jt7jp node/ip-10-0-199-245.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:50:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:50:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:50:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:50:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:50:05 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 09:50:05 http.go:107: HTTPS: listening on [::]:9091\nI0629 09:50:05.442675       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:50:25 oauthproxy.go:774: basicauth: 10.128.0.4:43060 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:52:25 oauthproxy.go:774: basicauth: 10.128.0.4:44594 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:53:25 oauthproxy.go:774: basicauth: 10.128.0.4:45250 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:55:04 oauthproxy.go:774: basicauth: 10.130.0.41:54374 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:57:03 oauthproxy.go:774: basicauth: 10.130.0.41:34524 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:57:03 oauthproxy.go:774: basicauth: 10.130.0.41:34524 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:58:03 oauthproxy.go:774: basicauth: 10.130.0.41:35236 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:58:03 oauthproxy.go:774: basicauth: 10.130.0.41:35236 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 09:59:10.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-165-242.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/06/29 09:50:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jun 29 09:59:10.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-165-242.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/06/29 09:50:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/29 09:50:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:50:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:50:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:50:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:50:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/29 09:50:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:50:10 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 09:50:10 http.go:107: HTTPS: listening on [::]:9091\nI0629 09:50:10.731893       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jun 29 09:59:10.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-165-242.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-06-29T09:50:10.002347247Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-06-29T09:50:10.002466872Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-06-29T09:50:10.004090293Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-06-29T09:50:15.208308834Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jun 29 09:59:11.983 E ns/openshift-insights pod/insights-operator-6746d4759b-2rtcz node/ip-10-0-182-62.us-east-2.compute.internal container=operator container exited with code 2 (Error):     1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0629 09:56:52.746926       1 diskrecorder.go:170] Writing 50 records to /var/lib/insights-operator/insights-2020-06-29-095652.tar.gz\nI0629 09:56:52.750739       1 diskrecorder.go:134] Wrote 50 records to disk in 3ms\nI0629 09:56:52.750771       1 periodic.go:151] Periodic gather config completed in 85ms\nI0629 09:56:59.800439       1 httplog.go:90] GET /metrics: (5.753687ms) 200 [Prometheus/2.15.2 10.128.2.17:54112]\nI0629 09:57:01.767577       1 httplog.go:90] GET /metrics: (1.726615ms) 200 [Prometheus/2.15.2 10.129.2.11:60632]\nI0629 09:57:16.782255       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0629 09:57:26.294149       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0629 09:57:26.307060       1 configobserver.go:90] Found cloud.openshift.com token\nI0629 09:57:26.307102       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0629 09:57:29.802781       1 httplog.go:90] GET /metrics: (8.308547ms) 200 [Prometheus/2.15.2 10.128.2.17:54112]\nI0629 09:57:31.767876       1 httplog.go:90] GET /metrics: (1.695656ms) 200 [Prometheus/2.15.2 10.129.2.11:60632]\nI0629 09:57:59.855762       1 httplog.go:90] GET /metrics: (61.040055ms) 200 [Prometheus/2.15.2 10.128.2.17:54112]\nI0629 09:58:01.768334       1 httplog.go:90] GET /metrics: (2.386567ms) 200 [Prometheus/2.15.2 10.129.2.11:60632]\nI0629 09:58:26.286094       1 status.go:298] The operator is healthy\nI0629 09:58:29.805352       1 httplog.go:90] GET /metrics: (10.829916ms) 200 [Prometheus/2.15.2 10.128.2.17:54112]\nI0629 09:58:31.767867       1 httplog.go:90] GET /metrics: (1.910318ms) 200 [Prometheus/2.15.2 10.129.2.11:60632]\nI0629 09:58:59.800935       1 httplog.go:90] GET /metrics: (6.483882ms) 200 [Prometheus/2.15.2 10.128.2.17:54112]\nI0629 09:59:01.780857       1 httplog.go:90] GET /metrics: (14.790315ms) 200 [Prometheus/2.15.2 10.129.2.11:60632]\n
Jun 29 09:59:13.457 E ns/openshift-monitoring pod/node-exporter-t5qjb node/ip-10-0-216-157.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -29T09:38:44Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:44Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 29 09:59:15.584 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-199-245.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/06/29 09:50:31 Watching directory: "/etc/alertmanager/config"\n
Jun 29 09:59:15.584 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-199-245.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/06/29 09:50:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/29 09:50:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:50:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:50:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/29 09:50:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:50:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/29 09:50:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0629 09:50:31.402524       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:50:31 http.go:107: HTTPS: listening on [::]:9095\n
Jun 29 09:59:16.504 E ns/openshift-authentication-operator pod/authentication-operator-6468cfcc7f-gncgl node/ip-10-0-216-157.us-east-2.compute.internal container=operator container exited with code 255 (Error): ing watch stream event decoding: unexpected EOF\nI0629 09:58:34.022215       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.022634       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.022959       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.023323       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.023654       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.029058       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:58:34.029417       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0629 09:59:15.574845       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0629 09:59:15.575576       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0629 09:59:15.577081       1 controller.go:70] Shutting down AuthenticationOperator2\nI0629 09:59:15.577667       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0629 09:59:15.577741       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0629 09:59:15.577791       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0629 09:59:15.577832       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0629 09:59:15.577882       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0629 09:59:15.578006       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0629 09:59:15.578048       1 logging_controller.go:93] Shutting down LogLevelController\nI0629 09:59:15.578105       1 ingress_state_controller.go:157] Shutting down IngressStateController\nF0629 09:59:15.578479       1 builder.go:243] stopped\n
Jun 29 09:59:20.350 E ns/openshift-monitoring pod/thanos-querier-7c87bd9844-sms9g node/ip-10-0-165-242.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/06/29 09:49:56 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:49:56 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 09:49:56 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:49:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:49:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:49:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:49:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:49:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 09:49:56 http.go:107: HTTPS: listening on [::]:9091\nI0629 09:49:56.938320       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:51:25 oauthproxy.go:774: basicauth: 10.128.0.4:43924 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:54:03 oauthproxy.go:774: basicauth: 10.130.0.41:49830 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:56:04 oauthproxy.go:774: basicauth: 10.130.0.41:32908 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:56:04 oauthproxy.go:774: basicauth: 10.130.0.41:32908 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:59:03 oauthproxy.go:774: basicauth: 10.130.0.41:35920 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 09:59:03 oauthproxy.go:774: basicauth: 10.130.0.41:35920 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 09:59:20.419 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-165-242.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T09:59:14.953Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T09:59:14.955Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T09:59:14.955Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:14.956Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T09:59:14.956Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T09:59:14.956Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T09:59:14.957Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T09:59:14.957Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 09:59:27.188 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-5547b86d97" is progressing.\n* deployment openshift-controller-manager-operator/openshift-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-controller-manager-operator-748f84bd98" is progressing.\n* deployment openshift-machine-api/cluster-autoscaler-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-autoscaler-operator-7b7fd6b9b7" is progressing.\n* deployment openshift-marketplace/marketplace-operator is progressing ReplicaSetUpdated: ReplicaSet "marketplace-operator-6479bbfdd7" is progressing.\n* deployment openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-apiserver-operator-68fd7f4848" is progressing.\n* deployment openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-controller-manager-operator-6fb7c69cb6" is progressing.
Jun 29 09:59:27.207 E ns/openshift-monitoring pod/node-exporter-xnhwm node/ip-10-0-167-45.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_node-exporter-xnhwm_0c5a9546-5d89-4d1d-8224-8d96dabee4ac/node-exporter/0.log": lstat /var/log/pods/openshift-monitoring_node-exporter-xnhwm_0c5a9546-5d89-4d1d-8224-8d96dabee4ac/node-exporter/0.log: no such file or directory
Jun 29 09:59:27.622 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-64c6f4b95f-mpp8t node/ip-10-0-199-245.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Jun 29 09:59:43.310 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 09:59:41.711436       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 09:59:41.711445       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 09:59:41.711453       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 09:59:41.711462       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 09:59:41.711470       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 09:59:41.711499       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 09:59:41.711518       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 09:59:41.711527       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 09:59:41.711535       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 09:59:41.711545       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 09:59:41.711554       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 09:59:41.711561       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 09:59:41.711567       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 09:59:41.711573       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 09:59:41.711599       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 09:59:41.711764       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 09:59:41.712097       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 09:59:44.354 E ns/openshift-monitoring pod/node-exporter-n8fx6 node/ip-10-0-182-62.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -29T09:38:45Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-29T09:38:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 29 09:59:44.766 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T09:59:41.628Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T09:59:41.637Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T09:59:41.638Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T09:59:41.640Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T09:59:41.640Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 09:59:49.005 E ns/openshift-controller-manager pod/controller-manager-7wmkj node/ip-10-0-216-157.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): ror on the server ("unable to decode an event from the watch stream: stream error: stream ID 51; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:55:39.842306       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 763; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.374805       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 823; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.375070       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 809; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.376958       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 811; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.377070       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 717; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.377173       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 637; INTERNAL_ERROR") has prevented the request from succeeding\nW0629 09:56:07.377278       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 807; INTERNAL_ERROR") has prevented the request from succeeding\n
Jun 29 10:00:06.498 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:00:06.285673       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:00:06.285679       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:00:06.285684       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:00:06.285690       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:00:06.285695       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:00:06.285701       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:00:06.285707       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:00:06.285712       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:00:06.285717       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:00:06.285723       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:00:06.285732       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:00:06.285746       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:00:06.285753       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:00:06.285759       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:00:06.285782       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 10:00:06.285923       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:00:06.286247       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:00:28.549 E ns/openshift-service-ca pod/service-ca-b84fb566-m98nw node/ip-10-0-182-62.us-east-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Jun 29 10:00:37.605 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:00:37.272268       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:00:37.272278       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:00:37.272287       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:00:37.272297       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:00:37.272307       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:00:37.272316       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:00:37.272325       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:00:37.272334       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:00:37.272343       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:00:37.272353       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:00:37.272370       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:00:37.272382       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:00:37.272393       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:00:37.272404       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:00:37.272441       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 10:00:37.272612       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:00:37.272876       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:00:38.726 E ns/openshift-marketplace pod/certified-operators-76bdf47b9f-b2c75 node/ip-10-0-179-214.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jun 29 10:00:59.780 E ns/openshift-marketplace pod/community-operators-5b9fd64fcc-r7xv2 node/ip-10-0-179-214.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jun 29 10:01:00.675 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): 43/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30826&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:00:59.821382       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29757&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:00:59.822315       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=22686&timeout=9m50s&timeoutSeconds=590&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:00:59.824919       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=25435&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:00:59.826914       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30789&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:00:59.830138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=25127&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:01:00.023475       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0629 10:01:00.023515       1 server.go:257] leaderelection lost\n
Jun 29 10:03:59.335 E ns/openshift-sdn pod/sdn-controller-zlvhv node/ip-10-0-182-62.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): go:115] Allocated netid 14551307 for namespace "openshift-console-operator"\nI0629 09:38:38.775143       1 vnids.go:115] Allocated netid 8836586 for namespace "openshift-ingress"\nI0629 09:41:52.742388       1 subnets.go:149] Created HostSubnet ip-10-0-179-214.us-east-2.compute.internal (host: "ip-10-0-179-214.us-east-2.compute.internal", ip: "10.0.179.214", subnet: "10.131.0.0/23")\nI0629 09:42:00.544844       1 subnets.go:149] Created HostSubnet ip-10-0-199-245.us-east-2.compute.internal (host: "ip-10-0-199-245.us-east-2.compute.internal", ip: "10.0.199.245", subnet: "10.128.2.0/23")\nI0629 09:42:23.615975       1 subnets.go:149] Created HostSubnet ip-10-0-165-242.us-east-2.compute.internal (host: "ip-10-0-165-242.us-east-2.compute.internal", ip: "10.0.165.242", subnet: "10.129.2.0/23")\nI0629 09:50:13.897367       1 vnids.go:115] Allocated netid 4398176 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4723"\nI0629 09:50:13.902887       1 vnids.go:115] Allocated netid 6366663 for namespace "e2e-openshift-api-available-4225"\nI0629 09:50:13.916617       1 vnids.go:115] Allocated netid 11404423 for namespace "e2e-k8s-service-lb-available-357"\nI0629 09:50:13.930094       1 vnids.go:115] Allocated netid 8977313 for namespace "e2e-kubernetes-api-available-5096"\nI0629 09:50:13.947107       1 vnids.go:115] Allocated netid 4970029 for namespace "e2e-frontend-ingress-available-327"\nI0629 09:50:13.953777       1 vnids.go:115] Allocated netid 16414361 for namespace "e2e-k8s-sig-apps-deployment-upgrade-792"\nI0629 09:50:13.970565       1 vnids.go:115] Allocated netid 3239985 for namespace "e2e-k8s-sig-apps-job-upgrade-9398"\nI0629 09:50:13.977665       1 vnids.go:115] Allocated netid 2304178 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-724"\nI0629 09:50:14.002323       1 vnids.go:115] Allocated netid 16233687 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3014"\nI0629 09:50:14.010007       1 vnids.go:115] Allocated netid 14799125 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-5046"\n
Jun 29 10:04:04.147 E ns/openshift-sdn pod/sdn-controller-7clg8 node/ip-10-0-216-157.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0629 09:31:11.199076       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jun 29 10:04:08.776 E ns/openshift-sdn pod/sdn-controller-dz9sh node/ip-10-0-167-45.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0629 09:31:08.483104       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0629 09:38:17.416996       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jun 29 10:04:09.791 E ns/openshift-sdn pod/sdn-m98h9 node/ip-10-0-167-45.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 10:01:45.345687    1971 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.167.45:10259 10.0.182.62:10259 10.0.216.157:10259]\nI0629 10:01:45.511642    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:01:45.511667    1971 proxier.go:347] userspace syncProxyRules took 30.808758ms\nI0629 10:02:15.658627    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:02:15.658653    1971 proxier.go:347] userspace syncProxyRules took 30.150976ms\nI0629 10:02:45.805965    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:02:45.805988    1971 proxier.go:347] userspace syncProxyRules took 29.590369ms\nI0629 10:03:15.961447    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:15.961493    1971 proxier.go:347] userspace syncProxyRules took 40.073026ms\nI0629 10:03:46.129966    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:46.130051    1971 proxier.go:347] userspace syncProxyRules took 29.64965ms\nI0629 10:03:56.570482    1971 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.3:6443]\nI0629 10:03:56.570530    1971 roundrobin.go:217] Delete endpoint 10.130.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0629 10:03:56.570553    1971 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.3:8443]\nI0629 10:03:56.570564    1971 roundrobin.go:217] Delete endpoint 10.130.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0629 10:03:56.746930    1971 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:56.746958    1971 proxier.go:347] userspace syncProxyRules took 37.055116ms\nF0629 10:04:08.968290    1971 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jun 29 10:04:27.395 E ns/openshift-multus pod/multus-m56p5 node/ip-10-0-199-245.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jun 29 10:04:27.436 E ns/openshift-multus pod/multus-admission-controller-r894v node/ip-10-0-182-62.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jun 29 10:04:27.666 E openshift-apiserver OpenShift API is not responding to GET requests
Jun 29 10:04:35.266 E ns/openshift-sdn pod/sdn-h76xf node/ip-10-0-179-214.us-east-2.compute.internal container=sdn container exited with code 255 (Error): k 28.312209ms\nI0629 10:02:15.632203    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:02:15.632227    2179 proxier.go:347] userspace syncProxyRules took 27.7095ms\nI0629 10:02:45.780963    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:02:45.780991    2179 proxier.go:347] userspace syncProxyRules took 28.384559ms\nI0629 10:03:15.912244    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:15.912266    2179 proxier.go:347] userspace syncProxyRules took 27.721472ms\nI0629 10:03:46.054844    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:46.054898    2179 proxier.go:347] userspace syncProxyRules took 38.191934ms\nI0629 10:03:56.573054    2179 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.3:6443]\nI0629 10:03:56.573160    2179 roundrobin.go:217] Delete endpoint 10.130.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0629 10:03:56.573182    2179 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.3:8443]\nI0629 10:03:56.573194    2179 roundrobin.go:217] Delete endpoint 10.130.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0629 10:03:56.709174    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:03:56.709198    2179 proxier.go:347] userspace syncProxyRules took 28.504465ms\nI0629 10:04:26.853288    2179 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:04:26.853312    2179 proxier.go:347] userspace syncProxyRules took 28.706135ms\nI0629 10:04:34.804681    2179 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0629 10:04:34.804729    2179 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 29 10:05:06.084 E ns/openshift-multus pod/multus-admission-controller-8r9pf node/ip-10-0-167-45.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jun 29 10:05:12.596 E ns/openshift-multus pod/multus-bk6g4 node/ip-10-0-182-62.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jun 29 10:05:48.185 E ns/openshift-sdn pod/sdn-22snd node/ip-10-0-165-242.us-east-2.compute.internal container=sdn container exited with code 255 (Error):   62420 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.74:6443 10.130.0.62:6443]\nI0629 10:05:12.110087   62420 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.74:8443 10.130.0.62:8443]\nI0629 10:05:12.123692   62420 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.74:6443 10.130.0.62:6443]\nI0629 10:05:12.123730   62420 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0629 10:05:12.123747   62420 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.74:8443 10.130.0.62:8443]\nI0629 10:05:12.123761   62420 roundrobin.go:217] Delete endpoint 10.128.0.17:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0629 10:05:12.249926   62420 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:12.249953   62420 proxier.go:347] userspace syncProxyRules took 27.768987ms\nI0629 10:05:12.382093   62420 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:12.382114   62420 proxier.go:347] userspace syncProxyRules took 27.968738ms\nI0629 10:05:39.663491   62420 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0629 10:05:42.514088   62420 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:42.514112   62420 proxier.go:347] userspace syncProxyRules took 26.996381ms\nI0629 10:05:47.935869   62420 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0629 10:05:47.935913   62420 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 29 10:06:03.093 E ns/openshift-multus pod/multus-f46k2 node/ip-10-0-179-214.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jun 29 10:06:11.813 E ns/openshift-sdn pod/sdn-5kzm5 node/ip-10-0-182-62.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 818 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.74:6443 10.130.0.62:6443]\nI0629 10:05:12.122376   95818 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0629 10:05:12.122396   95818 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.74:8443 10.130.0.62:8443]\nI0629 10:05:12.122408   95818 roundrobin.go:217] Delete endpoint 10.128.0.17:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0629 10:05:12.276078   95818 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:12.276107   95818 proxier.go:347] userspace syncProxyRules took 29.771876ms\nI0629 10:05:12.432105   95818 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:12.432132   95818 proxier.go:347] userspace syncProxyRules took 31.791163ms\nI0629 10:05:42.577580   95818 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:42.577604   95818 proxier.go:347] userspace syncProxyRules took 29.296911ms\nI0629 10:05:48.635733   95818 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.87:6443 10.129.0.74:6443 10.130.0.62:6443]\nI0629 10:05:48.635779   95818 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.87:8443 10.129.0.74:8443 10.130.0.62:8443]\nI0629 10:05:48.798747   95818 proxier.go:368] userspace proxy: processing 0 service events\nI0629 10:05:48.798776   95818 proxier.go:347] userspace syncProxyRules took 36.639817ms\nI0629 10:06:11.077018   95818 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0629 10:06:11.077061   95818 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 29 10:07:46.456 E ns/openshift-multus pod/multus-qhzp2 node/ip-10-0-165-242.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jun 29 10:08:30.230 E ns/openshift-multus pod/multus-ncq4q node/ip-10-0-216-157.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jun 29 10:09:10.405 E ns/openshift-machine-config-operator pod/machine-config-operator-558fcd97b-nn5n5 node/ip-10-0-216-157.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 00       1 operator.go:227] Couldn't find machineconfigpool CRD, in cluster bringup mode\nI0629 09:31:56.020823       1 operator.go:264] Starting MachineConfigOperator\nI0629 09:31:56.044565       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"42cbfe5a-f40e-4063-9fa5-d8490a983288", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-06-29-090755}]\nE0629 09:31:56.470887       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0629 09:31:56.476699       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0629 09:31:57.510868       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0629 09:32:01.345607       1 sync.go:61] [init mode] synced RenderConfig in 5.29549159s\nI0629 09:32:01.679637       1 sync.go:61] [init mode] synced MachineConfigPools in 333.705544ms\nI0629 09:32:48.404960       1 sync.go:61] [init mode] synced MachineConfigDaemon in 46.725222503s\nI0629 09:32:54.471336       1 sync.go:61] [init mode] synced MachineConfigController in 6.066321178s\nI0629 09:33:11.550066       1 sync.go:61] [init mode] synced MachineConfigServer in 17.078677303s\nI0629 09:33:48.557601       1 sync.go:61] [init mode] synced RequiredPools in 37.007496849s\nI0629 09:33:48.753308       1 sync.go:89] Initialization complete\n
Jun 29 10:11:05.957 E ns/openshift-machine-config-operator pod/machine-config-daemon-gd28v node/ip-10-0-165-242.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 29 10:11:21.679 E ns/openshift-machine-config-operator pod/machine-config-daemon-49zgd node/ip-10-0-167-45.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 29 10:11:49.848 E ns/openshift-machine-config-operator pod/machine-config-daemon-jknbk node/ip-10-0-179-214.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 29 10:11:55.001 E ns/openshift-machine-config-operator pod/machine-config-daemon-jv2vk node/ip-10-0-216-157.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 29 10:13:41.263 E ns/openshift-machine-config-operator pod/machine-config-server-qpj5h node/ip-10-0-167-45.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0629 09:33:04.156422       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0629 09:33:04.157753       1 api.go:56] Launching server on :22624\nI0629 09:33:04.157904       1 api.go:56] Launching server on :22623\nI0629 09:39:37.287708       1 api.go:102] Pool worker requested by 10.0.134.29:27428\nI0629 09:39:41.653859       1 api.go:102] Pool worker requested by 10.0.134.29:23076\n
Jun 29 10:13:51.610 E ns/openshift-machine-config-operator pod/machine-config-server-ph4p9 node/ip-10-0-216-157.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0629 09:33:10.505026       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0629 09:33:10.506067       1 api.go:56] Launching server on :22624\nI0629 09:33:10.510701       1 api.go:56] Launching server on :22623\n
Jun 29 10:13:52.230 E ns/openshift-monitoring pod/prometheus-adapter-6598b69687-42xs5 node/ip-10-0-179-214.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0629 09:59:01.018241       1 adapter.go:93] successfully using in-cluster auth\nI0629 09:59:01.663829       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 29 10:13:52.287 E ns/openshift-monitoring pod/thanos-querier-7654ddd488-g2fxk node/ip-10-0-179-214.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): vider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:59:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:59:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:59:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:59:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:59:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 09:59:01 http.go:107: HTTPS: listening on [::]:9091\nI0629 09:59:01.050086       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 10:00:03 oauthproxy.go:774: basicauth: 10.130.0.41:37064 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:01:03 oauthproxy.go:774: basicauth: 10.130.0.41:41604 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:02:03 oauthproxy.go:774: basicauth: 10.130.0.41:46928 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:04:08 oauthproxy.go:774: basicauth: 10.130.0.41:48378 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:05:03 oauthproxy.go:774: basicauth: 10.130.0.41:49120 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:06:03 oauthproxy.go:774: basicauth: 10.130.0.41:49960 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:09:03 oauthproxy.go:774: basicauth: 10.130.0.41:52016 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:10:03 oauthproxy.go:774: basicauth: 10.130.0.41:52772 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 10:13:53.291 E ns/openshift-marketplace pod/certified-operators-7d6cbb6796-6dwjq node/ip-10-0-179-214.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jun 29 10:13:53.305 E ns/openshift-kube-storage-version-migrator pod/migrator-5ff6c5c6c5-wd8v5 node/ip-10-0-179-214.us-east-2.compute.internal container=migrator container exited with code 2 (Error): 
Jun 29 10:13:57.667 E ns/openshift-machine-api pod/machine-api-operator-74ffdb656f-92wb6 node/ip-10-0-182-62.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jun 29 10:13:57.694 E ns/openshift-machine-api pod/machine-api-controllers-65956f55f7-d429p node/ip-10-0-182-62.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jun 29 10:14:02.698 E ns/openshift-machine-config-operator pod/machine-config-server-thh4x node/ip-10-0-182-62.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0629 09:32:59.142577       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0629 09:32:59.143875       1 api.go:56] Launching server on :22624\nI0629 09:32:59.143991       1 api.go:56] Launching server on :22623\nI0629 09:39:40.677364       1 api.go:102] Pool worker requested by 10.0.202.75:23118\n
Jun 29 10:14:17.973 E ns/openshift-console pod/console-85fc646844-q8csk node/ip-10-0-182-62.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-06-29T10:00:48Z cmd/main: cookies are secure!\n2020-06-29T10:00:48Z cmd/main: Binding to [::]:8443...\n2020-06-29T10:00:48Z cmd/main: using TLS\n2020-06-29T10:04:50Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jun 29 10:14:21.172 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Jun 29 10:15:18.497 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 10:15:16.805290       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:15:16.805299       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:15:16.805307       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:15:16.805316       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:15:16.805324       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:15:16.805332       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:15:16.805341       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:15:16.805349       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:15:16.805357       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:15:16.805366       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:15:16.805380       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:15:16.805432       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:15:16.805454       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:15:16.805463       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:15:16.805502       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 10:15:16.805703       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:15:16.806024       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:15:38.619 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 10:15:37.863457       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:15:37.863463       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:15:37.863471       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:15:37.863477       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:15:37.863483       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:15:37.863489       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:15:37.863495       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:15:37.863500       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:15:37.863505       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:15:37.863511       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:15:37.863523       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:15:37.863532       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:15:37.863540       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:15:37.863546       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:15:37.863575       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 10:15:37.863748       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:15:37.864712       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:16:01.715 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0629 10:16:00.873495       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:16:00.873500       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:16:00.873505       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:16:00.873511       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:16:00.873516       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:16:00.873522       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:16:00.873527       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:16:00.873533       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:16:00.873538       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:16:00.873544       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:16:00.873553       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:16:00.873559       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:16:00.873565       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:16:00.873571       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:16:00.873593       1 server.go:627] external host was not specified, using 10.0.216.157\nI0629 10:16:00.873731       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:16:00.874104       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:16:03.497 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jun 29 10:16:33.823 E ns/openshift-monitoring pod/node-exporter-flpxc node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.837 E ns/openshift-cluster-node-tuning-operator pod/tuned-rtjmz node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.852 E ns/openshift-image-registry pod/node-ca-8fdrr node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.865 E ns/openshift-sdn pod/ovs-pr85k node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.875 E ns/openshift-sdn pod/sdn-tmkcx node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.889 E ns/openshift-multus pod/multus-qjtj8 node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.904 E ns/openshift-dns pod/dns-default-6cdms node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:33.916 E ns/openshift-machine-config-operator pod/machine-config-daemon-qh55s node/ip-10-0-179-214.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
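
The burst of "invariant violation: pod may not transition Running->Pending" events above all hit DaemonSet-style pods on the same node (ip-10-0-179-214) at the same second, the pattern typically seen when a node reboots for the machine-config rollout and the e2e monitor observes those pods move backwards from Running to Pending. A rough sketch of the phase-ordering check being enforced; this is a hypothetical re-implementation, not the test suite's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// phaseRank orders pod phases so that a later phase should never be
// followed by an earlier one for the same pod.
var phaseRank = map[corev1.PodPhase]int{
	corev1.PodPending:   0,
	corev1.PodRunning:   1,
	corev1.PodSucceeded: 2,
	corev1.PodFailed:    2,
}

// violatesPhaseInvariant reports whether an observed phase change moved
// backwards, e.g. Running -> Pending.
func violatesPhaseInvariant(oldPhase, newPhase corev1.PodPhase) bool {
	return phaseRank[newPhase] < phaseRank[oldPhase]
}

func main() {
	fmt.Println(violatesPhaseInvariant(corev1.PodRunning, corev1.PodPending)) // true
}
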
Jun 29 10:16:34.862 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): owWatchBookmarks=true&resourceVersion=27081&timeout=7m38s&timeoutSeconds=458&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.967562       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=38147&timeout=9m33s&timeoutSeconds=573&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:16:34.656350       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0629 10:16:34.656454       1 controllermanager.go:291] leaderelection lost\nI0629 10:16:34.668804       1 attach_detach_controller.go:378] Shutting down attach detach controller\nI0629 10:16:34.668823       1 disruption.go:347] Shutting down disruption controller\nI0629 10:16:34.668831       1 replica_set.go:193] Shutting down replicaset controller\nI0629 10:16:34.668841       1 gc_controller.go:99] Shutting down GC controller\nE0629 10:16:34.713064       1 event.go:272] Unable to write event: 'Post https://localhost:6443/api/v1/namespaces/default/events: dial tcp [::1]:6443: connect: connection refused' (may retry after sleeping)\nI0629 10:16:34.668849       1 replica_set.go:193] Shutting down replicationcontroller controller\nI0629 10:16:34.668860       1 job_controller.go:156] Shutting down job controller\nI0629 10:16:34.668868       1 pvc_protection_controller.go:112] Shutting down PVC protection controller\nI0629 10:16:34.668875       1 deployment_controller.go:164] Shutting down deployment controller\nI0629 10:16:34.668882       1 daemon_controller.go:281] Shutting down daemon sets controller\nI0629 10:16:34.668903       1 node_lifecycle_controller.go:601] Shutting down node controller\nI0629 10:16:34.668943       1 pv_controller_base.go:310] Shutting down persistent volume controller\nI0629 10:16:34.713175       1 pv_controller_base.go:421] claim worker queue shutting down\n
Jun 29 10:16:34.862 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-216-157.us-east-2.compute.internal node/ip-10-0-216-157.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=37690&timeout=9m15s&timeoutSeconds=555&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.766910       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=38333&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.768066       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=25930&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.769277       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=25937&timeout=9m15s&timeoutSeconds=555&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.770308       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37859&timeout=6m1s&timeoutSeconds=361&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:16:33.771455       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=37226&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:16:33.807502       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0629 10:16:33.807532       1 server.go:257] leaderelection lost\n
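
The kube-controller-manager and kube-scheduler exits with code 255 above follow the same sequence: their watches against the local apiserver (localhost:6443) start getting "connection refused" while that master's kube-apiserver restarts, lease renewal then times out, and client-go leader election invokes its fatal stopped-leading callback ("leaderelection lost"), so the container exits and is restarted. A minimal sketch of that wiring, assuming client-go's leaderelection package; the lock namespace, name, and identity are illustrative, not the components' actual configuration:

package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock namespace, name, and identity are placeholders for illustration.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "example-controller",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-holder"},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the controllers while holding the lease.
			},
			// If renewal fails (for example because localhost:6443 is
			// refusing connections during the apiserver restart), this
			// fires and the process exits non-zero, which is what shows
			// up above as "leaderelection lost" and exit code 255.
			OnStoppedLeading: func() {
				klog.Fatal("leaderelection lost")
			},
		},
	})
}
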
Jun 29 10:16:41.377 E ns/openshift-machine-config-operator pod/machine-config-daemon-qh55s node/ip-10-0-179-214.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 29 10:16:44.836 E ns/openshift-cluster-node-tuning-operator pod/tuned-8t5c6 node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.853 E ns/openshift-image-registry pod/node-ca-bh8t5 node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.875 E ns/openshift-monitoring pod/node-exporter-7mhs7 node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.894 E ns/openshift-controller-manager pod/controller-manager-mlwc7 node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.909 E ns/openshift-sdn pod/sdn-controller-rcd2v node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.922 E ns/openshift-multus pod/multus-admission-controller-stvh5 node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.952 E ns/openshift-multus pod/multus-29x9q node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.966 E ns/openshift-sdn pod/ovs-gh6ql node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.982 E ns/openshift-dns pod/dns-default-xfrvx node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:44.996 E ns/openshift-machine-config-operator pod/machine-config-server-ltvjt node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:45.009 E ns/openshift-machine-config-operator pod/machine-config-daemon-jcg5f node/ip-10-0-182-62.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:16:51.394 E ns/openshift-monitoring pod/grafana-6467c467bd-424xl node/ip-10-0-199-245.us-east-2.compute.internal container=grafana container exited with code 1 (Error): 
Jun 29 10:16:51.394 E ns/openshift-monitoring pod/grafana-6467c467bd-424xl node/ip-10-0-199-245.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Jun 29 10:16:51.465 E ns/openshift-monitoring pod/thanos-querier-7654ddd488-bgmnn node/ip-10-0-199-245.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): vider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 09:59:11 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 09:59:11 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 09:59:11 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 09:59:11 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 09:59:11 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0629 09:59:11.030792       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/29 09:59:11 http.go:107: HTTPS: listening on [::]:9091\n2020/06/29 10:03:03 oauthproxy.go:774: basicauth: 10.130.0.41:47546 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:07:03 oauthproxy.go:774: basicauth: 10.130.0.41:50632 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:08:03 oauthproxy.go:774: basicauth: 10.130.0.41:51336 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:11:03 oauthproxy.go:774: basicauth: 10.130.0.41:53500 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:12:03 oauthproxy.go:774: basicauth: 10.130.0.41:54256 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:13:03 oauthproxy.go:774: basicauth: 10.130.0.41:54998 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:13:58 oauthproxy.go:774: basicauth: 10.128.0.92:44616 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/29 10:15:57 oauthproxy.go:774: basicauth: 10.128.0.92:53624 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 29 10:16:52.480 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T09:59:41.628Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T09:59:41.637Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T09:59:41.638Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T09:59:41.639Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T09:59:41.639Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T09:59:41.640Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T09:59:41.640Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 10:16:52.480 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/06/29 09:59:43 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jun 29 10:16:52.480 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-06-29T09:59:43.308967398Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-06-29T09:59:43.309087702Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-06-29T09:59:43.311417621Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-06-29T09:59:48.475490679Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jun 29 10:16:52.527 E ns/openshift-monitoring pod/openshift-state-metrics-78789d8744-4s6pt node/ip-10-0-199-245.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jun 29 10:16:52.549 E ns/openshift-monitoring pod/telemeter-client-64878b9f6-5xq7h node/ip-10-0-199-245.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jun 29 10:16:52.549 E ns/openshift-monitoring pod/telemeter-client-64878b9f6-5xq7h node/ip-10-0-199-245.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Jun 29 10:16:54.757 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 29 10:16:55.983 E ns/openshift-machine-config-operator pod/machine-config-daemon-jcg5f node/ip-10-0-182-62.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 29 10:16:59.180 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5fc8d64b44-mkhvz node/ip-10-0-165-242.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Jun 29 10:17:11.268 E ns/openshift-machine-api pod/machine-api-controllers-65956f55f7-gz2cm node/ip-10-0-216-157.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jun 29 10:17:12.420 E ns/openshift-monitoring pod/thanos-querier-7654ddd488-2scb5 node/ip-10-0-216-157.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/06/29 10:17:01 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 10:17:01 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/29 10:17:01 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/29 10:17:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/29 10:17:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/29 10:17:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/29 10:17:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/29 10:17:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/29 10:17:01 http.go:107: HTTPS: listening on [::]:9091\nI0629 10:17:01.267350       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jun 29 10:17:12.452 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-179-214.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T10:17:07.169Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T10:17:07.174Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T10:17:07.175Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T10:17:07.176Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T10:17:07.176Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T10:17:07.176Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T10:17:07.177Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T10:17:07.177Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 10:17:15.757 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::InstallerController_Error: InstallerControllerDegraded: context canceled\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-182-62.us-east-2.compute.internal is unhealthy
Jun 29 10:17:28.562 E ns/openshift-console pod/console-85fc646844-wgwnr node/ip-10-0-216-157.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-06-29T10:01:09Z cmd/main: cookies are secure!\n2020-06-29T10:01:09Z cmd/main: Binding to [::]:8443...\n2020-06-29T10:01:09Z cmd/main: using TLS\n
Jun 29 10:18:01.652 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:17:59.985974       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:17:59.985985       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:17:59.985996       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:17:59.986007       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:17:59.986027       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:17:59.986037       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:17:59.986047       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:17:59.986057       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:17:59.986067       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:17:59.986078       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:17:59.986093       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:17:59.986116       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:17:59.986128       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:17:59.986141       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:17:59.986193       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 10:17:59.986444       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:17:59.986825       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:18:06.310 E ns/openshift-marketplace pod/redhat-operators-695b784fc6-4r7vh node/ip-10-0-165-242.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jun 29 10:18:16.765 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:18:16.116419       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:18:16.116455       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:18:16.116465       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:18:16.116474       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:18:16.116480       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:18:16.116486       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:18:16.116552       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:18:16.116558       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:18:16.116564       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:18:16.116573       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:18:16.116588       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:18:16.116598       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:18:16.116606       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:18:16.116615       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:18:16.116652       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 10:18:16.116855       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:18:16.117889       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:18:17.438 E ns/openshift-marketplace pod/certified-operators-d87b788f4-k6mdb node/ip-10-0-165-242.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jun 29 10:18:24.450 E ns/openshift-marketplace pod/community-operators-5fc6f44db6-lpzrg node/ip-10-0-165-242.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jun 29 10:18:39.851 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:18:39.197899       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:18:39.197905       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:18:39.197911       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:18:39.197917       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:18:39.197922       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:18:39.197927       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:18:39.197933       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:18:39.197938       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:18:39.197944       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:18:39.197949       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:18:39.197957       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:18:39.197964       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:18:39.197970       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:18:39.197978       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:18:39.198002       1 server.go:627] external host was not specified, using 10.0.167.45\nI0629 10:18:39.198175       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:18:39.198427       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:19:20.085 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): /operator.openshift.io/v1/kubestorageversionmigrators?allowWatchBookmarks=true&resourceVersion=39488&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:18.630678       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/tuned.openshift.io/v1/profiles?allowWatchBookmarks=true&resourceVersion=29588&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:18.632059       1 reflector.go:307] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://localhost:6443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=27973&timeout=5m9s&timeoutSeconds=309&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:18.633029       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://localhost:6443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=37859&timeout=6m25s&timeoutSeconds=385&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:18.634043       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/apiservers?allowWatchBookmarks=true&resourceVersion=28406&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:19:18.930827       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0629 10:19:18.931020       1 controllermanager.go:291] leaderelection lost\nI0629 10:19:18.961423       1 node_lifecycle_controller.go:601] Shutting down node controller\nI0629 10:19:18.961455       1 disruption.go:347] Shutting down disruption controller\n
Jun 29 10:19:20.086 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-167-45.us-east-2.compute.internal node/ip-10-0-167-45.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): Class: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=27576&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:19.331214       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=41858&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:19.334095       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=40663&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:19.335238       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=37859&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:19.336303       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=41311&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:19:19.337495       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=39629&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:19:19.659874       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0629 10:19:19.659914       1 server.go:257] leaderelection lost\n
Jun 29 10:19:24.512 E ns/openshift-image-registry pod/node-ca-r4q57 node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.528 E ns/openshift-monitoring pod/node-exporter-vtsmd node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.550 E ns/openshift-cluster-node-tuning-operator pod/tuned-g7zds node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.564 E ns/openshift-multus pod/multus-gvjzp node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.576 E ns/openshift-sdn pod/ovs-4jx4d node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.586 E ns/openshift-sdn pod/sdn-rcklm node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.596 E ns/openshift-dns pod/dns-default-8chxh node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:24.607 E ns/openshift-machine-config-operator pod/machine-config-daemon-6d9fg node/ip-10-0-199-245.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:31.822 E ns/openshift-machine-config-operator pod/machine-config-daemon-6d9fg node/ip-10-0-199-245.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 29 10:19:36.175 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Jun 29 10:19:52.285 E ns/openshift-monitoring pod/node-exporter-qkps9 node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.297 E ns/openshift-controller-manager pod/controller-manager-dhdlp node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.311 E ns/openshift-cluster-node-tuning-operator pod/tuned-dfs5v node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.322 E ns/openshift-image-registry pod/node-ca-gbmms node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.332 E ns/openshift-sdn pod/sdn-controller-hlnb2 node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.342 E ns/openshift-sdn pod/sdn-jdgwm node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.353 E ns/openshift-sdn pod/ovs-r5998 node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.366 E ns/openshift-multus pod/multus-admission-controller-9zr52 node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.382 E ns/openshift-multus pod/multus-94jkq node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.393 E ns/openshift-machine-config-operator pod/machine-config-daemon-2p88w node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.407 E ns/openshift-dns pod/dns-default-9pn89 node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:19:52.420 E ns/openshift-machine-config-operator pod/machine-config-server-n886h node/ip-10-0-216-157.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:20:01.872 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-199-245.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-29T10:19:59.543Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-29T10:19:59.546Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-29T10:19:59.546Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-29T10:19:59.547Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-29T10:19:59.547Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-29T10:19:59.547Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-29T10:19:59.548Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-29T10:19:59.548Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-29
Jun 29 10:20:01.886 E ns/openshift-machine-config-operator pod/machine-config-daemon-2p88w node/ip-10-0-216-157.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 29 10:20:20.511 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:20:18.865312       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:20:18.865322       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:20:18.865332       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:20:18.865341       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:20:18.865350       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:20:18.865359       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:20:18.865368       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:20:18.865377       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:20:18.865392       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:20:18.865402       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:20:18.865424       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:20:18.865440       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:20:18.865451       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:20:18.865461       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:20:18.865517       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 10:20:18.865914       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:20:18.868446       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:20:24.615 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7b46cdc6d6-qdwmg node/ip-10-0-167-45.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): , UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable"\nI0629 10:17:31.664796       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable"\nI0629 10:20:15.411837       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"6945489e-58a0-4894-af50-a3d95f04adb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable"\nI0629 10:20:23.401826       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0629 10:20:23.403540       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0629 10:20:23.404004       1 state_controller.go:171] Shutting down EncryptionStateController\nI0629 10:20:23.405108       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0629 10:20:23.405124       1 prune_controller.go:204] Shutting down EncryptionPruneController\nF0629 10:20:23.405123       1 builder.go:210] server exited\n
Jun 29 10:20:38.619 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:20:37.737608       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:20:37.737617       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:20:37.737626       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:20:37.737634       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:20:37.737643       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:20:37.737651       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:20:37.737660       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:20:37.737669       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:20:37.737678       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:20:37.737686       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:20:37.737698       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:20:37.737709       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:20:37.737719       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:20:37.737731       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:20:37.737768       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 10:20:37.737948       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:20:37.738211       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:20:48.689 E ns/openshift-console pod/console-85fc646844-8r4jh node/ip-10-0-167-45.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-06-29T10:13:55Z cmd/main: cookies are secure!\n2020-06-29T10:13:56Z cmd/main: Binding to [::]:8443...\n2020-06-29T10:13:56Z cmd/main: using TLS\n
Jun 29 10:20:51.994 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-77979dfcdb-mhnzt node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-storage-operator container exited with code 1 (Error): {"level":"info","ts":1593426051.4927418,"logger":"cmd","msg":"Go Version: go1.10.8"}\n{"level":"info","ts":1593426051.4977033,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}\n{"level":"info","ts":1593426051.4977698,"logger":"cmd","msg":"Version of operator-sdk: v0.4.0"}\n{"level":"info","ts":1593426051.498809,"logger":"leader","msg":"Trying to become the leader."}\n{"level":"error","ts":1593426051.5182586,"logger":"cmd","msg":"","error":"Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused","stacktrace":"github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/github.com/openshift/cluster-storage-operator/cmd/manager/main.go:53\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:198"}\n
Jun 29 10:20:52.022 E ns/openshift-monitoring pod/cluster-monitoring-operator-7489879599-2v6wb node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-monitoring-operator container exited with code 1 (Error): W0629 10:20:51.611927       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\n
Jun 29 10:20:52.084 E ns/openshift-monitoring pod/prometheus-operator-864f94cffb-qxlbl node/ip-10-0-216-157.us-east-2.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-06-29T10:20:50.838350408Z caller=main.go:199 msg="Starting Prometheus Operator version '0.35.1'."\nts=2020-06-29T10:20:50.86806469Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-06-29T10:20:50.878859298Z caller=main.go:288 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jun 29 10:20:52.251 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7b7fd6b9b7-zwr5r node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): I0629 10:20:51.216006       1 main.go:13] Go Version: go1.12.16\nI0629 10:20:51.216114       1 main.go:14] Go OS/Arch: linux/amd64\nI0629 10:20:51.216124       1 main.go:15] Version: cluster-autoscaler-operator v0.0.0-244-g9c4a47c-dirty\nF0629 10:20:51.223820       1 main.go:33] Failed to create operator: failed to create manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Jun 29 10:20:56.265 E ns/openshift-operator-lifecycle-manager pod/packageserver-6b9cf4d7f6-fnvwl node/ip-10-0-216-157.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-06-29T10:20:55Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jun 29 10:21:01.726 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0629 10:21:00.706845       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0629 10:21:00.706854       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0629 10:21:00.706863       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0629 10:21:00.706871       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0629 10:21:00.706879       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0629 10:21:00.706888       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0629 10:21:00.706896       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0629 10:21:00.706904       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0629 10:21:00.706912       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0629 10:21:00.706920       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0629 10:21:00.706934       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0629 10:21:00.706943       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0629 10:21:00.706954       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0629 10:21:00.706965       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0629 10:21:00.707000       1 server.go:627] external host was not specified, using 10.0.182.62\nI0629 10:21:00.707211       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0629 10:21:00.707521       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 29 10:21:10.351 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7b7fd6b9b7-zwr5r node/ip-10-0-216-157.us-east-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): I0629 10:21:09.654318       1 main.go:13] Go Version: go1.12.16\nI0629 10:21:09.654693       1 main.go:14] Go OS/Arch: linux/amd64\nI0629 10:21:09.654770       1 main.go:15] Version: cluster-autoscaler-operator v0.0.0-244-g9c4a47c-dirty\nF0629 10:21:09.661135       1 main.go:33] Failed to create operator: failed to create manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
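
The cluster-storage-operator, cluster-monitoring-operator, prometheus-operator, cluster-autoscaler-operator, and packageserver failures in the 10:20-10:21 window all fail the same way: their first call to the in-cluster apiserver service (https://172.30.0.1:443) gets "connection refused" and the process exits fatally, relying on the kubelet to restart it once the service endpoints are healthy again. A hedged sketch of a start-up probe that waits for the apiserver instead of exiting on the first refusal; this is illustrative only, not what these operators actually do:

package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the /version endpoint until the service VIP (172.30.0.1:443 in
	// this cluster) is reachable, instead of exiting on the first
	// "connection refused".
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		if _, verr := client.Discovery().ServerVersion(); verr != nil {
			klog.Infof("apiserver not reachable yet: %v", verr)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		klog.Fatalf("apiserver never became reachable: %v", err)
	}
	klog.Info("apiserver reachable, starting controllers")
}
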
Jun 29 10:21:25.923 E kube-apiserver failed contacting the API: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=44566&timeout=5m21s&timeoutSeconds=321&watch=true: dial tcp 3.16.248.44:6443: connect: connection refused
Jun 29 10:21:26.691 E kube-apiserver Kube API started failing: Get https://api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 3.16.248.44:6443: connect: connection refused
Jun 29 10:21:27.666 E kube-apiserver Kube API is not responding to GET requests
Jun 29 10:21:27.666 E openshift-apiserver OpenShift API is not responding to GET requests
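
The external-endpoint events at 10:21:25-10:21:28 (api.ci-op-m302gljz-11e38.origin-ci-int-aws.dev.rhcloud.com:6443 refusing connections) line up with the apiserver restart on ip-10-0-182-62 shown in the repeated bind failures just above, and mark the brief window in which no master behind the load balancer was serving. The availability monitor is essentially a timed GET against a cheap apiserver URL; a minimal sketch of that kind of probe (illustrative, with a placeholder URL and no authentication, not the suite's actual code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Periodically GET an apiserver URL with a short timeout and report
	// whether the endpoint answered, similar in spirit to the
	// started-failing / started-responding events recorded above.
	client := &http.Client{Timeout: 5 * time.Second}
	url := "https://api.example.invalid:6443/api/v1/namespaces/kube-system?timeout=5s"

	for i := 0; i < 3; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("Kube API started failing: %v\n", err)
		} else {
			resp.Body.Close()
			fmt.Printf("Kube API responded: %s\n", resp.Status)
		}
		time.Sleep(time.Second)
	}
}
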
Jun 29 10:21:36.861 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): ulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=44967&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:34.900618       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=44914&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:34.901871       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=39455&timeout=6m12s&timeoutSeconds=372&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:34.903069       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=42719&timeout=6m59s&timeoutSeconds=419&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:34.906868       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=44970&timeoutSeconds=359&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:34.925280       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=44798&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:21:35.849107       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0629 10:21:35.849145       1 server.go:257] leaderelection lost\n
Jun 29 10:21:38.873 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-62.us-east-2.compute.internal node/ip-10-0-182-62.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): s=537&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:38.129194       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubeapiservers?allowWatchBookmarks=true&resourceVersion=44903&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:38.129194       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubestorageversionmigrators?allowWatchBookmarks=true&resourceVersion=42128&timeout=6m1s&timeoutSeconds=361&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:38.130292       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=39377&timeout=8m2s&timeoutSeconds=482&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0629 10:21:38.131466       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=39448&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0629 10:21:38.464452       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0629 10:21:38.464582       1 controllermanager.go:291] leaderelection lost\nI0629 10:21:38.479276       1 pv_protection_controller.go:93] Shutting down PV protection controller\nI0629 10:21:38.479291       1 disruption.go:347] Shutting down disruption controller\nI0629 10:21:38.479299       1 gc_controller.go:99] Shutting down GC controller\nI0629 10:21:38.479308       1 pv_controller_base.go:310] Shutting down persistent volume controller\n
Jun 29 10:22:22.930 E ns/openshift-monitoring pod/node-exporter-tl5s4 node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:22.950 E ns/openshift-image-registry pod/node-ca-sldlp node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:22.963 E ns/openshift-cluster-node-tuning-operator pod/tuned-v9gb2 node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:22.992 E ns/openshift-sdn pod/ovs-xk4b7 node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:23.005 E ns/openshift-multus pod/multus-cwcd2 node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:23.019 E ns/openshift-dns pod/dns-default-x5hk9 node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:23.032 E ns/openshift-machine-config-operator pod/machine-config-daemon-jng4p node/ip-10-0-165-242.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:22:30.827 E ns/openshift-machine-config-operator pod/machine-config-daemon-jng4p node/ip-10-0-165-242.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 29 10:23:04.692 E ns/openshift-cluster-node-tuning-operator pod/tuned-5q555 node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.704 E ns/openshift-monitoring pod/node-exporter-kmpxl node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.715 E ns/openshift-controller-manager pod/controller-manager-zdmxs node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.729 E ns/openshift-image-registry pod/node-ca-bkflm node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.741 E ns/openshift-sdn pod/ovs-tg4kx node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.754 E ns/openshift-sdn pod/sdn-controller-mv6mn node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.770 E ns/openshift-sdn pod/sdn-6kvhl node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.781 E ns/openshift-multus pod/multus-admission-controller-zwdx6 node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.793 E ns/openshift-multus pod/multus-2k5k6 node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.806 E ns/openshift-machine-config-operator pod/machine-config-daemon-gn2rz node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.825 E ns/openshift-dns pod/dns-default-lfll8 node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:04.837 E ns/openshift-machine-config-operator pod/machine-config-server-7cvgt node/ip-10-0-167-45.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jun 29 10:23:15.417 E ns/openshift-machine-config-operator pod/machine-config-daemon-gn2rz node/ip-10-0-167-45.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error):
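The repeated "invariant violation: pod may not transition Running->Pending" events come from the e2e monitor's pod-state checks: a single pod object should never be observed going from Running back to Pending, so when that appears during an upgrade it usually means a DaemonSet pod was deleted and recreated (here on the two rebooting workers) and the monitor attributed both observations to the same pod name. A rough sketch of how such a check can be expressed over watched pods — hypothetical helper names, not the origin monitor's actual code:

    package monitor

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // phaseTracker remembers the last observed phase per pod UID and flags
    // transitions that should be impossible for a single pod object.
    type phaseTracker struct {
    	last map[string]corev1.PodPhase // keyed by pod UID
    }

    func newPhaseTracker() *phaseTracker {
    	return &phaseTracker{last: map[string]corev1.PodPhase{}}
    }

    // observe returns a non-empty message when a pod moves from Running back
    // to Pending, mirroring the events listed above.
    func (t *phaseTracker) observe(pod *corev1.Pod) string {
    	uid := string(pod.UID)
    	prev, seen := t.last[uid]
    	t.last[uid] = pod.Status.Phase
    	if seen && prev == corev1.PodRunning && pod.Status.Phase == corev1.PodPending {
    		return fmt.Sprintf("ns/%s pod/%s invariant violation: pod may not transition Running->Pending",
    			pod.Namespace, pod.Name)
    	}
    	return ""
    }

Tracking by UID (as sketched) would distinguish a recreated pod from a genuinely regressing one; the clustering of these events immediately after each node's reboot is consistent with recreation noise rather than workload failure.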