Result: SUCCESS
Tests: 4 failed / 23 succeeded
Started: 2020-07-02 13:24
Elapsed: 2h36m
Work namespace: ci-op-yjvtzh17
Refs: release-4.4:b26c35c9, 390:0874e0cd
Pod: 52c15d21-bc67-11ea-83cd-0a580a8102ef
Repo: openshift/cluster-version-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (1h15m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 5s of 1h11m50s (0%):

Jul 02 15:10:11.909 E ns/e2e-k8s-service-lb-available-1282 svc/service-test Service stopped responding to GET requests on reused connections
Jul 02 15:10:12.905 - 4s    E ns/e2e-k8s-service-lb-available-1282 svc/service-test Service is not responding to GET requests on reused connections
Jul 02 15:10:17.870 I ns/e2e-k8s-service-lb-available-1282 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1593705052.xml
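
For reference, the "(0%)" in the summary is just the observed downtime as a share of the full polling window, truncated to a whole percent. A minimal Go sketch of that arithmetic (not the origin test code), using the 5s and 1h11m50s figures from this run:

```go
// Minimal sketch of the disruption-percentage arithmetic behind
// "unreachable during disruption for at least 5s of 1h11m50s (0%)".
package main

import (
	"fmt"
	"time"
)

func main() {
	downtime, _ := time.ParseDuration("5s")     // disruption observed for the service
	window, _ := time.ParseDuration("1h11m50s") // total observation window

	pct := float64(downtime) / float64(window) * 100
	// Printing int(pct) mirrors the whole-percent figure shown in the summary.
	fmt.Printf("unreachable for %s of %s (%.2f%%, reported as %d%%)\n",
		downtime, window, pct, int(pct))
}
```

Running it prints roughly 0.12%, which the report rounds down to the 0% shown above.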



Cluster upgrade Kubernetes APIs remain available (1h15m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 10s of 1h14m59s (0%):

Jul 02 15:09:53.796 E kube-apiserver Kube API started failing: Get https://api.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 3.128.175.74:6443: connect: connection refused
Jul 02 15:09:54.746 E kube-apiserver Kube API is not responding to GET requests
Jul 02 15:09:54.823 I kube-apiserver Kube API started responding to GET requests
Jul 02 15:21:19.746 E kube-apiserver Kube API started failing: Get https://api.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 02 15:21:20.746 - 7s    E kube-apiserver Kube API is not responding to GET requests
Jul 02 15:21:28.436 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1593705052.xml
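
The events above record transitions from a repeated GET probe against the external API endpoint (the /api/v1/namespaces/kube-system?timeout=15s URL in the log). Below is a minimal sketch of how such an availability poller could be structured; the endpoint, interval, iteration count, and TLS/credential handling are placeholders, not the actual openshift-tests monitor.

```go
// Sketch of an API availability poller that logs "started failing" /
// "started responding" transitions and accumulates downtime.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint; a real monitor targets the cluster's external API URL.
	apiURL := "https://example-api:6443/api/v1/namespaces/kube-system?timeout=15s"
	client := &http.Client{
		Timeout: 15 * time.Second,
		// A real monitor would trust the cluster CA and send credentials;
		// skipping verification just keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	var down bool
	var downSince time.Time
	var totalDown time.Duration

	for i := 0; i < 10; i++ { // a real monitor would poll for the whole upgrade
		resp, err := client.Get(apiURL)
		now := time.Now()
		if err != nil {
			if !down {
				down, downSince = true, now
				fmt.Printf("%s E Kube API started failing: %v\n", now.Format(time.StampMilli), err)
			}
		} else {
			resp.Body.Close()
			if down {
				down = false
				totalDown += now.Sub(downSince)
				fmt.Printf("%s I Kube API started responding to GET requests\n", now.Format(time.StampMilli))
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("cumulative downtime:", totalDown)
}
```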



Cluster upgrade OpenShift APIs remain available (1h15m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 6s of 1h14m59s (0%):

Jul 02 15:21:20.075 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 02 15:21:21.075 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 02 15:21:28.424 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1593705052.xml
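
This check probes an imagestream named "missing", so presumably any well-formed HTTP response, including the expected 404, shows the aggregated openshift-apiserver is reachable, while timeouts and connection errors (as in the events above) count as disruption. A minimal sketch of that classification, with a placeholder host; this is an assumption consistent with the logged events, not the openshift-tests implementation:

```go
// Sketch: classify a probe of the intentionally absent imagestream.
package main

import (
	"fmt"
	"net/http"
	"os"
)

// available reports whether a probe result counts as "API responding":
// any well-formed HTTP response does; only transport errors and timeouts do not.
func available(resp *http.Response, err error) bool {
	if err != nil {
		// connection refused, DNS failure, "context deadline exceeded", ...
		return false
	}
	resp.Body.Close()
	// 200, 403, or the expected 404 for "missing": the apiserver answered.
	return true
}

func main() {
	// Placeholder host; the path mirrors the probe URL in the events above.
	resp, err := http.Get("https://example-api:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s")
	if !available(resp, err) {
		fmt.Fprintln(os.Stderr, "OpenShift API is not responding to GET requests:", err)
		os.Exit(1)
	}
	fmt.Println("OpenShift API started responding to GET requests")
}
```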



openshift-tests Monitor cluster while tests execute (1h15m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
354 error level events were detected during this test run (the recurring "leaderelection lost" exits are sketched after the excerpt below):

Jul 02 14:35:50.216 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): 8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=15927&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:35:49.107820       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=22283&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:35:49.108856       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=22288&timeout=8m30s&timeoutSeconds=510&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:35:49.181020       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=20325&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:35:49.181274       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=21436&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:35:49.843256       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 14:35:49.843291       1 server.go:257] leaderelection lost\n
Jul 02 14:35:51.244 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): 14:35:50.653186       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-254-230_f9afa863-6636-4926-8838-d1c636f4a53f stopped leading\nI0702 14:35:50.653412       1 pv_protection_controller.go:93] Shutting down PV protection controller\nI0702 14:35:50.653438       1 replica_set.go:193] Shutting down replicaset controller\nI0702 14:35:50.653466       1 pv_controller_base.go:310] Shutting down persistent volume controller\nI0702 14:35:50.697444       1 pv_controller_base.go:421] claim worker queue shutting down\nI0702 14:35:50.653466       1 attach_detach_controller.go:378] Shutting down attach detach controller\nI0702 14:35:50.697510       1 pv_controller_base.go:364] volume worker queue shutting down\nI0702 14:35:50.653483       1 daemon_controller.go:281] Shutting down daemon sets controller\nI0702 14:35:50.653487       1 disruption.go:347] Shutting down disruption controller\nI0702 14:35:50.653491       1 job_controller.go:156] Shutting down job controller\nI0702 14:35:50.653496       1 replica_set.go:193] Shutting down replicationcontroller controller\nI0702 14:35:50.653500       1 pvc_protection_controller.go:112] Shutting down PVC protection controller\nI0702 14:35:50.653506       1 gc_controller.go:99] Shutting down GC controller\nI0702 14:35:50.653515       1 stateful_set.go:157] Shutting down statefulset controller\nI0702 14:35:50.653518       1 node_lifecycle_controller.go:601] Shutting down node controller\nI0702 14:35:50.653524       1 horizontal.go:167] Shutting down HPA controller\nI0702 14:35:50.653532       1 deployment_controller.go:164] Shutting down deployment controller\nI0702 14:35:50.697729       1 horizontal.go:202] horizontal pod autoscaler controller worker shutting down\nI0702 14:35:50.653534       1 endpoints_controller.go:198] Shutting down endpoint controller\nI0702 14:35:50.653545       1 clusterroleaggregation_controller.go:160] Shutting down ClusterRoleAggregator\n
Jul 02 14:36:15.322 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 14:36:16.353 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ct: connection refused\nE0702 14:36:15.234647       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=22213&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:15.241585       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=21253&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:15.244791       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=22244&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:15.246168       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=21584&timeout=8m55s&timeoutSeconds=535&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:15.247259       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=17329&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:36:16.125364       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 14:36:16.125498       1 leaderelection.go:67] leaderelection lost\nI0702 14:36:16.128176       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\n
Jul 02 14:36:18.398 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): /localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=15932&timeout=5m12s&timeoutSeconds=312&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:17.168163       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=22288&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:17.169681       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=20887&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:17.172974       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=22255&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:17.174825       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=21989&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:36:17.178734       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=22301&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:36:17.956764       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 14:36:17.956818       1 policy_controller.go:94] leaderelection lost\n
Jul 02 14:39:44.311 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-674fbd989c" has successfully progressed.
Jul 02 14:41:04.556 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6ffbb749fc-hlz57 node/ip-10-0-254-230.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): -apiserver-operator", UID:"f07d757e-f55c-46e5-8eb8-af35443b688c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nI0702 14:41:03.807186       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 14:41:03.807444       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 14:41:03.807609       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 14:41:03.807641       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0702 14:41:03.807660       1 state_controller.go:171] Shutting down EncryptionStateController\nI0702 14:41:03.807676       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0702 14:41:03.807692       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0702 14:41:03.807730       1 base_controller.go:73] Shutting down RevisionController ...\nI0702 14:41:03.807748       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0702 14:41:03.807764       1 prune_controller.go:232] Shutting down PruneController\nI0702 14:41:03.807784       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0702 14:41:03.807813       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0702 14:41:03.807822       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nI0702 14:41:03.807828       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0702 14:41:03.807849       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0702 14:41:03.807879       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nF0702 14:41:03.807899       1 builder.go:243] stopped\n
Jul 02 14:41:11.985 E ns/openshift-machine-api pod/machine-api-operator-8694cf57dd-q6pqz node/ip-10-0-187-100.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 02 14:41:33.124 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:41:32.430158       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:41:32.432732       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:41:32.435864       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 14:41:32.436127       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 14:41:32.436667       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:41:48.196 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:41:47.747141       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:41:47.749043       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:41:47.749354       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0702 14:41:47.749378       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 14:41:47.750279       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:41:57.451 E kube-apiserver Kube API started failing: Get https://api.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 02 14:42:58.584 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:42:57.500495       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:42:57.503473       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:42:57.506276       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 14:42:57.506726       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 14:42:57.508218       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:43:12.621 E ns/openshift-machine-api pod/machine-api-controllers-6458d7cddf-4wj8f node/ip-10-0-160-167.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 02 14:43:16.665 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:43:15.703175       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:43:15.704823       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:43:15.706377       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 14:43:15.706670       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 14:43:15.708006       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:44:27.163 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:44:25.987612       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:44:25.990714       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:44:25.993871       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 14:44:25.995012       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:44:36.953 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): 2022-07-02 14:16:17 +0000 UTC (now=2020-07-02 14:41:31.174529217 +0000 UTC))\nI0702 14:41:31.174750       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1593700891" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1593700890" (2020-07-02 13:41:30 +0000 UTC to 2021-07-02 13:41:30 +0000 UTC (now=2020-07-02 14:41:31.174739545 +0000 UTC))\nI0702 14:41:31.174793       1 secure_serving.go:178] Serving securely on [::]:10257\nI0702 14:41:31.175013       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0702 14:41:31.175142       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0702 14:44:35.928235       1 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager\nI0702 14:44:35.928706       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-controller-manager", UID:"0ea3f847-71eb-4641-8752-f4ac626aefd6", APIVersion:"v1", ResourceVersion:"26651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-187-100_1542b424-031a-4c7c-bc5b-417f4d15a227 became leader\nI0702 14:44:35.932217       1 controllermanager.go:218] using dynamic client builder\nW0702 14:44:35.980816       1 plugins.go:115] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release\nI0702 14:44:35.981036       1 aws.go:1214] Building AWS cloudprovider\nI0702 14:44:35.981102       1 aws.go:1180] Zone not specified in configuration file; querying AWS metadata service\nI0702 14:44:36.209430       1 tags.go:79] AWS cloud filtering on ClusterID: ci-op-yjvtzh17-7a679-mpwn6\nI0702 14:44:36.209461       1 aws.go:771] Setting up informers for Cloud\nF0702 14:44:36.213251       1 controllermanager.go:243] error creating recycler serviceaccount: Post https://localhost:6443/api/v1/namespaces/openshift-infra/serviceaccounts: dial tcp [::1]:6443: connect: connection refused\n
Jul 02 14:44:47.994 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): lector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=24779&timeout=8m2s&timeoutSeconds=482&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:44:47.141162       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25528&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:44:47.141767       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25528&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:44:47.143379       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=24137&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:44:47.144654       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22264&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:44:47.587011       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 14:44:47.587043       1 server.go:257] leaderelection lost\n
Jul 02 14:44:48.271 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 14:44:47.521826       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 14:44:47.523367       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 14:44:47.525184       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 14:44:47.525742       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 14:44:56.090 E ns/openshift-insights pod/insights-operator-676594f596-rwcd9 node/ip-10-0-160-167.us-east-2.compute.internal container=operator container exited with code 2 (Error): go:90] GET /metrics: (6.817288ms) 200 [Prometheus/2.15.2 10.131.0.18:55254]\nI0702 14:43:08.733407       1 httplog.go:90] GET /metrics: (1.754695ms) 200 [Prometheus/2.15.2 10.129.2.13:45170]\nI0702 14:43:29.727279       1 httplog.go:90] GET /metrics: (7.642008ms) 200 [Prometheus/2.15.2 10.131.0.18:55254]\nI0702 14:43:29.787968       1 status.go:298] The operator is healthy\nI0702 14:43:38.733658       1 httplog.go:90] GET /metrics: (1.883822ms) 200 [Prometheus/2.15.2 10.129.2.13:45170]\nI0702 14:43:59.726013       1 httplog.go:90] GET /metrics: (6.359762ms) 200 [Prometheus/2.15.2 10.131.0.18:55254]\nI0702 14:44:08.733470       1 httplog.go:90] GET /metrics: (1.815799ms) 200 [Prometheus/2.15.2 10.129.2.13:45170]\nI0702 14:44:29.731643       1 httplog.go:90] GET /metrics: (11.972446ms) 200 [Prometheus/2.15.2 10.131.0.18:55254]\nI0702 14:44:36.062165       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0702 14:44:36.091298       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0702 14:44:37.134841       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 25286 (26417)\nI0702 14:44:37.148897       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 25101 (26417)\nI0702 14:44:38.135066       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0702 14:44:38.149114       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0702 14:44:38.733503       1 httplog.go:90] GET /metrics: (1.810448ms) 200 [Prometheus/2.15.2 10.129.2.13:45170]\n
Jul 02 14:45:01.908 E ns/openshift-kube-storage-version-migrator pod/migrator-c4f758d6d-xjz78 node/ip-10-0-222-239.us-east-2.compute.internal container=migrator container exited with code 2 (Error): I0702 14:31:50.005542       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 02 14:45:12.147 E ns/openshift-monitoring pod/openshift-state-metrics-8c98b8857-7dwkp node/ip-10-0-151-70.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 02 14:45:13.209 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 14:45:14.944 E ns/openshift-monitoring pod/node-exporter-p2jdx node/ip-10-0-222-239.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T14:26:21Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:21Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 14:45:15.948 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-218.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 14:28:46 Watching directory: "/etc/alertmanager/config"\n
Jul 02 14:45:15.948 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-218.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 14:28:46 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:28:46 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:28:46 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:28:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 14:28:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:28:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:28:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0702 14:28:46.760815       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 14:28:46 http.go:107: HTTPS: listening on [::]:9095\n
Jul 02 14:45:17.273 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): ta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22264&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:16.240924       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=24779&timeout=6m40s&timeoutSeconds=400&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:16.241976       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=24779&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:16.243165       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=23591&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:16.244301       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=24783&timeout=6m59s&timeoutSeconds=419&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:16.245323       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=25510&timeout=7m10s&timeoutSeconds=430&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:45:16.968869       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 14:45:16.968976       1 policy_controller.go:94] leaderelection lost\n
Jul 02 14:45:20.305 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=26558&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:19.161671       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=26558&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:19.164031       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=25667&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:19.170966       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=25667&timeout=7m57s&timeoutSeconds=477&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:19.175470       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=26647&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:45:19.179582       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=26623&timeoutSeconds=500&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:45:19.866128       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 14:45:19.866181       1 leaderelection.go:67] leaderelection lost\n
Jul 02 14:45:22.457 E ns/openshift-monitoring pod/node-exporter-7mdxr node/ip-10-0-254-230.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T14:22:13Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 14:45:24.996 E ns/openshift-monitoring pod/prometheus-adapter-5f84fcb9fb-4zj2d node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 14:27:39.956931       1 adapter.go:93] successfully using in-cluster auth\nI0702 14:27:40.980226       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 14:45:25.994 E ns/openshift-monitoring pod/telemeter-client-8bb66898c-g4qxm node/ip-10-0-222-239.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 02 14:45:25.994 E ns/openshift-monitoring pod/telemeter-client-8bb66898c-g4qxm node/ip-10-0-222-239.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 02 14:45:35.245 E ns/openshift-monitoring pod/node-exporter-4hq2m node/ip-10-0-151-70.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T14:26:30Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:30Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 14:45:47.146 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T14:45:42.451Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T14:45:42.454Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T14:45:42.455Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T14:45:42.455Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T14:45:42.455Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T14:45:42.456Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T14:45:42.456Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T14:45:42.456Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T14:45:42.456Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T14:45:42.457Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 14:45:47.256 E ns/openshift-monitoring pod/node-exporter-dfg2x node/ip-10-0-160-167.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T14:22:13Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T14:22:13Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 14:45:49.155 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-222-239.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 14:29:23 Watching directory: "/etc/alertmanager/config"\n
Jul 02 14:45:49.155 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-222-239.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 14:29:23 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:29:23 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:29:23 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:29:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 14:29:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:29:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:29:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0702 14:29:23.875094       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 14:29:23 http.go:107: HTTPS: listening on [::]:9095\n
Jul 02 14:45:54.068 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 14:30:09 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 02 14:45:54.068 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/02 14:30:09 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 14:30:09 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:30:09 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:30:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 14:30:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:30:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 14:30:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 14:30:09 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 14:30:09 http.go:107: HTTPS: listening on [::]:9091\nI0702 14:30:09.434858       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 14:40:16 oauthproxy.go:774: basicauth: 10.130.0.13:41166 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:45:22 oauthproxy.go:774: basicauth: 10.129.2.20:55484 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 14:45:54.068 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T14:30:08.737139174Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T14:30:08.737248479Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T14:30:08.739855226Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T14:30:13.877929305Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 14:45:55.078 E ns/openshift-monitoring pod/node-exporter-96vlm node/ip-10-0-131-218.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T14:26:34Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T14:26:34Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 14:46:15.215 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T14:46:09.062Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T14:46:09.065Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T14:46:09.066Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T14:46:09.067Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T14:46:09.067Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T14:46:09.067Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T14:46:09.068Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T14:46:09.068Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 14:48:25.889 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): ory.go:135: Failed to watch *v1.VolumeAttachment: Get https://localhost:6443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=21092&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:25.202601       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/apiservers?allowWatchBookmarks=true&resourceVersion=22328&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:25.205460       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/subscriptions?allowWatchBookmarks=true&resourceVersion=26664&timeout=7m5s&timeoutSeconds=425&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:25.212437       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/machineconfigs?allowWatchBookmarks=true&resourceVersion=22325&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:25.220142       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubeschedulers?allowWatchBookmarks=true&resourceVersion=27771&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:48:25.223141       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0702 14:48:25.223254       1 controllermanager.go:291] leaderelection lost\nI0702 14:48:25.300648       1 resource_quota_controller.go:290] Shutting down resource quota controller\n
Jul 02 14:48:26.861 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): st:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28401&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:26.246403       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=21090&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:26.247594       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28118&timeout=8m30s&timeoutSeconds=510&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:26.248987       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28740&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:26.250215       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=28734&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:26.251221       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22264&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:48:26.355355       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 14:48:26.355384       1 server.go:257] leaderelection lost\n
Jul 02 14:48:51.952 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 14:48:55.992 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): or/configmaps?allowWatchBookmarks=true&resourceVersion=29006&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:55.311853       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=28079&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:55.314232       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=27196&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:55.317024       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=28079&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:55.318261       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=29022&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:55.319458       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=28439&timeoutSeconds=531&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:48:55.778726       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 14:48:55.778794       1 leaderelection.go:67] leaderelection lost\n
Jul 02 14:48:57.002 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): ailed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=28734&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:56.412439       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=21090&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:56.413303       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=21090&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:56.414420       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=29043&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:56.415955       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=22018&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 14:48:56.417322       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=28735&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 14:48:56.489533       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 14:48:56.489582       1 policy_controller.go:94] leaderelection lost\n
Jul 02 14:49:28.453 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-56f5k4hbp node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): -go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:16.972273       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:16.972282       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:19.669842       1 reflector.go:324] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: watch of *v1.Proxy ended with: too old resource version: 22305 (29062)\nI0702 14:48:19.712801       1 reflector.go:324] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 22304 (29063)\nI0702 14:48:19.793890       1 reflector.go:324] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: watch of *v1.ClusterOperator ended with: too old resource version: 28763 (29056)\nI0702 14:48:20.670264       1 reflector.go:185] Listing and watching *v1.Proxy from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:20.713106       1 reflector.go:185] Listing and watching *v1.ServiceCatalogControllerManager from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:20.794150       1 reflector.go:185] Listing and watching *v1.ClusterOperator from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 14:48:30.485944       1 httplog.go:90] GET /metrics: (8.83684ms) 200 [Prometheus/2.15.2 10.129.2.23:40036]\nI0702 14:48:31.474666       1 httplog.go:90] GET /metrics: (2.451407ms) 200 [Prometheus/2.15.2 10.131.0.27:37382]\nI0702 14:49:00.484882       1 httplog.go:90] GET /metrics: (7.678192ms) 200 [Prometheus/2.15.2 10.129.2.23:40036]\nI0702 14:49:01.478408       1 httplog.go:90] GET /metrics: (6.394523ms) 200 [Prometheus/2.15.2 10.131.0.27:37382]\nI0702 14:49:27.404970       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 14:49:27.405544       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 14:49:27.405791       1 builder.go:210] server exited\n
Jul 02 14:49:30.481 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-545fbd976-22rqs node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): found, nothing to delete.\nI0702 14:48:51.371100       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 14:48:52.628297       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 14:49:02.639286       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 14:49:07.751569       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 14:49:07.751599       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 14:49:07.753784       1 httplog.go:90] GET /metrics: (6.746969ms) 200 [Prometheus/2.15.2 10.129.2.23:41134]\nI0702 14:49:11.365950       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0702 14:49:11.375343       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 14:49:12.649622       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 14:49:19.369660       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 14:49:19.369690       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 14:49:19.371923       1 httplog.go:90] GET /metrics: (6.524966ms) 200 [Prometheus/2.15.2 10.131.0.27:57846]\nI0702 14:49:22.659564       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 14:49:29.363751       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 14:49:29.364351       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 14:49:29.364431       1 builder.go:209] server exited\n
Jul 02 14:49:30.761 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-98fd49898-wpmxn node/ip-10-0-222-239.us-east-2.compute.internal container=operator container exited with code 255 (Error): eam event decoding: unexpected EOF\nI0702 14:48:17.078816       1 operator.go:146] Starting syncing operator at 2020-07-02 14:48:17.078805274 +0000 UTC m=+1235.688054647\nI0702 14:48:17.826801       1 operator.go:148] Finished syncing operator at 747.985945ms\nI0702 14:48:17.826849       1 operator.go:146] Starting syncing operator at 2020-07-02 14:48:17.826845067 +0000 UTC m=+1236.436094119\nI0702 14:48:18.174061       1 operator.go:148] Finished syncing operator at 347.206303ms\nI0702 14:48:20.597566       1 operator.go:146] Starting syncing operator at 2020-07-02 14:48:20.59755629 +0000 UTC m=+1239.206805396\nI0702 14:48:20.627559       1 operator.go:148] Finished syncing operator at 29.995881ms\nI0702 14:49:24.494336       1 operator.go:146] Starting syncing operator at 2020-07-02 14:49:24.49432779 +0000 UTC m=+1303.103576906\nI0702 14:49:24.516225       1 operator.go:148] Finished syncing operator at 21.889528ms\nI0702 14:49:24.518896       1 operator.go:146] Starting syncing operator at 2020-07-02 14:49:24.518888071 +0000 UTC m=+1303.128137358\nI0702 14:49:24.546023       1 operator.go:148] Finished syncing operator at 27.127611ms\nI0702 14:49:24.546065       1 operator.go:146] Starting syncing operator at 2020-07-02 14:49:24.546061318 +0000 UTC m=+1303.155310377\nI0702 14:49:24.575188       1 operator.go:148] Finished syncing operator at 29.11815ms\nI0702 14:49:24.575236       1 operator.go:146] Starting syncing operator at 2020-07-02 14:49:24.575231246 +0000 UTC m=+1303.184480347\nI0702 14:49:24.906405       1 operator.go:148] Finished syncing operator at 331.159585ms\nI0702 14:49:29.780831       1 operator.go:146] Starting syncing operator at 2020-07-02 14:49:29.780816195 +0000 UTC m=+1308.390065416\nI0702 14:49:29.815697       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 14:49:29.815841       1 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/serving-cert-823990008/tls.crt::/tmp/serving-cert-823990008/tls.key\nF0702 14:49:29.815867       1 builder.go:210] server exited\n
Jul 02 14:50:55.915 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-c8b4bfcd9-ghfg2 node/ip-10-0-151-70.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Jul 02 14:51:16.436 E ns/openshift-marketplace pod/community-operators-78697cb9c4-p8bkv node/ip-10-0-222-239.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 14:52:26.844 E ns/openshift-sdn pod/sdn-ctzsp node/ip-10-0-254-230.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ndpoint 10.131.0.9:50051 for service "openshift-marketplace/community-operators:grpc"\nI0702 14:51:15.584832    2013 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:51:15.584860    2013 proxier.go:347] userspace syncProxyRules took 33.599524ms\nI0702 14:51:15.751678    2013 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:51:15.751723    2013 proxier.go:347] userspace syncProxyRules took 34.562687ms\nI0702 14:51:33.677448    2013 pod.go:539] CNI_DEL openshift-console/console-f79d54fd8-9vbxg\nI0702 14:51:45.913690    2013 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:51:45.913737    2013 proxier.go:347] userspace syncProxyRules took 32.635354ms\nI0702 14:52:16.190205    2013 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:16.190236    2013 proxier.go:347] userspace syncProxyRules took 61.248171ms\nI0702 14:52:24.091616    2013 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.18:6443 10.130.0.2:6443]\nI0702 14:52:24.091664    2013 roundrobin.go:217] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 14:52:24.091680    2013 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.18:8443 10.130.0.2:8443]\nI0702 14:52:24.091687    2013 roundrobin.go:217] Delete endpoint 10.129.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 14:52:24.273340    2013 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:24.273364    2013 proxier.go:347] userspace syncProxyRules took 31.278435ms\nI0702 14:52:26.497970    2013 ovs.go:169] Error executing ovs-ofctl: 2020-07-02T14:52:26Z|00001|vconn_stream|ERR|connection dropped mid-packet\novs-ofctl: OpenFlow receive failed (Protocol error)\nF0702 14:52:26.498011    2013 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: plugin is not setup\n
Jul 02 14:52:31.878 E ns/openshift-sdn pod/sdn-controller-27mjp node/ip-10-0-254-230.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0702 14:15:23.245025       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 02 14:52:55.106 E ns/openshift-multus pod/multus-6lx8j node/ip-10-0-131-218.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:52:59.096 E ns/openshift-sdn pod/sdn-tjhsr node/ip-10-0-131-218.us-east-2.compute.internal container=sdn container exited with code 255 (Error): cessing 0 service events\nI0702 14:51:15.676580    2237 proxier.go:347] userspace syncProxyRules took 28.084605ms\nI0702 14:51:45.811168    2237 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:51:45.811197    2237 proxier.go:347] userspace syncProxyRules took 32.651998ms\nI0702 14:52:15.961707    2237 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:15.961730    2237 proxier.go:347] userspace syncProxyRules took 28.249969ms\nI0702 14:52:24.090904    2237 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.18:6443 10.130.0.2:6443]\nI0702 14:52:24.090941    2237 roundrobin.go:217] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 14:52:24.090957    2237 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.18:8443 10.130.0.2:8443]\nI0702 14:52:24.090970    2237 roundrobin.go:217] Delete endpoint 10.129.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 14:52:24.225825    2237 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:24.225847    2237 proxier.go:347] userspace syncProxyRules took 27.306811ms\nI0702 14:52:54.363587    2237 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:54.363614    2237 proxier.go:347] userspace syncProxyRules took 30.09264ms\nI0702 14:52:54.389905    2237 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0702 14:52:54.389929    2237 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0702 14:52:55.895672    2237 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0702 14:52:58.969616    2237 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jul 02 14:53:42.160 E ns/openshift-multus pod/multus-admission-controller-t54wz node/ip-10-0-254-230.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 02 14:53:47.249 E ns/openshift-sdn pod/sdn-v67tm node/ip-10-0-151-70.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 3    2275 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:52:54.384460    2275 proxier.go:347] userspace syncProxyRules took 27.73301ms\nI0702 14:53:11.431305    2275 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.18:6443 10.129.0.67:6443 10.130.0.2:6443]\nI0702 14:53:11.431346    2275 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.18:8443 10.129.0.67:8443 10.130.0.2:8443]\nI0702 14:53:11.446607    2275 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.67:6443 10.130.0.2:6443]\nI0702 14:53:11.446641    2275 roundrobin.go:217] Delete endpoint 10.128.0.18:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 14:53:11.446660    2275 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.67:8443 10.130.0.2:8443]\nI0702 14:53:11.446672    2275 roundrobin.go:217] Delete endpoint 10.128.0.18:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 14:53:11.569734    2275 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:11.569759    2275 proxier.go:347] userspace syncProxyRules took 28.01085ms\nI0702 14:53:11.698407    2275 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:11.698434    2275 proxier.go:347] userspace syncProxyRules took 27.705722ms\nI0702 14:53:41.834415    2275 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:41.834438    2275 proxier.go:347] userspace syncProxyRules took 27.711388ms\nI0702 14:53:46.522716    2275 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 14:53:46.522768    2275 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 14:54:07.979 E ns/openshift-multus pod/multus-cls6j node/ip-10-0-151-70.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:54:09.879 E ns/openshift-sdn pod/sdn-dmb4x node/ip-10-0-222-239.us-east-2.compute.internal container=sdn container exited with code 255 (Error): Opening healthcheck "openshift-ingress/router-default" on port 32265\nI0702 14:53:52.848247  112658 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0702 14:53:52.848282  112658 cmd.go:173] openshift-sdn network plugin registering startup\nI0702 14:53:52.848422  112658 cmd.go:177] openshift-sdn network plugin ready\nI0702 14:53:56.220913  112658 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.80:6443 10.129.0.67:6443 10.130.0.2:6443]\nI0702 14:53:56.220969  112658 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.80:8443 10.129.0.67:8443 10.130.0.2:8443]\nI0702 14:53:56.239924  112658 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.80:6443 10.129.0.67:6443]\nI0702 14:53:56.240008  112658 roundrobin.go:217] Delete endpoint 10.130.0.2:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 14:53:56.240029  112658 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.80:8443 10.129.0.67:8443]\nI0702 14:53:56.240043  112658 roundrobin.go:217] Delete endpoint 10.130.0.2:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 14:53:56.359521  112658 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:56.359551  112658 proxier.go:347] userspace syncProxyRules took 28.655627ms\nI0702 14:53:56.514600  112658 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:56.514624  112658 proxier.go:347] userspace syncProxyRules took 28.778581ms\nI0702 14:54:09.370400  112658 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 14:54:09.370437  112658 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 14:54:29.732 E ns/openshift-sdn pod/sdn-56kwp node/ip-10-0-187-100.us-east-2.compute.internal container=sdn container exited with code 255 (Error):   103071 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:41.954459  103071 proxier.go:347] userspace syncProxyRules took 32.826324ms\nI0702 14:53:56.214988  103071 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.80:6443 10.129.0.67:6443 10.130.0.2:6443]\nI0702 14:53:56.215038  103071 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.80:8443 10.129.0.67:8443 10.130.0.2:8443]\nI0702 14:53:56.234133  103071 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.80:6443 10.129.0.67:6443]\nI0702 14:53:56.234241  103071 roundrobin.go:217] Delete endpoint 10.130.0.2:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 14:53:56.234300  103071 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.80:8443 10.129.0.67:8443]\nI0702 14:53:56.234346  103071 roundrobin.go:217] Delete endpoint 10.130.0.2:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 14:53:56.383358  103071 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:56.383443  103071 proxier.go:347] userspace syncProxyRules took 36.382828ms\nI0702 14:53:56.584946  103071 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:53:56.584987  103071 proxier.go:347] userspace syncProxyRules took 35.29946ms\nI0702 14:54:26.785952  103071 proxier.go:368] userspace proxy: processing 0 service events\nI0702 14:54:26.786050  103071 proxier.go:347] userspace syncProxyRules took 58.131748ms\nI0702 14:54:28.693521  103071 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 14:54:28.693644  103071 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 14:54:57.463 E ns/openshift-multus pod/multus-sbk6m node/ip-10-0-254-230.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:55:51.168 E ns/openshift-multus pod/multus-dk8wl node/ip-10-0-222-239.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:56:40.192 E ns/openshift-multus pod/multus-jlw84 node/ip-10-0-187-100.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:57:31.363 E ns/openshift-multus pod/multus-qzshs node/ip-10-0-160-167.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 14:58:09.224 E ns/openshift-machine-config-operator pod/machine-config-operator-8cf7dd664-94nfp node/ip-10-0-254-230.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): "", Name:"machine-config", UID:"f5d4a57e-eb38-47c2-8282-596ed3849eca", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-07-02-132442}]\nE0702 14:15:59.959384       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0702 14:15:59.959638       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0702 14:16:00.983394       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0702 14:16:04.926596       1 sync.go:61] [init mode] synced RenderConfig in 5.328760067s\nI0702 14:16:05.471026       1 sync.go:61] [init mode] synced MachineConfigPools in 543.835313ms\nI0702 14:16:37.680548       1 sync.go:61] [init mode] synced MachineConfigDaemon in 32.209481485s\nI0702 14:16:43.756187       1 sync.go:61] [init mode] synced MachineConfigController in 6.075590596s\nI0702 14:16:47.865197       1 sync.go:61] [init mode] synced MachineConfigServer in 4.108964766s\nI0702 14:17:22.880370       1 sync.go:61] [init mode] synced RequiredPools in 35.015126325s\nI0702 14:17:23.085402       1 sync.go:89] Initialization complete\nE0702 14:21:29.555189       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Jul 02 15:00:05.006 E ns/openshift-machine-config-operator pod/machine-config-daemon-g5bv7 node/ip-10-0-160-167.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:00:15.972 E ns/openshift-machine-config-operator pod/machine-config-daemon-vtstf node/ip-10-0-187-100.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:00:24.703 E ns/openshift-machine-config-operator pod/machine-config-daemon-ncnzj node/ip-10-0-254-230.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:00:29.733 E ns/openshift-machine-config-operator pod/machine-config-daemon-lmsmq node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:00:34.828 E ns/openshift-machine-config-operator pod/machine-config-daemon-2mxk2 node/ip-10-0-222-239.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:00:41.366 E ns/openshift-machine-config-operator pod/machine-config-daemon-vz2rq node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:02:28.513 E ns/openshift-machine-config-operator pod/machine-config-server-x7nzl node/ip-10-0-160-167.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 14:16:44.645929       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 14:16:44.646689       1 api.go:56] Launching server on :22624\nI0702 14:16:44.646758       1 api.go:56] Launching server on :22623\nI0702 14:22:47.631629       1 api.go:102] Pool worker requested by 10.0.247.108:7762\nI0702 14:22:48.012636       1 api.go:102] Pool worker requested by 10.0.247.108:38240\nI0702 14:22:51.477599       1 api.go:102] Pool worker requested by 10.0.247.108:6599\n
Jul 02 15:02:38.766 E ns/openshift-monitoring pod/prometheus-adapter-84f98d65c9-rfbrl node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 14:45:29.605167       1 adapter.go:93] successfully using in-cluster auth\nI0702 14:45:30.008450       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:02:39.483 E ns/openshift-console-operator pod/console-operator-56cb7f655c-78pvp node/ip-10-0-254-230.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): e FailedUpdate 2 replicas ready at version 0.0.1-2020-07-02-132900\nI0702 14:51:08.355206       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-02T14:51:08Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T14:51:08Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 14:51:08.365085       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"528390ad-31e0-4475-bbb9-f5fc0abb2107", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0702 15:02:38.121822       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:02:38.122417       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0702 15:02:38.123635       1 controller.go:70] Shutting down Console\nI0702 15:02:38.123762       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0702 15:02:38.123879       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0702 15:02:38.123945       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0702 15:02:38.124027       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:02:38.124087       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0702 15:02:38.124185       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0702 15:02:38.124246       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0702 15:02:38.124843       1 builder.go:243] stopped\n
Jul 02 15:02:39.742 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-218.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 14:45:29 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:02:39.742 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-218.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 14:45:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:45:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:45:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:45:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 14:45:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:45:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:45:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 14:45:31 http.go:107: HTTPS: listening on [::]:9095\nI0702 14:45:31.359354       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 02 15:02:42.501 E ns/openshift-machine-config-operator pod/machine-config-server-bv7nx node/ip-10-0-254-230.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 14:16:46.888476       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 14:16:46.889725       1 api.go:56] Launching server on :22624\nI0702 14:16:46.889841       1 api.go:56] Launching server on :22623\n
Jul 02 15:03:02.125 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:02:56.753Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:02:56.758Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:02:56.759Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:02:56.760Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:02:56.760Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:02:56.760Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:02:56.761Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:02:56.761Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:03:05.046 E ns/openshift-console pod/console-589b644dbd-5jvcw node/ip-10-0-254-230.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T14:51:06Z cmd/main: cookies are secure!\n2020-07-02T14:51:07Z cmd/main: Binding to [::]:8443...\n2020-07-02T14:51:07Z cmd/main: using TLS\n
Jul 02 15:04:59.340 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jul 02 15:05:21.205 E ns/openshift-monitoring pod/node-exporter-nk2qb node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.224 E ns/openshift-cluster-node-tuning-operator pod/tuned-rmxpc node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.240 E ns/openshift-sdn pod/ovs-qp6p8 node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.251 E ns/openshift-image-registry pod/node-ca-cfwq8 node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.265 E ns/openshift-multus pod/multus-tjxv2 node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.275 E ns/openshift-dns pod/dns-default-qvpdz node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.291 E ns/openshift-sdn pod/sdn-wbb7t node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:21.301 E ns/openshift-machine-config-operator pod/machine-config-daemon-z5flg node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.676 E ns/openshift-monitoring pod/node-exporter-c55dr node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.691 E ns/openshift-cluster-node-tuning-operator pod/tuned-fc4th node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.708 E ns/openshift-image-registry pod/node-ca-qhzrn node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.719 E ns/openshift-controller-manager pod/controller-manager-w6p8d node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.737 E ns/openshift-sdn pod/sdn-controller-2jkpv node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.752 E ns/openshift-sdn pod/sdn-bzkt8 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.764 E ns/openshift-sdn pod/ovs-7zssp node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.774 E ns/openshift-multus pod/multus-admission-controller-h8w8x node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.785 E ns/openshift-multus pod/multus-cg96z node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.795 E ns/openshift-dns pod/dns-default-x5k4c node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.806 E ns/openshift-machine-config-operator pod/machine-config-daemon-nvbt4 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:22.817 E ns/openshift-machine-config-operator pod/machine-config-server-6b6r9 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:05:29.520 E ns/openshift-machine-config-operator pod/machine-config-daemon-z5flg node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:05:30.957 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-254-230.us-east-2.compute.internal" not ready since 2020-07-02 15:05:22 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-254-230.us-east-2.compute.internal is unhealthy
Jul 02 15:05:32.671 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 02 15:05:34.046 E ns/openshift-machine-config-operator pod/machine-config-daemon-nvbt4 node/ip-10-0-254-230.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:05:41.703 E ns/openshift-monitoring pod/thanos-querier-65778d588-2zr5t node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 45:24 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:45:24 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:45:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 14:45:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:45:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 14:45:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 14:45:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 14:45:24 http.go:107: HTTPS: listening on [::]:9091\nI0702 14:45:24.587036       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 14:46:15 oauthproxy.go:774: basicauth: 10.129.0.44:54490 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:48:15 oauthproxy.go:774: basicauth: 10.129.0.44:57584 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:49:15 oauthproxy.go:774: basicauth: 10.129.0.44:58986 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:50:15 oauthproxy.go:774: basicauth: 10.129.0.44:60106 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:57:15 oauthproxy.go:774: basicauth: 10.129.0.44:36632 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:59:15 oauthproxy.go:774: basicauth: 10.129.0.44:37922 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:04:15 oauthproxy.go:774: basicauth: 10.129.0.44:60364 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:05:42.471 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7b586b4457-55vzk node/ip-10-0-160-167.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 2.compute.internal pods/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal container=\"kube-scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-02T14:44:21Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T14:17:58Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:16:06Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:05:36.978440       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"33ec8e5d-f2b6-45fc-ba68-288d28c8694e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-254-230.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal container=\"kube-scheduler\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-254-230.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal container=\"kube-scheduler-cert-syncer\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-254-230.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal container=\"kube-scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0702 15:05:41.222720       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:05:41.223201       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 15:05:41.223229       1 builder.go:209] server exited\n
Jul 02 15:05:42.706 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-70.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 14:45:47 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:05:42.706 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-70.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 14:45:47 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:45:47 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:45:47 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:45:47 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 14:45:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:45:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 14:45:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 14:45:47 http.go:107: HTTPS: listening on [::]:9095\nI0702 14:45:47.516768       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 02 15:05:47.239 E ns/openshift-service-ca-operator pod/service-ca-operator-6bd8cfb5d-kmb48 node/ip-10-0-160-167.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Jul 02 15:05:56.828 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:05:55.035Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:05:55.042Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:05:55.042Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:05:55.043Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:05:55.043Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:05:55.043Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:05:55.045Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:05:55.045Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:06:23.635 E ns/openshift-authentication pod/oauth-openshift-57995b8746-cjnpg node/ip-10-0-187-100.us-east-2.compute.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0702 15:06:23.054778       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0702 15:06:23.055195       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0702 15:06:23.058382       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 02 15:06:25.622 E ns/openshift-operator-lifecycle-manager pod/packageserver-6dcdfccf5f-72jnm node/ip-10-0-187-100.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-07-02T15:06:24Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jul 02 15:06:42.779 E ns/openshift-marketplace pod/redhat-operators-67f4bf4bf6-hbwbc node/ip-10-0-222-239.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 02 15:06:50.807 E ns/openshift-marketplace pod/certified-operators-85bb5f9cf4-t2kl8 node/ip-10-0-222-239.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 02 15:06:54.825 E ns/openshift-marketplace pod/community-operators-84d66d6d57-s65s4 node/ip-10-0-222-239.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 15:07:32.205 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Jul 02 15:08:16.475 E ns/openshift-monitoring pod/node-exporter-lgrlg node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.500 E ns/openshift-cluster-node-tuning-operator pod/tuned-wqh9l node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.517 E ns/openshift-image-registry pod/node-ca-xf22f node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.530 E ns/openshift-sdn pod/ovs-bbqw7 node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.542 E ns/openshift-sdn pod/sdn-qb2jg node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.557 E ns/openshift-multus pod/multus-czqn9 node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.572 E ns/openshift-dns pod/dns-default-hd5jk node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:16.589 E ns/openshift-machine-config-operator pod/machine-config-daemon-f2dg4 node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:26.877 E ns/openshift-machine-config-operator pod/machine-config-daemon-f2dg4 node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:08:35.378 E ns/openshift-cluster-node-tuning-operator pod/tuned-sdljk node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.403 E ns/openshift-monitoring pod/node-exporter-d85l8 node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.426 E ns/openshift-controller-manager pod/controller-manager-wvkzx node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.448 E ns/openshift-image-registry pod/node-ca-gxqcj node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.469 E ns/openshift-sdn pod/sdn-controller-jl2j9 node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.490 E ns/openshift-sdn pod/ovs-brbzb node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.510 E ns/openshift-sdn pod/sdn-fsbcv node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.527 E ns/openshift-multus pod/multus-admission-controller-mbjjg node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.543 E ns/openshift-multus pod/multus-8wcgx node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.558 E ns/openshift-dns pod/dns-default-mfw6r node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.571 E ns/openshift-machine-config-operator pod/machine-config-daemon-9kfsb node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:08:35.597 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-222-239.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 14:45:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 02 15:08:35.597 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/02 14:45:46 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 14:45:46 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 14:45:46 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 14:45:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 14:45:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 14:45:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 14:45:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 14:45:46 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 14:45:46.652203       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 14:45:46 http.go:107: HTTPS: listening on [::]:9091\n2020/07/02 14:49:52 oauthproxy.go:774: basicauth: 10.129.2.20:60926 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:54:22 oauthproxy.go:774: basicauth: 10.129.2.20:37804 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 14:58:52 oauthproxy.go:774: basicauth: 10.129.2.20:42212 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:02:43 oauthproxy.go:774: basicauth: 10.128.2.30:47754 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:04:1
Jul 02 15:08:35.597 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T14:45:44.715526104Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T14:45:44.715665461Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T14:45:44.717567218Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T14:45:49.846671017Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 15:08:35.608 E ns/openshift-machine-config-operator pod/machine-config-server-zv74k node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:09:04.221 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:09:02.178Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:09:02.184Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:09:02.196Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:09:02.197Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:09:02.197Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:09:02.197Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:09:02.199Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:09:02.199Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:09:24.588 E ns/openshift-authentication-operator pod/authentication-operator-74c447f79c-vdhcq node/ip-10-0-187-100.us-east-2.compute.internal container=operator container exited with code 255 (Error): rStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: dial tcp: i/o timeout"\nI0702 15:07:23.672272       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-02T14:28:07Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-02T15:07:23Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T14:34:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:16:04Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:07:23.684246       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"73db882a-3f67-4410-b170-4e01ab57f555", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp: i/o timeout" to "",Progressing changed from True to False ("")\nI0702 15:09:23.622660       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:09:23.623093       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0702 15:09:23.623128       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0702 15:09:23.623304       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0702 15:09:23.623324       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:09:23.623362       1 secure_serving.go:222] Stopped listening on [::]:8443\nF0702 15:09:23.623434       1 builder.go:210] server exited\n
Jul 02 15:09:26.712 E ns/openshift-cluster-machine-approver pod/machine-approver-5df75dd6c4-kp2k4 node/ip-10-0-187-100.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nI0702 14:46:30.872525       1 main.go:146] CSR csr-ph9vm added\nI0702 14:46:30.872619       1 main.go:149] CSR csr-ph9vm is already approved\nI0702 14:46:30.872676       1 main.go:146] CSR csr-47dd6 added\nI0702 14:46:30.874071       1 main.go:149] CSR csr-47dd6 is already approved\nI0702 14:46:30.874173       1 main.go:146] CSR csr-fd4kj added\nI0702 14:46:30.874272       1 main.go:149] CSR csr-fd4kj is already approved\nI0702 14:46:30.874373       1 main.go:146] CSR csr-hsksg added\nI0702 14:46:30.874447       1 main.go:149] CSR csr-hsksg is already approved\nI0702 14:46:30.874505       1 main.go:146] CSR csr-ncq2l added\nI0702 14:46:30.874552       1 main.go:149] CSR csr-ncq2l is already approved\nI0702 14:46:30.874600       1 main.go:146] CSR csr-nrrzc added\nI0702 14:46:30.874692       1 main.go:149] CSR csr-nrrzc is already approved\nI0702 14:46:30.874746       1 main.go:146] CSR csr-rj4rr added\nI0702 14:46:30.874783       1 main.go:149] CSR csr-rj4rr is already approved\nI0702 14:46:30.874837       1 main.go:146] CSR csr-swxc8 added\nI0702 14:46:30.874874       1 main.go:149] CSR csr-swxc8 is already approved\nI0702 14:46:30.874912       1 main.go:146] CSR csr-49dqb added\nI0702 14:46:30.874980       1 main.go:149] CSR csr-49dqb is already approved\nI0702 14:46:30.875025       1 main.go:146] CSR csr-8cj4s added\nI0702 14:46:30.875062       1 main.go:149] CSR csr-8cj4s is already approved\nI0702 14:46:30.875104       1 main.go:146] CSR csr-lxtzp added\nI0702 14:46:30.875141       1 main.go:149] CSR csr-lxtzp is already approved\nI0702 14:46:30.875179       1 main.go:146] CSR csr-mtvwf added\nI0702 14:46:30.875214       1 main.go:149] CSR csr-mtvwf is already approved\nW0702 15:03:10.868759       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 28475 (37824)\n
Jul 02 15:09:34.729 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-6c488d6dc6-tvwgc node/ip-10-0-187-100.us-east-2.compute.internal container=operator container exited with code 255 (Error):  1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:09:29.498427       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 15:09:29.500020       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:09:29.509259       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:09:29.664164       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:09:30.065229       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:09:31.024834       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:09:31.794766       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 15:09:31.820077       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 15:09:31.880146       1 httplog.go:90] GET /metrics: (148.230422ms) 200 [Prometheus/2.15.2 10.129.2.18:34652]\nI0702 15:09:33.265050       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:09:33.265398       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:09:33.265497       1 finalizer_controller.go:140] Shutting down FinalizerController\nI0702 15:09:33.265560       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-apiserver\nI0702 15:09:33.265621       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:09:33.265636       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0702 15:09:33.265720       1 workload_controller.go:254] Shutting down OpenShiftSvCatAPIServerOperator\nF0702 15:09:33.265515       1 builder.go:209] server exited\n
Jul 02 15:09:34.758 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7b586b4457-7ws2p node/ip-10-0-187-100.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): al container=\"kube-scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0702 15:09:32.310997       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"33ec8e5d-f2b6-45fc-ba68-288d28c8694e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-8-ip-10-0-187-100.us-east-2.compute.internal -n openshift-kube-scheduler because it was missing\nI0702 15:09:33.520043       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:09:33.521491       1 base_controller.go:74] Shutting down NodeController ...\nI0702 15:09:33.521574       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:09:33.521640       1 base_controller.go:74] Shutting down RevisionController ...\nI0702 15:09:33.521697       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0702 15:09:33.521779       1 base_controller.go:74] Shutting down  ...\nI0702 15:09:33.521875       1 base_controller.go:74] Shutting down PruneController ...\nI0702 15:09:33.521954       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0702 15:09:33.522018       1 target_config_reconciler.go:126] Shutting down TargetConfigReconciler\nI0702 15:09:33.522074       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0702 15:09:33.522127       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:09:33.522179       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0702 15:09:33.522231       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0702 15:09:33.522285       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0702 15:09:33.522337       1 base_controller.go:74] Shutting down InstallerController ...\nF0702 15:09:33.522719       1 builder.go:243] stopped\n
Jul 02 15:09:50.850 E ns/openshift-console pod/console-589b644dbd-8c9qb node/ip-10-0-187-100.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:02:43Z cmd/main: cookies are secure!\n2020-07-02T15:02:44Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:02:44Z cmd/main: using TLS\n
Jul 02 15:09:53.556 E kube-apiserver Kube API started failing: Get https://api.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Jul 02 15:09:59.213 E ns/openshift-operator-lifecycle-manager pod/packageserver-54dfb4b684-tq8jf node/ip-10-0-160-167.us-east-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-07-02T15:09:56Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jul 02 15:11:13.427 E ns/openshift-monitoring pod/node-exporter-xp69h node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.451 E ns/openshift-cluster-node-tuning-operator pod/tuned-d4st8 node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.466 E ns/openshift-image-registry pod/node-ca-krrrn node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.497 E ns/openshift-sdn pod/ovs-5wgx2 node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.520 E ns/openshift-multus pod/multus-hk7gw node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.536 E ns/openshift-dns pod/dns-default-bkf68 node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:13.551 E ns/openshift-machine-config-operator pod/machine-config-daemon-kgssd node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:11:23.828 E ns/openshift-machine-config-operator pod/machine-config-daemon-kgssd node/ip-10-0-222-239.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:11:35.770 E ns/openshift-marketplace pod/certified-operators-99cc8f8cb-ws99x node/ip-10-0-151-70.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 02 15:11:55.977 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): 44839&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:11:55.319850       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=42660&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:11:55.321020       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=45889&timeoutSeconds=455&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:11:55.322104       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=45173&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:11:55.323103       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=45898&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:11:55.369569       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=39124&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:11:55.376006       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:11:55.376030       1 server.go:257] leaderelection lost\n
Jul 02 15:12:08.034 E ns/openshift-monitoring pod/node-exporter-sxcwf node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.046 E ns/openshift-cluster-node-tuning-operator pod/tuned-8z265 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.064 E ns/openshift-controller-manager pod/controller-manager-tzgtk node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.100 E ns/openshift-image-registry pod/node-ca-wvs9x node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.115 E ns/openshift-sdn pod/sdn-controller-njz4w node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.144 E ns/openshift-multus pod/multus-admission-controller-vh97m node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.162 E ns/openshift-sdn pod/ovs-gjqq7 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.179 E ns/openshift-multus pod/multus-qw8nl node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.195 E ns/openshift-dns pod/dns-default-68bjw node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.212 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcjw8 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.225 E ns/openshift-machine-config-operator pod/machine-config-server-6jcpl node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:08.245 E ns/openshift-kube-apiserver pod/revision-pruner-7-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:12:21.081 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=45896&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:12:20.338775       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=44361&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:12:20.344282       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=44780&timeout=5m59s&timeoutSeconds=359&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:12:20.347797       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=44361&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:12:20.353505       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=44780&timeout=6m32s&timeoutSeconds=392&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:12:20.369701       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=44361&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:12:20.457046       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 15:12:20.457105       1 leaderelection.go:67] leaderelection lost\n
Jul 02 15:12:22.120 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:12:22.358 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcjw8 node/ip-10-0-187-100.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:16:36.672 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-66b8bd95f-6t4x5 node/ip-10-0-160-167.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7aa4cc28-5206-4134-83cc-615503f7a07d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-187-100.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-187-100.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-187-100.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal container=\"cluster-policy-controller\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0702 15:12:37.766679       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7aa4cc28-5206-4134-83cc-615503f7a07d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-187-100.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal container=\"cluster-policy-controller\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0702 15:16:35.651638       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:16:35.652053       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 15:16:35.652079       1 builder.go:209] server exited\n
Jul 02 15:16:39.688 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7b586b4457-8npcv node/ip-10-0-160-167.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): o:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0702 15:16:38.602572       1 base_controller.go:49] Shutting down worker of  controller ...\nI0702 15:16:38.602587       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0702 15:16:38.602602       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0702 15:16:38.602617       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0702 15:16:38.602632       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0702 15:16:38.602647       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0702 15:16:38.602801       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:16:38.603602       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0702 15:16:38.603680       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nF0702 15:16:38.603694       1 builder.go:243] stopped\nF0702 15:16:38.602239       1 builder.go:209] server exited\nI0702 15:16:38.606826       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0702 15:16:38.606905       1 base_controller.go:39] All  workers have been terminated\nI0702 15:16:38.606968       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0702 15:16:38.607020       1 base_controller.go:39] All InstallerController workers have been terminated\nI0702 15:16:38.607072       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0702 15:16:38.607125       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nI0702 15:16:38.607176       1 base_controller.go:39] All PruneController workers have been terminated\nI0702 15:16:38.607325       1 base_controller.go:39] All NodeController workers have been terminated\n
Jul 02 15:17:10.826 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): onds=372&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:09.553878       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=42954&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:09.557749       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42968&timeout=6m50s&timeoutSeconds=410&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:09.559533       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=46625&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:09.565374       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=44839&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:10.439666       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=43062&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:17:10.521544       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:17:10.521575       1 server.go:257] leaderelection lost\n
Jul 02 15:17:13.844 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): nnect: connection refused\nE0702 15:17:12.757524       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machine.openshift.io/v1beta1/machines?allowWatchBookmarks=true&resourceVersion=46218&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:12.758668       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/images?allowWatchBookmarks=true&resourceVersion=44177&timeout=6m11s&timeoutSeconds=371&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:12.760124       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=42955&timeout=5m21s&timeoutSeconds=321&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:12.761343       1 reflector.go:307] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://localhost:6443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=43202&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:17:12.762695       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/featuregates?allowWatchBookmarks=true&resourceVersion=44321&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:17:12.827454       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0702 15:17:12.827590       1 controllermanager.go:291] leaderelection lost\n
Jul 02 15:17:18.768 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:17:17.567330       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:17:17.568283       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:17:17.571474       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 15:17:17.572136       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:17:36.858 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:17:36.654207       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:17:36.655946       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:17:36.658242       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:17:36.658350       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:17:36.658879       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:19:03.171 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:19:01.728133       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:19:01.732600       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:19:01.732743       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:19:01.732928       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:19:01.733511       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:19:03.194 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-160-167.us-east-2.compute.internal node/ip-10-0-160-167.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:19:04.941 E clusteroperator/kube-apiserver changed Degraded to True: NodeInstaller_InstallerPodFailed: NodeInstallerDegraded: 1 nodes are failing on revision 8:\nNodeInstallerDegraded: 
Jul 02 15:19:24.918 E ns/openshift-machine-api pod/machine-api-controllers-6fd94c57c7-8r2rn node/ip-10-0-254-230.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 02 15:19:27.930 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-956d5cf6b-qdfbn node/ip-10-0-254-230.us-east-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): 4.817719       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0702 15:09:34.823077       1 secure_serving.go:178] Serving securely on 0.0.0.0:8443\nI0702 15:09:34.828438       1 dynamic_serving_content.go:129] Starting serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:09:34.829731       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nI0702 15:09:34.829232       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock...\nI0702 15:09:34.917928       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI0702 15:09:34.919723       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file \nI0702 15:10:36.130704       1 leaderelection.go:252] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock\nI0702 15:10:36.137979       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"52ac087c-001e-42ef-ac02-58107b65b157", APIVersion:"v1", ResourceVersion:"45235", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 85ca549b-621c-4527-8ee2-c35df31b1793 became leader\nI0702 15:10:36.141717       1 logging_controller.go:83] Starting LogLevelController\nI0702 15:10:36.144792       1 status_controller.go:199] Starting StatusSyncer-kube-storage-version-migrator\nI0702 15:10:36.144883       1 controller.go:109] Starting KubeStorageVersionMigratorOperator\nI0702 15:19:27.406731       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0702 15:19:27.406870       1 leaderelection.go:66] leaderelection lost\n
Jul 02 15:20:00.132 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:19:58.667338       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:19:58.670782       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:19:58.673992       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 15:19:58.674798       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:20:12.189 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:20:11.350884       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:20:11.351162       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:20:11.357275       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:20:11.357417       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:20:11.358215       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:20:50.711 E ns/openshift-insights pod/insights-operator-cdf7ccf9d-wvbds node/ip-10-0-160-167.us-east-2.compute.internal container=operator container exited with code 2 (Error): 2.15.2 10.128.2.22:46238]\nI0702 15:17:22.360438       1 httplog.go:90] GET /metrics: (17.372856ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:17:36.027815       1 httplog.go:90] GET /metrics: (8.955679ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:17:49.734825       1 status.go:298] The operator is healthy\nI0702 15:17:52.358434       1 httplog.go:90] GET /metrics: (16.300103ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:18:06.026984       1 httplog.go:90] GET /metrics: (9.222084ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:18:22.354285       1 httplog.go:90] GET /metrics: (12.105003ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:18:36.025321       1 httplog.go:90] GET /metrics: (7.620375ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:18:52.348383       1 httplog.go:90] GET /metrics: (6.221368ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:19:06.024824       1 httplog.go:90] GET /metrics: (7.169159ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:19:22.349233       1 httplog.go:90] GET /metrics: (7.168292ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:19:36.051280       1 httplog.go:90] GET /metrics: (31.283814ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:19:49.732074       1 status.go:298] The operator is healthy\nI0702 15:19:49.733572       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0702 15:19:49.738038       1 configobserver.go:90] Found cloud.openshift.com token\nI0702 15:19:49.740809       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0702 15:19:52.357614       1 httplog.go:90] GET /metrics: (15.34098ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:20:06.025851       1 httplog.go:90] GET /metrics: (7.917408ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\nI0702 15:20:22.348264       1 httplog.go:90] GET /metrics: (6.112115ms) 200 [Prometheus/2.15.2 10.129.2.18:38598]\nI0702 15:20:36.024745       1 httplog.go:90] GET /metrics: (6.847966ms) 200 [Prometheus/2.15.2 10.128.2.22:46238]\n
Jul 02 15:20:59.237 E ns/openshift-monitoring pod/kube-state-metrics-84c4647b7b-wrh7v node/ip-10-0-151-70.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 02 15:21:01.515 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-6c488d6dc6-nz9bl node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): found, nothing to delete.\nI0702 15:20:24.791146       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 15:20:25.624166       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:20:35.634707       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:20:41.431913       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 15:20:41.431941       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 15:20:41.435029       1 httplog.go:90] GET /metrics: (8.738009ms) 200 [Prometheus/2.15.2 10.129.2.18:56192]\nI0702 15:20:44.781782       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0702 15:20:44.790196       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 15:20:45.653140       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:20:53.995086       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 15:20:53.995158       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 15:20:53.998091       1 httplog.go:90] GET /metrics: (7.196713ms) 200 [Prometheus/2.15.2 10.128.2.22:34074]\nI0702 15:20:55.666693       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:21:00.594427       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:21:00.595583       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 15:21:00.596540       1 builder.go:209] server exited\n
Jul 02 15:21:02.279 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-70.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 15:09:02 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:21:02.279 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-70.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 15:09:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:09:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:09:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:09:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 15:09:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:09:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:09:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0702 15:09:02.791809       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:09:02 http.go:107: HTTPS: listening on [::]:9095\n
Jul 02 15:21:06.077 E ns/openshift-monitoring pod/node-exporter-d85l8 node/ip-10-0-160-167.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:08:41Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:41Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:21:10.531 E ns/openshift-service-ca-operator pod/service-ca-operator-6bd8cfb5d-qfl7v node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Jul 02 15:21:12.350 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-70.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 15:09:03 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 02 15:21:12.350 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/02 15:09:03 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:09:03 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:09:03 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:09:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:09:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:09:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:09:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:09:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 15:09:03.590869       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:09:03 http.go:107: HTTPS: listening on [::]:9091\n
Jul 02 15:21:12.350 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T15:09:02.572171765Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T15:09:02.572304436Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T15:09:02.57448085Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T15:09:07.75967045Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 15:21:14.583 E ns/openshift-service-ca pod/service-ca-57bd6d9bc7-zlxzz node/ip-10-0-254-230.us-east-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Jul 02 15:21:15.073 E ns/openshift-monitoring pod/telemeter-client-85f78744b8-s7sxr node/ip-10-0-131-218.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 02 15:21:15.073 E ns/openshift-monitoring pod/telemeter-client-85f78744b8-s7sxr node/ip-10-0-131-218.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 02 15:21:15.440 E ns/openshift-monitoring pod/thanos-querier-65778d588-tzzqc node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): cs"\n2020/07/02 15:08:36 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:08:36 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:08:36 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:08:36 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:08:36.736687       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:09:46 oauthproxy.go:774: basicauth: 10.130.0.18:33536 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:12:40 oauthproxy.go:774: basicauth: 10.130.0.18:45222 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:12:40 oauthproxy.go:774: basicauth: 10.130.0.18:45222 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:13:40 oauthproxy.go:774: basicauth: 10.130.0.18:46282 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:13:40 oauthproxy.go:774: basicauth: 10.130.0.18:46282 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:15:40 oauthproxy.go:774: basicauth: 10.130.0.18:47588 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:16:19 oauthproxy.go:774: basicauth: 10.129.0.11:58376 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:17:18 oauthproxy.go:774: basicauth: 10.129.0.11:34236 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:19:17 oauthproxy.go:774: basicauth: 10.129.0.11:39782 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:20:17 oauthproxy.go:774: basicauth: 10.129.0.11:40554 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:21:22.769 E ns/openshift-monitoring pod/node-exporter-xp69h node/ip-10-0-222-239.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:11:20Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:11:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:21:23.468 E ns/openshift-monitoring pod/prometheus-adapter-84f98d65c9-8x8rr node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:08:36.118761       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:08:37.872271       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:21:25.137 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-131-218.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 15:05:55 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:21:25.137 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-131-218.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 15:05:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:05:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:05:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:05:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 15:05:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:05:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:05:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:05:55 http.go:107: HTTPS: listening on [::]:9095\nI0702 15:05:55.982169       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 02 15:21:26.141 E ns/openshift-monitoring pod/prometheus-adapter-84f98d65c9-b8mmf node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:05:40.179005       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:05:40.936866       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:21:27.646 E ns/openshift-controller-manager pod/controller-manager-w6p8d node/ip-10-0-254-230.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): erver ("unable to decode an event from the watch stream: stream error: stream ID 131; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:12.840660       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 139; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:12.840913       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 125; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.827010       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 133; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.827167       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 181; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.827261       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 127; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.827359       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 185; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.827441       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 135; INTERNAL_ERROR") has prevented the request from succeeding\n
Jul 02 15:21:27.685 E ns/openshift-controller-manager pod/controller-manager-tzgtk node/ip-10-0-187-100.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): I0702 14:50:13.152090       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-109-g75548a0)\nI0702 14:50:13.153558       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yjvtzh17/stable@sha256:2ade819b54479625070665c6d987dbf1b446d10c7ad495bf6d52c70711d8000e"\nI0702 14:50:13.153581       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yjvtzh17/stable@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0702 14:50:13.153662       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0702 14:50:13.153747       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0702 15:12:24.542630       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-109-g75548a0)\nI0702 15:12:24.546705       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yjvtzh17/stable@sha256:2ade819b54479625070665c6d987dbf1b446d10c7ad495bf6d52c70711d8000e"\nI0702 15:12:24.546729       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yjvtzh17/stable@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0702 15:12:24.546953       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0702 15:12:24.547269       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Jul 02 15:21:29.675 E ns/openshift-monitoring pod/node-exporter-c55dr node/ip-10-0-254-230.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:05:34Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:05:34Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:21:30.138 E ns/openshift-monitoring pod/thanos-querier-65778d588-m4j66 node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:05:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:05:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:05:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:05:44 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 15:05:44.951249       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:05:44 http.go:107: HTTPS: listening on [::]:9091\n2020/07/02 15:06:15 oauthproxy.go:774: basicauth: 10.129.0.44:42016 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:09:15 oauthproxy.go:774: basicauth: 10.129.0.44:42366 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:10:45 oauthproxy.go:774: basicauth: 10.130.0.18:39998 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:11:40 oauthproxy.go:774: basicauth: 10.130.0.18:42814 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:11:40 oauthproxy.go:774: basicauth: 10.130.0.18:42814 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:14:40 oauthproxy.go:774: basicauth: 10.130.0.18:46928 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:14:40 oauthproxy.go:774: basicauth: 10.130.0.18:46928 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:18:18 oauthproxy.go:774: basicauth: 10.129.0.11:39156 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:21:17 oauthproxy.go:774: basicauth: 10.129.0.11:41306 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:21:30.660 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:21:27.745Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:21:27.752Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:21:27.752Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:21:27.753Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:21:27.753Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:21:27.753Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:21:27.755Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:21:27.755Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:21:35.169 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error):  certificates.\n2020/07/02 15:05:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:05:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:05:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:05:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:05:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:05:56 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:05:56.252992       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:10:04 oauthproxy.go:774: basicauth: 10.130.0.19:40172 Authorization header does not start with 'Basic', skipping basic authentication\nE0702 15:10:04.754503       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/02 15:10:04 oauthproxy.go:782: requestauth: 10.130.0.19:40172 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/02 15:10:37 oauthproxy.go:774: basicauth: 10.129.2.9:39284 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:15:23 oauthproxy.go:774: basicauth: 10.129.2.9:45050 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:19:53 oauthproxy.go:774: basicauth: 10.129.2.9:50640 Authorization header does not start with 'Basic', skipping basic authentication\n202
Jul 02 15:21:35.169 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 15:05:55 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 02 15:21:35.169 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T15:05:55.189301181Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T15:05:55.189425528Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T15:05:55.191320793Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T15:06:00.327691839Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 15:21:39.695 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error):  watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=46775&timeout=5m4s&timeoutSeconds=304&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:21:39.039956       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=53008&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:21:39.044626       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=46778&timeout=8m2s&timeoutSeconds=482&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:21:39.049880       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=46775&timeout=7m5s&timeoutSeconds=425&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:21:39.057542       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=52366&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:21:39.058650       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=46775&timeout=7m30s&timeoutSeconds=450&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:21:39.362806       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:21:39.362845       1 server.go:257] leaderelection lost\n
Jul 02 15:21:42.776 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-console-operator/console-operator is progressing ReplicaSetUpdated: ReplicaSet "console-operator-6bdbdc4df6" is progressing.
Jul 02 15:21:48.208 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:21:46.371Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:21:46.376Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:21:46.378Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:21:46.379Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:21:46.379Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:21:46.379Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:21:46.380Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:21:46.380Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:21:52.808 E ns/openshift-console-operator pod/console-operator-56cb7f655c-8fpfh node/ip-10-0-254-230.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): r LoggingSyncer \nI0702 15:10:46.114438       1 base_controller.go:45] Starting #1 worker of LoggingSyncer controller ...\nI0702 15:10:47.429423       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-02T14:51:08Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T15:10:47Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:10:47.441145       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"528390ad-31e0-4475-bbb9-f5fc0abb2107", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nW0702 15:18:42.831051       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 3; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:18:42.831154       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 89; INTERNAL_ERROR") has prevented the request from succeeding\nI0702 15:21:51.896587       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:21:51.897869       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0702 15:21:51.898002       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nF0702 15:21:51.898022       1 builder.go:209] server exited\n
Jul 02 15:22:04.850 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-254-230.us-east-2.compute.internal node/ip-10-0-254-230.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:22:20.680 E ns/openshift-marketplace pod/redhat-operators-7c845b9644-6f4vz node/ip-10-0-151-70.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 02 15:22:31.684 E ns/openshift-marketplace pod/community-operators-c4596cd45-r87l8 node/ip-10-0-151-70.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 15:22:39.213 E ns/openshift-console pod/console-589b644dbd-tzdzr node/ip-10-0-160-167.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:09:29Z cmd/main: cookies are secure!\n2020-07-02T15:09:35Z auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-07-02T15:09:45Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:09:45Z cmd/main: using TLS\n2020-07-02T15:10:42Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jul 02 15:22:46.000 E ns/openshift-console pod/console-589b644dbd-p65jl node/ip-10-0-254-230.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:05:49Z cmd/main: cookies are secure!\n2020-07-02T15:05:49Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:05:49Z cmd/main: using TLS\n2020-07-02T15:07:10Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-07-02T15:10:16Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jul 02 15:23:37.844 E ns/openshift-sdn pod/sdn-qb2jg node/ip-10-0-151-70.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 25 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:22:31.482759    1925 proxier.go:347] userspace syncProxyRules took 34.744104ms\nI0702 15:22:31.503153    1925 pod.go:539] CNI_DEL openshift-marketplace/certified-operators-5d97497c7d-mj6rm\nI0702 15:22:31.746617    1925 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.160.167:10259 10.0.187.100:10259 10.0.254.230:10259]\nI0702 15:22:31.897145    1925 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:22:31.897168    1925 proxier.go:347] userspace syncProxyRules took 28.810937ms\nI0702 15:22:50.325001    1925 pod.go:539] CNI_DEL openshift-ingress/router-default-5f698d6bbb-t77h5\nI0702 15:23:02.028256    1925 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:02.028280    1925 proxier.go:347] userspace syncProxyRules took 28.229947ms\nI0702 15:23:29.988366    1925 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.2:6443 10.130.0.2:6443]\nI0702 15:23:29.988432    1925 roundrobin.go:217] Delete endpoint 10.128.0.8:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:23:29.988455    1925 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.2:8443 10.130.0.2:8443]\nI0702 15:23:29.988469    1925 roundrobin.go:217] Delete endpoint 10.128.0.8:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:23:30.131212    1925 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:30.131232    1925 proxier.go:347] userspace syncProxyRules took 27.247907ms\nI0702 15:23:37.670281    1925 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:23:37.670329    1925 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:23:45.174 E ns/openshift-sdn pod/sdn-controller-njz4w node/ip-10-0-187-100.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): 0.425004       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"c1639a3d-4a50-4920-9789-b74b6e50cad3", ResourceVersion:"32500", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729296120, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-187-100\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-07-02T14:15:20Z\",\"renewTime\":\"2020-07-02T14:52:30Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-187-100 became leader'\nI0702 14:52:30.425088       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0702 14:52:30.430354       1 master.go:51] Initializing SDN master\nI0702 14:52:30.478409       1 network_controller.go:61] Started OpenShift Network Controller\nE0702 15:06:21.729272       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://api-int.ci-op-yjvtzh17-7a679.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=31060&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 10.0.147.123:6443: connect: connection refused\nI0702 15:12:12.938012       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 02 15:23:47.597 E ns/openshift-sdn pod/sdn-controller-jl2j9 node/ip-10-0-160-167.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0702 14:52:57.449872       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nI0702 15:08:40.344473       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 02 15:24:00.562 E ns/openshift-multus pod/multus-czqn9 node/ip-10-0-151-70.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:24:08.066 E ns/openshift-sdn pod/sdn-fsbcv node/ip-10-0-160-167.us-east-2.compute.internal container=sdn container exited with code 255 (Error): s\nI0702 15:22:31.906509    2411 proxier.go:347] userspace syncProxyRules took 34.943031ms\nI0702 15:22:38.676823    2411 pod.go:539] CNI_DEL openshift-console/console-589b644dbd-tzdzr\nI0702 15:23:02.061871    2411 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:02.061897    2411 proxier.go:347] userspace syncProxyRules took 40.939992ms\nI0702 15:23:29.988110    2411 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.2:6443 10.130.0.2:6443]\nI0702 15:23:29.988159    2411 roundrobin.go:217] Delete endpoint 10.128.0.8:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:23:29.988183    2411 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.2:8443 10.130.0.2:8443]\nI0702 15:23:29.988196    2411 roundrobin.go:217] Delete endpoint 10.128.0.8:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:23:30.152781    2411 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:30.152804    2411 proxier.go:347] userspace syncProxyRules took 34.214763ms\nI0702 15:23:38.375012    2411 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.160.167:6443 10.0.187.100:6443 10.0.254.230:6443]\nI0702 15:23:38.521889    2411 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:38.521919    2411 proxier.go:347] userspace syncProxyRules took 29.477043ms\nI0702 15:23:40.549289    2411 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.160.167:6443 10.0.187.100:6443 10.0.254.230:6443]\nI0702 15:23:40.683870    2411 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:23:40.683894    2411 proxier.go:347] userspace syncProxyRules took 28.396102ms\nF0702 15:24:05.390248    2411 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jul 02 15:24:32.198 E ns/openshift-sdn pod/sdn-9rhqq node/ip-10-0-222-239.us-east-2.compute.internal container=sdn container exited with code 255 (Error):  openshift-ingress/router-default:http" (:31712/tcp)\nI0702 15:24:05.530046   34919 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32265\nI0702 15:24:05.642036   34919 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0702 15:24:05.642067   34919 cmd.go:173] openshift-sdn network plugin registering startup\nI0702 15:24:05.642225   34919 cmd.go:177] openshift-sdn network plugin ready\nI0702 15:24:05.783567   34919 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:05.783592   34919 proxier.go:347] userspace syncProxyRules took 28.488704ms\nI0702 15:24:05.928426   34919 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:05.928449   34919 proxier.go:347] userspace syncProxyRules took 33.43281ms\nI0702 15:24:13.498261   34919 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.160.167:6443 10.0.254.230:6443]\nI0702 15:24:13.498293   34919 roundrobin.go:217] Delete endpoint 10.0.187.100:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0702 15:24:13.533010   34919 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.160.167:6443 10.0.254.230:6443]\nI0702 15:24:13.533050   34919 roundrobin.go:217] Delete endpoint 10.0.187.100:6443 for service "default/kubernetes:https"\nI0702 15:24:13.632585   34919 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:13.632607   34919 proxier.go:347] userspace syncProxyRules took 28.562003ms\nI0702 15:24:13.779189   34919 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:13.779214   34919 proxier.go:347] userspace syncProxyRules took 29.976904ms\nI0702 15:24:31.171228   34919 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:24:31.171267   34919 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:24:36.553 E ns/openshift-multus pod/multus-admission-controller-mbjjg node/ip-10-0-160-167.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 02 15:24:39.407 E ns/openshift-multus pod/multus-qw8nl node/ip-10-0-187-100.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:24:53.633 E ns/openshift-sdn pod/sdn-n6tnq node/ip-10-0-131-218.us-east-2.compute.internal container=sdn container exited with code 255 (Error): Opening healthcheck "openshift-ingress/router-default" on port 32265\nI0702 15:24:27.254176   65877 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0702 15:24:27.254205   65877 cmd.go:173] openshift-sdn network plugin registering startup\nI0702 15:24:27.254307   65877 cmd.go:177] openshift-sdn network plugin ready\nI0702 15:24:40.557105   65877 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.58:6443 10.129.0.2:6443 10.130.0.56:6443]\nI0702 15:24:40.557150   65877 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.58:8443 10.129.0.2:8443 10.130.0.56:8443]\nI0702 15:24:40.585630   65877 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.58:6443 10.130.0.56:6443]\nI0702 15:24:40.585668   65877 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:24:40.585687   65877 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.58:8443 10.130.0.56:8443]\nI0702 15:24:40.585699   65877 roundrobin.go:217] Delete endpoint 10.129.0.2:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:24:40.688319   65877 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:40.688340   65877 proxier.go:347] userspace syncProxyRules took 27.132727ms\nI0702 15:24:40.820673   65877 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:40.820695   65877 proxier.go:347] userspace syncProxyRules took 28.143013ms\nI0702 15:24:52.598681   65877 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:24:52.598718   65877 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:25:11.530 E ns/openshift-multus pod/multus-admission-controller-vh97m node/ip-10-0-187-100.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 02 15:25:13.671 E ns/openshift-sdn pod/sdn-pwxq8 node/ip-10-0-254-230.us-east-2.compute.internal container=sdn container exited with code 255 (Error):   63492 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:13.804094   63492 proxier.go:347] userspace syncProxyRules took 33.862104ms\nI0702 15:24:40.560017   63492 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.58:6443 10.129.0.2:6443 10.130.0.56:6443]\nI0702 15:24:40.560064   63492 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.58:8443 10.129.0.2:8443 10.130.0.56:8443]\nI0702 15:24:40.583489   63492 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.58:6443 10.130.0.56:6443]\nI0702 15:24:40.583675   63492 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:24:40.583751   63492 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.58:8443 10.130.0.56:8443]\nI0702 15:24:40.583888   63492 roundrobin.go:217] Delete endpoint 10.129.0.2:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:24:40.716672   63492 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:40.716698   63492 proxier.go:347] userspace syncProxyRules took 30.515261ms\nI0702 15:24:40.859799   63492 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:24:40.859824   63492 proxier.go:347] userspace syncProxyRules took 29.657753ms\nI0702 15:25:11.005304   63492 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:25:11.005338   63492 proxier.go:347] userspace syncProxyRules took 31.961422ms\nI0702 15:25:13.380733   63492 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:25:13.380786   63492 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:25:34.621 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): torageclasses?allowWatchBookmarks=true&resourceVersion=47417&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:33.662471       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=54220&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:33.665549       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=47409&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:33.669632       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=54826&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:33.673339       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=47413&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:33.674933       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=56114&timeoutSeconds=537&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:25:34.553250       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:25:34.553305       1 server.go:257] leaderelection lost\n
Jul 02 15:25:36.653 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): nnect: connection refused\nE0702 15:25:34.880216       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.EgressNetworkPolicy: Get https://localhost:6443/apis/network.openshift.io/v1/egressnetworkpolicies?allowWatchBookmarks=true&resourceVersion=48491&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:34.881503       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=52514&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:34.882244       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/oauths?allowWatchBookmarks=true&resourceVersion=47802&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:34.883619       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=47417&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:25:34.884682       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/whereabouts.cni.cncf.io/v1alpha1/ippools?allowWatchBookmarks=true&resourceVersion=50563&timeout=6m38s&timeoutSeconds=398&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:25:35.523039       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0702 15:25:35.523130       1 controllermanager.go:291] leaderelection lost\n
Jul 02 15:25:56.800 E ns/openshift-sdn pod/sdn-wssld node/ip-10-0-187-100.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ts for openshift-multus/multus-admission-controller:webhook to [10.128.0.58:6443 10.129.0.34:6443 10.130.0.56:6443]\nI0702 15:25:20.580097   40401 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.58:8443 10.129.0.34:8443 10.130.0.56:8443]\nI0702 15:25:20.741388   40401 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:25:20.741412   40401 proxier.go:347] userspace syncProxyRules took 30.410782ms\nI0702 15:25:46.433865   40401 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.160.167:10257 10.0.254.230:10257]\nI0702 15:25:46.433908   40401 roundrobin.go:217] Delete endpoint 10.0.187.100:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0702 15:25:46.600457   40401 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.160.167:10259 10.0.254.230:10259]\nI0702 15:25:46.600489   40401 roundrobin.go:217] Delete endpoint 10.0.187.100:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0702 15:25:46.609256   40401 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:25:46.609371   40401 proxier.go:347] userspace syncProxyRules took 35.908848ms\nI0702 15:25:46.787434   40401 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:25:46.787457   40401 proxier.go:347] userspace syncProxyRules took 36.814614ms\nI0702 15:25:53.186915   40401 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.160.167:10257 10.0.187.100:10257 10.0.254.230:10257]\nI0702 15:25:53.325637   40401 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:25:53.325662   40401 proxier.go:347] userspace syncProxyRules took 29.603113ms\nF0702 15:25:56.515674   40401 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jul 02 15:25:59.842 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:26:06.875 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-187-100.us-east-2.compute.internal node/ip-10-0-187-100.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): rue: dial tcp [::1]:6443: connect: connection refused\nE0702 15:26:05.740824       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=47413&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:26:05.754284       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=47408&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:26:05.755583       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=47609&timeout=5m49s&timeoutSeconds=349&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:26:05.757755       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=47413&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:26:05.758820       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:6443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=47409&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:26:06.128508       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 15:26:06.128557       1 policy_controller.go:94] leaderelection lost\nI0702 15:26:06.136280       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\n
Jul 02 15:26:23.908 E ns/openshift-multus pod/multus-cg96z node/ip-10-0-254-230.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:27:03.068 E ns/openshift-multus pod/multus-8wcgx node/ip-10-0-160-167.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:27:53.720 E ns/openshift-multus pod/multus-hk7gw node/ip-10-0-222-239.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:29:42.587 E ns/openshift-machine-config-operator pod/machine-config-operator-75846856db-w297f node/ip-10-0-254-230.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): pe v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-75846856db-w297f_cd567224-99cc-488f-a0e2-3926d0545f91 became leader'\nI0702 15:07:49.838611       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0702 15:07:50.666743       1 operator.go:264] Starting MachineConfigOperator\nE0702 15:09:53.569083       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?allowWatchBookmarks=true&resourceVersion=36668&timeout=6m30s&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:09:53.569162       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.ControllerConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=36634&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:09:53.571981       1 reflector.go:307] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Failed to watch *v1.Network: Get https://172.30.0.1:443/apis/config.openshift.io/v1/networks?allowWatchBookmarks=true&resourceVersion=29053&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0702 15:12:44.003734       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"f5d4a57e-eb38-47c2-8282-596ed3849eca", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [{operator 0.0.1-2020-07-02-132442}] to [{operator 0.0.1-2020-07-02-132900}]\n
Jul 02 15:31:38.235 E ns/openshift-machine-config-operator pod/machine-config-daemon-kgssd node/ip-10-0-222-239.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:31:45.933 E ns/openshift-machine-config-operator pod/machine-config-daemon-9kfsb node/ip-10-0-160-167.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:31:49.508 E ns/openshift-machine-config-operator pod/machine-config-daemon-f2dg4 node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:31:59.488 E ns/openshift-machine-config-operator pod/machine-config-daemon-z5flg node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:32:13.070 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcjw8 node/ip-10-0-187-100.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:32:17.078 E ns/openshift-machine-config-operator pod/machine-config-daemon-nvbt4 node/ip-10-0-254-230.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:32:28.099 E ns/openshift-machine-config-operator pod/machine-config-controller-84c5f7b679-c7sbz node/ip-10-0-160-167.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 5:12:00.766974       1 template_controller.go:182] Starting MachineConfigController-TemplateController\nI0702 15:12:00.766979       1 container_runtime_config_controller.go:189] Starting MachineConfigController-ContainerRuntimeConfigController\nI0702 15:12:00.843403       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:12:00.874901       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0702 15:12:05.690862       1 status.go:82] Pool worker: All nodes are updated with rendered-worker-5d2d32bad4b6243c3525f5122d52571a\nI0702 15:12:07.894895       1 node_controller.go:433] Pool master: node ip-10-0-187-100.us-east-2.compute.internal is now reporting unready: node ip-10-0-187-100.us-east-2.compute.internal is reporting NotReady=False\nI0702 15:12:17.506674       1 node_controller.go:433] Pool master: node ip-10-0-187-100.us-east-2.compute.internal is now reporting unready: node ip-10-0-187-100.us-east-2.compute.internal is reporting Unschedulable\nI0702 15:12:38.596242       1 node_controller.go:442] Pool master: node ip-10-0-187-100.us-east-2.compute.internal has completed update to rendered-master-f5f9502ce1f3daef95c6ac56657bf04b\nI0702 15:12:38.625623       1 node_controller.go:435] Pool master: node ip-10-0-187-100.us-east-2.compute.internal is now reporting ready\nI0702 15:12:43.596870       1 status.go:82] Pool master: All nodes are updated with rendered-master-f5f9502ce1f3daef95c6ac56657bf04b\nI0702 15:17:05.309179       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:17:05.364254       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0702 15:25:25.198115       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:25:25.253995       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Jul 02 15:34:25.446 E ns/openshift-machine-config-operator pod/machine-config-server-zv74k node/ip-10-0-160-167.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 15:02:41.504536       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:02:41.506125       1 api.go:56] Launching server on :22624\nI0702 15:02:41.506328       1 api.go:56] Launching server on :22623\nI0702 15:08:37.460392       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:08:37.481282       1 api.go:56] Launching server on :22624\nI0702 15:08:37.482007       1 api.go:56] Launching server on :22623\n
Jul 02 15:34:36.970 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-76f86c5c75-2pt88 node/ip-10-0-187-100.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error):  15:34:35.469614       1 base_controller.go:49] Shutting down worker of  controller ...\nI0702 15:34:35.473149       1 base_controller.go:39] All  workers have been terminated\nI0702 15:34:35.469637       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0702 15:34:35.473302       1 base_controller.go:39] All NodeController workers have been terminated\nI0702 15:34:35.469659       1 base_controller.go:49] Shutting down worker of  controller ...\nI0702 15:34:35.473320       1 base_controller.go:39] All  workers have been terminated\nI0702 15:34:35.469681       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0702 15:34:35.473337       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0702 15:34:35.469869       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0702 15:34:35.469884       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:34:35.469896       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0702 15:34:35.469908       1 state_controller.go:171] Shutting down EncryptionStateController\nI0702 15:34:35.469921       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0702 15:34:35.469934       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0702 15:34:35.469953       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0702 15:34:35.469969       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0702 15:34:35.469983       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0702 15:34:35.470005       1 controller.go:331] Shutting down BoundSATokenSignerController\nI0702 15:34:35.470021       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nF0702 15:34:35.470021       1 builder.go:243] stopped\n
Jul 02 15:34:40.709 E ns/openshift-machine-config-operator pod/machine-config-server-6b6r9 node/ip-10-0-254-230.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 15:02:50.319067       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:02:50.320443       1 api.go:56] Launching server on :22624\nI0702 15:02:50.320527       1 api.go:56] Launching server on :22623\nI0702 15:05:28.276639       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:05:28.289268       1 api.go:56] Launching server on :22623\nI0702 15:05:28.289168       1 api.go:56] Launching server on :22624\n
Jul 02 15:34:54.303 E ns/openshift-machine-config-operator pod/machine-config-server-6jcpl node/ip-10-0-187-100.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 15:03:05.192633       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:03:05.194340       1 api.go:56] Launching server on :22624\nI0702 15:03:05.194438       1 api.go:56] Launching server on :22623\nI0702 15:12:12.613717       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:12:12.616755       1 api.go:56] Launching server on :22624\nI0702 15:12:12.616935       1 api.go:56] Launching server on :22623\n
Jul 02 15:35:01.339 E ns/openshift-console pod/console-f79d54fd8-wxrvt node/ip-10-0-187-100.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:22:03Z cmd/main: cookies are secure!\n2020-07-02T15:22:03Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:22:03Z cmd/main: using TLS\n
Jul 02 15:35:37.002 E ns/openshift-marketplace pod/certified-operators-64c86bbc7c-754wm node/ip-10-0-222-239.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 02 15:37:12.090 E ns/openshift-cluster-node-tuning-operator pod/tuned-dmzsw node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.116 E ns/openshift-monitoring pod/node-exporter-jtbf5 node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.131 E ns/openshift-image-registry pod/node-ca-j64jp node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.144 E ns/openshift-sdn pod/ovs-2qq4n node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.158 E ns/openshift-sdn pod/sdn-xxg5b node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.174 E ns/openshift-multus pod/multus-dcqlf node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.191 E ns/openshift-dns pod/dns-default-dbnlh node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:12.206 E ns/openshift-machine-config-operator pod/machine-config-daemon-ds76c node/ip-10-0-151-70.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:19.712 E ns/openshift-machine-config-operator pod/machine-config-daemon-ds76c node/ip-10-0-151-70.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:37:20.718 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jul 02 15:37:22.118 E ns/openshift-cluster-node-tuning-operator pod/tuned-8gf66 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.139 E ns/openshift-controller-manager pod/controller-manager-6psx7 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.163 E ns/openshift-monitoring pod/node-exporter-t8qgr node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.193 E ns/openshift-image-registry pod/node-ca-ll6rs node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.208 E ns/openshift-sdn pod/sdn-controller-rf9jq node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.246 E ns/openshift-multus pod/multus-bm9tz node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.267 E ns/openshift-multus pod/multus-admission-controller-jkmnl node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.281 E ns/openshift-sdn pod/ovs-tnt4s node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.295 E ns/openshift-dns pod/dns-default-mwmxw node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.308 E ns/openshift-machine-config-operator pod/machine-config-daemon-w9x78 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.320 E ns/openshift-cluster-version pod/cluster-version-operator-674fbd989c-728wt node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:22.353 E ns/openshift-machine-config-operator pod/machine-config-server-szfb5 node/ip-10-0-187-100.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:37:29.502 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-98fd49898-r866p node/ip-10-0-222-239.us-east-2.compute.internal container=operator container exited with code 255 (Error): 25.646657       1 operator.go:148] Finished syncing operator at 51.838394ms\nI0702 15:25:25.646699       1 operator.go:146] Starting syncing operator at 2020-07-02 15:25:25.646694744 +0000 UTC m=+264.535330351\nI0702 15:25:25.770314       1 operator.go:148] Finished syncing operator at 123.61073ms\nI0702 15:25:26.808420       1 operator.go:146] Starting syncing operator at 2020-07-02 15:25:26.808409217 +0000 UTC m=+265.697045022\nI0702 15:25:26.829285       1 operator.go:148] Finished syncing operator at 20.868891ms\nI0702 15:34:38.459836       1 operator.go:146] Starting syncing operator at 2020-07-02 15:34:38.459824336 +0000 UTC m=+817.348460089\nI0702 15:34:38.496876       1 operator.go:148] Finished syncing operator at 37.040563ms\nI0702 15:34:38.504016       1 operator.go:146] Starting syncing operator at 2020-07-02 15:34:38.504007327 +0000 UTC m=+817.392643108\nI0702 15:34:38.547095       1 operator.go:148] Finished syncing operator at 43.079175ms\nI0702 15:34:47.125405       1 operator.go:146] Starting syncing operator at 2020-07-02 15:34:47.125395079 +0000 UTC m=+826.014030737\nI0702 15:34:47.151374       1 operator.go:148] Finished syncing operator at 25.971514ms\nI0702 15:35:10.229165       1 operator.go:146] Starting syncing operator at 2020-07-02 15:35:10.22915368 +0000 UTC m=+849.117789478\nI0702 15:35:10.283319       1 operator.go:148] Finished syncing operator at 54.070984ms\nI0702 15:37:27.813239       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:37:27.813677       1 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/serving-cert-084003992/tls.crt::/tmp/serving-cert-084003992/tls.key\nI0702 15:37:27.813946       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0702 15:37:27.813965       1 logging_controller.go:93] Shutting down LogLevelController\nI0702 15:37:27.813980       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nF0702 15:37:27.814063       1 builder.go:243] stopped\n
Jul 02 15:37:30.810 E ns/openshift-monitoring pod/thanos-querier-67fc899fc6-sj6xz node/ip-10-0-222-239.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2 15:21:22 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:21:22 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:21:22 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:21:22.686297       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:22:17 oauthproxy.go:774: basicauth: 10.129.0.11:42328 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:24:17 oauthproxy.go:774: basicauth: 10.129.0.11:44580 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:26:17 oauthproxy.go:774: basicauth: 10.129.0.11:34660 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:27:17 oauthproxy.go:774: basicauth: 10.129.0.11:36656 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:28:17 oauthproxy.go:774: basicauth: 10.129.0.11:37290 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:29:17 oauthproxy.go:774: basicauth: 10.129.0.11:38148 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:32:17 oauthproxy.go:774: basicauth: 10.129.0.11:40200 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:34:42 oauthproxy.go:774: basicauth: 10.130.0.61:57246 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:35:43 oauthproxy.go:774: basicauth: 10.130.0.61:38564 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:36:42 oauthproxy.go:774: basicauth: 10.130.0.61:49980 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:36:42 oauthproxy.go:774: basicauth: 10.130.0.61:49980 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:37:30.846 E ns/openshift-marketplace pod/redhat-marketplace-6bf779c8d8-9h99r node/ip-10-0-222-239.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Jul 02 15:37:30.914 E ns/openshift-monitoring pod/kube-state-metrics-6dc5ddd5dc-sw5zh node/ip-10-0-222-239.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 02 15:37:30.930 E ns/openshift-monitoring pod/openshift-state-metrics-8c98b8857-2kclc node/ip-10-0-222-239.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 02 15:37:30.961 E ns/openshift-marketplace pod/redhat-operators-969d9d6c-67gzl node/ip-10-0-222-239.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 02 15:37:31.949 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 02 15:37:32.124 E ns/openshift-monitoring pod/prometheus-adapter-5f84fcb9fb-pjfrd node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:21:22.607434       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:21:23.361994       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:37:32.166 E ns/openshift-monitoring pod/telemeter-client-8bb66898c-tn6tj node/ip-10-0-222-239.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 02 15:37:32.166 E ns/openshift-monitoring pod/telemeter-client-8bb66898c-tn6tj node/ip-10-0-222-239.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 02 15:37:32.462 E ns/openshift-machine-config-operator pod/machine-config-daemon-w9x78 node/ip-10-0-187-100.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:38:02.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-70.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:37:57.939Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:37:57.946Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:37:57.946Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:37:57.947Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:37:57.947Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:37:57.947Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:37:57.948Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:37:57.948Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:40:18.857 E ns/openshift-monitoring pod/node-exporter-rxdgz node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.881 E ns/openshift-cluster-node-tuning-operator pod/tuned-cpbvr node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.892 E ns/openshift-image-registry pod/node-ca-hzt9n node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.915 E ns/openshift-sdn pod/ovs-fd47x node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.928 E ns/openshift-multus pod/multus-26btp node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.941 E ns/openshift-dns pod/dns-default-mkkkc node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:18.953 E ns/openshift-machine-config-operator pod/machine-config-daemon-fd8bt node/ip-10-0-222-239.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:40:26.134 E ns/openshift-machine-config-operator pod/machine-config-daemon-fd8bt node/ip-10-0-222-239.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:40:36.606 E ns/openshift-monitoring pod/thanos-querier-67fc899fc6-wgz72 node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/02 15:34:37 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:34:37 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:34:37 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:34:37 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:34:37 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:34:37 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:34:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:34:37 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 15:34:37.107728       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:34:37 http.go:107: HTTPS: listening on [::]:9091\n2020/07/02 15:37:43 oauthproxy.go:774: basicauth: 10.130.0.61:55840 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:37:43 oauthproxy.go:774: basicauth: 10.130.0.61:55840 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:38:47 oauthproxy.go:774: basicauth: 10.130.0.61:56968 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:38:47 oauthproxy.go:774: basicauth: 10.130.0.61:56968 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:39:42 oauthproxy.go:774: basicauth: 10.130.0.61:57694 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:39:42 oauthproxy.go:774: basicauth: 10.130.0.61:57694 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:40:36.658 E ns/openshift-monitoring pod/prometheus-adapter-5f84fcb9fb-jnrgz node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:34:39.236273       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:34:40.264211       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:40:36.687 E ns/openshift-monitoring pod/grafana-7f9d7d6c5d-4znb5 node/ip-10-0-131-218.us-east-2.compute.internal container=grafana container exited with code 1 (Error): 
Jul 02 15:40:36.687 E ns/openshift-monitoring pod/grafana-7f9d7d6c5d-4znb5 node/ip-10-0-131-218.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Jul 02 15:40:36.714 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 15:21:47 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 02 15:40:36.714 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/02 15:21:47 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:21:47 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:21:47 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:21:47 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:21:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:21:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:21:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:21:47 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 15:21:47.524337       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:21:47 http.go:107: HTTPS: listening on [::]:9091\n2020/07/02 15:37:34 oauthproxy.go:774: basicauth: 10.128.2.18:46148 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:40:36.714 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-218.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T15:21:46.654973538Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T15:21:46.655111215Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T15:21:46.659084398Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T15:21:51.793233587Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 15:40:46.265 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-222-239.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:40:43.826Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:40:43.829Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:40:43.830Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:40:43.831Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:40:43.831Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:40:43.831Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:40:43.831Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:40:43.832Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:42:42.702 E ns/openshift-console-operator pod/console-operator-6bdbdc4df6-mcsmz node/ip-10-0-160-167.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): ersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-07-02-132442")\nE0702 15:34:36.746031       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-07-02-132442\nW0702 15:34:37.988960       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1075; INTERNAL_ERROR") has prevented the request from succeeding\nE0702 15:34:38.547717       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-07-02-132442\nE0702 15:34:40.507474       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-07-02-132442\nI0702 15:34:46.731516       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-02T15:22:19Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T15:34:46Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:22:40Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:34:46.746231       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"528390ad-31e0-4475-bbb9-f5fc0abb2107", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nI0702 15:42:39.865307       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nF0702 15:42:39.867386       1 builder.go:209] server exited\n
Jul 02 15:42:43.767 E ns/openshift-insights pod/insights-operator-676594f596-bkl47 node/ip-10-0-160-167.us-east-2.compute.internal container=operator container exited with code 2 (Error): theus/2.15.2 10.128.2.23:51040]\nI0702 15:40:54.275456       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:40:59.659597       1 status.go:298] The operator is healthy\nI0702 15:40:59.676685       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0702 15:40:59.679338       1 configobserver.go:90] Found cloud.openshift.com token\nI0702 15:40:59.679364       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0702 15:41:08.187047       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0702 15:41:09.275765       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:41:11.748190       1 httplog.go:90] GET /metrics: (6.228411ms) 200 [Prometheus/2.15.2 10.131.0.14:58500]\nI0702 15:41:12.319932       1 httplog.go:90] GET /metrics: (1.81706ms) 200 [Prometheus/2.15.2 10.128.2.23:51040]\nI0702 15:41:24.276224       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:41:39.276558       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:41:41.739599       1 httplog.go:90] GET /metrics: (6.373975ms) 200 [Prometheus/2.15.2 10.131.0.14:58500]\nI0702 15:41:42.319928       1 httplog.go:90] GET /metrics: (1.850894ms) 200 [Prometheus/2.15.2 10.128.2.23:51040]\nI0702 15:41:54.276878       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:42:09.277496       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:42:11.739377       1 httplog.go:90] GET /metrics: (6.140514ms) 200 [Prometheus/2.15.2 10.131.0.14:58500]\nI0702 15:42:12.320275       1 httplog.go:90] GET /metrics: (2.001887ms) 200 [Prometheus/2.15.2 10.128.2.23:51040]\nI0702 15:42:24.278011       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\nI0702 15:42:39.278659       1 insightsuploader.go:117] Nothing to report since 2020-07-02T15:23:39Z\n
Jul 02 15:42:45.444 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6ffbb749fc-9lng9 node/ip-10-0-160-167.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): s are unavailable")\nI0702 15:42:41.397631       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:42:41.399086       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0702 15:42:41.399120       1 state_controller.go:171] Shutting down EncryptionStateController\nI0702 15:42:41.399136       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0702 15:42:41.399153       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:42:41.399168       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0702 15:42:41.399183       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0702 15:42:41.399203       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0702 15:42:41.399232       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:42:41.399247       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0702 15:42:41.399264       1 base_controller.go:73] Shutting down RevisionController ...\nI0702 15:42:41.399278       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0702 15:42:41.399294       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0702 15:42:41.399323       1 base_controller.go:73] Shutting down  ...\nI0702 15:42:41.399337       1 prune_controller.go:232] Shutting down PruneController\nI0702 15:42:41.399352       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0702 15:42:41.399382       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nI0702 15:42:41.399644       1 workload_controller.go:177] Shutting down OpenShiftAPIServerOperator\nI0702 15:42:41.399809       1 base_controller.go:48] Shutting down worker of RevisionController controller ...\nI0702 15:42:41.399832       1 base_controller.go:38] All RevisionController workers have been terminated\nF0702 15:42:41.400036       1 builder.go:243] stopped\n
Jul 02 15:42:46.748 E ns/openshift-machine-api pod/machine-api-operator-8694cf57dd-stkh2 node/ip-10-0-160-167.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 02 15:42:46.778 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-785b774cf8-45bcw node/ip-10-0-160-167.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): RevisionController controller ...\nI0702 15:42:43.571644       1 base_controller.go:39] All RevisionController workers have been terminated\nI0702 15:42:43.565451       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0702 15:42:43.594908       1 base_controller.go:39] All NodeController workers have been terminated\nI0702 15:42:43.565466       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0702 15:42:43.595001       1 base_controller.go:39] All PruneController workers have been terminated\nI0702 15:42:43.565482       1 base_controller.go:49] Shutting down worker of  controller ...\nI0702 15:42:43.595068       1 base_controller.go:39] All  workers have been terminated\nI0702 15:42:43.565497       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0702 15:42:43.595160       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0702 15:42:43.565516       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nF0702 15:42:43.565630       1 builder.go:243] stopped\nI0702 15:42:43.565952       1 targetconfigcontroller.go:644] Shutting down TargetConfigController\nF0702 15:42:43.566210       1 builder.go:209] server exited\nI0702 15:42:43.566395       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:42:43.566414       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0702 15:42:43.566429       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0702 15:42:43.566878       1 secure_serving.go:222] Stopped listening on [::]:8443\nF0702 15:42:43.573264       1 leaderelection.go:67] leaderelection lost\nI0702 15:42:43.612459       1 base_controller.go:39] All LoggingSyncer workers have been terminated\n
Jul 02 15:43:13.142 E ns/openshift-operator-lifecycle-manager pod/packageserver-7b5d9cf86-cms2c node/ip-10-0-187-100.us-east-2.compute.internal container=packageserver container exited with code -1 (Error): 
Jul 02 15:43:16.930 E ns/openshift-monitoring pod/node-exporter-7pskp node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:16.931 E ns/openshift-cluster-node-tuning-operator pod/tuned-22njg node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:16.932 E ns/openshift-image-registry pod/node-ca-d2cn4 node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:16.990 E ns/openshift-sdn pod/ovs-hqlfx node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:17.087 E ns/openshift-multus pod/multus-vtntt node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:17.183 E ns/openshift-dns pod/dns-default-5sq9g node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:17.225 E ns/openshift-machine-config-operator pod/machine-config-daemon-64vxc node/ip-10-0-131-218.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:43:24.970 E ns/openshift-machine-config-operator pod/machine-config-daemon-64vxc node/ip-10-0-131-218.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:43:29.190 E ns/openshift-marketplace pod/redhat-operators-969d9d6c-6zt4s node/ip-10-0-151-70.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 02 15:45:33.734 E ns/openshift-cluster-node-tuning-operator pod/tuned-zpzc2 node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.767 E ns/openshift-monitoring pod/node-exporter-rdq4t node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.796 E ns/openshift-image-registry pod/node-ca-bdh7d node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.818 E ns/openshift-controller-manager pod/controller-manager-xt5x8 node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.839 E ns/openshift-sdn pod/sdn-controller-nmh99 node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.878 E ns/openshift-sdn pod/ovs-qt2kw node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.905 E ns/openshift-sdn pod/sdn-bv4tm node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.940 E ns/openshift-multus pod/multus-admission-controller-h6prv node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:33.960 E ns/openshift-multus pod/multus-8m7pv node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:34.012 E ns/openshift-dns pod/dns-default-w272n node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:34.031 E ns/openshift-machine-config-operator pod/machine-config-daemon-8rhrq node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:34.058 E ns/openshift-machine-config-operator pod/machine-config-server-8kmns node/ip-10-0-160-167.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:45:42.066 E ns/openshift-machine-config-operator pod/machine-config-daemon-8rhrq node/ip-10-0-160-167.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:45:53.314 E ns/openshift-cluster-machine-approver pod/machine-approver-6bb66997f8-vlvzs node/ip-10-0-254-230.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0702 15:23:25.124669       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0702 15:23:26.125512       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0702 15:23:27.126239       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0702 15:23:28.127039       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0702 15:23:29.127923       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nW0702 15:43:17.486090       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 54938 (66437)\n
Jul 02 15:45:53.880 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-56f5gwj98 node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): nager ended with: too old resource version: 51623 (66442)\nI0702 15:43:18.123814       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 15:43:18.124176       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 15:43:18.130467       1 reflector.go:185] Listing and watching *v1.Namespace from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 15:43:19.121013       1 reflector.go:185] Listing and watching *v1.ServiceCatalogControllerManager from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0702 15:43:34.824250       1 httplog.go:90] GET /metrics: (8.161552ms) 200 [Prometheus/2.15.2 10.131.0.14:42852]\nI0702 15:43:35.827274       1 httplog.go:90] GET /metrics: (1.971824ms) 200 [Prometheus/2.15.2 10.128.2.23:50516]\nI0702 15:44:04.824134       1 httplog.go:90] GET /metrics: (7.750059ms) 200 [Prometheus/2.15.2 10.131.0.14:42852]\nI0702 15:44:05.827455       1 httplog.go:90] GET /metrics: (2.093009ms) 200 [Prometheus/2.15.2 10.128.2.23:50516]\nI0702 15:44:34.824093       1 httplog.go:90] GET /metrics: (7.601635ms) 200 [Prometheus/2.15.2 10.131.0.14:42852]\nI0702 15:44:35.827913       1 httplog.go:90] GET /metrics: (2.367373ms) 200 [Prometheus/2.15.2 10.128.2.23:50516]\nI0702 15:45:04.824434       1 httplog.go:90] GET /metrics: (7.629831ms) 200 [Prometheus/2.15.2 10.131.0.14:42852]\nI0702 15:45:05.827089       1 httplog.go:90] GET /metrics: (1.866125ms) 200 [Prometheus/2.15.2 10.128.2.23:50516]\nI0702 15:45:34.825058       1 httplog.go:90] GET /metrics: (8.256396ms) 200 [Prometheus/2.15.2 10.131.0.14:42852]\nI0702 15:45:35.827217       1 httplog.go:90] GET /metrics: (1.838318ms) 200 [Prometheus/2.15.2 10.128.2.23:50516]\nI0702 15:45:51.832406       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:45:51.832847       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0702 15:45:51.832872       1 builder.go:210] server exited\n
Jul 02 15:45:55.205 E ns/openshift-monitoring pod/thanos-querier-67fc899fc6-5wvl2 node/ip-10-0-254-230.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/02 15:37:35 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:37:35 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:37:35 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:37:35 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:37:35 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:37:35 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:37:35 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:37:35 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:37:35 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:37:35.512029       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:40:42 oauthproxy.go:774: basicauth: 10.130.0.61:58738 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:42:53 oauthproxy.go:774: basicauth: 10.129.0.13:34192 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:43:52 oauthproxy.go:774: basicauth: 10.129.0.13:39054 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:44:52 oauthproxy.go:774: basicauth: 10.129.0.13:42696 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:45:57.584 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-545fbd976-s88xp node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): erver-operator-lock\nI0702 15:45:22.727943       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0702 15:45:22.744354       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 15:45:24.300252       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:45:34.311738       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:45:35.179854       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 15:45:35.179889       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 15:45:35.182178       1 httplog.go:90] GET /metrics: (8.190961ms) 200 [Prometheus/2.15.2 10.128.2.23:60350]\nI0702 15:45:42.206886       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0702 15:45:42.206925       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0702 15:45:42.209118       1 httplog.go:90] GET /metrics: (2.432307ms) 200 [Prometheus/2.15.2 10.131.0.14:53514]\nI0702 15:45:42.705925       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0702 15:45:42.724193       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0702 15:45:44.322867       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:45:54.354231       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0702 15:45:55.146281       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0702 15:45:55.179163       1 builder.go:209] server exited\n
Jul 02 15:45:57.617 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6dbd4cf8d7-9ngrq node/ip-10-0-254-230.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-6dbd4cf8d7-9ngrq_a2897aa0-49ba-4014-a13c-4a1e4ca05c15/kube-scheduler-operator-container/0.log": lstat /var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-6dbd4cf8d7-9ngrq_a2897aa0-49ba-4014-a13c-4a1e4ca05c15/kube-scheduler-operator-container/0.log: no such file or directory
Jul 02 15:45:57.642 E ns/openshift-service-ca-operator pod/service-ca-operator-858bcdd6b7-nfn6l node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Jul 02 15:45:57.672 E ns/openshift-service-ca pod/service-ca-69f55f58d8-5mb6q node/ip-10-0-254-230.us-east-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Jul 02 15:45:57.708 E ns/openshift-authentication-operator pod/authentication-operator-5c894f98bc-hlfzc node/ip-10-0-254-230.us-east-2.compute.internal container=operator container exited with code 255 (Error): "2020-07-02T15:43:01Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-02T14:34:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T14:16:04Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:43:42.004483       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"73db882a-3f67-4410-b170-4e01ab57f555", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownEndpointDegraded: failed to GET well-known https://10.0.160.167:6443/.well-known/oauth-authorization-server: dial tcp 10.0.160.167:6443: connect: connection refused" to ""\nI0702 15:45:53.241893       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:45:53.242147       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0702 15:45:53.242241       1 controller.go:70] Shutting down AuthenticationOperator2\nI0702 15:45:53.242265       1 ingress_state_controller.go:157] Shutting down IngressStateController\nI0702 15:45:53.242281       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0702 15:45:53.242327       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:45:53.242336       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0702 15:45:53.242347       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0702 15:45:53.242356       1 logging_controller.go:93] Shutting down LogLevelController\nI0702 15:45:53.242366       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0702 15:45:53.242376       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nF0702 15:45:53.253091       1 builder.go:243] stopped\n
Jul 02 15:46:08.916 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable
Jul 02 15:48:46.067 E ns/openshift-controller-manager pod/controller-manager-xwnz5 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.088 E ns/openshift-cluster-node-tuning-operator pod/tuned-r9fzp node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.111 E ns/openshift-monitoring pod/node-exporter-52m88 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.125 E ns/openshift-image-registry pod/node-ca-2tpxr node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.136 E ns/openshift-sdn pod/sdn-controller-kj5z2 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.160 E ns/openshift-multus pod/multus-admission-controller-dhwv5 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.172 E ns/openshift-sdn pod/ovs-b4mbv node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.183 E ns/openshift-multus pod/multus-wkdwk node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.196 E ns/openshift-dns pod/dns-default-pft8t node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.216 E ns/openshift-machine-config-operator pod/machine-config-daemon-cqp9l node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:46.224 E ns/openshift-machine-config-operator pod/machine-config-server-qc947 node/ip-10-0-254-230.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:55.689 E ns/openshift-machine-config-operator pod/machine-config-daemon-cqp9l node/ip-10-0-254-230.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error):