Result: SUCCESS
Tests: 3 failed / 24 succeeded
Started: 2020-08-04 19:33
Elapsed: 1h31m
Work namespace: ci-op-is1jm1fc
Refs: release-4.4:333a3cbb, 80:7e36a329
pod: 581da8fe-d689-11ea-8492-0a580a81054f
repo: openshift/cluster-svcat-controller-manager-operator
revision: 1

Test Failures


Cluster upgrade Kubernetes APIs remain available (37m57s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 7s of 37m55s (0%):

Aug 04 20:39:58.235 E kube-apiserver Kube API started failing: etcdserver: leader changed
Aug 04 20:39:59.126 - 6s    E kube-apiserver Kube API is not responding to GET requests
Aug 04 20:40:05.303 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1596574435.xml
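(For reference, 7 seconds of the 37m55s window is roughly 7/2275 ≈ 0.3% of the run, which rounds down to the 0% shown above.)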



Cluster upgrade OpenShift APIs remain available (37m57s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 6s of 37m55s (0%):

Aug 04 20:36:50.281 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 54.151.49.5:6443: connect: connection refused
Aug 04 20:36:51.091 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 04 20:36:51.166 I openshift-apiserver OpenShift API started responding to GET requests
Aug 04 20:39:58.238 I openshift-apiserver OpenShift API stopped responding to GET requests: etcdserver: leader changed
Aug 04 20:39:59.091 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 04 20:40:05.282 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1596574435.xml
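(Similarly, 6 seconds of the 37m55s window is roughly 6/2275 ≈ 0.26%, again rounding down to the 0% reported.)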



openshift-tests Monitor cluster while tests execute (43m24s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
182 error level events were detected during this test run:

Aug 04 20:11:11.480 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): :11:10.886771       1 reflector.go:307] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: Get https://localhost:6443/apis/security.openshift.io/v1/rangeallocations?allowWatchBookmarks=true&resourceVersion=22334&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:10.887861       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:6443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=17080&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:10.889024       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/subscriptions?allowWatchBookmarks=true&resourceVersion=19276&timeout=6m37s&timeoutSeconds=397&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:10.890070       1 reflector.go:307] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: Get https://localhost:6443/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=20910&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:10.891405       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/machineconfigs?allowWatchBookmarks=true&resourceVersion=20792&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:11:10.913088       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0804 20:11:10.913197       1 controllermanager.go:291] leaderelection lost\n
Aug 04 20:11:36.671 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): fused\nE0804 20:11:34.727272       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=22606&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:34.729545       1 reflector.go:307] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=17341&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:34.731386       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: Get https://localhost:6443/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=19875&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:34.732629       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=18008&timeout=9m50s&timeoutSeconds=590&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:34.733835       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=22542&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:11:35.533729       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0804 20:11:35.533776       1 policy_controller.go:94] leaderelection lost\nI0804 20:11:35.541531       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\n
Aug 04 20:11:36.671 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Aug 04 20:11:41.696 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): anager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=22865&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:40.576095       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=22854&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:40.578424       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=19832&timeout=5m9s&timeoutSeconds=309&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:40.580767       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=22072&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:40.582258       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=19832&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:11:40.583501       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=21100&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:11:40.786044       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0804 20:11:40.786102       1 leaderelection.go:67] leaderelection lost\n
Aug 04 20:15:01.551 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-8795579f7" has successfully progressed.
Aug 04 20:15:30.642 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-74cdbf554f-46zqj node/ip-10-0-250-156.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): 0804 20:15:29.869718       1 base_controller.go:74] Shutting down  ...\nI0804 20:15:29.869728       1 certrotationtime_upgradeable.go:103] Shutting down CertRotationTimeUpgradeableController\nI0804 20:15:29.869741       1 base_controller.go:74] Shutting down PruneController ...\nI0804 20:15:29.869751       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0804 20:15:29.869763       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0804 20:15:29.869775       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0804 20:15:29.869787       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0804 20:15:29.869813       1 targetconfigcontroller.go:440] Shutting down TargetConfigController\nI0804 20:15:29.870023       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0804 20:15:29.872111       1 base_controller.go:39] All RevisionController workers have been terminated\nI0804 20:15:29.870040       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0804 20:15:29.872136       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nI0804 20:15:29.870055       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0804 20:15:29.872149       1 base_controller.go:39] All InstallerController workers have been terminated\nI0804 20:15:29.870071       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0804 20:15:29.872161       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0804 20:15:29.870085       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0804 20:15:29.872172       1 base_controller.go:39] All NodeController workers have been terminated\nF0804 20:15:29.870086       1 builder.go:243] stopped\nI0804 20:15:29.870099       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\n
Aug 04 20:15:52.721 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5559f4f48b-p767r node/ip-10-0-250-156.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ect: connection refused\\nE0804 20:11:34.733835       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=22542&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp [::1]:6443: connect: connection refused\\nI0804 20:11:35.533729       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\\nF0804 20:11:35.533776       1 policy_controller.go:94] leaderelection lost\\nI0804 20:11:35.541531       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"cluster-policy-controller\" is not ready"\nI0804 20:11:52.396950       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"cc1490ba-b5fd-4e1e-b6ec-5bef38e37441", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"cluster-policy-controller\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0804 20:15:51.946233       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:15:51.946606       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:15:51.946705       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nF0804 20:15:51.947146       1 builder.go:209] server exited\n
Aug 04 20:16:10.798 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7d895b5664-fkvgw node/ip-10-0-250-156.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): -operator", Name:"openshift-apiserver-operator", UID:"7473929d-82d4-44d8-8690-7c35c99ea6b9", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("")\nI0804 19:55:42.671506       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"7473929d-82d4-44d8-8690-7c35c99ea6b9", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable",Progressing changed from True to False ("")\nI0804 19:55:42.704419       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"7473929d-82d4-44d8-8690-7c35c99ea6b9", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable"\nI0804 19:57:00.515628       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"7473929d-82d4-44d8-8690-7c35c99ea6b9", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nI0804 20:16:10.037486       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:16:10.037699       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0804 20:16:10.037741       1 builder.go:210] server exited\n
Aug 04 20:16:22.986 E ns/openshift-machine-api pod/machine-api-operator-7d4949d89d-zf6rn node/ip-10-0-182-102.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 04 20:16:42.998 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0804 20:16:42.259206       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0804 20:16:42.266019       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0804 20:16:42.267837       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0804 20:16:42.267922       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0804 20:16:42.268634       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 04 20:17:04.198 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0804 20:17:03.675785       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0804 20:17:03.677382       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0804 20:17:03.678994       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0804 20:17:03.679068       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0804 20:17:03.679711       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 04 20:18:17.070 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-49.us-west-1.compute.internal node/ip-10-0-128-49.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0804 20:18:16.004509       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0804 20:18:16.006512       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0804 20:18:16.008840       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0804 20:18:16.009011       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0804 20:18:16.010194       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 04 20:18:23.566 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-250-156.us-west-1.compute.internal node/ip-10-0-250-156.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Aug 04 20:19:35.950 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0804 20:19:35.482386       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0804 20:19:35.483917       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0804 20:19:35.485912       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0804 20:19:35.485968       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0804 20:19:35.486525       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 04 20:20:33.236 E ns/openshift-cluster-machine-approver pod/machine-approver-56b89dd965-dn58f node/ip-10-0-250-156.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:44.532830       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:45.533563       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:46.534223       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:47.535280       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:48.535943       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:18:49.536669       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Aug 04 20:20:35.707 E ns/openshift-kube-storage-version-migrator pod/migrator-5696dbb579-4n8ff node/ip-10-0-185-68.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Aug 04 20:20:46.763 E ns/openshift-monitoring pod/node-exporter-dq5qh node/ip-10-0-185-68.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -04T20:00:27Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:27Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 04 20:20:51.423 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-64f87d5d94-vgsfz node/ip-10-0-250-156.us-west-1.compute.internal container=operator container exited with code 255 (Error): -manager-operator", Name:"openshift-controller-manager-operator", UID:"30f627ac-b25e-4b23-b5f7-6bd54c14db9f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObservedConfigChanged' Writing updated observed config:   map[string]interface{}{\n  	"build": map[string]interface{}{\n  		"buildDefaults": map[string]interface{}{"resources": map[string]interface{}{}},\n- 		"imageTemplateFormat": map[string]interface{}{\n- 			"format": string("registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"),\n- 		},\n+ 		"imageTemplateFormat": map[string]interface{}{\n+ 			"format": string("registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"),\n+ 		},\n  	},\n- 	"deployer": map[string]interface{}{\n- 		"imageTemplateFormat": map[string]interface{}{\n- 			"format": string("registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable-initial@sha256:0f56ff26b2d388481871be41986f024b0afd0efe84d591edf4afc67a75915a7e"),\n- 		},\n- 	},\n+ 	"deployer": map[string]interface{}{\n+ 		"imageTemplateFormat": map[string]interface{}{\n+ 			"format": string("registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable@sha256:0f56ff26b2d388481871be41986f024b0afd0efe84d591edf4afc67a75915a7e"),\n+ 		},\n+ 	},\n  	"dockerPullSecret": map[string]interface{}{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")},\n  	"ingress":          map[string]interface{}{"ingressIPNetworkCIDR": string("")},\n  }\nI0804 20:20:50.475725       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:20:50.476768       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0804 20:20:50.476847       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0804 20:20:50.476900       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0804 20:20:50.477108       1 builder.go:243] stopped\n
Aug 04 20:20:52.844 E ns/openshift-monitoring pod/kube-state-metrics-647cdf7d7d-p7ptf node/ip-10-0-185-68.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Aug 04 20:20:53.837 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7d6c5996dd-9krf7 node/ip-10-0-185-68.us-west-1.compute.internal container=operator container exited with code 255 (Error): 10\nI0804 20:20:44.633964       1 operator.go:148] Finished syncing operator at 32.554989ms\nI0804 20:20:44.634004       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:44.633997765 +0000 UTC m=+1163.948167857\nI0804 20:20:44.652603       1 operator.go:148] Finished syncing operator at 18.599265ms\nI0804 20:20:44.652637       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:44.652633195 +0000 UTC m=+1163.966802990\nI0804 20:20:44.987501       1 operator.go:148] Finished syncing operator at 334.859099ms\nI0804 20:20:47.987288       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:47.987275302 +0000 UTC m=+1167.301445244\nI0804 20:20:49.190946       1 operator.go:148] Finished syncing operator at 1.203661049s\nI0804 20:20:49.379550       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:49.379540642 +0000 UTC m=+1168.693710561\nI0804 20:20:49.439027       1 operator.go:148] Finished syncing operator at 59.479558ms\nI0804 20:20:49.439065       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:49.439061112 +0000 UTC m=+1168.753230890\nI0804 20:20:49.499174       1 operator.go:148] Finished syncing operator at 60.10662ms\nI0804 20:20:52.809534       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:52.809522126 +0000 UTC m=+1172.123692084\nI0804 20:20:52.886150       1 operator.go:148] Finished syncing operator at 76.616876ms\nI0804 20:20:52.886201       1 operator.go:146] Starting syncing operator at 2020-08-04 20:20:52.886194718 +0000 UTC m=+1172.200364690\nI0804 20:20:52.946991       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:20:52.947435       1 logging_controller.go:93] Shutting down LogLevelController\nI0804 20:20:52.947461       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0804 20:20:52.947476       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nF0804 20:20:52.947537       1 builder.go:210] server exited\n
Aug 04 20:20:53.927 E ns/openshift-monitoring pod/openshift-state-metrics-7f768f899b-kz9vf node/ip-10-0-185-68.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 04 20:20:54.567 E ns/openshift-monitoring pod/node-exporter-sdj6f node/ip-10-0-192-57.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -04T20:00:26Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:26Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 04 20:21:00.524 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-authentication-operator/authentication-operator is progressing ReplicaSetUpdated: ReplicaSet "authentication-operator-6946854477" is progressing.\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-7f77bf578f" is progressing.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-image-registry-operator-599c96bb7f" is progressing.\n* deployment openshift-machine-api/cluster-autoscaler-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-autoscaler-operator-669958dcf4" is progressing.\n* deployment openshift-marketplace/marketplace-operator is progressing ReplicaSetUpdated: ReplicaSet "marketplace-operator-56985cb9f5" is progressing.\n* deployment openshift-operator-lifecycle-manager/olm-operator is progressing ReplicaSetUpdated: ReplicaSet "olm-operator-789c689665" is progressing.\n* deployment openshift-service-ca-operator/service-ca-operator is progressing ReplicaSetUpdated: ReplicaSet "service-ca-operator-987957648" is progressing.\n* deployment openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-apiserver-operator-5dd59dbdf8" is progressing.
Aug 04 20:21:03.922 E ns/openshift-monitoring pod/telemeter-client-785dc9f558-4vdxn node/ip-10-0-185-68.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Aug 04 20:21:03.922 E ns/openshift-monitoring pod/telemeter-client-785dc9f558-4vdxn node/ip-10-0-185-68.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Aug 04 20:21:07.618 E ns/openshift-authentication-operator pod/authentication-operator-b8495fc6c-c7scc node/ip-10-0-250-156.us-west-1.compute.internal container=operator container exited with code 255 (Error): us":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-04T20:20:51Z","message":"Progressing: not all deployment replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-08-04T20:10:09Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-04T19:50:29Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0804 20:20:57.161421       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"3e4a2c09-8ad5-4c8c-b432-7c353f5de1b4", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0804 20:21:06.284487       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:21:06.285135       1 controller.go:70] Shutting down AuthenticationOperator2\nI0804 20:21:06.285171       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:21:06.285207       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0804 20:21:06.285232       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0804 20:21:06.285246       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0804 20:21:06.285259       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0804 20:21:06.285274       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0804 20:21:06.285289       1 logging_controller.go:93] Shutting down LogLevelController\nI0804 20:21:06.285303       1 ingress_state_controller.go:157] Shutting down IngressStateController\nF0804 20:21:06.285583       1 builder.go:243] stopped\n
Aug 04 20:21:08.943 E ns/openshift-monitoring pod/prometheus-adapter-596797547d-dxpcw node/ip-10-0-185-68.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0804 20:01:21.071944       1 adapter.go:93] successfully using in-cluster auth\nI0804 20:01:21.698337       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 04 20:21:09.676 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-192-57.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/04 20:03:05 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 04 20:21:09.676 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-192-57.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/04 20:03:06 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/04 20:03:06 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/04 20:03:06 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/04 20:03:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/04 20:03:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/04 20:03:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/04 20:03:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/04 20:03:06 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0804 20:03:06.203123       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/04 20:03:06 http.go:107: HTTPS: listening on [::]:9091\n2020/08/04 20:06:52 oauthproxy.go:774: basicauth: 10.128.2.9:56786 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:11:22 oauthproxy.go:774: basicauth: 10.128.2.9:60526 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:15:52 oauthproxy.go:774: basicauth: 10.128.2.9:36554 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:18:29 oauthproxy.go:774: basicauth: 10.130.0.21:42042 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:20:23 o
Aug 04 20:21:09.676 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-192-57.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-04T20:03:05.5588586Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-04T20:03:05.558999026Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-04T20:03:05.5603366Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-04T20:03:10.696828527Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 04 20:21:14.664 E ns/openshift-monitoring pod/node-exporter-sdvhz node/ip-10-0-250-156.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -04T19:56:03Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-04T19:56:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 04 20:21:21.016 E ns/openshift-monitoring pod/prometheus-adapter-596797547d-crzmj node/ip-10-0-185-68.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0804 20:01:21.052464       1 adapter.go:93] successfully using in-cluster auth\nI0804 20:01:22.059038       1 secure_serving.go:116] Serving securely on [::]:6443\nW0804 20:06:33.407258       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: too old resource version: 18911 (20089)\n
Aug 04 20:21:23.381 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-49.us-west-1.compute.internal node/ip-10-0-128-49.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Aug 04 20:21:27.404 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-49.us-west-1.compute.internal node/ip-10-0-128-49.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): cret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=25313&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:21:26.957833       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=25313&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:21:26.960176       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=27001&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:21:26.960375       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=27001&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:21:26.961428       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=27806&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:21:27.258185       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nI0804 20:21:27.258227       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 0635b0a8-ee4f-4651-a036-b42dbe1843bc stopped leading\nF0804 20:21:27.258364       1 leaderelection.go:67] leaderelection lost\n
Aug 04 20:21:28.414 E ns/openshift-monitoring pod/node-exporter-gq69j node/ip-10-0-128-49.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -04T19:55:58Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-04T19:55:58Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 04 20:21:31.908 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-192-57.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-08-04T20:21:27.065Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-04T20:21:27.071Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-04T20:21:27.072Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-04T20:21:27.073Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-04T20:21:27.074Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-04T20:21:27.074Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-08-04T20:21:27.074Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-04T20:21:27.074Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-04T20:21:27.074Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-04T20:21:27.074Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-08-04
Aug 04 20:21:36.314 E ns/openshift-monitoring pod/node-exporter-fdb7p node/ip-10-0-170-11.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -04T20:00:31Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-04T20:00:31Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 04 20:21:40.780 E ns/openshift-controller-manager pod/controller-manager-lstdb node/ip-10-0-250-156.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0804 19:56:21.382664       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-111-gb28647e)\nI0804 19:56:21.385185       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable-initial@sha256:0f56ff26b2d388481871be41986f024b0afd0efe84d591edf4afc67a75915a7e"\nI0804 19:56:21.385208       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-is1jm1fc/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0804 19:56:21.385316       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0804 19:56:21.385465       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Aug 04 20:21:40.864 E ns/openshift-controller-manager pod/controller-manager-bfh5m node/ip-10-0-182-102.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): server ("unable to decode an event from the watch stream: stream error: stream ID 367; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.281117       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 315; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.281608       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 259; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.281778       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 363; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.281893       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 387; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.282037       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 227; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.282171       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 79; INTERNAL_ERROR") has prevented the request from succeeding\nW0804 20:18:28.282301       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 365; INTERNAL_ERROR") has prevented the request from succeeding\n
Aug 04 20:22:10.266 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-185-68.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-08-04T20:21:59.558Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-04T20:21:59.563Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-04T20:21:59.564Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-04T20:21:59.564Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-04T20:21:59.564Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-04T20:21:59.565Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-08-04T20:21:59.565Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-04T20:21:59.565Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-04T20:21:59.566Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-04T20:21:59.566Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-08-04
Aug 04 20:22:11.316 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7d8fb58748-ztzf9 node/ip-10-0-170-11.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Aug 04 20:22:40.377 E ns/openshift-marketplace pod/certified-operators-79c7c756fd-jlqll node/ip-10-0-185-68.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 04 20:22:43.747 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-49.us-west-1.compute.internal node/ip-10-0-128-49.us-west-1.compute.internal container=kube-scheduler container exited with code 255 (Error): s&timeoutSeconds=568&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:22:36.707475       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0804 20:22:37.077457       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25727&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:22:42.451021       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0804 20:22:42.482003       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0804 20:22:42.491939       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0804 20:22:42.492730       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0804 20:22:42.514567       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0804 20:22:42.516015       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0804 20:22:43.563282       1 cache.go:444] Pod c988625e-86c3-4651-bd16-630a054cec4d updated on a different node than previously added to.\nF0804 20:22:43.563373       1 cache.go:445] Schedulercache is corrupted and can badly affect scheduling decisions\n
Aug 04 20:23:09.244 E ns/openshift-console pod/console-58489f5fd4-5tz8p node/ip-10-0-250-156.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-08-04T20:05:00Z cmd/main: cookies are secure!\n2020-08-04T20:05:00Z cmd/main: Binding to [::]:8443...\n2020-08-04T20:05:00Z cmd/main: using TLS\n
Aug 04 20:24:10.480 E ns/openshift-sdn pod/sdn-controller-bg92z node/ip-10-0-250-156.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): 15] Allocated netid 3450704 for namespace "openshift-ingress"\nI0804 19:59:38.289750       1 subnets.go:149] Created HostSubnet ip-10-0-192-57.us-west-1.compute.internal (host: "ip-10-0-192-57.us-west-1.compute.internal", ip: "10.0.192.57", subnet: "10.131.0.0/23")\nI0804 19:59:40.004075       1 subnets.go:149] Created HostSubnet ip-10-0-185-68.us-west-1.compute.internal (host: "ip-10-0-185-68.us-west-1.compute.internal", ip: "10.0.185.68", subnet: "10.128.2.0/23")\nI0804 19:59:43.046671       1 subnets.go:149] Created HostSubnet ip-10-0-170-11.us-west-1.compute.internal (host: "ip-10-0-170-11.us-west-1.compute.internal", ip: "10.0.170.11", subnet: "10.129.2.0/23")\nI0804 20:10:34.748124       1 vnids.go:115] Allocated netid 2133109 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-3470"\nI0804 20:10:34.758092       1 vnids.go:115] Allocated netid 5922281 for namespace "e2e-k8s-sig-apps-deployment-upgrade-5681"\nI0804 20:10:34.775767       1 vnids.go:115] Allocated netid 15943294 for namespace "e2e-k8s-sig-apps-job-upgrade-2802"\nI0804 20:10:34.787034       1 vnids.go:115] Allocated netid 4957075 for namespace "e2e-check-for-critical-alerts-3357"\nI0804 20:10:34.794985       1 vnids.go:115] Allocated netid 7259721 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-9743"\nI0804 20:10:34.803003       1 vnids.go:115] Allocated netid 12683355 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-9866"\nI0804 20:10:34.827163       1 vnids.go:115] Allocated netid 2657851 for namespace "e2e-frontend-ingress-available-2062"\nI0804 20:10:34.893970       1 vnids.go:115] Allocated netid 16423119 for namespace "e2e-kubernetes-api-available-2568"\nI0804 20:10:34.911354       1 vnids.go:115] Allocated netid 13227480 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-1326"\nI0804 20:10:34.920410       1 vnids.go:115] Allocated netid 15196902 for namespace "e2e-k8s-service-lb-available-7013"\nI0804 20:10:34.933055       1 vnids.go:115] Allocated netid 13487086 for namespace "e2e-openshift-api-available-5730"\n
Aug 04 20:24:23.163 E ns/openshift-sdn pod/sdn-controller-wn7b8 node/ip-10-0-128-49.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0804 19:50:40.894009       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0804 19:54:34.595903       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Aug 04 20:24:34.668 E ns/openshift-sdn pod/sdn-controller-jpx5j node/ip-10-0-182-102.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0804 19:51:05.872705       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0804 19:54:34.600273       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Aug 04 20:24:48.329 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 04 20:24:48.460 E ns/openshift-sdn pod/sdn-64fbd node/ip-10-0-128-49.us-west-1.compute.internal container=sdn container exited with code 255 (Error):  port "openshift-dns/dns-default:metrics" at 172.30.0.10:9153/TCP\nI0804 20:24:17.509255   94660 service.go:363] Adding new service port "openshift-insights/metrics:https" at 172.30.111.133:443/TCP\nI0804 20:24:17.509356   94660 service.go:363] Adding new service port "openshift-marketplace/certified-operators:grpc" at 172.30.235.29:50051/TCP\nI0804 20:24:17.509415   94660 service.go:363] Adding new service port "openshift-console/console:https" at 172.30.149.213:443/TCP\nI0804 20:24:17.509919   94660 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0804 20:24:17.600834   94660 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:24:17.600867   94660 proxier.go:347] userspace syncProxyRules took 90.583382ms\nI0804 20:24:17.669390   94660 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31796/tcp)\nI0804 20:24:17.669758   94660 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:30526/tcp)\nI0804 20:24:17.669821   94660 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-7013/service-test:" (:31502/tcp)\nI0804 20:24:17.703485   94660 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31417\nI0804 20:24:17.714315   94660 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0804 20:24:17.714360   94660 cmd.go:173] openshift-sdn network plugin registering startup\nI0804 20:24:17.714659   94660 cmd.go:177] openshift-sdn network plugin ready\nI0804 20:24:47.277653   94660 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0804 20:24:47.576586   94660 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:24:47.576617   94660 proxier.go:347] userspace syncProxyRules took 57.577362ms\nF0804 20:24:47.767415   94660 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Aug 04 20:24:49.819 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=kube-scheduler container exited with code 255 (Error): 48.088590       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=25436&timeout=8m39s&timeoutSeconds=519&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:24:48.091713       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32101&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:24:48.093566       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=24886&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:24:48.998043       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25252&timeout=7m5s&timeoutSeconds=425&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:24:48.999123       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25252&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:24:49.026684       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0804 20:24:49.026718       1 server.go:257] leaderelection lost\n
Aug 04 20:25:15.000 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): hBookmarks=true&resourceVersion=30486&timeout=8m40s&timeoutSeconds=520&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:14.090730       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=28367&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:14.092978       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=28367&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:14.093225       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=32300&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:14.095441       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=30486&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:14.097385       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=32277&timeout=6m59s&timeoutSeconds=419&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:25:14.656135       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0804 20:25:14.656189       1 leaderelection.go:67] leaderelection lost\n
Aug 04 20:25:15.001 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Aug 04 20:25:23.050 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): eout=5m56s&timeoutSeconds=356&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:22.174000       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=25436&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:22.175148       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:6443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=24886&timeout=7m57s&timeoutSeconds=477&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:22.176347       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=25727&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:22.177520       1 reflector.go:307] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: Get https://localhost:6443/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=27283&timeout=6m28s&timeoutSeconds=388&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0804 20:25:22.178611       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=24886&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0804 20:25:22.839218       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0804 20:25:22.839282       1 policy_controller.go:94] leaderelection lost\n
Aug 04 20:25:29.772 E ns/openshift-multus pod/multus-admission-controller-n4jh8 node/ip-10-0-250-156.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 04 20:25:42.937 E ns/openshift-sdn pod/sdn-7kwp8 node/ip-10-0-185-68.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 68] userspace proxy: processing 0 service events\nI0804 20:25:32.100690   91885 proxier.go:347] userspace syncProxyRules took 26.766687ms\nI0804 20:25:34.534079   91885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.128.49:10259 10.0.182.102:10259 10.0.250.156:10259]\nI0804 20:25:34.651643   91885 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:25:34.651667   91885 proxier.go:347] userspace syncProxyRules took 27.008058ms\nI0804 20:25:36.804662   91885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.81:6443 10.129.0.3:6443 10.130.0.72:6443]\nI0804 20:25:36.804703   91885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.81:8443 10.129.0.3:8443 10.130.0.72:8443]\nI0804 20:25:36.829202   91885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.81:6443 10.130.0.72:6443]\nI0804 20:25:36.829249   91885 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0804 20:25:36.829262   91885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.81:8443 10.130.0.72:8443]\nI0804 20:25:36.829269   91885 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0804 20:25:36.923303   91885 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:25:36.923325   91885 proxier.go:347] userspace syncProxyRules took 26.571301ms\nI0804 20:25:37.037574   91885 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:25:37.037601   91885 proxier.go:347] userspace syncProxyRules took 26.302733ms\nF0804 20:25:42.676185   91885 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Aug 04 20:25:44.719 E ns/openshift-multus pod/multus-t4zt9 node/ip-10-0-170-11.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 04 20:26:07.759 E ns/openshift-multus pod/multus-admission-controller-thzhg node/ip-10-0-128-49.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 04 20:26:29.825 E ns/openshift-sdn pod/sdn-v6dc2 node/ip-10-0-170-11.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 0.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0804 20:25:36.828646   69539 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.81:8443 10.130.0.72:8443]\nI0804 20:25:36.828661   69539 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0804 20:25:36.937662   69539 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:25:36.937689   69539 proxier.go:347] userspace syncProxyRules took 30.037503ms\nI0804 20:25:37.053808   69539 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:25:37.053834   69539 proxier.go:347] userspace syncProxyRules took 29.793716ms\nI0804 20:26:07.168813   69539 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:26:07.168837   69539 proxier.go:347] userspace syncProxyRules took 26.422577ms\nI0804 20:26:18.793473   69539 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.81:8443 10.129.0.66:8443 10.130.0.72:8443]\nI0804 20:26:18.793506   69539 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.81:6443 10.129.0.66:6443 10.130.0.72:6443]\nI0804 20:26:18.910821   69539 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:26:18.910839   69539 proxier.go:347] userspace syncProxyRules took 26.61797ms\nI0804 20:26:19.159243   69539 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.128.49:10257 10.0.182.102:10257 10.0.250.156:10257]\nI0804 20:26:19.275629   69539 proxier.go:368] userspace proxy: processing 0 service events\nI0804 20:26:19.275656   69539 proxier.go:347] userspace syncProxyRules took 26.007594ms\nF0804 20:26:29.472314   69539 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Aug 04 20:27:00.332 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-182-102.us-west-1.compute.internal node/ip-10-0-182-102.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Aug 04 20:27:26.249 E ns/openshift-multus pod/multus-xrp4j node/ip-10-0-185-68.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 04 20:28:22.871 E ns/openshift-multus pod/multus-qr4w9 node/ip-10-0-192-57.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 04 20:29:26.051 E ns/openshift-multus pod/multus-pn66p node/ip-10-0-182-102.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 04 20:30:20.934 E ns/openshift-machine-config-operator pod/machine-config-operator-66d779b69c-fwjk8 node/ip-10-0-250-156.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): e:"", Name:"machine-config", UID:"00a2b94f-7d2f-442b-88f4-3b66828f3958", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-08-04-193341}]\nE0804 19:50:26.447709       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0804 19:50:26.501408       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0804 19:50:27.545237       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0804 19:50:31.484130       1 sync.go:61] [init mode] synced RenderConfig in 5.405733738s\nI0804 19:50:31.946857       1 sync.go:61] [init mode] synced MachineConfigPools in 462.4829ms\nI0804 19:51:24.066783       1 sync.go:61] [init mode] synced MachineConfigDaemon in 52.119886248s\nI0804 19:51:31.122735       1 sync.go:61] [init mode] synced MachineConfigController in 7.055900488s\nI0804 19:51:39.218324       1 sync.go:61] [init mode] synced MachineConfigServer in 8.095547756s\nI0804 19:52:34.226697       1 sync.go:61] [init mode] synced RequiredPools in 55.008326952s\nI0804 19:52:34.422010       1 sync.go:89] Initialization complete\nE0804 19:54:34.666937       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Aug 04 20:32:16.901 E ns/openshift-machine-config-operator pod/machine-config-daemon-4wqls node/ip-10-0-185-68.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 04 20:32:23.089 E ns/openshift-machine-config-operator pod/machine-config-daemon-9vmsb node/ip-10-0-128-49.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 04 20:32:53.420 E ns/openshift-machine-config-operator pod/machine-config-daemon-qxp7g node/ip-10-0-192-57.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 04 20:33:25.610 E ns/openshift-machine-config-operator pod/machine-config-daemon-kn4rp node/ip-10-0-170-11.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 04 20:33:37.344 E ns/openshift-machine-config-operator pod/machine-config-controller-57c78d4c69-rw6fs node/ip-10-0-128-49.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): penshift.io/desiredConfig = rendered-worker-cb4fa7dfb86e07db32cf1f7a9c14e7f9\nI0804 20:00:44.305672       1 node_controller.go:452] Pool worker: node ip-10-0-170-11.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0804 20:00:51.252393       1 node_controller.go:452] Pool worker: node ip-10-0-185-68.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-cb4fa7dfb86e07db32cf1f7a9c14e7f9\nI0804 20:00:51.252424       1 node_controller.go:452] Pool worker: node ip-10-0-185-68.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-cb4fa7dfb86e07db32cf1f7a9c14e7f9\nI0804 20:00:51.252434       1 node_controller.go:452] Pool worker: node ip-10-0-185-68.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0804 20:01:00.254306       1 node_controller.go:435] Pool worker: node ip-10-0-185-68.us-west-1.compute.internal is now reporting ready\nI0804 20:01:08.487169       1 node_controller.go:435] Pool worker: node ip-10-0-192-57.us-west-1.compute.internal is now reporting ready\nI0804 20:01:13.316727       1 node_controller.go:435] Pool worker: node ip-10-0-170-11.us-west-1.compute.internal is now reporting ready\nI0804 20:06:35.745503       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0804 20:06:35.791861       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0804 20:17:23.833123       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0804 20:17:23.904270       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0804 20:20:49.187489       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0804 20:20:49.329761       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Aug 04 20:36:12.865 E ns/openshift-monitoring pod/telemeter-client-574fdfc4b9-ldpxw node/ip-10-0-185-68.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Aug 04 20:36:12.865 E ns/openshift-monitoring pod/telemeter-client-574fdfc4b9-ldpxw node/ip-10-0-185-68.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Aug 04 20:36:12.888 E ns/openshift-monitoring pod/openshift-state-metrics-574fdd8979-z2vm7 node/ip-10-0-185-68.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 04 20:36:13.894 E ns/openshift-kube-storage-version-migrator pod/migrator-bdf87b7fd-x4s58 node/ip-10-0-185-68.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Aug 04 20:36:13.919 E ns/openshift-marketplace pod/redhat-operators-5796bc9865-8ctc4 node/ip-10-0-185-68.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 04 20:36:13.942 E ns/openshift-monitoring pod/thanos-querier-75df4496d5-vf4zd node/ip-10-0-185-68.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 21:09 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/04 20:21:09 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/04 20:21:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/04 20:21:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/04 20:21:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/04 20:21:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/04 20:21:09 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/04 20:21:09 http.go:107: HTTPS: listening on [::]:9091\nI0804 20:21:09.090746       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/04 20:22:28 oauthproxy.go:774: basicauth: 10.130.0.47:60774 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:23:28 oauthproxy.go:774: basicauth: 10.130.0.47:33334 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:25:28 oauthproxy.go:774: basicauth: 10.130.0.47:46630 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:26:28 oauthproxy.go:774: basicauth: 10.130.0.47:51422 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:29:28 oauthproxy.go:774: basicauth: 10.130.0.47:57518 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:33:28 oauthproxy.go:774: basicauth: 10.130.0.47:60892 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/04 20:34:28 oauthproxy.go:774: basicauth: 10.130.0.47:33408 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 04 20:36:13.981 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-185-68.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/04 20:21:09 Watching directory: "/etc/alertmanager/config"\n
Aug 04 20:36:13.981 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-185-68.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/04 20:21:09 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/04 20:21:09 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/04 20:21:09 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/04 20:21:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/04 20:21:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/04 20:21:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/04 20:21:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/04 20:21:09 http.go:107: HTTPS: listening on [::]:9095\nI0804 20:21:09.622071       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Aug 04 20:36:14.008 E ns/openshift-marketplace pod/redhat-marketplace-7c8d6dbd66-x44d4 node/ip-10-0-185-68.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Aug 04 20:36:14.026 E ns/openshift-marketplace pod/certified-operators-7b9bf7cfcf-sw242 node/ip-10-0-185-68.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 04 20:36:15.301 E ns/openshift-console-operator pod/console-operator-6fcb4f655f-bhj7m node/ip-10-0-250-156.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): ", Namespace:"openshift-console-operator", Name:"console-operator", UID:"742e4e7a-852a-4c63-8336-35ec4a96e4d2", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0804 20:36:11.987764       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1231; INTERNAL_ERROR") has prevented the request from succeeding\nI0804 20:36:14.157560       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:36:14.159093       1 controller.go:70] Shutting down Console\nI0804 20:36:14.159175       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0804 20:36:14.159269       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0804 20:36:14.159314       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0804 20:36:14.159349       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0804 20:36:14.159380       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:36:14.159398       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0804 20:36:14.159418       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0804 20:36:14.159496       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0804 20:36:14.159516       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0804 20:36:14.159541       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0804 20:36:14.159560       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0804 20:36:14.159599       1 builder.go:243] stopped\n
Aug 04 20:36:15.591 E ns/openshift-machine-config-operator pod/machine-config-server-vlc6p node/ip-10-0-182-102.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0804 19:51:42.849698       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-16-g601c2285-dirty (601c2285f497bf7c73d84737b9977a0e697cb86a)\nI0804 19:51:42.850444       1 api.go:56] Launching server on :22624\nI0804 19:51:42.850728       1 api.go:56] Launching server on :22623\n
Aug 04 20:36:29.445 E ns/openshift-machine-config-operator pod/machine-config-server-kh978 node/ip-10-0-250-156.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0804 19:51:37.940881       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-16-g601c2285-dirty (601c2285f497bf7c73d84737b9977a0e697cb86a)\nI0804 19:51:37.942017       1 api.go:56] Launching server on :22624\nI0804 19:51:37.944826       1 api.go:56] Launching server on :22623\nI0804 19:56:34.189184       1 api.go:102] Pool worker requested by 10.0.139.124:47137\nI0804 19:56:38.607001       1 api.go:102] Pool worker requested by 10.0.200.191:3556\n
Aug 04 20:36:34.237 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-11.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-08-04T20:36:29.405Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-04T20:36:29.407Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-04T20:36:29.408Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-04T20:36:29.409Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-04T20:36:29.409Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-04T20:36:29.409Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-04T20:36:29.410Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-08-04T20:36:29.410Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-04T20:36:29.410Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-08-04
Aug 04 20:36:40.563 E ns/openshift-console pod/console-775d8cb488-k2xs4 node/ip-10-0-250-156.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-08-04T20:22:40Z cmd/main: cookies are secure!\n2020-08-04T20:22:40Z cmd/main: Binding to [::]:8443...\n2020-08-04T20:22:40Z cmd/main: using TLS\n
Aug 04 20:37:51.296 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 04 20:38:46.130 E ns/openshift-monitoring pod/node-exporter-f2cz9 node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.148 E ns/openshift-image-registry pod/node-ca-b245g node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.171 E ns/openshift-cluster-node-tuning-operator pod/tuned-gjh66 node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.201 E ns/openshift-sdn pod/ovs-v6vdg node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.218 E ns/openshift-multus pod/multus-frkp7 node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.229 E ns/openshift-machine-config-operator pod/machine-config-daemon-kpnvf node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:46.245 E ns/openshift-dns pod/dns-default-dfqm4 node/ip-10-0-185-68.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:38:55.914 E ns/openshift-machine-config-operator pod/machine-config-daemon-kpnvf node/ip-10-0-185-68.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 04 20:38:55.987 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Aug 04 20:39:03.139 E ns/openshift-monitoring pod/node-exporter-gb6pg node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.181 E ns/openshift-controller-manager pod/controller-manager-mqpk7 node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.220 E ns/openshift-cluster-node-tuning-operator pod/tuned-rtmdg node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.243 E ns/openshift-image-registry pod/node-ca-w776v node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.272 E ns/openshift-sdn pod/sdn-controller-vwcvn node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.337 E ns/openshift-sdn pod/ovs-gx66j node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.351 E ns/openshift-multus pod/multus-admission-controller-xkd92 node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.372 E ns/openshift-sdn pod/sdn-jsdch node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.391 E ns/openshift-multus pod/multus-fsd42 node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.413 E ns/openshift-dns pod/dns-default-7mcl7 node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.435 E ns/openshift-machine-config-operator pod/machine-config-daemon-vkhvl node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:03.452 E ns/openshift-machine-config-operator pod/machine-config-server-7mnhq node/ip-10-0-250-156.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:39:05.830 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-8c6d46bbc-mf7mt node/ip-10-0-170-11.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Aug 04 20:39:05.846 E ns/openshift-monitoring pod/prometheus-adapter-8fdf88bd9-hxtnw node/ip-10-0-170-11.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0804 20:21:07.461443       1 adapter.go:93] successfully using in-cluster auth\nI0804 20:21:08.555905       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 04 20:39:05.885 E ns/openshift-monitoring pod/openshift-state-metrics-574fdd8979-2q6kk node/ip-10-0-170-11.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 04 20:39:06.859 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-170-11.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/04 20:21:58 Watching directory: "/etc/alertmanager/config"\n
Aug 04 20:39:06.895 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-11.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/04 20:36:32 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 04 20:39:06.895 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-11.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/04 20:36:32 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/04 20:36:32 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/04 20:36:32 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/04 20:36:32 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/04 20:36:32 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/04 20:36:32 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/04 20:36:32 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/04 20:36:32 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0804 20:36:32.980417       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/04 20:36:32 http.go:107: HTTPS: listening on [::]:9091\n
Aug 04 20:39:06.895 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-11.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-04T20:36:32.269978254Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-04T20:36:32.270079229Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-04T20:36:32.27155206Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-04T20:36:37.526350167Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 04 20:39:07.886 E ns/openshift-monitoring pod/kube-state-metrics-6db78f8889-qvhcm node/ip-10-0-170-11.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Aug 04 20:39:07.901 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-170-11.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/04 20:36:26 Watching directory: "/etc/alertmanager/config"\n
Aug 04 20:39:07.901 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-170-11.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/04 20:36:27 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/04 20:36:27 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/04 20:36:27 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/04 20:36:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/04 20:36:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/04 20:36:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/04 20:36:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/04 20:36:27 http.go:107: HTTPS: listening on [::]:9095\nI0804 20:36:27.145115       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Aug 04 20:39:14.657 E ns/openshift-machine-config-operator pod/machine-config-daemon-vkhvl node/ip-10-0-250-156.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 04 20:39:23.748 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-5dd59dbdf8-9w44d node/ip-10-0-182-102.us-west-1.compute.internal container=operator container exited with code 255 (Error): 063       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0804 20:39:01.045810       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0804 20:39:02.537538       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0804 20:39:12.550810       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0804 20:39:16.302669       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0804 20:39:16.302708       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0804 20:39:16.310082       1 httplog.go:90] GET /metrics: (12.297488ms) 200 [Prometheus/2.15.2 10.131.0.21:52534]\nI0804 20:39:21.042415       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0804 20:39:21.050102       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0804 20:39:22.274763       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:39:22.275421       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:39:22.275492       1 finalizer_controller.go:140] Shutting down FinalizerController\nI0804 20:39:22.275543       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-apiserver\nI0804 20:39:22.275589       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0804 20:39:22.275669       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0804 20:39:22.275708       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0804 20:39:22.275756       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0804 20:39:22.275799       1 builder.go:209] server exited\n
Aug 04 20:39:23.756 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Aug 04 20:39:26.770 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-185-68.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-08-04T20:39:23.230Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-04T20:39:23.234Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-04T20:39:23.234Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-04T20:39:23.235Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-04T20:39:23.235Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-08-04T20:39:23.235Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-04T20:39:23.236Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-04T20:39:23.236Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-04T20:39:23.236Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-08-04T20:39:23.238Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-04T20:39:23.238Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-08-04
Aug 04 20:39:29.334 E ns/openshift-cluster-machine-approver pod/machine-approver-74cc946b49-rzkkr node/ip-10-0-182-102.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): sed\nE0804 20:27:11.161677       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:27:12.162398       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:27:13.163277       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:27:14.163981       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:27:15.164792       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0804 20:27:20.934184       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\n
Aug 04 20:39:29.413 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-696454b6f9-wtf6t node/ip-10-0-182-102.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): 81] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"cc1490ba-b5fd-4e1e-b6ec-5bef38e37441", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"kube-controller-manager\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"kube-controller-manager\" is not ready"\nI0804 20:39:27.602878       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"cc1490ba-b5fd-4e1e-b6ec-5bef38e37441", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-250-156.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-250-156.us-west-1.compute.internal container=\"kube-controller-manager\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0804 20:39:28.207085       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0804 20:39:28.207865       1 builder.go:209] server exited\nI0804 20:39:28.244756       1 base_controller.go:74] Shutting down NodeController ...\n
Aug 04 20:39:50.488 E ns/openshift-console pod/console-775d8cb488-jk228 node/ip-10-0-182-102.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-08-04T20:22:32Z cmd/main: cookies are secure!\n2020-08-04T20:22:32Z cmd/main: Binding to [::]:8443...\n2020-08-04T20:22:32Z cmd/main: using TLS\n
Aug 04 20:40:03.330 E kube-apiserver Kube API started failing: Get https://api.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 04 20:40:05.653 E ns/openshift-machine-api pod/machine-api-controllers-5cdbd6cbf-5h6f5 node/ip-10-0-250-156.us-west-1.compute.internal container=nodelink-controller container exited with code 255 (Error): 
Aug 04 20:40:16.641 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Definitions ConfigMaps failed: updating ConfigMap object failed: Put https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/grafana-dashboard-node-rsrc-use: read tcp 10.129.0.80:37216->172.30.0.1:443: read: connection reset by peer
Aug 04 20:40:47.301 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 04 20:41:50.360 E ns/openshift-monitoring pod/node-exporter-62zhx node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.384 E ns/openshift-image-registry pod/node-ca-8ptcj node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.399 E ns/openshift-cluster-node-tuning-operator pod/tuned-5zk9c node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.430 E ns/openshift-multus pod/multus-zpqvw node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.442 E ns/openshift-sdn pod/ovs-j5g2w node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.463 E ns/openshift-dns pod/dns-default-2rdff node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:50.471 E ns/openshift-machine-config-operator pod/machine-config-daemon-8tbt9 node/ip-10-0-170-11.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:41:59.468 E ns/openshift-machine-config-operator pod/machine-config-daemon-8tbt9 node/ip-10-0-170-11.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 04 20:42:08.932 E ns/openshift-monitoring pod/prometheus-adapter-8fdf88bd9-dn9bt node/ip-10-0-192-57.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0804 20:21:19.151684       1 adapter.go:93] successfully using in-cluster auth\nI0804 20:21:19.882879       1 secure_serving.go:116] Serving securely on [::]:6443\nW0804 20:36:49.814655       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received\n
Aug 04 20:42:12.033 E ns/openshift-cluster-node-tuning-operator pod/tuned-gq8tl node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.088 E ns/openshift-controller-manager pod/controller-manager-bk47z node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.089 E ns/openshift-image-registry pod/node-ca-7fx98 node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.089 E ns/openshift-monitoring pod/node-exporter-dzjxj node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.112 E ns/openshift-sdn pod/sdn-controller-6xntp node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.125 E ns/openshift-sdn pod/sdn-sxbbv node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.146 E ns/openshift-multus pod/multus-admission-controller-8jqrj node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.171 E ns/openshift-sdn pod/ovs-56sl4 node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.194 E ns/openshift-multus pod/multus-874nb node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.209 E ns/openshift-dns pod/dns-default-zgb9p node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.227 E ns/openshift-machine-config-operator pod/machine-config-daemon-bt9p8 node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:12.246 E ns/openshift-machine-config-operator pod/machine-config-server-85qpg node/ip-10-0-182-102.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:42:19.808 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-170-11.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-08-04T20:42:18.052Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-04T20:42:18.055Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-04T20:42:18.078Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-04T20:42:18.079Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-04T20:42:18.086Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-08-04T20:42:18.086Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-04T20:42:18.086Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-04T20:42:18.086Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-04T20:42:18.086Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=info ts=2020-08-04T20:42:18.086Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=error ts=2020-08-04
Aug 04 20:42:27.090 E ns/openshift-machine-config-operator pod/machine-config-daemon-bt9p8 node/ip-10-0-182-102.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 04 20:42:52.529 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6db89494df-qhglt node/ip-10-0-128-49.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): 04 20:42:50.050582       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0804 20:42:50.050600       1 state_controller.go:171] Shutting down EncryptionStateController\nI0804 20:42:50.050615       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0804 20:42:50.050630       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0804 20:42:50.050646       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:42:50.050664       1 prune_controller.go:232] Shutting down PruneController\nI0804 20:42:50.050679       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0804 20:42:50.050697       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0804 20:42:50.050717       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0804 20:42:50.050735       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0804 20:42:50.050748       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0804 20:42:50.050765       1 base_controller.go:73] Shutting down RevisionController ...\nI0804 20:42:50.050781       1 base_controller.go:73] Shutting down  ...\nI0804 20:42:50.050794       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0804 20:42:50.050943       1 base_controller.go:48] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0804 20:42:50.050957       1 base_controller.go:38] All UnsupportedConfigOverridesController workers have been terminated\nI0804 20:42:50.053322       1 base_controller.go:48] Shutting down worker of  controller ...\nI0804 20:42:50.053430       1 base_controller.go:38] All  workers have been terminated\nI0804 20:42:50.053504       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nI0804 20:42:50.053600       1 workload_controller.go:177] Shutting down OpenShiftAPIServerOperator\nF0804 20:42:50.053680       1 builder.go:243] stopped\n
Aug 04 20:42:54.109 E ns/openshift-console-operator pod/console-operator-6fcb4f655f-bdjcf node/ip-10-0-128-49.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): or SIGINT signal, shutting down controller.\nI0804 20:42:50.725483       1 controller.go:70] Shutting down Console\nI0804 20:42:50.725739       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0804 20:42:50.725767       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0804 20:42:50.725803       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0804 20:42:50.725814       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0804 20:42:50.725823       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0804 20:42:50.725849       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0804 20:42:50.725864       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0804 20:42:50.725911       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0804 20:42:50.728511       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0804 20:42:50.725927       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nF0804 20:42:50.726965       1 builder.go:209] server exited\nI0804 20:42:50.727009       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0804 20:42:50.727026       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nF0804 20:42:50.727066       1 builder.go:243] stopped\nI0804 20:42:50.727079       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0804 20:42:50.727480       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0804 20:42:50.727517       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0804 20:42:50.728609       1 base_controller.go:39] All LoggingSyncer workers have been terminated\n
Aug 04 20:42:54.219 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-85f49cdfd5-r2svx node/ip-10-0-128-49.us-west-1.compute.internal container=operator container exited with code 255 (Error): o:90] GET /metrics: (2.319718ms) 200 [Prometheus/2.15.2 10.131.0.21:32988]\nI0804 20:42:06.826030       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 1 items received\nI0804 20:42:12.553606       1 request.go:565] Throttling request took 157.210922ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0804 20:42:12.753665       1 request.go:565] Throttling request took 196.459883ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0804 20:42:23.673393       1 httplog.go:90] GET /metrics: (6.377986ms) 200 [Prometheus/2.15.2 10.128.2.23:52362]\nI0804 20:42:28.188581       1 httplog.go:90] GET /metrics: (2.690479ms) 200 [Prometheus/2.15.2 10.129.2.15:45728]\nI0804 20:42:32.552833       1 request.go:565] Throttling request took 163.507071ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0804 20:42:32.752835       1 request.go:565] Throttling request took 195.171419ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0804 20:42:39.357830       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 31 items received\nI0804 20:42:51.867370       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0804 20:42:51.868020       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0804 20:42:51.873281       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0804 20:42:51.870332       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nI0804 20:42:51.870349       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nF0804 20:42:51.870678       1 builder.go:209] server exited\n
Aug 04 20:42:58.559 E ns/openshift-machine-config-operator pod/machine-config-operator-7998975d67-x8fx7 node/ip-10-0-128-49.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): I0804 20:39:29.477580       1 start.go:45] Version: machine-config-daemon-4.4.0-202006242133.p0-16-g601c2285-dirty (601c2285f497bf7c73d84737b9977a0e697cb86a)\nI0804 20:39:29.480983       1 leaderelection.go:242] attempting to acquire leader lease  openshift-machine-config-operator/machine-config...\nE0804 20:41:25.212377       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"4ee74f5f-22a3-4d73-b91d-01fa09ee29ef", ResourceVersion:"42056", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732167425, loc:(*time.Location)(0x27f9020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-7998975d67-x8fx7_428c7c44-773c-459b-8c82-63b97f5a9284\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-08-04T20:41:25Z\",\"renewTime\":\"2020-08-04T20:41:25Z\",\"leaderTransitions\":3}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7998975d67-x8fx7_428c7c44-773c-459b-8c82-63b97f5a9284 became leader'\nI0804 20:41:25.212452       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0804 20:41:25.835530       1 operator.go:264] Starting MachineConfigOperator\n
Aug 04 20:42:59.590 E ns/openshift-machine-api pod/machine-api-operator-7b5d7c9d8c-5w2bz node/ip-10-0-128-49.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 04 20:43:18.005 E ns/openshift-console pod/console-775d8cb488-mh7ml node/ip-10-0-128-49.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-08-04T20:36:20Z cmd/main: cookies are secure!\n2020-08-04T20:36:20Z cmd/main: Binding to [::]:8443...\n2020-08-04T20:36:20Z cmd/main: using TLS\n
Aug 04 20:43:24.343 E kube-apiserver Kube API started failing: Get https://api.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Aug 04 20:43:25.736 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (network.openshift.io/v1, Kind=ClusterNetwork) /default: could not retrieve existing (network.openshift.io/v1, Kind=ClusterNetwork) /default: Get https://api-int.ci-op-is1jm1fc-90c52.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/clusternetworks/default: unexpected EOF
Aug 04 20:43:28.163 E ns/openshift-monitoring pod/prometheus-operator-7f897fb446-xn5zs node/ip-10-0-182-102.us-west-1.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-08-04T20:43:27.769100493Z caller=main.go:199 msg="Starting Prometheus Operator version '0.35.1'."\nts=2020-08-04T20:43:27.788930308Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-08-04T20:43:27.791454175Z caller=main.go:288 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Aug 04 20:44:20.916 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 04 20:44:47.703 E ns/openshift-monitoring pod/node-exporter-cszjc node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.719 E ns/openshift-cluster-node-tuning-operator pod/tuned-lds8j node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.734 E ns/openshift-image-registry pod/node-ca-nmdrm node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.751 E ns/openshift-sdn pod/ovs-ngd65 node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.767 E ns/openshift-sdn pod/sdn-ll4hr node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.782 E ns/openshift-multus pod/multus-8pv8g node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.799 E ns/openshift-dns pod/dns-default-n6chs node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:47.811 E ns/openshift-machine-config-operator pod/machine-config-daemon-22tx2 node/ip-10-0-192-57.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:44:56.148 E ns/openshift-machine-config-operator pod/machine-config-daemon-22tx2 node/ip-10-0-192-57.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 04 20:45:18.711 E ns/openshift-marketplace pod/certified-operators-7b9bf7cfcf-2zrd9 node/ip-10-0-185-68.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 04 20:45:19.714 E ns/openshift-marketplace pod/community-operators-54f7f7b87f-7p6cw node/ip-10-0-185-68.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 04 20:45:27.296 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Aug 04 20:45:41.285 E ns/openshift-image-registry pod/node-ca-jsbjt node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.312 E ns/openshift-monitoring pod/node-exporter-4br4d node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.329 E ns/openshift-cluster-node-tuning-operator pod/tuned-shntr node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.344 E ns/openshift-controller-manager pod/controller-manager-b8k5x node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.372 E ns/openshift-sdn pod/sdn-controller-clq75 node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.387 E ns/openshift-sdn pod/ovs-9f58c node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.403 E ns/openshift-multus pod/multus-4trh9 node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.416 E ns/openshift-multus pod/multus-admission-controller-sjdck node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.451 E ns/openshift-dns pod/dns-default-nn4b2 node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.466 E ns/openshift-machine-config-operator pod/machine-config-daemon-kzhnz node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 04 20:45:41.479 E ns/openshift-machine-config-operator pod/machine-config-server-dfhl9 node/ip-10-0-128-49.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending