Result: SUCCESS
Tests: 4 failed / 27 succeeded
Started: 2020-07-30 18:53
Elapsed: 2h15m
Work namespace: ci-op-bmb43f6l
Refs: master:720d3361, 432:c628870c
Pod: e91d3419-d295-11ea-b8bb-0a580a8104a2
Repo: openshift/cluster-kube-controller-manager-operator
Revision: 1

Test Failures


Cluster upgrade [sig-api-machinery] Kubernetes APIs remain available 34m39s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sKubernetes\sAPIs\sremain\savailable$'
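Note on the reproduce commands in this report: the --ginkgo.focus value is just the test name from the heading with whitespace and regex metacharacters backslash-escaped. The short Go sketch below is an illustration only (not part of the CI tooling); it uses nothing beyond the pattern and test name quoted above to show that the focus pattern anchors on that name.

```go
// Illustration: confirm the escaped --ginkgo.focus pattern matches the
// human-readable test name shown in the failure heading above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Pattern copied verbatim from the reproduce command.
	focus := regexp.MustCompile(`Cluster\supgrade\s\[sig\-api\-machinery\]\sKubernetes\sAPIs\sremain\savailable$`)

	// Test name as it appears in the failure heading.
	name := "Cluster upgrade [sig-api-machinery] Kubernetes APIs remain available"

	fmt.Println(focus.MatchString(name)) // prints: true
}
```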
API "kubernetes-api-available" was unreachable during disruption for at least 4s of 34m39s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Jul 30 20:26:50.428 E kube-apiserver Kube API started failing: Get https://api.ci-op-bmb43f6l-b0725.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 35.155.240.208:6443: connect: connection refused
Jul 30 20:26:51.260 E kube-apiserver Kube API is not responding to GET requests
Jul 30 20:26:51.351 I kube-apiserver Kube API started responding to GET requests
Jul 30 20:44:09.387 E kube-apiserver Kube API started failing: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
Jul 30 20:44:10.260 E kube-apiserver Kube API is not responding to GET requests
Jul 30 20:44:10.554 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1596142732.xml
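For context on the "(0%)" figure in the summary above: 34m39s is 2079 seconds, so 4 seconds of unreachability is roughly 0.19% of the monitored window, which the summary evidently rounds down to 0%. A minimal sketch of that arithmetic (an illustration only, not the test suite's own code):

```go
// Illustration: recompute the disruption percentage quoted in the failure
// summary from the two durations it reports.
package main

import (
	"fmt"
	"time"
)

func main() {
	unavailable, _ := time.ParseDuration("4s")  // reported unreachable time
	window, _ := time.ParseDuration("34m39s")   // monitored window (2079s)

	pct := 100 * unavailable.Seconds() / window.Seconds()
	fmt.Printf("%.2f%% of the window\n", pct) // ~0.19%, reported as 0%
}
```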



Cluster upgrade [sig-api-machinery] Kubernetes APIs remain available 34m39s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sKubernetes\sAPIs\sremain\savailable$'
API "openshift-api-available" was unreachable during disruption for at least 2s of 34m39s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Jul 30 20:44:09.446 E kube-apiserver Kube API started failing: rpc error: code = Unavailable desc = transport is closing
Jul 30 20:44:10.260 - 1s    E kube-apiserver Kube API is not responding to GET requests
Jul 30 20:44:11.344 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1596142732.xml



Cluster upgrade [sig-network-edge] Application behind service load balancer with PDB is not disrupted 35m40s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-network\-edge\]\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 1s of 31m25s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Jul 30 20:49:14.375 E ns/e2e-k8s-service-lb-available-8943 svc/service-test Service stopped responding to GET requests on reused connections
Jul 30 20:49:14.550 I ns/e2e-k8s-service-lb-available-8943 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1596142732.xml



openshift-tests [sig-arch] Monitor cluster while tests execute 40m9s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
99 error level events were detected during this test run:

Jul 30 20:20:03.683 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-63.us-west-2.compute.internal node/ip-10-0-153-63.us-west-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): 1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=13488&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 20:20:02.725741       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=34323&timeout=6m10s&timeoutSeconds=370&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 20:20:02.726770       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=13399&timeout=6m15s&timeoutSeconds=375&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 20:20:02.727831       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=33698&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 20:20:02.729073       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=13391&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 20:20:02.730021       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=13392&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0730 20:20:02.869671       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0730 20:20:02.869720       1 policy_controller.go:94] leaderelection lost\n
Jul 30 20:20:03.742 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-63.us-west-2.compute.internal node/ip-10-0-153-63.us-west-2.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 20:22:52.073 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-162-59.us-west-2.compute.internal node/ip-10-0-162-59.us-west-2.compute.internal container/kube-scheduler container exited with code 255 (Error): .)\n	/usr/local/go/src/io/io.go:329\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000c82038, 0x9, 0x9, 0x7f314c1f37e0, 0xc001d86000, 0x0, 0xc000000000, 0xc000745f28, 0xc0012807e0)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x87\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000c82000, 0xc000745ee0, 0x2, 0x0, 0x1)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa1\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).readFrames(0xc000b97b00)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:745 +0xa4\ncreated by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).serve\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:850 +0x347\n\ngoroutine 3690 [select]:\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).serve(0xc000b97b00)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:858 +0x588\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Server).ServeConn(0xc00064ab80, 0x1ffcea0, 0xc001d86000, 0xc0008abc10)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:472 +0x73a\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.ConfigureServer.func1(0xc000720700, 0xc001d86000, 0x1f91d60, 0xc000fc0920)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:298 +0xef\nnet/http.(*conn).serve(0xc000f8a820, 0x1fddd20, 0xc00074e810)\n	/usr/local/go/src/net/http/server.go:1800 +0x122c\ncreated by net/http.(*Server).Serve\n	/usr/local/go/src/net/http/server.go:2928 +0x384\nE0730 20:22:51.702725       1 event.go:273] Unable to write event: 'Post https://localhost:6443/api/v1/namespaces/default/events: dial tcp [::1]:6443: connect: connection refused' (may retry after sleeping)\n
Jul 30 20:23:17.151 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-162-59.us-west-2.compute.internal node/ip-10-0-162-59.us-west-2.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 20:24:16.475 E clusteroperator/monitoring changed Degraded to True: UpdatingprometheusAdapterFailed: Failed to rollout the stack. Error: running task Updating prometheus-adapter failed: reconciling PrometheusAdapter Service failed: updating Service object failed: etcdserver: leader changed
Jul 30 20:25:15.645 E ns/openshift-machine-api pod/machine-api-operator-7f7fcb7cb5-kcw5f node/ip-10-0-162-59.us-west-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 30 20:27:33.999 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-dbdcb4d5f-lb6pb node/ip-10-0-204-243.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): F\nI0730 20:10:03.924263       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:10:03.924276       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:14:21.389289       1 status_controller.go:172] clusteroperator/kube-storage-version-migrator diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-30T19:59:55Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-30T20:03:36Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-30T20:14:21Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-30T19:59:55Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}],"versions":[{"name":"operator","version":"0.0.1-2020-07-30-185809"},{"name":"kube-storage-version-migrator","version":""}]}}\nI0730 20:14:21.396887       1 event.go:278] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0730 20:27:33.168093       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0730 20:27:33.168389       1 builder.go:248] server exited\nI0730 20:27:33.168476       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0730 20:27:33.168504       1 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from runtime/asm_amd64.s:1357\nI0730 20:27:33.168546       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nW0730 20:27:33.168592       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Jul 30 20:27:50.359 E ns/openshift-kube-storage-version-migrator pod/migrator-d4cbb8fcc-qpclt node/ip-10-0-170-175.us-west-2.compute.internal container/migrator container exited with code 2 (Error): I0730 20:14:20.139370       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0730 20:14:20.139524       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0730 20:14:20.139535       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0730 20:14:20.139546       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0730 20:14:20.139556       1 migrator.go:18] FLAG: --kubeconfig=""\nI0730 20:14:20.139565       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0730 20:14:20.139577       1 migrator.go:18] FLAG: --log_dir=""\nI0730 20:14:20.139586       1 migrator.go:18] FLAG: --log_file=""\nI0730 20:14:20.139594       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0730 20:14:20.139602       1 migrator.go:18] FLAG: --logtostderr="true"\nI0730 20:14:20.139609       1 migrator.go:18] FLAG: --skip_headers="false"\nI0730 20:14:20.139617       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0730 20:14:20.139624       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0730 20:14:20.139632       1 migrator.go:18] FLAG: --v="2"\nI0730 20:14:20.139640       1 migrator.go:18] FLAG: --vmodule=""\nI0730 20:14:20.141258       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\n
Jul 30 20:27:56.432 E ns/openshift-insights pod/insights-operator-584bb5bd46-xb6xj node/ip-10-0-162-59.us-west-2.compute.internal container/operator container exited with code 2 (Error): /metrics: (1.444689ms) 200 [Prometheus/2.20.0 10.128.2.8:50840]\nI0730 20:26:26.572672       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:26:28.043695       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 7 items received\nI0730 20:26:41.572919       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:26:41.585190       1 status.go:320] The operator is healthy\nI0730 20:26:41.585237       1 status.go:430] No status update necessary, objects are identical\nI0730 20:26:45.936500       1 httplog.go:90] GET /metrics: (6.468921ms) 200 [Prometheus/2.20.0 10.129.2.10:34404]\nI0730 20:26:48.811522       1 httplog.go:90] GET /metrics: (2.44985ms) 200 [Prometheus/2.20.0 10.128.2.8:50840]\nI0730 20:26:50.062808       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 7 items received\nI0730 20:26:50.063332       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0730 20:26:56.573167       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:27:11.573405       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:27:15.936427       1 httplog.go:90] GET /metrics: (6.396277ms) 200 [Prometheus/2.20.0 10.129.2.10:34404]\nI0730 20:27:18.810728       1 httplog.go:90] GET /metrics: (1.706687ms) 200 [Prometheus/2.20.0 10.128.2.8:50840]\nI0730 20:27:26.573627       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:27:41.573844       1 insightsuploader.go:122] Nothing to report since 2020-07-30T20:08:56Z\nI0730 20:27:45.939747       1 httplog.go:90] GET /metrics: (9.726162ms) 200 [Prometheus/2.20.0 10.129.2.10:34404]\nI0730 20:27:48.810639       1 httplog.go:90] GET /metrics: (1.670957ms) 200 [Prometheus/2.20.0 10.128.2.8:50840]\n
Jul 30 20:27:57.260 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-5549856d66-cqkwr node/ip-10-0-153-63.us-west-2.compute.internal container/cluster-storage-operator container exited with code 1 (Error):  (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131728       1 reflector.go:213] Stopping reflector *v1.Infrastructure (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131753       1 reflector.go:213] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131777       1 reflector.go:213] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131797       1 reflector.go:213] Stopping reflector *v1.ClusterCSIDriver (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131814       1 reflector.go:213] Stopping reflector *v1.StorageClass (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131834       1 reflector.go:213] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0730 20:27:56.131849       1 base_controller.go:136] Shutting down SnapshotCRDController ...\nI0730 20:27:56.131862       1 base_controller.go:136] Shutting down DefaultStorageClassController ...\nI0730 20:27:56.131873       1 base_controller.go:136] Shutting down LoggingSyncer ...\nI0730 20:27:56.131884       1 base_controller.go:136] Shutting down CSIDriverStarter ...\nI0730 20:27:56.131894       1 base_controller.go:136] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0730 20:27:56.131899       1 base_controller.go:114] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0730 20:27:56.131909       1 base_controller.go:136] Shutting down StatusSyncer_storage ...\nI0730 20:27:56.131913       1 base_controller.go:114] All StatusSyncer_storage post start hooks have been terminated\nI0730 20:27:56.131924       1 base_controller.go:136] Shutting down ManagementStateController ...\nW0730 20:27:56.132069       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Jul 30 20:28:06.082 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (205 of 606)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (192 of 606)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (299 of 606)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (325 of 606)\n* Could not update deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (245 of 606)\n* Could not update deployment "openshift-console/downloads" (396 of 606)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (309 of 606)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (262 of 606)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (225 of 606)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (458 of 606)\n* Could not update deployment "openshift-monitoring/cluster-monitoring-operator" (365 of 606)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (438 of 606)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (471 of 606)
Jul 30 20:28:16.434 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-66b5cdb8c8-mp774 node/ip-10-0-170-175.us-west-2.compute.internal container/aws-ebs-csi-driver-operator container exited with code 1 (Error): 
Jul 30 20:28:17.414 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-2thc5 node/ip-10-0-134-56.us-west-2.compute.internal container/csi-node-driver-registrar container exited with code 2 (Error): 
Jul 30 20:28:17.414 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-2thc5 node/ip-10-0-134-56.us-west-2.compute.internal container/csi-liveness-probe container exited with code 2 (Error): 
Jul 30 20:28:17.414 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-2thc5 node/ip-10-0-134-56.us-west-2.compute.internal container/csi-driver container exited with code 2 (Error): 
Jul 30 20:28:18.350 E ns/openshift-controller-manager pod/controller-manager-kc8wq node/ip-10-0-153-63.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): tch stream event decoding: unexpected EOF\nI0730 20:26:50.037719       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037724       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037722       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037729       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037729       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037735       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037736       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037740       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037749       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037752       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037754       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037758       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037759       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037772       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037803       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037816       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:26:50.037890       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 30 20:28:40.876 E ns/openshift-monitoring pod/node-exporter-gfp65 node/ip-10-0-162-59.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T20:09:09.074Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T20:09:09.075Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T20:09:09.075Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T20:09:09.075Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 20:29:06.229 E ns/openshift-monitoring pod/node-exporter-kvcjf node/ip-10-0-204-243.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T20:08:56.458Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T20:08:56.458Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 20:29:13.882 E ns/openshift-monitoring pod/openshift-state-metrics-78454f48c4-lhht7 node/ip-10-0-170-175.us-west-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Jul 30 20:29:29.435 E ns/openshift-monitoring pod/prometheus-adapter-7c764df574-8c4qv node/ip-10-0-204-199.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0730 20:14:11.078543       1 adapter.go:94] successfully using in-cluster auth\nI0730 20:14:11.499922       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0730 20:14:11.499930       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0730 20:14:11.500108       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 20:14:11.500822       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 20:14:11.501012       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0730 20:28:38.029086       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 20:28:38.029216       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Jul 30 20:29:40.300 E ns/openshift-marketplace pod/redhat-operators-7ffc8c7489-h7gjw node/ip-10-0-170-175.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 20:29:40.730 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-56.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 20:14:54 Watching directory: "/etc/alertmanager/config"\n
Jul 30 20:29:40.730 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-56.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 20:14:54 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:14:54 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:14:54 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:14:54 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 20:14:54 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:14:54 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:14:54 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:14:54 http.go:107: HTTPS: listening on [::]:9095\nI0730 20:14:54.947742       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 30 20:29:49.791 E ns/openshift-monitoring pod/thanos-querier-5497594fbd-2hvd5 node/ip-10-0-134-56.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): \nI0730 20:14:48.892176       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 20:15:18 oauthproxy.go:782: basicauth: 10.128.0.9:37502 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:16:18 oauthproxy.go:782: basicauth: 10.128.0.9:43678 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:17:18 oauthproxy.go:782: basicauth: 10.128.0.9:45470 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:18:18 oauthproxy.go:782: basicauth: 10.128.0.9:48874 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:19:18 oauthproxy.go:782: basicauth: 10.128.0.9:51638 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:20:18 oauthproxy.go:782: basicauth: 10.128.0.9:55854 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:21:18 oauthproxy.go:782: basicauth: 10.128.0.9:58532 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:22:18 oauthproxy.go:782: basicauth: 10.128.0.9:33030 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:22:46 oauthproxy.go:782: basicauth: 10.129.0.48:40212 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:23:47 oauthproxy.go:782: basicauth: 10.129.0.48:46788 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:26:45 oauthproxy.go:782: basicauth: 10.129.0.48:34772 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:28:45 oauthproxy.go:782: basicauth: 10.129.0.48:39806 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:29:45 oauthproxy.go:782: basicauth: 10.129.0.48:43696 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 20:29:56.844 E ns/openshift-monitoring pod/grafana-54744bfbc7-w9zv5 node/ip-10-0-134-56.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Jul 30 20:30:23.635 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-204-199.us-west-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T20:29:38.101Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 20:30:29.021 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/30 20:14:54 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/07/30 20:16:24 config map updated\n2020/07/30 20:16:24 successfully triggered reload\n2020/07/30 20:19:06 config map updated\n2020/07/30 20:19:06 successfully triggered reload\n
Jul 30 20:30:29.021 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/07/30 20:14:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:14:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:14:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:14:55 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 20:14:55 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:14:55 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:14:55 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:14:55 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/30 20:14:55 http.go:107: HTTPS: listening on [::]:9091\nI0730 20:14:55.404609       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 20:29:08 oauthproxy.go:782: basicauth: 10.131.0.29:53074 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:30:09 oauthproxy.go:782: basicauth: 10.131.0.29:54558 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 20:30:29.021 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-30T20:14:54.564366556Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-07-30T20:14:54.565969975Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-30T20:14:59.712844949Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T20:14:59.712936694Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-07-30T20:14:59.868424534Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T20:17:59.849254224Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T20:21:00.050051843Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 30 20:30:38.677 E ns/openshift-monitoring pod/node-exporter-76gll node/ip-10-0-204-199.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T20:13:15.919Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T20:13:15.920Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T20:13:15.920Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 20:30:48.265 E ns/openshift-console pod/console-6c9b846557-ttrbd node/ip-10-0-153-63.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-07-30T20:17:57Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-07-30T20:17:57Z cmd/main: cookies are secure!\n2020-07-30T20:17:57Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T20:18:07Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T20:18:17Z cmd/main: Binding to [::]:8443...\n2020-07-30T20:18:17Z cmd/main: using TLS\n2020-07-30T20:18:51Z auth: failed to get latest auth source data: discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n
Jul 30 20:30:52.194 E ns/openshift-monitoring pod/node-exporter-bbgln node/ip-10-0-153-63.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T20:08:56.444Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T20:08:56.444Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 20:30:57.788 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-175.us-west-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T20:30:50.388Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 20:31:03.881 E ns/openshift-console pod/console-6c9b846557-zvqrf node/ip-10-0-204-243.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-07-30T20:17:54Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-07-30T20:17:54Z cmd/main: cookies are secure!\n2020-07-30T20:17:54Z cmd/main: Binding to [::]:8443...\n2020-07-30T20:17:54Z cmd/main: using TLS\n
Jul 30 20:31:48.085 E ns/openshift-sdn pod/sdn-controller-5pphj node/ip-10-0-162-59.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): nfigmap-upgrade-6441"\nI0730 20:18:47.937142       1 vnids.go:116] Allocated netid 3204553 for namespace "e2e-k8s-sig-apps-job-upgrade-3042"\nI0730 20:18:47.971191       1 vnids.go:116] Allocated netid 12323061 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-1730"\nI0730 20:18:47.996404       1 vnids.go:116] Allocated netid 6187270 for namespace "e2e-k8s-service-lb-available-8943"\nI0730 20:18:48.022769       1 vnids.go:116] Allocated netid 115433 for namespace "e2e-check-for-critical-alerts-9093"\nI0730 20:18:48.037616       1 vnids.go:116] Allocated netid 14291842 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-2084"\nI0730 20:18:48.060945       1 vnids.go:116] Allocated netid 4245225 for namespace "e2e-k8s-sig-apps-deployment-upgrade-3563"\nI0730 20:18:48.084249       1 vnids.go:116] Allocated netid 2465436 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-7818"\nE0730 20:19:26.625813       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Node: Get https://api-int.ci-op-bmb43f6l-b0725.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=34324&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp 10.0.233.241:6443: connect: connection refused\nE0730 20:26:50.060292       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Node: Get https://api-int.ci-op-bmb43f6l-b0725.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=41729&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.0.183.135:6443: connect: connection refused\nI0730 20:29:02.061477       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:29:02.061477       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:29:02.061515       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 20:29:02.061498       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 30 20:32:18.192 E ns/openshift-sdn pod/ovs-ll5nx node/ip-10-0-204-243.us-west-2.compute.internal container/openvswitch container exited with code 137 (Error): |br0<->unix#1322: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:04.905Z|00553|connmgr|INFO|br0<->unix#1325: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:04.957Z|00554|connmgr|INFO|br0<->unix#1328: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.025Z|00555|connmgr|INFO|br0<->unix#1331: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.088Z|00556|connmgr|INFO|br0<->unix#1335: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:05.101Z|00557|connmgr|INFO|br0<->unix#1337: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.143Z|00558|connmgr|INFO|br0<->unix#1341: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:05.144Z|00559|connmgr|INFO|br0<->unix#1343: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.181Z|00560|connmgr|INFO|br0<->unix#1347: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:05.193Z|00561|connmgr|INFO|br0<->unix#1349: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.241Z|00562|connmgr|INFO|br0<->unix#1353: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:05.247Z|00563|connmgr|INFO|br0<->unix#1355: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.295Z|00564|connmgr|INFO|br0<->unix#1360: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:05.296Z|00565|connmgr|INFO|br0<->unix#1361: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.343Z|00566|connmgr|INFO|br0<->unix#1365: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.374Z|00567|connmgr|INFO|br0<->unix#1368: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:05.380Z|00568|connmgr|INFO|br0<->unix#1370: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:05.413Z|00569|connmgr|INFO|br0<->unix#1373: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:05.462Z|00570|connmgr|INFO|br0<->unix#1376: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:05.489Z|00571|connmgr|INFO|br0<->unix#1379: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:05.520Z|00572|connmgr|INFO|br0<->unix#1382: 1 flow_mods in the last 0 s (1 adds)\n
Jul 30 20:33:08.375 E ns/openshift-multus pod/multus-admission-controller-ssjw9 node/ip-10-0-162-59.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Jul 30 20:33:51.446 E ns/openshift-multus pod/multus-admission-controller-z7p7t node/ip-10-0-204-243.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Jul 30 20:34:17.636 E ns/openshift-multus pod/multus-pmml8 node/ip-10-0-204-243.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 20:34:52.818 E ns/openshift-sdn pod/ovs-gjszv node/ip-10-0-162-59.us-west-2.compute.internal container/openvswitch container exited with code 137 (Error): 3|connmgr|INFO|br0<->unix#1188: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:15.546Z|00484|connmgr|INFO|br0<->unix#1190: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:15.576Z|00485|connmgr|INFO|br0<->unix#1194: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:15.599Z|00486|connmgr|INFO|br0<->unix#1197: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:15.620Z|00487|connmgr|INFO|br0<->unix#1200: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:15.629Z|00488|connmgr|INFO|br0<->unix#1202: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:15.660Z|00489|connmgr|INFO|br0<->unix#1206: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:15.674Z|00490|connmgr|INFO|br0<->unix#1209: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:15.697Z|00491|connmgr|INFO|br0<->unix#1212: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T20:32:15.711Z|00492|connmgr|INFO|br0<->unix#1214: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:15.740Z|00493|connmgr|INFO|br0<->unix#1217: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:15.778Z|00494|connmgr|INFO|br0<->unix#1220: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:15.802Z|00495|connmgr|INFO|br0<->unix#1223: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:32:15.823Z|00496|connmgr|INFO|br0<->unix#1226: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T20:32:15.844Z|00497|connmgr|INFO|br0<->unix#1229: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T20:33:07.994Z|00498|connmgr|INFO|br0<->unix#1235: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T20:33:08.020Z|00499|connmgr|INFO|br0<->unix#1238: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T20:33:08.044Z|00500|bridge|INFO|bridge br0: deleted interface veth58c083ca on port 5\n2020-07-30T20:33:12.569Z|00501|bridge|INFO|bridge br0: added interface veth70029744 on port 80\n2020-07-30T20:33:12.606Z|00502|connmgr|INFO|br0<->unix#1241: 5 flow_mods in the last 0 s (5 adds)\n2020-07-30T20:33:12.642Z|00503|connmgr|INFO|br0<->unix#1244: 2 flow_mods in the last 0 s (2 deletes)\n
Jul 30 20:35:15.643 E ns/openshift-multus pod/multus-c9ksq node/ip-10-0-170-175.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 20:36:09.685 E ns/openshift-multus pod/multus-nddqk node/ip-10-0-204-199.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 20:37:57.349 E ns/openshift-machine-config-operator pod/machine-config-operator-5b9dd85f5d-g6tlv node/ip-10-0-204-243.us-west-2.compute.internal container/machine-config-operator container exited with code 2 (Error): 4 +0000 UTC m=+2.699938152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}\nI0730 19:59:35.522075       1 sync.go:72] [init mode] synced MachineConfigPools in 34.011865ms\nI0730 20:01:15.599811       1 sync.go:72] [init mode] synced MachineConfigDaemon in 1m40.07495942s\nI0730 20:02:08.703381       1 sync.go:72] [init mode] synced MachineConfigController in 53.09379493s\nI0730 20:02:11.817401       1 sync.go:72] [init mode] synced MachineConfigServer in 3.108491984s\nI0730 20:03:06.834574       1 sync.go:72] [init mode] synced RequiredPools in 55.012817657s\nI0730 20:03:06.876463       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"82bcc2ef-c674-4913-ac22-aca6c1ccd3fb", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 0.0.1-2020-07-30-185809}]\nI0730 20:03:07.443624       1 sync.go:103] Initialization complete\nI0730 20:03:08.058579       1 recorder_logging.go:35] &Event{ObjectMeta:{dummy.1626a051153ce2a4  dummy    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:SecretCreated,Message:Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2020-07-30 20:03:08.058526372 +0000 UTC m=+215.239849349,LastTimestamp:2020-07-30 20:03:08.058526372 +0000 UTC m=+215.239849349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}\nE0730 20:03:34.697698       1 leaderelection.go:320] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Jul 30 20:39:55.875 E ns/openshift-machine-config-operator pod/machine-config-daemon-j7ssh node/ip-10-0-204-243.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 20:40:34.539 E ns/openshift-machine-config-operator pod/machine-config-daemon-pf66q node/ip-10-0-170-175.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 20:40:48.361 E ns/openshift-machine-config-operator pod/machine-config-daemon-srrx7 node/ip-10-0-204-199.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 20:41:00.183 E ns/openshift-machine-config-operator pod/machine-config-daemon-mfpk2 node/ip-10-0-153-63.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 20:41:21.067 E ns/openshift-machine-config-operator pod/machine-config-controller-5854cfd444-frwwq node/ip-10-0-162-59.us-west-2.compute.internal container/machine-config-controller container exited with code 2 (Error): e.internal changed machineconfiguration.openshift.io/state = Done\nE0730 20:13:22.371302       1 render_controller.go:459] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0730 20:13:22.371327       1 render_controller.go:376] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0730 20:13:25.068745       1 node_controller.go:463] Pool worker: node ip-10-0-134-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-b86a813e36e20688b10697c8bbedef37\nI0730 20:13:25.068770       1 node_controller.go:463] Pool worker: node ip-10-0-134-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-b86a813e36e20688b10697c8bbedef37\nI0730 20:13:25.068777       1 node_controller.go:463] Pool worker: node ip-10-0-134-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0730 20:13:42.612718       1 node_controller.go:446] Pool worker: node ip-10-0-170-175.us-west-2.compute.internal is now reporting ready\nI0730 20:13:54.274708       1 node_controller.go:446] Pool worker: node ip-10-0-204-199.us-west-2.compute.internal is now reporting ready\nI0730 20:13:57.354581       1 node_controller.go:446] Pool worker: node ip-10-0-134-56.us-west-2.compute.internal is now reporting ready\nI0730 20:14:51.075784       1 node_controller.go:468] Pool worker: node ip-10-0-204-199.us-west-2.compute.internal changed labels\nI0730 20:14:53.660045       1 node_controller.go:468] Pool worker: node ip-10-0-170-175.us-west-2.compute.internal changed labels\nI0730 20:14:55.326977       1 node_controller.go:468] Pool worker: node ip-10-0-134-56.us-west-2.compute.internal changed labels\n
Jul 30 20:43:19.626 E ns/openshift-machine-config-operator pod/machine-config-server-vzn6l node/ip-10-0-153-63.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0730 20:02:09.853044       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty (cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 20:02:09.853620       1 api.go:69] Launching server on :22624\nI0730 20:02:09.853673       1 api.go:69] Launching server on :22623\nI0730 20:09:53.224017       1 api.go:116] Pool worker requested by address:"10.0.233.241:64434" User-Agent:"Ignition/2.3.0"\n
Jul 30 20:43:23.580 E ns/openshift-machine-config-operator pod/machine-config-server-q869h node/ip-10-0-204-243.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0730 20:02:11.258183       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty (cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 20:02:11.258758       1 api.go:69] Launching server on :22624\nI0730 20:02:11.258816       1 api.go:69] Launching server on :22623\nI0730 20:09:46.648091       1 api.go:116] Pool worker requested by address:"10.0.183.135:17076" User-Agent:"Ignition/2.3.0"\n
Jul 30 20:43:27.470 E ns/openshift-machine-config-operator pod/machine-config-server-hfgrh node/ip-10-0-162-59.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0730 20:02:10.455985       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty (cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 20:02:10.456797       1 api.go:69] Launching server on :22624\nI0730 20:02:10.456907       1 api.go:69] Launching server on :22623\nI0730 20:09:52.311750       1 api.go:116] Pool worker requested by address:"10.0.233.241:56620" User-Agent:"Ignition/2.3.0"\n
Jul 30 20:43:32.567 E ns/openshift-marketplace pod/community-operators-68d7f67ddd-56r84 node/ip-10-0-170-175.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Jul 30 20:43:32.591 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-7598dff9d6-xgbs5 node/ip-10-0-170-175.us-west-2.compute.internal container/aws-ebs-csi-driver-operator container exited with code 1 (Error): 
Jul 30 20:43:32.649 E ns/openshift-marketplace pod/redhat-marketplace-5bc46b95b9-tf9ls node/ip-10-0-170-175.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Jul 30 20:43:33.763 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-175.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/30 20:30:56 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 30 20:43:33.763 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-175.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/07/30 20:30:57 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:30:57 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:30:57 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:30:57 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 20:30:57 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:30:57 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:30:57 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:30:57 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/30 20:30:57 http.go:107: HTTPS: listening on [::]:9091\nI0730 20:30:57.130973       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 30 20:43:33.763 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-175.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-30T20:30:56.428932451Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-07-30T20:30:56.430626651Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-30T20:31:01.665759956Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T20:31:01.6658585Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Jul 30 20:43:33.967 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-170-175.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 20:29:23 Watching directory: "/etc/alertmanager/config"\n
Jul 30 20:43:33.967 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-170-175.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 20:29:24 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:29:24 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:29:24 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:29:24 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 20:29:24 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:29:24 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:29:24 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:29:24 http.go:107: HTTPS: listening on [::]:9095\nI0730 20:29:24.524331       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 30 20:43:55.914 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T20:43:50.974Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 20:45:16.459 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 30 20:45:36.108 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver
Jul 30 20:45:40.756 E ns/openshift-cluster-machine-approver pod/machine-approver-fb96646f6-rs5zq node/ip-10-0-153-63.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error):  1 main.go:237] Starting Machine Approver\nI0730 20:43:45.561196       1 reflector.go:175] Starting reflector *v1.ClusterOperator (0s) from github.com/openshift/cluster-machine-approver/status.go:99\nI0730 20:43:45.569643       1 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:239\nI0730 20:43:45.661245       1 main.go:147] CSR csr-5qfjf added\nI0730 20:43:45.661336       1 main.go:150] CSR csr-5qfjf is already approved\nI0730 20:43:45.661376       1 main.go:147] CSR csr-9pz2l added\nI0730 20:43:45.661404       1 main.go:150] CSR csr-9pz2l is already approved\nI0730 20:43:45.661435       1 main.go:147] CSR csr-czmgj added\nI0730 20:43:45.661461       1 main.go:150] CSR csr-czmgj is already approved\nI0730 20:43:45.661519       1 main.go:147] CSR csr-dm68x added\nI0730 20:43:45.661556       1 main.go:150] CSR csr-dm68x is already approved\nI0730 20:43:45.661597       1 main.go:147] CSR csr-j95q4 added\nI0730 20:43:45.661623       1 main.go:150] CSR csr-j95q4 is already approved\nI0730 20:43:45.661651       1 main.go:147] CSR csr-mzspq added\nI0730 20:43:45.661677       1 main.go:150] CSR csr-mzspq is already approved\nI0730 20:43:45.661705       1 main.go:147] CSR csr-sb5f5 added\nI0730 20:43:45.661730       1 main.go:150] CSR csr-sb5f5 is already approved\nI0730 20:43:45.661759       1 main.go:147] CSR csr-58hc6 added\nI0730 20:43:45.661793       1 main.go:150] CSR csr-58hc6 is already approved\nI0730 20:43:45.661827       1 main.go:147] CSR csr-7pctf added\nI0730 20:43:45.661862       1 main.go:150] CSR csr-7pctf is already approved\nI0730 20:43:45.661893       1 main.go:147] CSR csr-ssmjd added\nI0730 20:43:45.661919       1 main.go:150] CSR csr-ssmjd is already approved\nI0730 20:43:45.661949       1 main.go:147] CSR csr-w7rgr added\nI0730 20:43:45.661975       1 main.go:150] CSR csr-w7rgr is already approved\nI0730 20:43:45.662003       1 main.go:147] CSR csr-2g5xr added\nI0730 20:43:45.662029       1 main.go:150] CSR csr-2g5xr is already approved\n
Jul 30 20:45:42.925 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-86467f585d-sdcdt node/ip-10-0-153-63.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-30T20:44:12Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-30T20:14:21Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-30T19:59:55Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0730 20:44:12.793867       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"76313b21-4674-4396-86ef-7ba96c4acc61", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "TargetDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): Get https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator: unexpected EOF\nTargetDegraded: " to "",Progressing changed from True to False ("")\nI0730 20:45:41.600922       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0730 20:45:41.601559       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0730 20:45:41.601755       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0730 20:45:41.601841       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nI0730 20:45:41.601918       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0730 20:45:41.601963       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0730 20:45:41.602033       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0730 20:45:41.602063       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nW0730 20:45:41.602157       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Jul 30 20:45:47.212 E ns/openshift-machine-api pod/machine-api-operator-7fb99b7f4f-r97dq node/ip-10-0-153-63.us-west-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 30 20:46:03.763 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-204-199.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 20:31:08 Watching directory: "/etc/alertmanager/config"\n
Jul 30 20:46:03.763 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-204-199.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 20:31:08 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:31:08 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:31:08 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:31:08 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 20:31:08 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:31:08 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:31:08 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:31:08 http.go:107: HTTPS: listening on [::]:9095\nI0730 20:31:08.838325       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 30 20:46:03.885 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-88cbb7cc6-tt5pd node/ip-10-0-204-199.us-west-2.compute.internal container/operator container exited with code 1 (Error): 3.757725       1 operator.go:146] Starting syncing operator at 2020-07-30 20:44:13.757717679 +0000 UTC m=+955.423089423\nI0730 20:44:13.788948       1 operator.go:148] Finished syncing operator at 31.222397ms\nI0730 20:44:13.788999       1 operator.go:146] Starting syncing operator at 2020-07-30 20:44:13.788993414 +0000 UTC m=+955.454365298\nI0730 20:44:14.242545       1 operator.go:148] Finished syncing operator at 453.540767ms\nI0730 20:44:14.242597       1 operator.go:146] Starting syncing operator at 2020-07-30 20:44:14.242592963 +0000 UTC m=+955.907964683\nI0730 20:44:14.821739       1 operator.go:148] Finished syncing operator at 579.133506ms\nI0730 20:45:59.515161       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0730 20:45:59.515577       1 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/serving-cert-235762193/tls.crt::/tmp/serving-cert-235762193/tls.key\nI0730 20:45:59.515653       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (20m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0730 20:45:59.515719       1 reflector.go:181] Stopping reflector *v1beta1.CustomResourceDefinition (32m55.185433482s) from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117\nI0730 20:45:59.515779       1 reflector.go:181] Stopping reflector *v1.Deployment (25m27.477825549s) from k8s.io/client-go/informers/factory.go:135\nI0730 20:45:59.515834       1 reflector.go:181] Stopping reflector *v1.CSISnapshotController (20m0s) from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0730 20:45:59.515871       1 base_controller.go:136] Shutting down ManagementStateController ...\nI0730 20:45:59.515891       1 base_controller.go:136] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0730 20:45:59.515906       1 base_controller.go:136] Shutting down LoggingSyncer ...\nW0730 20:45:59.515995       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Jul 30 20:46:04.065 E ns/openshift-marketplace pod/redhat-operators-868bd9544d-g4kch node/ip-10-0-204-199.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 20:46:04.091 E ns/openshift-monitoring pod/prometheus-adapter-7658748b5c-v5hzc node/ip-10-0-204-199.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0730 20:29:28.331175       1 adapter.go:94] successfully using in-cluster auth\nI0730 20:29:29.539299       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0730 20:29:29.539349       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0730 20:29:29.540339       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 20:29:29.540674       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 20:29:29.540764       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Jul 30 20:46:04.993 E ns/openshift-monitoring pod/thanos-querier-794cfbc57c-v9kq7 node/ip-10-0-204-199.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): vider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:30:03 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 20:30:03 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:30:03 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/30 20:30:03 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:30:03 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\nI0730 20:30:03.350827       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 20:30:03 http.go:107: HTTPS: listening on [::]:9091\n2020/07/30 20:32:45 oauthproxy.go:782: basicauth: 10.129.0.48:52758 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:33:45 oauthproxy.go:782: basicauth: 10.129.0.48:55484 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:36:45 oauthproxy.go:782: basicauth: 10.129.0.48:35068 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:37:45 oauthproxy.go:782: basicauth: 10.129.0.48:37664 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:39:45 oauthproxy.go:782: basicauth: 10.129.0.48:42854 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:41:45 oauthproxy.go:782: basicauth: 10.129.0.48:48110 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:42:45 oauthproxy.go:782: basicauth: 10.129.0.48:50672 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 20:45:00 oauthproxy.go:782: basicauth: 10.129.0.48:43174 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 20:46:09.040 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Jul 30 20:46:30.019 E ns/e2e-k8s-sig-apps-job-upgrade-3042 pod/foo-2ddjv node/ip-10-0-204-199.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Jul 30 20:46:30.037 E ns/e2e-k8s-sig-apps-job-upgrade-3042 pod/foo-klvqn node/ip-10-0-204-199.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Jul 30 20:46:31.372 - 45s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 30 20:46:58.679 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-170-175.us-west-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T20:46:56.358Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 20:47:32.861 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 30 20:47:47.871 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver
Jul 30 20:48:16.734 E ns/openshift-insights pod/insights-operator-6584f44fc-sbgnz node/ip-10-0-162-59.us-west-2.compute.internal container/operator container exited with code 2 (Error): :56492]\nI0730 20:44:14.532127       1 httplog.go:90] GET /metrics: (2.649071ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:44:42.937002       1 httplog.go:90] GET /metrics: (5.947632ms) 200 [Prometheus/2.20.0 10.128.2.21:56492]\nI0730 20:44:44.526427       1 httplog.go:90] GET /metrics: (3.03962ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:45:12.938038       1 httplog.go:90] GET /metrics: (7.299008ms) 200 [Prometheus/2.20.0 10.128.2.21:56492]\nI0730 20:45:14.525601       1 httplog.go:90] GET /metrics: (2.335934ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:45:42.936581       1 httplog.go:90] GET /metrics: (5.844114ms) 200 [Prometheus/2.20.0 10.128.2.21:56492]\nI0730 20:45:44.525593       1 httplog.go:90] GET /metrics: (2.335835ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:45:45.550982       1 status.go:320] The operator is healthy\nI0730 20:45:45.551045       1 status.go:430] No status update necessary, objects are identical\nI0730 20:46:14.540125       1 httplog.go:90] GET /metrics: (16.798793ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:46:44.533579       1 httplog.go:90] GET /metrics: (10.293728ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:47:12.947959       1 httplog.go:90] GET /metrics: (7.318594ms) 200 [Prometheus/2.20.0 10.131.0.24:57812]\nI0730 20:47:14.524836       1 httplog.go:90] GET /metrics: (1.561079ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:47:42.940730       1 httplog.go:90] GET /metrics: (10.252119ms) 200 [Prometheus/2.20.0 10.131.0.24:57812]\nI0730 20:47:44.525131       1 httplog.go:90] GET /metrics: (1.854665ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\nI0730 20:47:45.548445       1 status.go:320] The operator is healthy\nI0730 20:47:45.548499       1 status.go:430] No status update necessary, objects are identical\nI0730 20:48:12.937246       1 httplog.go:90] GET /metrics: (6.647204ms) 200 [Prometheus/2.20.0 10.131.0.24:57812]\nI0730 20:48:14.524622       1 httplog.go:90] GET /metrics: (1.436223ms) 200 [Prometheus/2.20.0 10.129.2.30:46718]\n
Jul 30 20:48:26.277 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-5745c86545-x8l5j node/ip-10-0-162-59.us-west-2.compute.internal container/cluster-storage-operator container exited with code 1 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-5745c86545-x8l5j_df9ab3b6-c2e3-4a5b-a62b-8e70a7ac59e5/cluster-storage-operator/0.log": lstat /var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-5745c86545-x8l5j_df9ab3b6-c2e3-4a5b-a62b-8e70a7ac59e5/cluster-storage-operator/0.log: no such file or directory
Jul 30 20:49:03.900 E kube-apiserver failed contacting the API: Get https://api.ci-op-bmb43f6l-b0725.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=67235&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 54.245.106.166:6443: connect: connection refused
Jul 30 20:49:33.239 E ns/openshift-marketplace pod/redhat-operators-868bd9544d-df2qz node/ip-10-0-170-175.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 20:49:33.283 E ns/openshift-marketplace pod/redhat-marketplace-5bc46b95b9-9kz8m node/ip-10-0-170-175.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Jul 30 20:49:58.297 E ns/openshift-marketplace pod/community-operators-68d7f67ddd-q8z8d node/ip-10-0-170-175.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Jul 30 20:50:35.259 E ns/openshift-monitoring pod/telemeter-client-75bcb9bb8b-gwzgv node/ip-10-0-134-56.us-west-2.compute.internal container/reload container exited with code 2 (Error): 
Jul 30 20:50:35.259 E ns/openshift-monitoring pod/telemeter-client-75bcb9bb8b-gwzgv node/ip-10-0-134-56.us-west-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Jul 30 20:50:35.483 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7dcfdf5c85-th8vm node/ip-10-0-134-56.us-west-2.compute.internal container/csi-provisioner container exited with code 2 (Error): 
Jul 30 20:50:35.483 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7dcfdf5c85-th8vm node/ip-10-0-134-56.us-west-2.compute.internal container/csi-driver container exited with code 2 (Error): 
Jul 30 20:50:35.483 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7dcfdf5c85-th8vm node/ip-10-0-134-56.us-west-2.compute.internal container/csi-resizer container exited with code 2 (Error): 
Jul 30 20:50:35.483 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7dcfdf5c85-th8vm node/ip-10-0-134-56.us-west-2.compute.internal container/csi-attacher container exited with code 2 (Error): 
Jul 30 20:50:35.483 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7dcfdf5c85-th8vm node/ip-10-0-134-56.us-west-2.compute.internal container/csi-snapshotter container exited with code 2 (Error): 
Jul 30 20:50:35.548 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/30 20:43:54 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 30 20:50:35.548 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/07/30 20:43:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:43:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:43:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:43:55 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 20:43:55 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:43:55 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 20:43:55 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:43:55 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/30 20:43:55 http.go:107: HTTPS: listening on [::]:9091\nI0730 20:43:55.336795       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 20:48:12 oauthproxy.go:782: basicauth: 10.129.2.28:37082 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 20:50:35.548 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-30T20:43:54.390889384Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-07-30T20:43:54.392491063Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-30T20:43:59.636736406Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T20:43:59.636830678Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Jul 30 20:50:35.630 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-56.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 20:43:54 Watching directory: "/etc/alertmanager/config"\n
Jul 30 20:50:35.630 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-56.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 20:43:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:43:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 20:43:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 20:43:55 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 20:43:55 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 20:43:55 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 20:43:55 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 20:43:55 http.go:107: HTTPS: listening on [::]:9095\nI0730 20:43:55.064664       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0730 20:49:15.356964       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/30 20:49:15 oauthproxy.go:790: requestauth: 10.129.2.30:42320 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 30 20:50:36.404 E ns/openshift-monitoring pod/prometheus-adapter-7658748b5c-vfhr5 node/ip-10-0-134-56.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0730 20:29:10.134389       1 adapter.go:94] successfully using in-cluster auth\nI0730 20:29:11.524098       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0730 20:29:11.524100       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0730 20:29:11.524595       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 20:29:11.524702       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 20:29:11.529873       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0730 20:48:59.548568       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 20:48:59.548708       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Jul 30 20:50:36.453 E ns/openshift-kube-storage-version-migrator pod/migrator-7bfb5496f6-hd72j node/ip-10-0-134-56.us-west-2.compute.internal container/migrator container exited with code 2 (Error): ator.go:18] FLAG: --log_backtrace_at=":0"\nI0730 20:27:49.133840       1 migrator.go:18] FLAG: --log_dir=""\nI0730 20:27:49.133847       1 migrator.go:18] FLAG: --log_file=""\nI0730 20:27:49.133854       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0730 20:27:49.133861       1 migrator.go:18] FLAG: --logtostderr="true"\nI0730 20:27:49.133868       1 migrator.go:18] FLAG: --skip_headers="false"\nI0730 20:27:49.133874       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0730 20:27:49.133881       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0730 20:27:49.133888       1 migrator.go:18] FLAG: --v="2"\nI0730 20:27:49.133895       1 migrator.go:18] FLAG: --vmodule=""\nI0730 20:27:49.135554       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0730 20:44:09.545042       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0730 20:44:09.557354       1 reflector.go:402] k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: watch of *v1alpha1.StorageVersionMigration ended with: very short watch: k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received\nE0730 20:44:09.558468       1 reflector.go:178] k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: Failed to list *v1alpha1.StorageVersionMigration: Get https://172.30.0.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations?resourceVersion=49699: dial tcp 172.30.0.1:443: connect: connection refused\nE0730 20:44:11.410236       1 reflector.go:178] k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: Failed to list *v1alpha1.StorageVersionMigration: Get https://172.30.0.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations?resourceVersion=49699: dial tcp 172.30.0.1:443: connect: connection refused\nI0730 20:49:03.647459       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 30 20:50:53.541 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-204-199.us-west-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T20:50:51.836Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n